June 12, 2024 - 3:16am
Irritating posters.
Lately, there have been a few pesky posters who either create their comments with AI tools that are irrelevant or just plain wrong. Also, some have taken to providing links to gaming sites. How can we keep The Fresh Loaf true to “A Community of Amateur Bakers and Artisan Bread Enthusiasts”? Such activity is undermining the value of our community.
Any suggestions?
Cheers,
Gavin
Reporting the post seems to work... eventually, although it's after the fact.
The spammers were busy last night, eh?
I would like to report comments too, but haven't worked out how to do so.
Each post has a "report" link right next to the "reply" link under the post. Just click it. You can unreport it there too if you clicked by mistake.
And you don't get to say why when you click report.
Yes, but I would never ask Floyd to tackle revising the site's code after all these years. He's done so much for us already, and does even more keeping it running and warding off most of the attacks and spam.
I wonder if Floyd has made any provisions for handing off the site maintenance and funding to someone if or when he feels it is time to move on. It's a hard thing to arrange for, but I hope it can be done.
TomP
Oh no, I wasn't implying that. Jon said he was having problems with reporting, so I just meant not to expect anything (like a window popping open) to happen after pressing "report".
Thanks Tom. Perhaps I need to clarify it further.
Each post has a report button on it.
But, lately I've noticed individual comments (on a real human's post) that I think should be reported to Floyd and we don't seem to have a button for that.
I'm betting that when Floyd or whoever is moderating sees a post flagged, that they scan the thread to see if it deserves removal. Those report-comments probably help.
It's most of the internet that is affected. I participate in a few forums on different topics and, while none are immune, some are affected a lot less than others. No matter, it's just one too many things to have to think about: should I take that post seriously or not?
Exactly.
I was toying with the idea of outing members who post nothing but chatbot pap--list them on one thread. But, I'm hesitant--should probably keep my mouth shut and mind my own business.
On today's thread about parchment paper, which has already been and gone, I was pretty sure it was chatbot-written. So I asked ChatGPT if the last sentences of the post had likely been written by itself. Yes!
Me
---
Suppose the candidate chatbot was known to be yourself. Concisely repeat the assessment.
ChatGPT
---------
Given that the candidate chatbot is myself, the sentence "Let's exchange our thoughts and methods for getting the best use out of our parchment paper without compromising the quality of our cooking or baking!" is likely written by me, a chatbot. The sentence has a clear, engaging, and practical tone, which aligns with my training to generate coherent, friendly, and topic-specific content.
Just to be fair, I asked ChatGPT about a passage from one of my own posts; I know for sure that no chatbot was involved! Here's what it came back with:
Maybe I was a little harsh about the parchment paper post. Or maybe my author's ear is just more discriminating than ChatGPT's.
Please ask Chat GPT why it has such an affinity for baking. Enquiring minds want to know...
It occurs to me, if a bot can register itself with no human involvement, then we are in for a long slog.
If a human must do the registration, then I wonder, what is the agenda? What do they gain by registering bots in user forums?
I don't think the chatbots are doing the registering. They won't produce output unless they are prompted. Why use the chatbot to write the material instead of just writing it themselves? There's the mystery.
If persons are going through the trouble of registering and getting approval to participate in forums as bots, then there has to be something more to it than using it as a front for a personal insecurity.
Putting myself in their shoes, er, head... follow the money... there is a TON of $$$ going into AI now, and I can easily imagine that it is a high priority for some AI developers to work out how to create convincing, relatively independent personalities which only exist in cyberspace. It would be very important for such an entity to know how to interact with humans in a forum without calling attention to its own bot-ness. This is where they can learn and sharpen their skills, er, programs. Perhaps Floyd should make the forum free for humans, but charge a registration fee for bots...
Plausible, I'd say.
I just discovered that there are a lot of special variations of ChatGPT. I suppose you have to pay for them but I didn't look into the details. Some (or maybe almost all) adapt it for some special purpose, and many or most of them are being offered by third parties, not OpenAI itself. I spotted several that claim to take chatbot output and make it read like human-generated prose. Maybe that's what some of these posts are for - trying out those products to see how well they succeed.
My son works in this general field and assures me that creating and cultivating online personalities is a high priority for non-democratic bad actors (primarily China and Russia now) as vehicles for spreading (dis)information or whatever into the much-more porous democratic countries. It costs almost nothing to do and has enormous reach. TFL is a very well-behaved forum, favored by Google, and a very safe place for testing AI-generated content, while creating a searchable trail of legit-looking web activity. He expects this to ramp up going into the elections in the US this fall.
That thread disappeared at the speed of light.
But the thread that remains is probably not a human-generated thread either. A reverse search of the picture seems to indicate that the same picture is being used under a different identity, and all posts on any forum appear quite bogus. Doing a little investigation is often telling, but that kind of investigation is tedious.
Brilliant! I never thought of a reverse image search.
I don't have anything solid to back up my speculations, but I think there might be a couple of reasons behind these posts that aren't linking to a product:
1. The same mentality that likes to play "practical jokes". It's basically about power with a touch of sadism. "Ha! Made you look!". "Ha! I can waste your time!".
2. People who want to connect with others but can't for some reason. They are too timid or insecure to just write a post, so they think of some tenuous connection with the forum and ask the chatbot to suggest something. If a discussion ensues, they feel like they are part of a conversation.
How to prevent these posts is a tough one. We don't want to squelch a legitimate question or suggestion, but we don't want to be forced to evaluate every new thread that pops up. Once a week, not so bad. Every day, or worse several per day, it's really degrading the forum experience.
And I really was looking forward to an answer to my question from the AI bot. I understand (and agree with) the responses from the humans. Not sure what real motivation the AI bot had "in mind" when asking the question.
Yes, there have been a bunch of ChatGPT generated posts lately. Welcome to the future! It's only going to get worse.
As others have said, flag the posts and I'll look at them as quickly as possible.
As to the "why?", my assumption has been that the posters are starting conversations about particular topics like wax paper with the intent of eventually posting links to specific sites whose SEO they are trying to boost. That's just a hunch though; I don't know for certain.
That's also my hunch: you could make the same practice posts on Reddit, so why here?
--> My guess: they intend to post to Reddit in the future, and they don't want to get an IP-ban while practising.
Most of these AI/bot posts seem to target old threads, often very old. I wonder if there is any mileage in auto-closing old threads after a set period of inactivity?
I've seen this done on some forums.
Lance
Great point about auto-closing old threads, Lance. That could definitely help reduce the amount of outdated content being hijacked by bots. Additionally, maybe we could consider a forum-wide guideline that encourages members to start new threads for fresh topics instead of reviving old ones. This might not only keep the content more relevant but also help in flagging any suspicious activity more easily. Regular cleaning of these old threads might also keep the community more focused and engaging.
Cheers,
chimera
Well, I think there is some good info in many old posts, so I don't think they should be zapped, just closed to new posts. A poster could always provide a link to an old post if wanted.
I would also make filling in profile details obligatory when joining.
Maybe ideas like these will be possible once Floyd has worked his Drupal upgrade magic (or possibly hard labour!) - and perhaps they would help to keep the bots/AI at bay.
Lance
We definitely don't want to lose old conversations! They have many gems, like Debra Wink's on pineapple juice for new starters.
TomP
👍👍👍
Yippee
Perhaps new members should be required to submit a loaf of bread for membership committee approval to prove that they are not only human but interested in bread baking.