Well, that didn't take long. We're starting to see almost-believable autogenerated text being used to spam our issue tracker.

There's a legitimate chance that ChatGPT and its ilk are going to kill participatory open source. How can you keep any forum open to the public when anyone can pour an arbitrary amount of generated garbage into it?

How do you tell a smart but green contributor who's still learning the language from a thousand bots spewing plausible-sounding trash?

This is going to do to authentication what spam did to email: force it towards centralization so that it can be defended enough to stay useful at all.

@mhoye I don't understand. Running a trained model is way cheaper than training one, but it's still expensive. What's the incentive for such elaborate, resource-hungry spamming?

Also, I thought classifiers were already good enough at distinguishing generated text from human-written text? :think:

@mhoye can you link the spam in question? How do you know it's automated? Malicious?

What's the difference between people using Google Translate and GPT to express themselves in a foreign language?

@jonn @mhoye I don't know about the recognition issue... I was reading an article which asserted that an algorithm being adopted by some educational establishments has a 9% false positive rate in identifying AI-generated text. I don't know what the false negative rate was. There might be contexts where that is acceptable, useful even, but by default it leaves roughly one in ten students who wrote their own work having to defend against a false charge of using AI.
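For a rough sense of scale, here's a minimal sketch of that arithmetic. It assumes the 9% false positive figure above applies independently to each student who did not use AI; the cohort size is made up purely for illustration.

```python
# Illustrative only: expected false accusations from a detector's false positive rate.
# The 9% figure is the one quoted above; the cohort size is an assumption.
false_positive_rate = 0.09   # detector flags 9% of genuinely human-written work
honest_students = 300        # hypothetical cohort who wrote their own essays

expected_false_accusations = false_positive_rate * honest_students
print(f"Expected students falsely flagged: {expected_false_accusations:.0f} of {honest_students}")
# -> roughly 27 of 300, i.e. about one in eleven
```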
