[image caption: to be fair, one wouldn’t really expect an account named @KinderHappyLand to generate inappropriate or offensive content]

Sometimes the best way to detect malicious or deceptive use of AI tools is to look for silly mistakes on the part of the humans piloting the tools in question. For example, the popular large language model-based chatbot ChatGPT spits out error messages such as “I’m sorry, I cannot generate inappropriate or offensive content” when given certain prompts. When these error messages show up in a large corpus of content that someone is trying to pass off as human-written material, they indicate that other (seemingly authentic) text in that corpus is likely also output from the large language model that produced the error.

[image caption: the accounts in the spam network have an abnormal creation date distribution (highlighted in pink)]

Here’s a look at a Twitter spam network that, based on the presence of ChatGPT error messages in some of the tweets it posts, appears to be using ChatGPT to generate tweet content. The error messages on their own are not sufficient to identify the set of accounts that make up the network, however, for the following reasons:

- other spam networks may be tweeting the same error messages
- ChatGPT errors have become something of a meme among human users
- most of the accounts in the network have only tweeted a few times and have yet to tweet an error message

Conveniently, the accounts in this network recently followed a set of large accounts en masse (mostly US and European political…

I'm sorry, this spam network cannot generate inappropriate or offensive content
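The error-message heuristic described above amounts to a simple substring scan over a tweet corpus. Here is a minimal sketch of that idea; the phrase list and the sample tweets are illustrative assumptions for demonstration, not data from the actual network:

```python
# Sketch of the detection heuristic: flag tweets containing boilerplate
# LLM refusal/error phrases. Phrase list and sample tweets are assumptions.

ERROR_PHRASES = [
    "i'm sorry, i cannot generate inappropriate or offensive content",
    "as an ai language model",
]

def flag_llm_error_tweets(tweets):
    """Return the tweets that contain a known LLM refusal/error phrase."""
    flagged = []
    for tweet in tweets:
        text = tweet.lower()
        if any(phrase in text for phrase in ERROR_PHRASES):
            flagged.append(tweet)
    return flagged

sample = [
    "Beautiful sunrise this morning!",
    "I'm sorry, I cannot generate inappropriate or offensive content.",
    "As an AI language model, I cannot express political opinions.",
]
print(flag_llm_error_tweets(sample))
```

As the article notes, a match only seeds the investigation: the flagged accounts still have to be linked to the rest of the network through shared traits such as creation dates and mass-follow behavior.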