Idiot Students Are Submitting Answers Saying "I Am an AI Language Model"

Crisis Mode

It’s no secret that OpenAI’s gangbusters chatbot ChatGPT has become the bane of educators trying to get their pupils to turn in honest work. Cheating will never go away, but chatbots make it more tempting and easier than ever.

The thing about being a cheat, though, is that a good one has to be careful to cover their tracks, something that some lazy students relying on an AI that does all the work for them appear not to be bothering with.

“I had answers come in that said, ‘I am just an AI language model, I don’t have an opinion on that,’” Timothy Main, a writing professor at Conestoga College in Canada, told The Associated Press.

“I’ve caught dozens,” he said. “We’re in full-on crisis mode.”

No Easy Answers

So far, a one-size-fits-all deterrent has eluded educators, and thus ensues a game of cat and mouse.

One option that gained some headway in the months following the chatbot boom was AI detectors. But as more people used them, it became obvious that they weren’t reliable, in many cases flagging human-composed prose as AI-generated. OpenAI released its own detection tool in February, and it failed so dismally that it was scrapped months later.

That more or less leaves the question to each educator’s own judgment. Last semester, Main logged 57 cases of students cheating, half of which involved AI. But Main says that AI plagiarism can be harder to weed out, since the text it spits out is unique.