Paper Retracted When Authors Caught Using ChatGPT to Write It

Red Handed

A paper published in the journal Physica Scripta last month became the subject of controversy after Guillaume Cabanac, a computer scientist and integrity investigator, noticed that the ChatGPT button label "Regenerate Response" had been copied into the text, seemingly by accident. Now the authors have fessed up to using the chatbot to help draft the article, becoming the latest testament to generative AI's worrying inroads into academia.

"This is a breach of our ethical policies," Kim Eggleton, head of peer review and research integrity at IOP Publishing, which publishes Physica Scripta, told Nature. The paper has since been retracted for failing to declare its use of the chatbot, which, depending on your point of view, is either a storm in a teacup or a sign of the future of academia.

Peer Review Paladin

Since 2015, Cabanac has undertaken a sort of crusade to uncover published papers that aren't upfront about their use of AI tech, which back then was little more than a curiosity. As computers have gone from spitting out veritable gibberish to producing convincing, human-like compositions, the fight has gotten harder. But that has only steeled the resolve of Cabanac, who has helped uncover hundreds of AI-generated manuscripts.

"He gets frustrated about fake papers," Cyril Labbé, a fellow computer scientist and Cabanac's partner in crime-fighting, told Nature last year. "He's really willing to do whatever it takes to prevent these things from happening."

Those careful to cover their tracks won't leave behind obvious clues like "as an AI…
