Humans and ChatGPT mirror mutual language patterns – here’s how

ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies.

Likewise, media coverage of human-AI interaction – whether paranoid or starry-eyed – tends to emphasize its newness. In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him.

In my view, however, the boundary between humans and machines, in terms of how we interact with one another, is fuzzier than most people care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

AI chatbots are new, but public debates over language change are not. As a linguistic…
