CAPTCHAs Becoming Useless as AI Gets Smarter, Scientists Warn

AI technology is becoming so advanced that researchers argue, in a new paper, that there needs to be a better way to verify that a person online is human and not an AI bot. The researchers, from Ivy League universities and companies including OpenAI and Microsoft, propose a “personhood credential” (PHC) system for human verification in a yet-to-be-peer-reviewed paper, intended to replace existing processes like CAPTCHAs.

But to anybody concerned about privacy and mass surveillance, that’s a hugely imperfect solution, one that offloads the burden of responsibility onto end users, a common tactic in Silicon Valley. “A lot of these schemes are based on the idea that society and individuals will have to change their behaviors based on the problems introduced by companies stuffing chatbots and large language models into everything rather than the companies doing more to release products that are safe,” surveillance researcher Chris Gilliard told The Washington Post.

In the paper, the researchers propose the PHC system because they’re concerned that “malicious actors” will leverage AI’s mass scalability and its propensity to convincingly ape human actions online to flood the web with non-human content. Chief among their concerns: AI’s ability to spit out “human-like content that expresses human-like experiences or points of view”; digital avatars that look, move, and sound like real humans; and AI bots’ increasing skill at mimicking “human-like actions across the Internet,” such as “solving CAPTCHAs when challenged.”

That’s why the idea of PHCs is so attractive, the researchers argue. An organization that offers digital services,…
