We Interviewed the Engineer Google Fired for Saying Its AI Had Come to Life

Last summer, former Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life. Lemoine had raised alarm bells internally, but Google didn't agree with the engineer's claims. The ethicist then went to the press — and was fired by Google shortly thereafter.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," Lemoine told WaPo at the time. "I know a person when I talk to it."

The report made waves, sparking debate in academic circles as well as the nascent AI business. And then, for a while, things died down.

How things have changed. The WaPo controversy was, of course, months before OpenAI would release ChatGPT, the LLM-powered chatbot that back in late November catapulted AI to the center of public discourse. Google was sent into a tailspin as a result, and Meta would soon follow; Microsoft would pull off the short-term upset of the decade thus far by emerging as a major investor in OpenAI; crypto scammers and YouTube hustlers galore would migrate to generative AI schemes more or less overnight; and experts across the world would start to raise concerns over the dangers of an internet packed with synthetic content.

As the dust settles, we decided to catch up with Lemoine to talk about the state of the AI industry, what Google might still…
