Google Staff Warned Its AI Was a "Pathological Liar" Before They Released It Anyway

Pathological Liar

Google asked around 80,000 of its employees to test its still-unreleased Bard AI chatbot before releasing it to the public last month, Bloomberg reports. And the reviews, as it turns out, were absolutely scathing.

The AI was a "pathological liar," one worker concluded, according to screenshots obtained by Bloomberg. Another tester called it "cringe-worthy." A different employee even received potentially life-threatening advice on how to land a plane and how to scuba dive. In a February note, which was seen by nearly 7,000 workers, another employee called Bard "worse than useless: please do not launch."

In short, it was a complete disaster. Yet, as we all know, the company decided to launch it anyway, labeling it an "experiment" and adding prominent disclaimers.

Catch Up

Google's decision was likely a desperate move to catch up with the competition, with OpenAI's highly popular ChatGPT racing ahead, even though Bard was in a seemingly underdeveloped state.

According to Bloomberg, Google employees tasked with evaluating the safety and ethical implications of the company's new products were told to stand aside as AI tools were being developed.

"AI ethics has taken a back seat," former Google manager and president of the Signal Foundation Meredith Whittaker told Bloomberg. "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work."

The tech giant, however, maintains that AI ethics remain a top priority. Yet two Google employees told Bloomberg that AI ethics reviews…