Reality Is Melting as Lawyers Claim Real Videos Are Deepfakes

Last month, lawyers for Tesla CEO Elon Musk argued that 2016 recordings of him making sweeping promises about the company's Autopilot software could have been deepfaked. The judge didn't buy the argument and ordered Musk to testify under oath, but the stunt illustrates a broader trend: as generative AI-powered tools make it easier than ever to synthesize the voices and faces of public figures, lawyers are seizing the opportunity to undermine the very foundation of a shared factual reality.

That has experts deeply worried, NPR reports. The phenomenon could end up influencing the beliefs of jurors and even the general population.

Hany Farid, a digital forensics expert and professor at the University of California, Berkeley, told NPR that he's worried about a future in which evidence of "police violence, human rights violations, a politician saying something inappropriate or illegal" is dismissed because of the possibility that it was digitally faked.

"Suddenly there's no more reality," he said.

Musk's lawyers weren't the first to invoke deepfakes in court. Two defendants who were present at the January 6 riot claimed that videos showing them at the Capitol could have been manipulated by AI, according to NPR. Insurrectionist Guy Reffitt argued that the audiovisual evidence implicating him consisted of deepfakes; a judge dismissed those arguments, and Reffitt was found guilty.

But such arguments may not always fail, because US law is woefully ill-equipped for this kind of argumentation.

"Unfortunately, the law does not provide a clear…