Lawyer in Huge Trouble After He Used ChatGPT in Court and It Totally Screwed Up

LawBot

ChatGPT’s propensity to make stuff up strikes again, and this time it’s gotten a lawyer in deep trouble. As described in an early May affidavit, an attorney representing a man suing an airline over an alleged injury admitted that he used the AI chatbot to do research for his client’s case. That’s why his legal brief cited a number of court cases, all with official-sounding names like “Martinez v. Delta Air Lines” and “Varghese v. China Southern Airlines,” that never actually happened and thus do not exist.

The attorney, Steven Schwartz of Manhattan’s Levidow, Levidow & Oberman law firm, told the court that it was the first time in his more than three-decade career that he’d used ChatGPT, so, per the New York Times, he “was unaware of the possibility that its content could be false.” Schwartz even told the judge, P. Kevin Castel, that he had asked ChatGPT to verify its sources. The chatbot apparently told him the cases were real, the NYT reports.

Yes, that’s right: an experienced attorney used ChatGPT in court, and he’s now in huge trouble after it fabricated entire swathes of legal precedent.

Bogus

Schwartz told the court that he “greatly regrets” using ChatGPT to do his research for the case “and will never do so in the future without absolute verification of its authenticity.” Judge Castel, however, doesn’t seem swayed, and in his May 4 order he described the gravity of the situation in no uncertain terms: “The Court…”
