Lawyers accuse ChatGPT of tricking them into citing wrong case law

NEW YORK — Two apologetic attorneys responding to a disgruntled judge in Manhattan federal court on Thursday accused ChatGPT of tricking them into including fictitious legal research in a court filing.

Attorney Steven A. Schwartz explained that he used the groundbreaking program as he searched for legal precedent supporting a client’s case against the Colombian airline Avianca over an injury sustained on a 2019 flight.

The chatbot, which has captivated the world with its production of essay-like responses to user prompts, suggested several cases involving aviation mishaps that Schwartz had been unable to find through the usual methods used at his law firm.

Schwartz told US District Judge P. Kevin Castel that he had “operated under the misconception that this website was getting these cases from a source that I do not have access to.”

He said he “failed miserably” to do research to ensure the citations were accurate.

“I didn’t understand that ChatGPT could fabricate cases,” Schwartz said.

Microsoft has invested around $1 billion in OpenAI, the company behind ChatGPT.

Its success in demonstrating how artificial intelligence could transform the way people work and learn has sparked fears in some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Judge Castel appeared both baffled and disturbed by the unusual incident, and disappointed that the attorneys did not act quickly to correct the bogus legal citations when the issue was first brought to the attention of Avianca’s attorneys and the court. Avianca pointed out the bogus case law in a March filing.

The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a woman’s wrongful-death lawsuit against an airline, only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses.

“Can we agree that’s legal gibberish?” Castel asked.

Schwartz said he mistakenly believed the confusing presentation resulted from excerpts drawn from different parts of the case.

When Castel finished his questioning, he asked Schwartz if he had anything else to say.

“I want to sincerely apologize,” Schwartz said.

He added that he suffered personally and professionally from the mistake, and felt “embarrassed, humiliated and utterly remorseful.”

He said that he and the firm he worked for, Levidow, Levidow & Oberman, had put safeguards in place to ensure something similar didn’t happen again.

Peter LoDuca, another attorney who worked on the case, said he had trusted Schwartz and did not adequately review the paperwork his colleague compiled.

After the judge read portions of a cited case to show how easy it was to see it was “nonsense,” LoDuca said, “I never realized it was a sham case.”

He said the result “pains me to the core.”

Ronald Minkoff, an attorney for the law firm, told the judge the filing was “the result of negligence and not bad faith” and should not carry any penalties.

He said lawyers have a history of struggling with technology, particularly new technology, “and it’s not getting any easier.”

“Mr. Schwartz, someone who doesn’t do much federal research, decided to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “He was playing with live ammunition.”

Daniel Shin, adjunct professor and assistant director of research at William & Mary Law School’s Center for Legal and Court Technology, said he presented the Avianca case last week during a conference attended in person and online by dozens of participants from state and federal courts in the United States, including Manhattan federal court.

He said the issue caused shock and confusion at the conference.

“We’re talking about the Southern District of New York, the federal district that handles big cases, from 9/11 to all the big financial crimes,” Shin said. “This was the first documented case of potential professional misconduct by a lawyer using generative AI.”

He said the case shows that the lawyers may not have understood how ChatGPT works, because it is prone to hallucination, describing fictional things in a way that sounds realistic but is not.

“It highlights the dangers of deploying promising AI technologies without knowing the risks,” Shin said.

The judge said he would decide on sanctions at a later date.