| Mata v. Avianca, Inc. | |
|---|---|
| Court | United States District Court for the Southern District of New York |
| Full case name | Roberto Mata, Plaintiff, v. Avianca, Inc., Defendant |
| Decided | June 22, 2023 |
| Docket no. | 1:22-cv-01461 |
| Citation | 678 F. Supp. 3d 443 |
| Holding | Attorneys sanctioned for using fake case law citations generated by ChatGPT. |
| Judge sitting | P. Kevin Castel |
Mata v. Avianca, Inc. was a case in the U.S. District Court for the Southern District of New York in which the Court dismissed a personal injury suit against the airline Avianca and fined the plaintiff's lawyers $5,000 for submitting fake precedents generated by ChatGPT in their legal briefs. [1]
In February 2022, Mata filed a personal injury lawsuit in the U.S. District Court for the Southern District of New York against Avianca, alleging that he was injured when a metal serving cart struck his knee during an international flight. The plaintiff's lawyers used ChatGPT to generate a legal motion, which contained numerous fake legal cases involving fictitious airlines with fabricated quotations and internal citations. [2] [3] [4]
Avianca's lawyers notified the Court that they had been "unable to locate" several of the legal cases cited in the motion. The Court could not locate the cases either and ordered the plaintiff's lawyers to provide copies of them. Mata's lawyers provided documents purportedly containing all but one of the cited cases, after ChatGPT had assured them that the cases "indeed exist" and "can be found in reputable legal databases such as LexisNexis and Westlaw." [1] [5]
In May 2023, Judge P. Kevin Castel dismissed the personal injury case against Avianca and ordered the plaintiff's attorneys to pay a $5,000 fine. [1]
Judge Castel noted numerous inconsistencies in the opinion summaries, describing one of the legal analyses as "gibberish." [6] He held that Mata's lawyers had acted with "subjective bad faith" sufficient for sanctions under Rule 11 of the Federal Rules of Civil Procedure. [1]
In July 2024, the American Bar Association issued its first formal ethics opinion on the responsibilities of lawyers using generative AI (GAI). The 15-page opinion outlines how the Rules of Professional Conduct apply to the use of GAI in the practice of law. [7] [8]
Experts caution that lawyers cannot reasonably rely on the accuracy, completeness, or validity of content generated by GAI tools. [9]
Because GAI continues to be used in the practice of law, Mata has been described by legal professionals as a landmark case, [10] [11] and it is frequently cited by courts when the use of GAI during proceedings leads to the creation and citation of nonexistent case law. [12] [13] [14]