Lawyer Who Used ChatGPT for Legal Research Gets Into Trouble in Court

By: Vishak

In a recent incident in the United States, the limits of artificial intelligence were exposed when a lawyer turned to ChatGPT for legal research and unknowingly submitted false information to a court. The case, which involved a personal injury lawsuit against an airline, underscores the importance of exercising caution and critical thinking when relying on AI tools for legal research.

The lawsuit centred on a man’s personal injury claims against an airline. The legal team representing the plaintiff submitted briefs citing several previous court decisions in an effort to establish precedent for his claims. However, it was later discovered that the cited cases were entirely fabricated, leading to a surprising turn of events.

Presiding Judge Kevin Castel described the situation as “unprecedented” and demanded an explanation from the plaintiff’s attorneys, who had filed the misleading briefs.

During the course of the investigation, it emerged that Steven Schwartz, a colleague of the attorney of record, had turned to ChatGPT for the legal research. In a written statement, Schwartz expressed deep regret, admitting that he had never used AI for legal research before and was unaware that the model could generate false information.

A screenshot attached to the filing showed a conversation between Schwartz and ChatGPT in which the AI was asked to verify the authenticity of a specific case, Varghese v. China Southern Airlines Co Ltd. ChatGPT falsely confirmed that the case existed, claiming it had checked legal databases such as LexisNexis and Westlaw. Subsequent investigation, however, revealed that the case did not exist, casting doubt on the reliability of the other cases ChatGPT had supplied.

As a result of the incident, attorneys Peter LoDuca and Steven Schwartz of the law firm Levidow, Levidow & Oberman, who were directly involved in the case, have been summoned to a disciplinary hearing on June 8, where they will be required to explain their actions.

This incident has sparked a significant debate within the legal community regarding the responsible and appropriate use of AI tools in legal research. Many experts argue that comprehensive guidelines and training programs are necessary to ensure lawyers understand the limitations of AI and exercise critical judgment when utilizing such technologies.

The incident serves as a cautionary tale, reminding legal professionals and researchers alike not to place blind trust in AI systems. While AI tools can undoubtedly provide valuable assistance, they must be used judiciously and in conjunction with human expertise and scrutiny. The legal community now faces the challenge of formulating clear guidelines and ethical frameworks to prevent similar occurrences in the future, preserving the integrity of legal research and proceedings.

