A New York attorney, Steven Schwartz, is facing criticism for using ChatGPT, an AI language model, to conduct legal research in a lawsuit against Avianca Airlines. The case involves a passenger, Roberto Mata, who claims he was injured by a serving cart during a 2019 flight. Inconsistencies and factual errors in the court filings, however, caught the attention of the judge presiding over the case.
Schwartz has admitted to using ChatGPT for his legal research, stating that it was his first time using the AI tool and that he was unaware it could generate false content. In an affidavit, he expressed regret for relying on the model without verifying the authenticity of its output.
The judge described six of the submitted cases as "bogus judicial decisions" containing fake quotes and citations. Some of the referenced cases turned out not to exist, and one filing mixed up docket numbers. The discovery raised concerns about ChatGPT's reliability and underscored the need for human oversight and verification.
The incident has sparked a broader discussion about the integration of AI tools such as ChatGPT across industries. While ChatGPT's capabilities are advancing rapidly, doubts remain about whether it can fully replace human workers. Syed Ghazanfer, a blockchain developer, voiced support for ChatGPT but noted that it lacks the communication skills needed to fully understand and satisfy complex requirements.
As the case unfolds, it stands as a cautionary tale about the importance of human involvement and due diligence when using AI tools like ChatGPT in professional settings.