Last Updated on June 27, 2023
The lawyers, Steven Schwartz and Peter LoDuca of the law firm Levidow, Levidow & Oberman, were fortunate: the federal judge declined to impose sanctions that could have had severe consequences for their careers. The outcome proved more favorable than even an AI chatbot like ChatGPT could have predicted.
Instead of imposing severe sanctions, Judge P. Kevin Castel opted to give the lawyers a relatively light punishment—a $5,000 fine—for their misconduct. The judge found that the attorneys acted in “bad faith” by providing inconsistent explanations and initially lying to the court while defending their erroneous legal filing, which referenced six non-existent cases.
As part of the ruling, Castel directed the lawyers to notify the real judges who were falsely named as the authors of the fictitious opinions cited in their flawed filing. The judge deemed the attorneys' subsequent apologies sufficient and saw no need for further penalties.
While acknowledging that he had no qualms with the use of AI in the legal field, Castel emphasized that the lawyers had a duty to ensure the accuracy of their research, which they failed to fulfill.
“Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” the judge said. “But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings.”
Despite avoiding severe consequences, Schwartz and LoDuca, along with their law firm, are contemplating the possibility of filing an appeal.
“We respectfully disagree with the finding that anyone at our firm acted in bad faith,” Levidow, Levidow & Oberman said in a statement. “We have already apologized to the Court and our client. We continue to believe that in the face of what even the Court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
The series of events unfolded when a client approached the law firm with a knee injury claim against an airline. Schwartz, taking on the case, used ChatGPT for legal research. The AI chatbot purportedly supplied six relevant prior cases, which were included in the filing. LoDuca, who was admitted to practice in federal court, approved the filing.
Regrettably, it was later revealed that ChatGPT had fabricated the cited cases entirely. Rather than admit their heavy reliance on the AI chatbot and their failure to scrutinize its output, the two attorneys offered shifting explanations and arguments to the court.
As for the underlying case against the airline, the judge dismissed it because the statute of limitations had expired.