Lawyer Uses AI Tool in Court Case, Fails to Verify Cited Cases; Firm Sanctioned Nearly $60,000.
A former lawyer and her law firm have been sanctioned nearly $60,000 for improperly using the artificial intelligence (AI) tool ChatGPT in a court case. The case involves the Chicago Housing Authority (CHA), which was sued by two families whose children were poisoned by lead paint in their homes.
The lawyer, Danielle Malaty, used ChatGPT to generate a legal motion but failed to verify the accuracy of the cases it cited. Opposing counsel for the families argued that some of the citations were fictitious and had no basis in law. The court ultimately ruled in favor of the families, awarding them more than $24 million in damages.
Malaty was fired from her law firm after the error was discovered. The firm, Goldberg Segalla, was sanctioned $49,500, and its lawyer Larry Mason was fined $10,000. The sanctions were imposed by Cook County Circuit Judge Thomas Cushing, who criticized Goldberg Segalla's conduct as a "serious failure" of professionalism.
The CHA was not penalized in this case, but the agency has taken steps to address its own lead exposure issues and improve its safety standards for residents. The agency has also implemented new measures to educate its lawyers on proper AI use policies and to prevent similar mistakes in the future.
In a related ruling, a court recently determined that the two families should also receive an additional $8 million in attorney's fees. The CHA will be responsible for paying this amount, which brings the total assessed against the agency to over $32 million.
The case highlights the importance of verifying the accuracy of cited cases and sources when using AI tools in legal work. It also underscores the need for lawyers and law firms to prioritize professionalism and take responsibility for their mistakes.
Meanwhile, Goldberg Segalla has been required to implement "firm-wide measures" to re-educate its attorneys on its AI use policy and to establish safeguards against similar errors. However, some critics have questioned whether these measures will be enough to ensure compliance with court standards.
The case also raises questions about the role of AI tools in legal work and their potential impact on accuracy and fairness. As AI technology continues to evolve and become more widely used in law firms and courts, it is likely that we will see more cases like this in the future.