A New York federal judge has set a new standard for lawyers using artificial intelligence (AI) to draft court filings, ruling that the repeated misuse of AI by one attorney constitutes an abuse of power. Steven Feldman, who used multiple AI programs to review and cross-check citations in his filings, had attempted to justify his actions as an effort to improve the efficiency of legal research.
However, Judge Katherine Polk Failla was not convinced, labeling Feldman's methods "redolent of Rube Goldberg" and stating that verifying case citations should never be a job left to AI. She found that Feldman had failed to fully accept responsibility for his mistakes, despite repeated warnings from the court.
In her ruling, Failla terminated the case and entered default judgment for the plaintiffs, ordering Feldman's client to disgorge profits and return any stolen goods in their remaining inventory. The judge also imposed sanctions on Feldman, including an injunction preventing additional sales of stolen goods and refunding every customer who bought them.
Feldman had claimed that AI did not write his filings, but Failla accused him of dodging the truth and failing to provide accurate information about his research methods. She noted that he had failed to make eye contact with her during the hearing and left without clear answers.
The ruling highlights the growing concern among judges about lawyers using AI to draft court filings, which can lead to fake citations, misrepresentations, and other forms of misconduct. Failla's decision is seen as a stern warning to attorneys who attempt to abuse this technology to evade their professional obligations.
Failla emphasized that "most lawyers simply call conducting legal research an ordinary part of the job," and that relying on emerging technology did not excuse Feldman from his professional obligations. She concluded that Feldman had repeatedly and brazenly violated Rule 11, which requires attorneys to verify the cases they cite, despite multiple warnings.
The ruling has sparked a wider debate about the role of AI in legal research and the need for greater transparency and accountability among lawyers who use this technology.