Two federal judges have publicly acknowledged that their staff used artificial intelligence tools to draft court orders this summer, producing rulings with significant errors. The admissions came from U.S. District Judge Julien Xavier Neals of New Jersey and U.S. District Judge Henry Wingate of Mississippi, in response to inquiries from Senator Chuck Grassley, a Republican from Iowa and chair of the Senate Judiciary Committee.
Senator Grassley criticized the recent court orders as laden with mistakes, highlighting the need for diligence in judicial decisions. The judges, in letters revealed on Thursday by Grassley’s office, indicated that the flawed rulings were not subjected to the comprehensive review process customary in their chambers before being published.
Both judges emphasized that they have since implemented measures designed to enhance the review of court orders prior to their public release.
Judge Neals detailed in his correspondence that a draft ruling from June 30 in a securities lawsuit was erroneously released and subsequently withdrawn once brought to his attention. He explained that a law school intern utilized OpenAI’s ChatGPT for legal research without proper authorization, a clear breach of both the chamber’s and law school’s policies.
He stated, “My chamber’s policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders.” Neals further added that in the past, this policy had been conveyed verbally to his staff, including interns, but he has since made it a written directive applicable to all law clerks and interns.
Judge Wingate also disclosed in his correspondence that a law clerk leveraged Perplexity as a foundational drafting tool to consolidate publicly accessible information on case dockets. Wingate admitted that the release of a draft decision on July 20 was due to “a lapse in human oversight.”
He remarked, “This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again.” Wingate previously replaced the flawed order in a civil rights case but did not provide an explanation, merely indicating it contained “clerical errors.”
In his statement, Senator Grassley acknowledged the judges’ willingness to own up to their mistakes, remarking, “Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again.” He underscored the judiciary’s responsibility to uphold the rights of litigants and ensure that the deployment of generative AI does not hinder fair treatment under the law.
Grassley stressed that more authoritative and lasting policies governing AI’s use in the judiciary are urgently needed. He cautioned against allowing laziness, apathy, or overdependence on artificial intelligence to impair the judicial system’s integrity and commitment to factual accuracy.
The incidents involving Judges Neals and Wingate mirror wider scrutiny of AI tools in legal proceedings. Courts across the nation have grappled with alleged misuse of AI in court filings, and over the past few years judges have imposed fines and other sanctions on lawyers implicated in similar situations.
The emergence of AI in legal contexts reveals a pressing need for the judiciary to establish comprehensive frameworks governing its use, and the debate over AI's role in the courtroom will only intensify as legal practitioners weigh technology's effects on their responsibilities.
As artificial intelligence becomes further integrated into legal work, judges and practitioners alike must remain vigilant. The balance between leveraging technological advances and preserving the integrity of judicial processes is critical, and clear, firm guidelines will be essential to protect litigants' rights and equitable treatment under the law.
Only by learning from these AI-related errors can the judiciary strengthen its commitment to accuracy and fairness. The corrective measures taken by Judges Neals and Wingate signal a broader recognition of these challenges within the legal community, and the ongoing dialogue over AI in the judiciary will shape future practice as courts adapt to modern technology while ensuring justice prevails.
Reuters contributed to this report.