
After two federal judges handed down orders riddled with errors produced using artificial intelligence (AI), the federal judiciary released guidance on how AI should be used and procured securely.
In a letter to Senate Judiciary Committee Chair Chuck Grassley, R-Iowa, Administrative Office of the U.S. Courts (AO) Director Judge Robert Conrad said that an advisory AI Task Force made up of judges, court executives, and IT and chambers staff was created earlier this year to “thoroughly and effectively – but responsibly – address AI as a transformative force,” and that it distributed guidance across court systems in July.
The task force is responsible for identifying issues that AI poses to judicial processes and for recommending new or updated policies.
“With the increasing use of AI platforms such as OpenAI’s ChatGPT and Google Gemini, and integration of AI functions in legal research tools, AI use has become more common in the legal landscape,” Conrad wrote.
“AI presents a host of opportunities and potential benefits for the judicial branch, as well as concerns around maintaining high ethical standards, preserving the integrity of judicial opinions, safeguarding sensitive Judiciary data, and ensuring the security of the Judiciary’s IT systems,” he added.
Earlier this month, Grassley wrote to U.S. District Judge Henry Wingate of Mississippi and U.S. District Judge Julien Xavier Neals of New Jersey after reports that the judges used generative AI (GenAI) this summer to write court orders that contained alleged “serious factual inaccuracies.”
Those errors included naming plaintiffs and defendants who were not parties to the case, misquoting statutory text, attributing quotes to defendants that the defendants said they never made, and misstating the outcomes of other court decisions cited in the final ruling.
Judge Conrad assured Grassley that since last year, the AO has provided judges, court executives, and other staff with federal resources on the impact of AI on judiciary-related work, and that it has identified potential risks of using the technology.
Guidance created by the task force is voluntary and covers “general, non-technical suggestions,” and “recommendations around oversight of and accountability for AI use, confidentiality and security of Judiciary data, and AI education, among other areas,” Conrad’s letter said.
The guidance recommends that users review and independently verify all AI content, and it reminds users that they are accountable for all work done with the assistance of AI.
Notably, Conrad said the guidance allows courts to experiment with AI tools. It does not recommend using AI to make decisions on cases, and it “recommends that users exercise extreme caution” when using AI to address “novel legal questions.”
To help build more permanent guidance, the AI task force is exploring an online information-sharing site where courts across the judiciary can exchange local AI uses, rules, orders, policies, and guidance to support safe and responsible adoption of the technology.