The Department of Commerce’s National Institute of Standards and Technology (NIST) announced this week that its AI Safety Institute (AISI) has formed the Testing Risks of AI for National Security (TRAINS) Taskforce, bringing together partners from across the Federal government to identify and manage the national security implications of AI.
The Nov. 20 announcement came on the first day of the inaugural convening of the International Network of AI Safety Institutes in San Francisco.
The taskforce will enable coordinated research and testing of advanced AI models across critical national security and public safety domains, including radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, and conventional military capabilities.
The TRAINS Taskforce is chaired by the AISI and includes initial representation from the Department of Defense, including the Chief Digital and Artificial Intelligence Office and the National Security Agency; the Department of Energy and ten of its National Laboratories; the Department of Homeland Security, including the Cybersecurity and Infrastructure Security Agency; and the National Institutes of Health.
“Every corner of the country is impacted by the rapid progress in AI, which is why establishing the TRAINS Taskforce is such an important step to unite our federal resources and ensure we’re pulling every lever to address the challenges of this generation-defining technology,” said Secretary of Commerce Gina Raimondo. “The U.S. AI Safety Institute will continue to lead by centralizing the top-notch national security and AI expertise that exists across government in order to harness the benefits of AI for the betterment of the American people and American business.”
NIST said each member of the new taskforce will lend its unique subject matter expertise, technical infrastructure, and resources. Members will collaborate on the development of new AI evaluation methods and benchmarks, as well as conduct joint national security risk assessments and red-teaming exercises.
The NIST AISI was founded as part of President Biden’s landmark executive order on AI.
The Commerce Department released NIST’s AISI strategic vision in May – after naming the institute’s executive leadership in February – outlining the institute’s plans to, among other activities, test advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations and risk mitigations; and perform and coordinate technical research.
Congress is working to codify NIST’s AISI in statute; absent such legislation, the institute could be in jeopardy if the incoming Trump-Vance administration rescinds President Biden’s AI executive order.
State, Commerce Launch International Network on AI Safety
The departments of Commerce and State co-hosted the inaugural convening of the International Network of AI Safety Institutes on Nov. 20 and 21 – launching a new global effort to advance the science of AI safety and enable cooperation on research, best practices, and evaluation.
The United States will serve as the inaugural chair of the International Network of AI Safety Institutes, whose initial members include Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States.
According to NIST, the convening was structured as a technical working meeting that addressed three high-priority topics: managing risks from synthetic content; testing foundation models; and conducting risk assessments for advanced AI systems.
“By bringing together the leading minds across governments, industry, academia, and civil society, we hope to kickstart meaningful international collaboration on AI safety and innovation, particularly as we work toward the upcoming AI Action Summit in France in February and beyond,” NIST said in a press release.
All ten initial members signed a joint statement ahead of the summit this week, promising to “encourage a general understanding of and approach to AI safety globally, that will enable the benefits of AI innovation to be shared amongst countries at all stages of development.”
At the summit, they announced more than $11 million in global research funding commitments to address the international network’s new joint research agenda on mitigating risks from synthetic content. The U.S. Agency for International Development is dedicating $3.8 million to strengthen research on synthetic content risk mitigation.
Additionally, the U.S. AISI released its first synthetic content guidance report, which identifies a series of voluntary approaches for mitigating harms from AI-generated content, such as child sexual abuse material, impersonation, and fraud.
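For illustration, one family of approaches covered in guidance of this kind is provenance data tracking: attaching verifiable metadata to content at generation time so it can later be checked for tampering. The sketch below is a minimal, hypothetical stand-in for such a scheme, not anything drawn from the NIST report; the secret key, function names, and generator label are all illustrative assumptions, and production systems would typically use signed manifests with asymmetric keys rather than a shared HMAC key.

```python
# Minimal sketch of provenance data tracking for generated content.
# Hypothetical illustration only: a shared HMAC key stands in for the
# signed manifests (e.g., asymmetric signatures) real systems would use.
import hmac
import hashlib
import json

SECRET_KEY = b"demo-key"  # assumption: a real signer would use asymmetric keys


def attach_provenance(content: bytes, generator: str) -> dict:
    """Produce a provenance record binding the content bytes to a generator label."""
    digest = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"generator": generator, "sha256_hmac": digest}


def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content still matches its provenance record."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sha256_hmac"])


if __name__ == "__main__":
    image_bytes = b"...generated image bytes..."
    record = attach_provenance(image_bytes, generator="example-model-v1")
    print(json.dumps(record))
    print("intact:", verify_provenance(image_bytes, record))        # True
    print("tampered:", verify_provenance(image_bytes + b"x", record))  # False
```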
Finally, during the convening, the International Network of AI Safety Institutes completed its first-ever joint testing exercise, a pilot evaluation of Meta’s Llama 3.1 405B across three topics: general academic knowledge, ‘closed-domain’ hallucinations, and multilingual capabilities.
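The network has not published the pilot’s methodology, but a ‘closed-domain’ hallucination test generally asks whether a model’s output stays grounded in a supplied source document. The sketch below is a deliberately naive, hypothetical illustration of that idea (flagging summary sentences whose content words do not appear in the source), not the evaluation the institutes actually ran; the threshold and tokenization are arbitrary assumptions.

```python
# Naive closed-domain grounding check: flag summary sentences whose
# content words are poorly covered by the source document. Illustrative
# only; real evaluations use far more sophisticated methods.
import re

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "was",
             "were", "that", "it", "on", "for", "as", "with", "by"}


def content_words(text: str) -> set[str]:
    """Lowercase alphanumeric tokens, minus common stopwords."""
    return set(re.findall(r"[a-z0-9']+", text.lower())) - STOPWORDS


def ungrounded_sentences(source: str, summary: str, threshold: float = 0.7):
    """Return (sentence, grounding score) pairs for summary sentences where
    fewer than `threshold` of the content words appear in the source."""
    source_vocab = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = content_words(sentence)
        if not words:
            continue
        grounded = len(words & source_vocab) / len(words)
        if grounded < threshold:
            flagged.append((sentence, round(grounded, 2)))
    return flagged


if __name__ == "__main__":
    source = "The institute convened in San Francisco on Nov. 20 and 21."
    summary = ("The institute convened in San Francisco. "
               "Delegates also toured a quantum computing lab.")
    for sentence, score in ungrounded_sentences(source, summary):
        print(f"possibly ungrounded ({score}): {sentence}")
```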