The impact of Artificial Intelligence (AI) on the world will extend far beyond narrow national security applications. On September 27, during a webinar hosted by the Software Engineering Institute at Carnegie Mellon University, Federal officials spoke about moving beyond those narrow AI applications to gain strategic advantage, and about the importance of justified trust when deploying AI systems.

AI systems must be developed and fielded with justified confidence, according to a 2021 report by the National Security Commission on AI (NSCAI). If AI systems do not work as designed, or are unpredictable in ways that can have significant negative consequences, leaders will not adopt them, operators will not use them, Congress will not fund them, and the American people will not support them. As part of the report, the Commission produced a detailed framework highlighting five issue areas and recommendations to guide AI’s responsible development and fielding across the national security community.

“We recognize that establishing justified confidence in AI systems is the critical issue in seeing AI systems deployed widely. And for that, we need robust and reliable AI; testing, evaluation, verification, and validation; leadership among the different institutions that will be deploying the AI; and rules of the road in accountability and governance. We also need to develop patterns for human-AI interaction, and teaming is a very critical point,” Dr. Steve Chien, commissioner at the NSCAI, said.

Additionally, according to Dr. Jane Pinelis, chief of Test and Evaluation of AI/ML at the DoD Joint AI Center (JAIC), one of the challenges in deploying AI systems lies in how people talk about building trust in AI-enabled systems.

“This is problematic because we don’t want to build blind trust. We want to build very, very, very well-informed trust about where the system was tested, what it was good at, what it was not good at, and where it wasn’t tested,” Pinelis said.

This is especially important, Pinelis added, because AI-enabled systems involve a variety of stakeholders who need different levels of assurance. The JAIC advocates for human-systems integration, which is integral to its test plans, and as part of this process it supports including different types of users with varying levels of experience.
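Pinelis’s distinction between blind and well-informed trust can be made concrete in how test results are reported. The minimal Python sketch below (all data, names, and operating conditions are hypothetical illustrations, not JAIC tooling) tallies a model’s accuracy under each condition it was evaluated in, and explicitly flags expected conditions that were never tested:

```python
from collections import defaultdict

# Hypothetical evaluation records: (operating_condition, prediction_correct)
results = [
    ("daylight", True), ("daylight", True), ("daylight", False),
    ("night", True), ("night", False), ("night", False),
    ("fog", True),
]

# Conditions the system is expected to encounter in the field
expected_conditions = ["daylight", "night", "fog", "rain"]

def coverage_report(results, expected_conditions):
    """Return per-condition accuracy; None marks conditions never tested."""
    tally = defaultdict(lambda: [0, 0])  # condition -> [correct, total]
    for condition, correct in results:
        tally[condition][1] += 1
        if correct:
            tally[condition][0] += 1
    report = {}
    for condition in expected_conditions:
        correct, total = tally[condition]
        report[condition] = (correct / total) if total else None
    return report

for condition, accuracy in coverage_report(results, expected_conditions).items():
    status = "UNTESTED" if accuracy is None else f"accuracy {accuracy:.0%}"
    print(f"{condition}: {status}")
```

For the hypothetical data, the report would show higher accuracy in daylight than at night and no evidence at all for rain, which is exactly the kind of information different stakeholders need to calibrate their level of assurance.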

The government needs AI systems that augment and complement human understanding and decision-making, Chien added, so that the complementary strengths of humans and AI can be leveraged as an optimal team. Achieving this remains a challenge. Therefore, the Commission offered the following recommendations.

  • Focus more federal R&D investments on advancing AI security and robustness.
  • Consult interdisciplinary groups of experts to conduct risk assessments, improve documentation practices, and build overall system architectures to limit the consequences of system failure.
  • Pursue a sustained, multidisciplinary initiative through national security research labs to enhance human-AI teaming.
  • Clarify policies on human roles and functions, develop designs that optimize human-machine interaction, and provide ongoing and organization-wide AI training.
  • DoD should tailor and develop test and evaluation, verification, and validation (TEVV) policies and capabilities to keep pace as AI-enabled systems grow in number, scope, and complexity in the Department.
  • The National Institute of Standards and Technology (NIST) should provide and regularly refresh a set of standards, performance metrics, and tools for qualified confidence in AI models, data and training environments, and predicted outcomes.
  • Appoint a full-time, senior-level Responsible AI lead in each department or agency critical to national security and each branch of the armed services.
  • Create a standing body of multidisciplinary experts in the National AI Initiative Office.
  • Adapt and extend existing accountability policies to cover the full lifecycle of AI systems and their components.
  • Establish policies that allow individuals to raise concerns about irresponsible AI development and institute comprehensive oversight and enforcement practices.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.