A new report released July 3 by MITRE offers recommendations to the incoming administration on establishing a comprehensive and effective regulatory framework for AI security and safety over the next four years and beyond.
Notably, the organization offered guidance for the Federal government to create a National AI Center of Excellence (NAICE) that would lead in conducting cutting-edge applied research and development in AI.
“With each new presidential term comes the opportunity to reassess and enhance our approach to rapidly advancing technologies,” the five-page report says. “In the realm of AI, it will be essential for the administration to stay informed about the current state of AI, its potential impacts, and the importance of advancing a sensible regulatory framework for AI assurance.”
“While current policy and legislative activities have begun to address the need for AI regulation, more progress is needed to ensure the proper application and use of this technology, balancing security, ethical considerations, and public trust,” MITRE wrote.
The report, “Assuring AI Security and Safety through AI Regulation,” lays out nine recommendations for the incoming administration:
- Bridge the gap: Enhance communication and collaboration between policymakers and those implementing AI strategies.
- Develop AI assurance plans: Collaborate with stakeholders in a repeatable AI assurance process to ensure that the use of AI within their specific contexts meets necessary safety and performance standards.
- Share assurance incidents: Promote the recently established AI Information Sharing and Analysis Center (AI-ISAC) to accelerate the sharing of real-world assurance incidents.
- Understand adversary use: Support an at-scale AI Science and Technology Intelligence (AI S&TI) apparatus to monitor adversarial AI tradecraft.
- Establish system auditability: Issue an executive order that mandates AI system auditability.
- Align AI principles: Promote practices for AI principles alignment and refine regulatory and legal frameworks for AI systems with increasing agency.
- Strengthen critical infrastructure: Direct Federal agencies to review and strengthen government critical infrastructure plans.
- Allow governance flexibility: Develop guidelines that allow for flexibility in AI governance implementation across different agencies.
- Bring it all together: Create a NAICE that promotes and coordinates these priorities.
MITRE’s guidance was accompanied by a timeline for each of the nine recommendations, offering the incoming administration specific milestones for its first year in office.
The organization noted that the AI work will be ongoing, and the next administration should “continuously monitor the development and use of AI and propose regulatory updates as needed, based on the effectiveness of the AI assurance process, AI assurance infrastructure, and AI Assurance Plans.”
MITRE said the NAICE should play a key role in this process and offered a blueprint for a national network of AI assurance labs – modeled after the organization’s AI Assurance and Discovery Lab, which opened in March.
According to MITRE, the AI assurance labs should cover R&D in seven key areas: health, communications, defense, manufacturing, energy, finance, and transportation.