The Office of Management and Budget (OMB) today released its new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. High points of the guidance include a requirement that Federal agencies appoint a Chief AI Officer, along with a lengthy list of safeguards for agencies to follow while developing AI applications.

OMB Director Shalanda Young sent the proposed policy memo to heads of all executive departments and agencies on Nov. 1. The guidance aims to establish AI governance structures in Federal agencies, advance responsible AI innovation, and manage risks from government uses of AI.

The draft guidance comes on the heels of President Biden’s long-awaited AI executive order (EO). As part of the government’s priority to lead by example and provide a model for the responsible use of the technology, Vice President Harris announced OMB’s draft guidance today in London, ahead of the UK AI Safety Summit.

The agency is seeking public comments on the 26-page draft policy for Federal government use of AI through Dec. 5.

The proposed guidance also builds on the Blueprint for an AI Bill of Rights and the AI Risk Management Framework by mandating a set of minimum evaluation, monitoring, and risk mitigation practices derived from those frameworks and tailored to the context of the Federal government, OMB said.

“By prioritizing safeguards for AI systems that pose risks to the rights and safety of the public – safeguards like AI impact assessments, real-world testing, independent evaluations, and public notification and consultation – the guidance would focus resources and attention on concrete harms, without imposing undue barriers to AI innovation,” the agency said.

Notably, the guidance does not cover AI when it is used as a component of a national security system. As spelled out in the White House’s AI EO issued earlier this week, the National Security Council and White House chief of staff are required to develop a National Security Memorandum to ensure the military and intelligence community use AI safely, ethically, and effectively in their missions.

Some agencies also have existing AI use guidelines in place – such as the Department of Defense’s (DoD) Responsible AI Strategy and Implementation Pathway and its Autonomy in Weapon Systems Directive, as well as the Office of the Director of National Intelligence’s Principles of AI Ethics for the Intelligence Community.

Strengthening AI Governance

The first section of the draft document calls for the head of each agency to designate a Chief AI Officer (CAIO) within 60 days of the memo’s issuance. The CAIO would be responsible for advising agency leadership on AI, coordinating and tracking the agency’s AI activities, advancing the use of AI in the agency’s mission, and overseeing the management of AI risks.

The document spells out more than a dozen explicit responsibilities for the CAIO at each agency. Specifically, the AI lead would be responsible for actions such as advising on the resourcing requirements and workforce skillsets necessary for applying AI to the agency’s mission, and advocating within the agency and to the public on the opportunities and benefits of AI.

OMB noted that agencies that already have CAIOs must reevaluate whether they need to provide those individuals with additional authority.

The guidance also calls for agencies to establish internal mechanisms for coordinating the efforts of the many existing officials responsible for issues related to AI. As part of this, large agencies would be required to establish AI Governance Boards – chaired by the Deputy Secretary and vice-chaired by the CAIO.

The governance boards would be required to meet no less than quarterly, OMB said, and must include senior officials responsible for IT, cybersecurity, data, procurement, and customer experience, among other areas.

Agencies would also be required to post publicly on their websites, within 180 days of the guidance’s issuance, a plan to achieve consistency with it. OMB said it will provide templates for these compliance plans.

Additionally, as already required via the Advancing American AI Act, each agency – except for the DoD and the Intelligence Community – must continue to submit annually an inventory of its AI use cases to OMB and subsequently post a public version on the agency’s website. The DoD must still submit its use cases to OMB annually, but they will not be released publicly.

Advancing Responsible AI Innovation

Within one year, each agency must develop and release publicly on its website a strategy for identifying and removing barriers to the responsible use of AI and achieving enterprise-wide advances in AI maturity.

According to OMB, the strategy should include provisions like a current assessment of the agency’s AI maturity, a plan to manage the risks from the use of AI, and a current assessment of the agency’s AI workforce capacity and projected AI workforce needs.

This section of the document also calls on agencies to remove unnecessary barriers to the responsible use of AI, including those related to insufficient IT infrastructure, inadequate data and data sharing, gaps in the agency’s AI workforce, and cybersecurity approval processes that are poorly suited to AI systems.

“Agencies should create internal environments where those developing and deploying AI have flexibility and do not face hindrances that divert limited resources and expertise away from AI innovation and risk management,” the draft document reads.

OMB notes that agencies should also explore uses of generative AI, with adequate safeguards and oversight mechanisms in place.

Managing Risks from the Use of AI

In an effort to ensure that agencies establish safeguards for safety- and rights-impacting uses of AI and provide transparency to the public, the OMB draft guidance would mandate specific protective practices for such uses starting on Aug. 1, 2024.

The memo lists more than 10 categories of activity as “safety-impacting,” including the functioning of dams, emergency services, and electrical grids; the transport, safety, design, or development of hazardous chemicals; and access to or security of government facilities.

The memo also lists more than 10 categories of activity as “rights-impacting,” including many uses involving health, education, employment, housing, Federal benefits, law enforcement, and immigration.

Starting on that date, agencies deploying rights-impacting or safety-impacting AI would be required to:

  • Complete an AI impact assessment;
  • Test the AI for performance in a real-world context;
  • Independently evaluate the AI;
  • Conduct ongoing monitoring and establish thresholds for periodic human review;
  • Mitigate emerging risks to rights and safety;
  • Ensure adequate human training and assessment;
  • Provide appropriate human consideration as part of decisions that pose a high risk to rights or safety;
  • Provide public notice and plain-language documentation through the AI use case inventory;
  • Take steps to ensure that the AI will advance equity, dignity, and fairness;
  • Consult and incorporate feedback from affected groups;
  • Conduct ongoing monitoring and mitigation for AI-enabled discrimination;
  • Notify negatively affected individuals;
  • Maintain human consideration and remedy processes; and
  • Maintain options to opt out where practicable.

All agencies that are not elements of the Intelligence Community would be required to implement minimum practices to manage risks from rights-impacting and safety-impacting AI. By Aug. 1, 2024, agencies must stop using any safety-impacting or rights-impacting AI that is not compliant with those minimum practices.

Agencies may request an extension or a waiver of these minimum practices from OMB, with detailed justification. Additionally, the guidance notes that agencies are not required to follow the minimum practices when evaluating potential vendors or commercial capabilities for procurement, when evaluating a particular AI application because the AI provider is the target of a regulatory enforcement action, or for research and development.

After finalization of the proposed guidance, OMB will also develop a means to ensure that Federal contracts align with its recommendations to manage risks from rights-impacting and safety-impacting AI.

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.