The Biden-Harris administration is currently developing an executive order and plans to pursue bipartisan legislation to help America “lead the way in responsible innovation” of artificial intelligence (AI), according to a fact sheet released today.

Before it releases its AI policy, the White House announced today that seven leading AI companies have committed – on a voluntary basis – to “move toward safe, secure, and transparent development of AI technology.”

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI have all agreed to three commitments: ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust.

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the fact sheet reads. “To make the most of AI’s potential, the Biden-Harris administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI,” it says. “As the pace of innovation continues to accelerate, the Biden-Harris Administration will continue to remind these companies of their responsibilities and take decisive action to keep Americans safe.”

Today, the seven leading AI companies have committed to:

  • Conducting internal and external security testing of their AI systems before their release to guard against some of the most “significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects”;
  • Sharing information across industry and with governments, civil society, and academia on managing AI risks;
  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights;
  • Facilitating third-party discovery and reporting of vulnerabilities in their AI systems;
  • Developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system;
  • Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use;
  • Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy; and
  • Developing and deploying advanced AI systems to help address society’s greatest challenges, “from cancer prevention to mitigating climate change.”

President Joe Biden will meet with the top executives of these AI companies today to discuss the voluntary commitments.

“As we advance this agenda at home, the administration will work with allies and partners to establish a strong international framework to govern the development and use of AI,” the fact sheet says. “It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.”

A White House official said during a press briefing with reporters that there is no set timeline for President Biden’s AI executive order or the bipartisan legislation, but that both should be expected soon.

The fact sheet published today also notes that the Office of Management and Budget will soon release draft policy guidance for Federal agencies to “ensure the development, procurement, and use of AI systems is centered around safeguarding the American people’s rights and safety.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.