The Biden-Harris administration is launching a two-year competition that will leverage AI to protect the United States’ most important software – such as code that helps run the internet and critical infrastructure – senior White House officials announced at the opening of the Black Hat USA Conference in Las Vegas today.
Led by the Defense Advanced Research Projects Agency (DARPA), the “AI Cyber Challenge” (AIxCC) will call on competitors across the United States to identify and fix software vulnerabilities using AI.
This competition will include collaboration with four top AI companies – Anthropic, Google, Microsoft, and OpenAI – who are lending their expertise and making their cutting-edge technology available for this challenge.
AIxCC will feature $18.5 million in prizes – fully funded by DARPA – and will “drive the creation of new technologies to rapidly improve the security of computer code, one of cybersecurity’s most pressing challenges,” the White House said.
“In an increasingly interconnected world, software undergirds everything from public utilities to our financial systems,” Perri Adams, the DARPA Information Innovation Office program manager who conceived and designed AIxCC, said during a press call with reporters. “But as the software enables modern life and drives productivity, it also creates an expanding attack surface for malicious actors. This includes critical infrastructure, which is especially vulnerable to cyberattacks given the challenges of securing sprawling software systems.”
“Cyber defenders are tasked with protecting, really, a daunting maze of technology. And today, they don’t have the tools capable of security at this scale,” she said. “Thus, we’ve seen in recent years hackers exploiting the state of affairs, posing a serious national security risk. Despite these vulnerabilities, we believe modern advances may provide a path towards solving this. The recent gains in AI, when used responsibly, have remarkable potential for securing our code.”
DARPA will begin the two-year challenge in the spring of 2024, hosting an open competition in which the competitors that best secure vital software will win millions of dollars in prizes. To ensure broad participation and a level playing field for AIxCC, DARPA will also make up to $1 million available to each of seven small businesses that want to compete.
Teams will participate in a qualifying event next spring, where the top-scoring teams – up to 20 – will be invited to participate in the semifinal competition at the DEF CON 2024 conference. Of these, the top-scoring teams – up to five – will receive $2 million each to continue advancing their tools for a year before they move to the final phase of the competition, to be held at DEF CON 2025.
The three top-scoring competitors in the final round will receive monetary prizes – with first place receiving $4 million – to “build a system that can rapidly defend critical infrastructure code from attack,” Adams said.
Rob McHenry, DARPA’s deputy director, said during the press call with reporters that the AIxCC is modeled after the defense agency’s Grand Challenge for unmanned vehicles – which McHenry said jumpstarted the field of self-driving cars and demonstrated the “game changing potential” of machine learning.
The AIxCC will use DARPA’s prize challenge authority, which McHenry called a “great tool for forming new technology ecosystems, especially for groups that aren’t traditional defense contractors.”
“The AI Cyber Challenge is an exciting new effort that uses DARPA’s challenge authority and our convening power to both push forward emerging AI capabilities and address the known risk to our critical infrastructure,” he said. “In the AI Cyber Challenge, our goal is to again create this kind of new ecosystem with a diverse set of creative cyber competitors, empowered by the country’s top AI firms, all pointed at new ways to secure the software infrastructure that underlies our society.”
Today’s announcement is part of a broader commitment by the Biden-Harris administration to ensure that the power of AI is harnessed to address the nation’s great challenges, and that AI is developed safely and responsibly to protect Americans from harm and discrimination.
Last month, the Biden-Harris Administration announced it had secured voluntary commitments from seven leading AI companies to manage the risks posed by the technology.
Earlier this year, the administration announced a commitment from several AI companies to participate in an independent, public evaluation of large language models at DEF CON 2023. This exercise, which starts later this week and is the first-ever public assessment of the tools, will help advance safer, more secure, and more transparent AI development.
The White House’s Director of the Office of Science and Technology Policy, Arati Prabhakar, will travel to Las Vegas to participate in the public evaluation of generative AI.
“President Biden has been clear: AI is the most powerful technology of our time, and we have to get it right for the American people. That means managing it first and it means harnessing its tremendous potential,” Prabhakar said during the press call.
“All of these efforts are to achieve safe and effective AI for our future. And you’ll continue to see more work to this end. For example, later this week, I’ll be at the AI Village at DEF CON, which is going to be hosting the first ever independent public evaluation of multiple large language models,” she continued.
“We’ll have thousands of people over two and a half days red teaming leading AI models to see how they stack up to the Blueprint for an AI Bill of Rights,” she said. “That effort will responsibly disclose the outcomes of the red teaming to the AI companies so that they can continue to boost the safety of their systems.”
The White House is also currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible AI innovation.