Conrad Stosz has permanently moved to the National Institute of Standards and Technology's (NIST) AI Safety Institute (AISI) as its head of policy.

Previously the director of AI at the White House's Office of Management and Budget (OMB), Stosz joined the NIST AISI on a temporary assignment in April. He announced on Aug. 15 that the position had become full-time.

“Thrilled to share that I have started a new position as Head of Policy for the U.S. AI Safety Institute in the Department of Commerce, wrapping two wonderful years leading on federal AI policy from the White House Office of Management and Budget,” Stosz wrote on his LinkedIn. “What started out as a temporary assignment to the AISI has officially become a full-time gig.”

“Excited to be working with a team of amazing scientists to push the boundaries of AI testing and evaluation and help ensure that AI is safe, secure, and trustworthy for everyone,” he wrote.

“I am also tremendously grateful to have had this opportunity over the last two years to help shape the U.S. response to recent advances in AI, including the chance to help shape Executive Order 14110 and its implementation,” he said. “In particular, I am grateful to the wonderful team that made OMB’s AI policy for federal agencies M-24-10 possible, creating the first concrete requirements for protecting the public when the government uses AI in high-risk ways.”

Stosz began his work at OMB in July 2022 as a policy advisor. Nine months later, in March 2023, he moved up to serve as the White House's director of AI, where he spearheaded the Biden-Harris administration's work on key AI policy documents, including the October 2023 AI executive order and the March 2024 AI guidance for Federal agencies.

Prior to joining the White House, Stosz spent just under four years in AI policy roles at the Defense Department and on Capitol Hill. Before he joined the Federal government, he spent more than six years in industry.

The NIST AISI was established under President Biden's landmark executive order on AI, EO 14110.

The Commerce Department released NIST's AISI strategic vision in May, after setting the institute's top executive leadership in February. The document outlines the institute's plans to, among other activities, conduct testing of advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations and risk mitigations; and perform and coordinate technical research.

The NIST AISI released its first set of draft guidance last month, offering best practices for developers of AI foundation models to manage the risk that their models will be deliberately misused to cause harm.

NIST AISI Director Elizabeth Kelly highlighted the work the institute will begin over the next year.

“We are excited to begin testing of frontier models prior to deployment, and I think we’re in a good position to begin that testing in the months ahead, because of the commitments that we’ve gotten from the leading companies,” Kelly said.

“We’re really excited for the launch of the AI Safety Institute [global] network and for this convening in November to bring together not just the allies and partners who have stood up safety institutes or similar entities for the technical conversations on things like benchmarks and capabilities risk mitigations, but also to bring together the broader civil society, academia, industry, technical experts, and I think the fact that we’re hosting in San Francisco really speaks to the U.S. leadership role here and how we want to continue to maintain and grow that,” she added.

Legislation to codify NIST's AISI, the Future of AI Innovation Act, was approved by the Senate Committee on Commerce, Science, and Transportation on July 31.
