The Department of Defense (DoD) is investing $2.4 million over two years to prototype a deepfake detection capability with San Francisco-based AI company Hive to counter the rising threat of synthetic media, the Defense Innovation Unit (DIU) said in a Dec. 5 announcement.

Under the two-year contract, Hive will develop a capability to help the DoD address artificial intelligence (AI)-generated content with precision and speed. According to DIU, the deepfake detector technology will enable the department to prevent adversaries from using deepfakes “for their deception, fraud, disinformation and other malicious operations.”

In the last few years, the rise of generative AI has given synthetic media – deepfakes – a “realistic-looking” makeover, “posing a significant threat to the DoD, especially as U.S. adversaries use deepfakes for malicious activities,” DIU said.

“As synthetic multimedia content proliferates, the DoD needs detection and attribution capabilities that can keep pace with the rapidly evolving tools, techniques, and models used to create highly convincing and challenging-to-detect manipulated multimedia,” according to DIU.

“This prototype has the potential to enable the DoD to detect and counter AI deception at scale, maintaining our nation’s information advantage in an increasingly complex digital battlefield,” Capt. Anthony Bustamante, DIU project manager and cyber warfare operator, said in a statement.

According to Bustamante, the prototype represents a significant step forward in strengthening the department’s information advantage “as we combat sophisticated disinformation campaigns and synthetic media threats.”

The prototype builds on the department’s ongoing efforts to counter threats posed by deepfakes and manipulated multimedia content.

For example, earlier this year the Defense Advanced Research Projects Agency (DARPA) launched efforts to bolster defenses against deepfakes and other manipulated media. Through the Semantic Forensics program, DARPA is investing in research to detect, attribute, and characterize manipulated and synthesized media.

“Our work with deepfake identification technology will give the DoD the ability to take decisive action against AI-generated content – a crucial capability for our national security,” said Bustamante.

Additionally, the tools and methodologies used in this initiative are adaptable, meaning the prototype could also be used to safeguard civilian institutions against similar disinformation, fraud, and deception.

Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.