Deepfake images created with artificial intelligence (AI) technologies pose a growing threat to women and children and require Federal action to rein them in, the chair and ranking member of a key congressional subcommittee said on March 12.

The fake AI-generated content represents “a new frontier” for Federal law enforcement, said Rep. Nancy Mace, R-S.C., chair of the House Oversight and Accountability Committee’s Cybersecurity, IT, and Government Innovation Subcommittee.

“With the advent of deepfakes and AI and technology, it’s not just real videos you have to worry about, it’s the fake ones that can be created out of thin air,” Rep. Mace said during a subcommittee hearing on “Addressing Real Harm Done by Deepfakes.”

At the hearing, which she called to examine how deepfakes of intimate images often victimize women and children, Rep. Mace pointed to a bill she recently introduced to combat deepfake pornography.

Rep. Gerry Connolly, D-Va., the subcommittee’s ranking member, cited a 2023 study finding that pornography makes up 98 percent of all online deepfake videos – and that 99 percent of the people targeted in deepfake pornography are women.

Calling it appropriate that the hearing was held during Women’s History Month, Rep. Connolly said that “recent technological advancements in artificial intelligence have opened the door for bad actors with very little technical knowledge to create deepfakes cheaply and easily.”

The congressman called on lawmakers to support “federal research and development of new tools for the detection and deletion of deepfake content.”

The U.S. Government Accountability Office (GAO), in a report released this week on combating deepfakes, defined them as “videos, audio, or images that seem real but have been manipulated with AI.”

GAO said deepfakes have been used to try to influence elections and that their malicious use could “spread disinformation, undermine national security, and empower harassers.”

Some of the potential harassment caused by deepfakes and other generative AI tools has spread to children, a witness told the subcommittee.

John Shehan, Senior Vice President of the Exploited Children Division & International Engagement at the National Center for Missing & Exploited Children, said the center last year received 4,700 CyberTipline reports about sexually exploitative content created or altered with generative AI technology.

“We are deeply concerned to see how offenders are already widely adopting generative AI tools to exploit children,” Shehan said, calling the problem “unregulated.”

The hearing – the second the subcommittee has held on deepfakes in recent months – comes amid growing concern over the technology among lawmakers of both parties, who have introduced a number of bills to regulate it.

Technology companies recently vowed to crack down on the use of harmful AI-generated content, including deepfakes, meant to deceive voters in the 2024 elections. The pact the companies signed at the Munich Security Conference came after New Hampshire voters received robocalls featuring a deepfake of President Biden’s voice ahead of the state’s primary.

National security agencies are also worried about deepfakes, with the National Security Agency (NSA), the Federal Bureau of Investigation (FBI), and the Cybersecurity and Infrastructure Security Agency (CISA) in September releasing an information sheet on deepfake threats to organizations.

“Threats from synthetic media, such as deepfakes, present a growing challenge for all users of modern technology and communications,” the document said.

It recommended that organizations take steps to defend against deepfake threats, such as “implementing a number of technologies to detect deepfakes and determine media provenance, including real-time verification capabilities, passive detection techniques, and protection of high priority officers and their communications.”
