Generative artificial intelligence (AI) technologies have taken the news cycle by storm over the last several months, and it’s time for lawmakers to begin considering Federal privacy laws to protect sensitive information collected by generative AI systems.

That was a top policy consideration offered in a June 13 report from the Government Accountability Office (GAO) examining the opportunities and challenges of a technology that is surging in popularity.

“Use of generative AI, such as ChatGPT and Bard, has exploded to over 100 million users due to enhanced capabilities and user interest. This technology may dramatically increase productivity and transform daily tasks across much of society,” GAO’s report says. “Generative AI may also spread disinformation and presents substantial risks to national security.”

The agency explains that generative AI is a technology that can create content, including text, images, audio, or video, and has potential applications across a wide range of fields, including education, government, medicine, and law.

Some of the opportunities that the tool presents, GAO says, include: rapidly aggregating a wide range of content, which quickens access to ideas and knowledge; automating a wide variety of administrative or other repetitive tasks; and enhancing productivity across many industries.

However, the watchdog agency also offered five challenges that generative AI presents to its users and the nation:

  • Trust and oversight concerns: generative AI systems can respond to harmful instructions, which could increase the speed and scale of real-world harms;
  • False information: the tool may produce “hallucinations” – erroneous responses that seem credible. Additionally, a user could utilize AI to purposefully and quickly create inaccurate or misleading text, thus enabling the spread of disinformation;
  • Economic issues: the systems could be trained on copyrighted, proprietary, or sensitive data, without the owner’s or subject’s knowledge;
  • Privacy risks: specific technical features of generative AI systems may reduce privacy for users, including minors. Additionally, if a user enters personally identifiable information, that data could be used indefinitely in the future for other purposes without the user’s knowledge; and
  • National security risks: information about how and when some generative AI systems retain and use information entered into them is sparse or unavailable to many users, which poses risks for using these tools. Furthermore, when systems are publicly and internationally accessible, they could provide benefits to an adversary.

GAO’s report concludes by offering lawmakers advice on several high-level policy topics that would leverage generative AI’s opportunities while still placing guardrails around its risks.

First and foremost, Congress needs to consider what privacy laws can be developed to protect sensitive information collected by generative AI systems, including information provided by minors.

The agency also suggests that policymakers consider what AI guidelines can best ensure generative AI systems are used responsibly, and whether generative AI systems are following existing guidance.

Finally, GAO recommends that future policy consider what standards could be developed to evaluate the methods used to train generative AI models and ensure the fairness of their responses, and how public, private, academic, and nonprofit organizations can strengthen their workforces to ensure the responsible use of generative AI technologies.

The United States has yet to pass legislation that would rein in Big Tech and put Americans in control of their personal data, with Congress swinging and missing several times over more than a decade in its attempts to get a national data privacy standard over the finish line.

The Congressional Research Service (CRS), which operates as a nonpartisan public policy research institute for members of Congress and their staffs, recently issued a report that called on lawmakers to look to the state of data privacy laws in the U.S. – or the lack thereof – as an important guidepost in deciding the rules of the road for AI tech developers, who must train their algorithms on massive amounts of data.

The House Energy and Commerce Committee has paved the way for a national data privacy standard through the bipartisan, bicameral American Data Privacy and Protection Act (ADPPA).

The ADPPA aimed to provide consumers with fundamental data privacy rights by creating strong oversight mechanisms and establishing meaningful enforcement. Before the ADPPA, the last serious push for stronger standards dated back to 2019.

House Energy and Commerce Committee member Rep. Jay Obernolte, R-Calif., said last week that he hopes lawmakers will establish a framework within the next 10 years that unlocks the benefits of AI while guarding against its potential harms, but that in the short term, the Hill needs a concrete digital privacy law.

“We need to pass a digital privacy standard at the Federal level. That’s something that we have been working hard on in the Energy and Commerce Committee,” Rep. Obernolte said. “I am cautiously optimistic that we are going to succeed in putting a bill on the House floor this year that accomplishes that.”

He concluded, “If you look at the potential harms in the short term that malicious use of AI could lead to, the piercing of digital data privacy is among the foremost. So, this would be a meaningful step in the right direction, and I think should definitely be job one for Congress.”

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.