As generative artificial intelligence (GenAI) innovation and integration continue to advance, the Government Accountability Office (GAO) is flagging a wide range of risks posed by the technology that policymakers should consider going forward.   

Those risks, GAO said, run the gamut from things as basic as water supplies all the way up the tech chain to cybersecurity and data privacy concerns.   

“Generative AI uses large amounts of energy and water. Additionally, generative AI may displace workers, help spread false information, and create or elevate risks to national security,” wrote GAO in a report published on April 23.  

“The benefits and risks of generative AI are unclear, and estimates of its effects are highly variable because of a lack of available data,” GAO said, emphasizing that “the continued growth of generative AI products and services raises questions about the scale of benefits and risks.” 

One area that GAO is paying particular attention to is environmental concerns – especially those posed by AI data centers, which require massive amounts of energy and water to cool their systems. A data center can consume as much energy as 80,000 to 800,000 households, with 40 to 50 percent of that energy powering the IT infrastructure and cooling systems consuming an additional 40 percent, according to the report. 

President Donald Trump vowed to use his power as president to support the creation of AI data centers to bolster AI innovation, with his administration most recently announcing that it had identified 16 sites on Energy Department lands to build those centers and related infrastructure.  

The president earlier this month deemed coal his energy source of choice to power those data centers – a move that has been considered controversial among some in industry and Democrats who have advocated for renewable sources.  

GAO said that policymakers should consider the effects of AI data centers on carbon emissions outputs, energy consumption, and water usage, adding that specific harms are difficult to calculate given limited existing information. 

“Although there is a lack of data, there have been proposals to consider the environmental effects of carbon emissions during infrastructure build,” GAO wrote. “For example, a particular server delivered to a data center might have a particular emission cost associated with the mining of the raw materials it incorporates, and the energy used to assemble and transport the server to the site,” the watchdog agency said.  

Despite the unknowns, GAO said that continued innovation can bolster AI efficiency and limit environmental harms, explaining that improved hardware, more efficient algorithms – such as pruning and quantization – and new cooling systems could lead to better outcomes. 
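To make the efficiency techniques GAO names concrete, the sketch below illustrates post-training quantization – mapping 32-bit floating-point model weights to 8-bit integers, which shrinks memory and compute (and thus energy) at a small cost in precision. This is a minimal, hypothetical illustration, not an example from the GAO report; the function names and values are invented for demonstration.

```python
# Illustrative sketch (not from the GAO report): post-training quantization
# compresses model weights from 32-bit floats to 8-bit integers, which can
# cut memory and energy use at a small cost in numerical precision.

def quantize(weights, bits=8):
    """Map float weights to signed integers using a single scale factor."""
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 levels for 8-bit
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the quantized integers."""
    return [q * scale for q in q_weights]

# Hypothetical weights from one layer of a model:
weights = [0.12, -0.53, 0.98, -0.07]
q, scale = quantize(weights)
approx = dequantize(q, scale)

# The 8-bit copy needs a quarter of the storage of 32-bit floats, and each
# recovered weight stays within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, approx))
```

Pruning takes the complementary approach: rather than storing weights at lower precision, it removes near-zero weights entirely, so the model performs fewer operations at inference time.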

“Since data center cooling systems can account for up to 40 percent of data center energy usage, companies are exploring and applying new techniques to reduce operational costs, such as liquid cooling,” explained GAO. “Companies are exploring immersion cooling, where the computing hardware is submerged in a fluid, which removes the need for air cooling.” 

GAO said that while it is difficult to separate the environmental impacts of GenAI from those of AI more broadly, AI technology in general will likely require significant resources. 

The report says risks posed to humans could come from five different areas: lack of accountability; unintentional bias; unsafe systems; cybersecurity concerns; and a lack of data privacy. 

Some areas are harder than others to assess, the report explained, noting that unsafe systems are particularly difficult to identify – and their impacts difficult to limit – given how little is known about how these systems work. 

“Assessing the safety of a generative AI system is inherently challenging,” said GAO. “These systems largely remain ‘black boxes,’ meaning even the designers do not fully understand how the systems generate outputs. Without a deeper understanding, developers and users have a limited ability to anticipate safety concerns and can only mitigate problems as they arise.” 

This can also then create greater risk or harm from a lack of accountability, GAO added. 

“Adding to the black box factor is a lack of information on the source of a generative AI system’s training data, known as data provenance,” the report reads. “Although many companies investigate and report on system behavior, often documented in model or system cards, they often provide limited information on the training data … Without information on the data used to train these models, it is difficult to evaluate the training, which hinders independent research on model behavior and limits transparency.” 

GAO said that recommendations for policymakers include expanding efforts to improve data collection and reporting; encouraging innovation to reduce environmental effects; supporting the use of available AI frameworks to inform GenAI use and software development processes; and expanding efforts to share best practices and establish standards. 

The report adds that policymakers could choose to do nothing and maintain the status quo. Taking that course could allow technical innovations to address some challenges without additional resources. GAO added, however, that current efforts policymakers are taking may not address the challenges currently posed by GenAI. 

Weslan Hansen
Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.