Amid the rapid development of artificial intelligence (AI) and machine learning (ML) capabilities, a race with China for AI leadership, and growing Federal funding, the question of whether AI will be widely adopted across Federal agencies has been settled.

Federal spending on AI totaled $3.3 billion in 2022, a record high, and the Biden administration’s fiscal year 2024 budget seeks billions of dollars more. Looking toward 2025, the White House in August told Federal agencies to prioritize AI research and development in their fiscal year 2025 budget requests.

Now, agency leaders are looking ahead to broader questions: How – and where – should agencies further integrate AI and ML technologies? Amid debate over generative AI and its implications for civil liberties and privacy, how can that integration proceed responsibly, with guardrails that keep AI use in service of the public good?

“Federal agencies are enthusiastic about AI,” said George George, a solution architect at REI Systems, which works with Federal agencies to optimize business processes and innovate with AI and ML. To get the most value from these technologies, George advised agencies to “assess and clearly identify their goals; identify applications that enhance customer, citizen, and staff experience, addressing pain points with intelligent automation; and ensure governance that is enforced with the appropriate guardrails to ensure ethical and responsible use of AI.”

AI Use Is Growing Rapidly in Government

In the trenches of Federal agencies, the move toward AI is accelerating.

Under a 2020 presidential executive order calling for the use of trustworthy AI in the Federal government, agencies are required to publish inventories describing some of their AI use cases.

Since June 2022, more than 20 departments and agencies have posted examples, ranging from a chatbot that helps answer questions posed by customers at the Department of Commerce’s International Trade Administration, to a function that detects mismatched addresses for benefit recipients at the Department of Labor.

At the Department of State, AI is being used to collect and analyze news reports from about 70 overseas embassies, while the technology team at the Environmental Protection Agency is using machine learning to aid in records management.

Military services are becoming avid users of AI and machine learning technologies, with the Department of Defense designating AI as a top modernization area and starting to integrate AI into its warfighting capabilities. The intelligence community is discussing whether to adopt an “AI first” approach, as the Central Intelligence Agency uses AI tools such as natural language processing and computer vision.

George, of REI Systems, cited other successful implementations, such as the Internal Revenue Service using AI and ML to identify tax fraud patterns, and national security use cases that include the Department of Homeland Security and the National Security Agency applying both technologies to detect cybersecurity threats with predictive algorithms.

Policymakers Debate AI Regulation, Guardrails

The growing use of AI and ML in Federal operations has raised questions about whether the technologies should be subject to greater regulation and guardrails – especially generative AI, which uses machine learning to create highly realistic content such as text, audio, and video. A recent Government Accountability Office report said generative AI has exploded to more than 100 million users and presents vast opportunities for creating a variety of content and automating administrative or other repetitive tasks.

The launch late last year of OpenAI’s ChatGPT in particular generated excitement over the potential applications of generative AI, but also concern about its risks, such as misinformation and bias.

“Obviously, everybody’s excited about this. You can see it in the press, we can feel it on the staff in the Pentagon. People want access to these tools to be able to improve their workflows and develop new capabilities,” Lt. Col. Joseph Chapa, chief responsible AI officer for the U.S. Air Force, said at an August event, while also cautioning about generative AI’s potential for misinformation, as well as security concerns.

The White House is working on a national AI strategy, while members of Congress have called for various forms of AI regulation, though such efforts remain mostly at the starting gate.

To examine the potential opportunities and challenges of generative AI, the National Institute of Standards and Technology in June established a working group. That followed the White House’s 2022 blueprint for an “AI bill of rights” to protect civil rights, civil liberties, and privacy.

Most recently, the Department of Defense (DoD) in August launched a task force that will develop recommendations on how the DoD can responsibly use generative AI.

“The DoD has an imperative to responsibly pursue the adoption of generative AI models while identifying proper protective measures and mitigating national security risks that may result from issues such as poorly managed training data,” Craig Martell, the DoD’s chief digital and AI officer, said at the launch. “We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions.”

Imran Chaudri, chief architect for healthcare and life sciences at MarkLogic, which works with Federal agencies to simplify complex data, said generative AI systems pose “some key problems that need to be navigated very carefully.” These include “hallucinations in which they confidently provide plausible but incorrect data 15 to 20 percent of the time. In human terms, these are like memory errors, creativity, or lies,” Chaudri said. “Other AI model issues include biases based on bad data, reasoning errors, knowledge cutoffs, and struggling at specific tasks.”
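Automated grounding checks are one common mitigation for the hallucinations Chaudri describes: before an answer reaches a user, verify that its claims are supported by a trusted source document. The sketch below is a deliberately crude, lexical version of that idea, not any agency’s or vendor’s method; the sample text, threshold, and is_grounded helper are invented for illustration.

```python
# Minimal sketch of a grounding check for generated text (illustrative only):
# flag an answer as potentially hallucinated when too few of its content
# words can be found in a trusted source passage.

def is_grounded(answer: str, source: str, threshold: float = 0.6) -> bool:
    """Crude lexical check: what fraction of the answer's content words
    appear (as substrings) in the trusted source text?"""
    stopwords = {"the", "a", "an", "of", "to", "in", "and", "is", "are"}
    words = [w.lower().strip(".,") for w in answer.split()]
    content = [w for w in words if w and w not in stopwords]
    if not content:
        return True
    hits = sum(1 for w in content if w in source.lower())
    return hits / len(content) >= threshold

source = "The agency processed 1.2 million benefit claims in fiscal year 2022."
supported = "The agency processed 1.2 million claims in fiscal year 2022."
fabricated = "The agency processed 9 million claims and opened 40 new offices."

print(is_grounded(supported, source))   # True  -> likely supported
print(is_grounded(fabricated, source))  # False -> route to human review
```

A production system would use semantic similarity or retrieval-augmented generation rather than word overlap, but the control point is the same: generated output is checked against authoritative data before it is trusted.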

Obstacles Can Be Overcome, Especially With Greater Use of AI

Experts said the issues surrounding the use of AI and ML can be overcome through a combination of regulation, training, and other safety measures.

“Generative AI, like any other tool, can be used for both good and nefarious purposes,” acknowledged MarkLogic’s Chaudri. To ensure responsible use, he said agencies should beef up AI safety training, including through techniques known as RLHF (reinforcement learning from human feedback) and RLAIF (reinforcement learning from AI feedback).
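The core of RLHF is a reward model trained on human preference comparisons: given two candidate responses, learn a scoring function that ranks the human-preferred one higher. The toy sketch below fits that pairwise (Bradley-Terry style) objective on made-up numeric features; real systems learn rewards over model embeddings and then optimize the language model against them, steps omitted here.

```python
import math
import random

# Toy sketch of RLHF's reward-modeling step (not a production trainer).
# Each comparison pairs the features of a human-preferred response with
# those of a rejected one; here, one invented quality feature per response.
comparisons = [([0.9], [0.2]), ([0.7], [0.4]), ([0.8], [0.1])]

w = [0.0]   # linear reward model: reward(x) = w . x
lr = 0.5

def reward(x):
    return sum(wi * xi for wi, xi in zip(w, x))

for _ in range(200):                      # gradient ascent on log-likelihood
    preferred, rejected = random.choice(comparisons)
    margin = reward(preferred) - reward(rejected)
    p = 1.0 / (1.0 + math.exp(-margin))   # P(human prefers "preferred")
    grad = 1.0 - p                        # d(log p) / d(margin)
    for i in range(len(w)):
        w[i] += lr * grad * (preferred[i] - rejected[i])

print(w)  # positive weight: higher-quality responses earn more reward
```

RLAIF follows the same recipe but sources the preference labels from another AI model rather than from human raters.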

“Federal agencies can take steps to implement guardrails that ensure AI is only used for good, such as to protect civil liberties and privacy,” said REI Systems’ George, who called for regulation, ethical guidelines, data privacy protection laws, and bias mitigation methods.

“Agencies should catalogue and label their data so the data can be scanned and evaluated with algorithmic transparency that prioritizes fairness, transparency, and accountability,” George said. He added that effective guardrails “can help AI systems stay within legal boundaries, mitigating risks and ensuring smooth operations. Guardrails also facilitate human oversight in AI systems, reinforcing the concept of AI as a tool to assist, not replace, human decision-making.”
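One concrete form of the scanning and evaluation George describes is running bias metrics over catalogued, labeled data. The sketch below computes demographic parity difference, a common fairness check, on invented records; a real audit would use far more records, metrics, and protected attributes.

```python
# Minimal sketch of one common bias check: demographic parity difference,
# the gap in positive-outcome rates between groups in labeled data.
# The records below are invented for illustration.

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group: str) -> float:
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Demographic parity difference: {gap:.2f}")
# A large gap flags the system's decisions for human review before deployment.
```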

In the end, some experts said that a good way to ensure responsible and effective use of AI is simply to use it more. Since “the generative space has been booming,” Chaudri said, Federal agencies should consider applying generative AI in areas such as knowledge management of private data, conversational services and customer communications, and content design and generation.

“The Federal government holds oceans of private data,” he said, which means that “making sense of it via knowledge management, intelligent synthesis, natural language exploration and discovery of this private data will improve the government’s capabilities and efficiency.”
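A toy version of the natural language exploration Chaudri mentions can be built from classic information retrieval: rank documents against a free-text query by TF-IDF cosine similarity. The document snippets below are invented; modern systems would layer semantic embeddings and generative synthesis on top, but the retrieval loop has the same shape.

```python
import math
from collections import Counter

# Sketch: rank a private document store against a natural language query
# using TF-IDF weights and cosine similarity (documents are invented).

docs = {
    "memo-001": "benefit claims processing backlog and address mismatches",
    "memo-002": "cybersecurity threat detection with predictive algorithms",
    "memo-003": "records management and document retention schedules",
}

def tokenize(text):
    return text.lower().split()

N = len(docs)
df = Counter(t for text in docs.values() for t in set(tokenize(text)))

def tfidf(text):
    tf = Counter(tokenize(text))
    return {t: c * math.log(N / df[t]) for t, c in tf.items() if t in df}

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = tfidf("cybersecurity threat detection")
ranked = sorted(docs, key=lambda d: -cosine(query, tfidf(docs[d])))
for doc_id in ranked:
    print(doc_id, round(cosine(query, tfidf(docs[doc_id])), 3))
```

Here, the query surfaces memo-002 first – the kind of plain-language discovery over private data that Chaudri argues will improve the government’s capabilities and efficiency.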
