As federal agencies push artificial intelligence (AI) beyond pilots, scaling the technology requires a hard look at existing workflows – particularly whether they are “elastic” or “inelastic” – and a willingness to redesign how humans and machines work together, said Dan Tadross, head of public sector at Scale AI.

In a recent interview with MeriTalk, Tadross said agencies that successfully move from experimentation to operations clearly understand the problem they are trying to solve and how AI fits into the existing process.

“I think the first thing is choosing the right problem,” Tadross said. “If you understand the problem that you’re trying to solve, and you know how the current workflow exists and what the output of that workflow should be, that’s what usually ends up leading to a successful AI pilot that would lead to production.”

Without that clarity, he warned, agencies risk deploying technology into processes that are poorly defined or ill-suited for probabilistic systems.

From experimentation to operational impact

Tadross said government interest in AI reflects mounting pressure to make faster, more informed decisions in an increasingly complex environment.

“The U.S. government … is at this really interesting inflection point,” Tadross said. “We have been operating in the same way, which is we just keep throwing more people at the problem.”

That approach is becoming unsustainable given the size and complexity of today’s digital environment, he warned.

Some agencies are now applying AI in targeted ways to accelerate analysis, particularly in workflows that involve ingesting and triaging large amounts of information. One example Tadross shared is supporting “commanders’ critical information requirements,” where AI systems can process open source and classified data and flag items for closer human review.

“That system is flagging to the user, ‘Hey, these are things that you probably need to take a better, closer look at,’” Tadross said.

The elastic vs. inelastic framework

At the center of Tadross’ advice for federal chief information officers (CIOs) and mission leaders is distinguishing between what he called elastic and inelastic workflows. Elastic workflows are those where demand consistently outpaces human capacity, he explained.

“No matter how many people you throw at it, there’s always more information coming in,” Tadross said.

He cited radiology as an example, saying there are “always more people getting scans, X-rays, MRIs, than there are people to look at them.” When clinicians are augmented with AI, “all of a sudden, what you get is a direct impact on the customer.”

On the other hand, he said inelastic workflows – particularly those involving high-stakes or life-or-death decisions – demand greater caution.

“For the immediate future, the elastic workflows – where it’s going to be an agent plus the human kind of working in tandem with each other – are going to lead to the best outcomes, rather than just wholesale, giving the agent full autonomy and handing it the keys to the car,” Tadross said.

Why some AI pilots stall

Tadross said many stalled AI initiatives share a common root cause: insufficient scoping.

Federal agencies must define what success and failure look like before deploying AI, he said, adding, “The other part is understanding what does good, and what does bad, look like in that workflow.”

That clarity enables teams to iterate and determine whether performance is improving over time, a process he described as “hill climbing that agent.”

“Because if you can’t measure it, then you can’t really know if you’re making progress,” Tadross said.

By contrast, he said deploying a model without aligning it to a specific workflow often leads to disappointment.

“If you just throw an AI solution at every problem, chances are a lot of those problems were not the right problem to solve with an AI solution,” Tadross said.

Separating hype from reality

Tadross also pushed back on claims that AI will eliminate the need for human workers.

“You need human judgment in the system,” he said. The more realistic near-term shift, he said, involves adjusting training and processes so that humans can effectively incorporate AI outputs.

At the same time, Tadross said some use cases are being underestimated.

“There are some workflows that probably can be done with an agent today that you probably just don’t need a human to spend 10 minutes of their time working on that specific task, and you can … free up their cognitive bandwidth to work on more complex things,” he said.

Tadross said Scale sets itself apart through its mission-focused public sector team, which includes veterans of the intelligence community, the Pentagon, and other federal civilian agencies.

That experience, combined with evaluating systems from the data layer up, “makes us a little bit different,” he said. Scale can match the right model to the right workflow and iteratively refine it, Tadross said, rather than throwing “an LLM at the project without understanding the use case.”

Advice for CIOs

For CIOs trying to move quickly but responsibly with AI, Tadross said curiosity and discipline will set them apart.

“Be extra curious about the technology as you dig into it,” he said, adding, “Just being curious about the technology already sets you apart.”

He also recommended “understanding what the workflow is and what types of problems you have,” as well as not waiting for perfection.

“The technology is not perfect, but it doesn’t have to be perfect to be useful,” he said, continuing, “It doesn’t have to be perfect to get an immediate return on investment.”

Finally, Tadross said that deployment alone is not enough when it comes to AI.

“None of this technology is the ‘set it and forget it’ kind of solution,” Tadross said. “It’s not enough for us to deploy technology – the people and the process need to adjust in order to best leverage that technology.”

Grace Dille
Grace Dille is MeriTalk's Assistant Managing Editor covering the intersection of government and technology.