While most Federal agencies are at least dipping their toes into the artificial intelligence (AI) pool, new MeriTalk research finds some are struggling to incorporate the technology more broadly into operations.

Recent recommendations from the National Security Commission on Artificial Intelligence (NSCAI) highlight AI’s importance for government, suggesting Federal leaders double AI research and development spending each year – targeting $32 billion by fiscal 2026.

MeriTalk recently connected with Larry Brown, Ph.D., solutions architect manager, Public Sector, NVIDIA, to discuss how agencies can use the AI momentum to operationalize the technology, especially at the edge; what Federal AI leaders are doing differently; and what is needed to go from proof of concept to large-scale deployment.

MeriTalk: Nearly 90 percent of survey respondents said operationalizing Federal AI is the cornerstone of a digital-first government. What makes AI so important?

Larry Brown: AI has become a dramatic, enabling technology for so many core capabilities. The fundamental problem is we don’t have enough people to do the job when it comes to the critical analysis that is so important to Federal missions – whether it’s analyzing full-motion video, monitoring cyber intrusions, or detecting fraud.

AI gives our software faster and more advanced reasoning capabilities. We can do things we could not do before – for example, keeping up with cyber threats and taking action to prevent breaches in a timely manner.

MeriTalk: Almost two-thirds of respondents said their agency is struggling to take localized AI pilot programs and incorporate them into overall IT operations. What are the biggest challenges associated with growing beyond the pilot stage, and how can agencies overcome the challenges?

Brown: There is a lot of conversation about the best way to scale up pilots, so I am not surprised this came up. Teams have to be tactical as they define AI initiative goals.

The most successful projects are the ones that have a very tight scope and a clearly defined set of inputs, outputs, and success metrics. In planning a pilot, teams should include input from partners, infrastructure specialists, application software support teams, and people with data science expertise.
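As a concrete illustration of that kind of tight scoping, here is a minimal sketch – purely hypothetical, with illustrative names and thresholds – of how a team might codify a pilot’s inputs, outputs, and success metrics so they can be reviewed and checked like any other artifact:

```python
from dataclasses import dataclass

# Hypothetical sketch: codifying a pilot's scope so inputs, outputs, and
# success metrics are explicit and checkable. All names and thresholds
# below are illustrative, not drawn from any real program.
@dataclass
class PilotScope:
    name: str
    inputs: list[str]                   # data sources the pilot consumes
    outputs: list[str]                  # artifacts the pilot must produce
    success_metrics: dict[str, float]   # metric name -> target threshold

    def is_successful(self, measured: dict[str, float]) -> bool:
        """True only if every metric meets or beats its target."""
        return all(measured.get(m, 0.0) >= target
                   for m, target in self.success_metrics.items())

pilot = PilotScope(
    name="fmv-object-detection-pilot",
    inputs=["30 days of archived full-motion video"],
    outputs=["per-frame detections", "analyst review queue"],
    success_metrics={"precision": 0.90, "recall": 0.85},
)
print(pilot.is_successful({"precision": 0.93, "recall": 0.88}))  # True
```

Writing the scope down this way turns “success” into a checkable condition rather than a judgment call at the end of the pilot.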

There are several other keys to operationalizing AI. Executive support is very important – both on the business and IT leadership level. And you need a technical champion – a senior executive who supports any required technology transformation. AI often requires new expertise with a heavy focus on data science. We also typically see a requirement for new infrastructure capabilities that go beyond traditional computing platforms.

MeriTalk: What steps should agencies consider as they work to establish a foundation for widespread AI integration?

Brown: There are several steps to consider. First, AI maturity – are we doing any traditional machine learning or advanced analytics? Do we have enough data scientists on staff? What AI, machine learning, and data science tools are we using today? Then, what types of applications are we working with? Are we using video, audio, or cyber analytics?

Finally, agencies can evaluate their compute infrastructure for AI. This is probably the most overlooked requirement and consideration. It is not just about the application side of the equation. If engineers and data scientists are using laptops from 10 years ago, they do not have the right infrastructure. If an organization has centralized computing resources, but the resources are a few years old and designed to run web servers, that is not going to work.
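One quick way to ground that evaluation is to inventory what accelerators are actually visible on a given machine. The sketch below assumes the NVIDIA driver’s nvidia-smi utility is installed and on PATH where GPUs are present:

```python
import shutil
import subprocess

# A quick, minimal sanity check of AI-readiness on one machine, assuming
# the NVIDIA driver's nvidia-smi utility is on PATH where GPUs exist.
def gpu_inventory() -> list[str]:
    """Return one 'name, total memory' line per visible NVIDIA GPU."""
    if shutil.which("nvidia-smi") is None:
        return []  # no driver tooling: likely not AI-ready hardware
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

gpus = gpu_inventory()
print(f"{len(gpus)} GPU(s) visible")
for g in gpus:
    print(" ", g)
```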

Having a high-performance compute capability is one of the “legs of the stool” that organizations need to build to enable the data scientists. That said, organizations early in their journey may not need a full high-performance compute environment right away; they may instead focus on growing accelerated computing capabilities alongside overall AI maturity.

MeriTalk: Almost half of respondents said they are doing AI at the edge – and the vast majority said the government should do more AI at the edge. Why is AI at the edge so important?

Brown: For most, the edge represents the farthest place one would imagine putting remote computing capabilities. The environment is often non-traditional (think about an airplane or small vehicle) and/or rugged. There are typically space, power, and internet access constraints.

The challenge is that the government works at the edge, in harsh and less-than-ideal environments. Mission examples include delivering healthcare to remote communities, providing humanitarian aid, conducting covert operations, and much more.

The Federal community is excited about AI because of its application at the edge. But we need to remember that extensive research and development is needed. AI at the edge requires significant back-end, data center-based compute infrastructure. Agencies need engineers, data scientists, and developers who develop the initial algorithms, plus cybersecurity personnel and systems to keep data secure.

In my mind, organizations never really have a pure edge scenario or a pure data center scenario, at least not in the public sector. There is always a heavy dependence on the back and forth between the data center and the edge.

MeriTalk: Some of the biggest challenges with AI at the edge are data center security, power consumption/availability, and systems management expertise. How do you see agencies working to overcome these challenges?

Brown: There are a series of technologies around trust and authentication that are important for developing and deploying software at the edge. A root of trust, for example, anchors a chain of custody between the data center and edge environments. A number of solutions from Dell Technologies and NVIDIA help agencies address these considerations and more.
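To make the chain-of-custody idea concrete, here is a minimal, generic sketch – not any specific Dell or NVIDIA product – in which the data center signs each software artifact and the edge node verifies the signature before loading it, using the widely available Python cryptography library:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Minimal sketch of one link in a software chain of custody: the data
# center signs each artifact, and the edge node refuses to load anything
# whose signature does not verify. Key handling is simplified here; a real
# deployment would anchor the public key in a hardware root of trust.

# Data center side: sign the artifact bytes before shipping them out.
private_key = ed25519.Ed25519PrivateKey.generate()
artifact = b"model-weights-v42"          # stand-in for a real model file
signature = private_key.sign(artifact)
public_key = private_key.public_key()

# Edge side: verify before use; verify() raises InvalidSignature on tampering.
def load_if_trusted(blob: bytes, sig: bytes) -> bytes:
    try:
        public_key.verify(sig, blob)
    except InvalidSignature:
        raise RuntimeError("artifact failed verification; refusing to load")
    return blob

print(load_if_trusted(artifact, signature))  # loads cleanly
```

Generating both keys in one process is only for the sake of a runnable example; in practice the private key never leaves the data center.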

Organizations can now get high-performance computing capabilities compressed down to devices the size of a credit card. NVIDIA makes Jetson GPU modules that have extensive built-in security. These units enable secure AI and advanced computing at the edge. NVIDIA EGX is an accelerated computing platform that enables agencies to deliver end-to-end performance, management, and software-defined infrastructure on NVIDIA-Certified servers deployed in data centers, in the cloud, and at the edge.
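As a generic illustration of what AI at the edge looks like in code – not a Jetson- or EGX-specific API – here is a sketch of GPU-accelerated inference with ONNX Runtime, assuming a trained model has already been exported to a hypothetical detector.onnx file:

```python
import numpy as np
import onnxruntime as ort

# Minimal sketch of accelerated inference on an edge device, assuming a
# trained model exported to ONNX ("detector.onnx" is a placeholder name).
# The provider list falls back to CPU where no GPU is available.
session = ort.InferenceSession(
    "detector.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Feed one frame; shape and input name depend on the exported model.
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder frame
outputs = session.run(None, {input_name: frame})
print([o.shape for o in outputs])
```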

For example, working with Dell Technologies, we centralized a GPU cluster for the United States Postal Service (USPS). USPS developers create new algorithms for a variety of purposes. Once the team develops the algorithms, they push them out to 400 post office locations across the continental United States. The GPU units are managed centrally, and the edge locations do not have to be taken down for updates. USPS can seamlessly switch to newer software versions.
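The zero-downtime update pattern described here can be sketched in a few lines. This is a hypothetical illustration of the general technique – keep serving with the current model while the new version is staged, then swap a single reference atomically – not actual USPS code:

```python
import threading

# Hypothetical sketch of a zero-downtime model swap on an edge node.
class ModelSlot:
    def __init__(self, model):
        self._model = model
        self._lock = threading.Lock()

    def predict(self, x):
        with self._lock:
            model = self._model          # grab the current version
        return model(x)                  # serve outside the lock

    def swap(self, new_model):
        """Activate a newly staged version without pausing service."""
        with self._lock:
            self._model = new_model

slot = ModelSlot(lambda x: f"v1:{x}")
print(slot.predict("pkg-123"))   # served by v1
slot.swap(lambda x: f"v2:{x}")   # central push of a new algorithm
print(slot.predict("pkg-123"))   # served by v2, no downtime
```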

MeriTalk: Can you provide examples of how the USPS is working toward widespread AI integration?

Brown: One of our first projects with the USPS involved detecting dangerous mail packages. The USPS has learned over time that if one “bad” package is discovered, there are often more.

USPS images the packages. If the team identifies a suspicious package, they can quickly compare an image of that package to every other package in the system. The team can identify and pull similar packages for further investigation.

We helped the USPS create an AI infrastructure solution that enables rapid and accurate image comparison. With the new system, the USPS can complete searches in hours versus days.
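Under the hood, this kind of search typically reduces each package image to a fixed-length embedding vector and compares vectors rather than pixels. Here is a minimal brute-force sketch with synthetic data standing in for real embeddings; a production system would use an approximate-nearest-neighbor index over a far larger corpus:

```python
import numpy as np

# Minimal brute-force sketch of embedding-based image similarity search,
# with synthetic vectors standing in for real package-image embeddings.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(10_000, 256)).astype(np.float32)  # one row per package
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)     # unit-normalize

def most_similar(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k packages whose embeddings best match the query."""
    q = query / np.linalg.norm(query)
    scores = corpus @ q                  # cosine similarity on unit vectors
    return np.argsort(scores)[::-1][:k]  # highest-scoring packages first

suspicious = rng.normal(size=256).astype(np.float32)  # embedding of flagged item
print(most_similar(suspicious))  # candidates to pull for investigation
```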

NVIDIA is now working with the USPS on additional AI initiatives, including ZIP code boundary identification and analysis, delivery route and logistics efficiency, fraud detection, and more.

MeriTalk: What are Federal AI leaders doing differently than agencies that might be struggling with AI?

Brown: Agencies that are making progress in AI adoption are going fast. There is a lot out there in terms of software tools and expertise. If you don’t have the expertise in house, that’s not a barrier – support is available.

Look for software that can speed implementation. NVIDIA has a rich collection of solutions that provide easy-to-digest modules for software developers, as an example.

We also see successful agencies get creative with data acquisition. Federated learning is a way for different agencies and groups to jointly train AI models without exchanging their underlying data sets. So, an organization can contribute its data to AI training, but not have to share the values in those fields.
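Here is a minimal sketch of the core federated-learning idea – federated averaging over synthetic data – in which each site fits a model locally and shares only its weights, never its records:

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg): each agency fits a
# linear model on its own data and shares only the fitted weights.
# Data here is synthetic; the point is that raw values stay local.
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])

def local_update(n_samples: int) -> np.ndarray:
    """One agency's locally fitted weights (least squares on private data)."""
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w                              # only these weights leave the site

# Central server averages the weight vectors, weighted by sample count.
sites = [(local_update(n), n) for n in (500, 2_000, 800)]
total = sum(n for _, n in sites)
global_w = sum(w * (n / total) for w, n in sites)
print(global_w)  # close to true_w, yet no site shared its data
```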

From a leadership perspective, if you task an IT team with a well-scoped tactical, actionable AI proof of concept, there is no reason the project should take six months or a year. The IT team should be able to get results in weeks – it is absolutely possible today.
