With few guiding lights available for government agencies to manage their artificial intelligence (AI) deployments, a new service just out from stackArmor aims to fill that void much as the government’s FedRAMP (Federal Risk and Authorization Management Program) has for years in certifying the security of cloud services used by Federal agencies.

The new service, announced by the company last week, is the Authority To Operate (ATO) for AI accelerator. It’s a governance model that the company said will help public sector and government organizations “rapidly implement security and governance controls to manage risks associated with Generative AI and General AI Systems” as defined by the National Institute of Standards and Technology (NIST).

The service will do that by mapping NIST AI Risk Management Framework (RMF) risk categories to NIST SP 800-53 security controls “to accelerate the implementation of policies, procedures, plans and security controls necessary to accelerate safe AI systems adoption by public sector organizations and regulated industries,” the company explained.

“The security and compliance experts at stackArmor have developed a unique suite of AI overlays for NIST 800-53 controls that are [tied] directly to NIST AI RMF risk categories to allow agencies to authorize AI systems rapidly,” stackArmor said. The AI overlays will allow agencies to use existing governance and risk management programs like the Federal Information Security Modernization Act (FISMA) and FedRAMP instead of having to come up with entirely new ones.
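stackArmor has not published the contents of its overlays, but the underlying idea is straightforward: each risk function in the NIST AI RMF (GOVERN, MAP, MEASURE, MANAGE) is tied to the SP 800-53 controls that address it, so an AI system can be assessed with machinery agencies already run. The Python sketch below is a minimal, hypothetical illustration of such a mapping; the control selections and the `AI_OVERLAY` structure are assumptions for discussion, not stackArmor’s actual overlay content.

```python
# Hypothetical sketch of an "AI overlay": NIST AI RMF functions mapped
# to NIST SP 800-53 Rev. 5 controls. The control selections below are
# illustrative assumptions, not stackArmor's actual overlay content.
AI_OVERLAY = {
    # GOVERN: organizational AI policies and accountability
    "GOVERN": ["PM-9",   # Risk Management Strategy
               "PL-2"],  # System Security and Privacy Plans
    # MAP: establish context and categorize AI risks
    "MAP": ["RA-2",      # Security Categorization
            "RA-3"],     # Risk Assessment
    # MEASURE: assess and track identified AI risks
    "MEASURE": ["CA-2",  # Control Assessments
                "CA-7"], # Continuous Monitoring
    # MANAGE: prioritize and respond to AI risks
    "MANAGE": ["CA-6",   # Authorization (the ATO decision itself)
               "SI-4"],  # System Monitoring
}

def controls_for(rmf_function: str) -> list[str]:
    """Return the 800-53 controls the overlay ties to an AI RMF function."""
    return AI_OVERLAY.get(rmf_function.upper(), [])

if __name__ == "__main__":
    for function, controls in AI_OVERLAY.items():
        print(f"{function}: {', '.join(controls)}")
```

Because the overlay resolves to standard 800-53 control identifiers, the output of an AI risk assessment plugs directly into the FISMA and FedRAMP assessment artifacts agencies already maintain, which is the point Pal makes below.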

Gaurav “GP” Pal, founder and CEO at stackArmor, said in an interview with MeriTalk that while the new service is not aiming to certify security as the FedRAMP program does, it pursues a very similar goal: making AI services easier for Federal agencies to adopt by incorporating the requirements of current NIST policies – along with additional Federal policies still to come – into the evaluation process.

“What the biggest part of our service is doing is connecting the dots between AI security requirements contained in the NIST AI RMF” by creating an AI policy overlay, Pal said.

“That way, we don’t believe you need to come up with a brand-new framework and regulatory mechanism, but you can enhance and extend what we currently do and incorporate additional security and governance requirements for AI within the existing practices to get out of the gate faster,” the CEO said.

While NIST has created AI standards, the Federal government from the White House on down is expected to add substantially to the AI policy landscape in the coming months. Pal said the new Authority To Operate (ATO) for AI accelerator service is built to expand right along with the coming policy avalanche.

“The service is designed to be very flexible to accommodate other policy requirements, be it the Cybersecurity and Infrastructure Security Agency (CISA) coming up with their own requirements for AI, or the Department of Homeland Security, or the Defense Department,” Pal said.

“We are not beholden to NIST only; we use NIST as a baseline foundation, just like today in the case of FedRAMP, where the program sets the baseline standard,” he said.

“Agencies always have the ability to go in and apply additional requirements on top of that baseline, and to go in and tailor the risk based on their perception of what’s required, so our service absolutely accommodates that,” Pal said. “We use the wording of ‘authority to operate for AI’ with the understanding that agencies will be able to tailor it and customize it to their specific mission needs.”

Finally, the stackArmor CEO said he sees robust demand from Federal agencies for the new service.

“My sense is that market demand for a service like this is extremely strong,” he said, citing both the explosive interest in adopting generative AI technologies and the fact that more traditional AI tech is already embedded in numerous consumer and other applications.

“This demand is highly correlated with the actual demand for AI-driven services,” he explained.

“All of us can see that in commercial industry, generative AI and AI-driven solutions are skyrocketing through every piece of software,” Pal said. “The pace at which AI is embedded in commercial applications is breathtaking, and we think it’s just a matter of time where the same will apply to Federal and public sector workloads. Federal agencies are going to consume AI services at a very fast clip.”

“So therefore, the need will be equally strong for securing and making sure that the AI the agencies are consuming is safe and compliant with whatever policies the government creates,” he said.
