Technology industry groups are raising concerns that a proposed General Services Administration (GSA) policy to standardize artificial intelligence (AI) contracting terms could conflict with federal acquisition rules and deter vendor participation. 

The draft guidance – issued last month – would require AI vendors to grant agencies an “irrevocable, royalty-free, non-exclusive license” to use their systems for the duration of a contract. 

If adopted, the guidelines would also allow agencies to integrate AI technology into existing government systems “as necessary for any lawful government purpose.” 

In comments submitted to GSA shared with MeriTalk, the Alliance for Digital Innovation (ADI) – whose members include Amazon Web Services, Google, Salesforce, Zscaler, and Palantir – said the proposal introduces major contracting challenges. 

ADI warned that the policy could effectively force vendors to create separate government-only versions of their products. 

“… the clause would require contractors to build and maintain a parallel, Government-only product distinct from their commercial product,” the group said, adding that the approach risks turning commercial procurements into bespoke development efforts. 

Specifically, ADI told GSA that multiple provisions in the draft would create compliance burdens “that are difficult, if not impossible to reconcile” with how commercial AI products are built and delivered. 

The group also cautioned that the requirements could disproportionately impact smaller and emerging AI firms that lack the resources to modify commercial offerings for government use, potentially limiting access to cutting-edge technologies. 

In its comments, the Software & Information Industry Association (SIIA) wrote, “the GSA risks creating an environment where the most advanced AI solutions are no longer accessible to the federal government.” SIIA members include Amazon, Anthropic, Google, and Oracle. 

SIIA similarly pointed to conflicts between GSA’s proposal and the Federal Acquisition Regulation (FAR). Both SIIA and ADI said the clause raises intellectual property concerns and would impose data governance and supply chain restrictions. 

SIIA added that the limited room for negotiation would force companies to forgo core commercial protections – potentially undermining the viability of their AI products. 

Those conflicts, SIIA said in its comments, are “incompatible with the shared infrastructure and global innovation models essential to modern commercial AI operations.” 

Beyond licensing terms, the proposal would require AI systems used by the federal government to prioritize “historical accuracy, scientific inquiry, and objectivity,” while remaining neutral and nonpartisan. 

Systems would be subject to automated federal evaluations for bias, truthfulness, safety, and ideological content. Vendors whose systems fail those assessments could be responsible for decommissioning costs. 

ADI said several of those requirements are difficult to operationalize, citing undefined terms such as “ideological dogmas” and unrealistic expectations for model accuracy. 

The group argued that strict “truthfulness” standards do not reflect the probabilistic nature of generative AI systems and recommended shifting to a “reasonable efforts” framework instead. 

SIIA similarly called for a more collaborative evaluation approach, proposing upfront benchmarking of models against government standards followed by shared results and joint improvements. 

To address concerns, ADI urged GSA to align its guidelines with the National Institute of Standards and Technology (NIST) AI Risk Management Framework, clarify evaluation criteria, and limit vendor liability for system performance. 

“ADI and its member companies stand ready to engage in further dialogue to develop workable solutions that protect Government interests while preserving Contractors’ ability to deliver innovative, high quality AI services at scale,” ADI said.  

SIIA wrote that it “remains committed to working with the GSA to develop a framework that ensures AI systems are secure and trustworthy while remaining firmly rooted in the commercial-first mandate that has historically driven American technological leadership.” 

The changes proposed by GSA followed a dispute between the Department of Defense and Anthropic, after the AI company declined to loosen safeguards that prohibit the use of its technology for applications such as fully autonomous weapons systems or mass domestic surveillance.  

President Donald Trump barred federal agencies from using Anthropic AI tools in response, and GSA issued its proposed guidelines shortly afterward. 

Weslan Hansen
Weslan Hansen is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.