A group of Democrats is reviving legislation that would restrict the use of biased and discriminatory artificial intelligence (AI) algorithms in critical decisions affecting Americans’ lives.

The AI Civil Rights Act, led by Sen. Ed Markey, D-Mass., has a substantial list of endorsements from civil rights, digital rights, labor, consumer, and social justice organizations. Its introduction comes as Republicans consider the inclusion of a moratorium on state-level AI laws in their defense spending bill.  

Currently, no federal regulations govern the development or deployment of AI, but 38 states adopted or enacted about 100 AI-related laws during the 2025 legislative session, according to the National Conference of State Legislatures.  

A 10-year pause on state AI regulations was floated earlier this summer as part of the One Big Beautiful Bill but was ultimately shelved after substantial debate and a final Senate vote of 99-1 to eliminate it from the final version of the bill.  

Supporters of the moratorium, which is also backed by President Donald Trump, argue that a ban on a patchwork of state AI laws would stimulate innovation rather than hinder it.

The AI Civil Rights Act would create a broad regulatory framework governing AI systems used to make critical decisions affecting people’s rights, opportunities, and access to essential services. That includes actions related to employment, education, healthcare, government benefits, criminal justice, elections, insurance, and public accommodations.

“We must address AI-powered bias and discrimination in the AI age,” Markey said in a statement. “Under the AI Civil Rights Act, America would show leadership in AI – not just technological leadership, but moral leadership. We cannot abandon our principles in reckless pursuit of technological superiority. Otherwise, we risk building a future where innovation races ahead, but justice falls behind.” 

Before a covered AI system could be released to market or deployed, developers would be required to conduct a preliminary harm evaluation. If harm is plausible, a full, independent audit would then be required to assess the AI’s design and methodology, its testing and training, and the model’s potential harms, among other considerations.

Deployers would also be required to conduct annual harm checks. If a harm has occurred or is likely to occur, an independent auditor would need to evaluate the harms caused, disparate impacts, data inputs, real-world versus expected outputs, and mitigation steps.

Other requirements under the act include public disclosures, notices on the AI’s use and risks, a public repository of all AI evaluation documents, and appeal processes for individuals who want to contest an AI’s decision.

In addition to Markey, the bill was introduced by Sens. Cory Booker, D-N.J., and Elizabeth Warren, D-Mass., and is co-led in the House by Reps. Pramila Jayapal, D-Wash., Yvette Clarke, D-N.Y., and Ayanna Pressley, D-Mass. 

“As AI innovation grows, it is incumbent on us all to prioritize the safety, rights, and opportunity of all people – especially the Black, brown, and marginalized communities who disproportionately bear the burden of biased and discriminatory systems,” Pressley said. “We cannot allow AI to be the latest chapter in America’s history of exploiting marginalized people.” 

The AI Civil Rights Act was first introduced in September 2024, but that initial version stalled in committee before the end of the 118th Congress. 

Weslan Hansen
Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.