As industry and government officials push for further innovation in artificial intelligence technologies, legal uncertainty over who’s responsible when AI goes wrong may quickly become an obstacle to responsible development.

Speaking at a Center for Strategic and International Studies event on April 16, Andrew Freedman, chief strategic officer at Fathom, said that tort laws – which allow people to sue for harm caused by someone else’s actions – are emerging as a challenge to further AI innovation.  

Freedman said that while there is strong bipartisan agreement that AI needs “common sense guardrails” to help prevent situations where technology goes wrong, there’s little faith that government alone could provide them.  

“Everybody [says] well, why don’t we just let liability handle a lot of the load,” explained Freedman. “There’s a lot of great things about liability, but to rely on it to be an active guardrail for the AI industry doesn’t – won’t work.” 

He laid out a scenario in which an AI developer could face a negligence lawsuit years after building a model used in autonomous vehicles. A jury and state judge would then have to rely on “a whole bunch of facts of all the things you did and didn’t do 10 years ago as a way to foresee risk and to potentially mitigate risk” in order to decide whether the AI developer could be held at fault.  

“That’s not a great way to create good industry practices,” Freedman added.  

To address the need for standards, Fathom is pushing for what Freedman called “multi-stakeholder regulatory organizations (MROs)” – third-party bodies empowered to certify that companies are meeting a “heightened standard of care.”

Freedman explained that the certification could serve as a shield in tort cases, offering developers a way to demonstrate due diligence. 

“If you’re certified for being extra safe – which is an entirely optional certification – that then would count on the back end in a negligence tort claim … that you met a standard of care,” Freedman said.  

A certification process like MROs would also sidestep the lag built into legislation, where standards Congress sets can be outdated by the time they are signed into law. Even without new laws, tort law – which varies by state – is already on track to shape AI accountability, especially as AI agents act more independently.

“Tort law just exists, so that’s just coming,” Freedman said of courts assigning AI liability in the future.

Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.