An AI industry expert called on Congress this week to pass comprehensive Federal legislation covering a wide range of risks associated with the emerging technology, arguing that a patchwork of state and local AI laws will only stifle innovation.

“State and local regulators who don’t have the time and resources to delve into this area and understand the technology, understand the legal issues, understand the worker versus management – all of these considerations are important and that’s why I think AI cries out for a Federal uniform solution,” Bradford Newman, the leader of the AI Practice at Baker & McKenzie LLP and co-chairman of the AI Subcommittee for the American Bar Association, said on Oct. 31.

“It’s the Federal government that is uniquely positioned with its resources, and the folks who are serving in the Federal government to take the time to understand the issues we’re just exploring today,” Newman said during a Senate Committee on Health, Education, Labor, and Pensions Subcommittee on Employment and Workplace Safety hearing. “We’re going to have a better bipartisan resolution that meets all of the varying constituents’ legitimate needs if the Federal government acts versus the state and local patchwork we’re getting in every aspect of AI. I think it’s detrimental.”

Half of the states in the U.S. and the District of Columbia have proposed or enacted legislation surrounding the emerging technology, something that can pose a challenge for nationwide businesses, Newman said.

“The companies that want to use [AI] are being faced with a vexing and increasing patchwork of state and local laws, some of which are promulgated by folks who are less than informed on the technology,” Newman said. “This is creating a lot of headwinds for those who want to innovate.”

“A lot of the developers are scratching their head saying, ‘What do I have to do in California? What do I have to do in New York City? Should we be in New York City if that’s what we have to do?’ That’s the opposite of what we want as a society,” Newman said.

He continued, “We want clarity, we want efficiency, we want fairness, we want rational regulation. And we’re creating a hodgepodge of anti-competitive, anti-innovation, catch-as-catch-can all over the country and that isn’t desirable to fuel innovation.”

Newman warned that the top three national security risks posed by nefarious use of AI are state actors leveraging AI to influence domestic issues; AI-generated fake voices and images used to interfere with elections; and AI-enabled cyberattacks. That is why the AI expert is pushing Congress to regulate the emerging technology sooner rather than later.

“I’m anti-regulation by DNA, but this is an area where I think the Federal government ought to act responsibly and prudently and occupy the field so there is a uniform set of rules to do this responsibly that large and small companies alike can draw from and make sure they’re on the right side of the compliance line while innovating,” Newman said.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.