Governments and policymakers shouldn’t put up unnecessary barriers to deploying artificial intelligence (AI) out of concern over perceived risks associated with the technology. Instead, policymakers should encourage innovation while crafting targeted solutions for specific problems as they arise, according to a report by the Information Technology and Innovation Foundation (ITIF), a science and technology policy think tank.

AI is a field of computer science devoted to creating computer systems that perform operations characteristic of human intelligence, such as learning and decision making. Its uses are vast and diverse, from rapidly analyzing large amounts of data, to detecting abnormalities and patterns in transactions, to extracting insights from datasets, such as the link between a gene and a disease.

Policy debates around AI are dividing into two camps: those who want to enable innovation and those who want to slow or stop it, according to the report, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence.” The precautionary principle is the idea that if a technological innovation carries a risk of harming the public or the environment, then those proposing the technology should bear the burden of proving it will not. If they cannot, governments should limit the use of the new technology until it is proven safe.

For example, stemming from fears that AI is inherently dangerous, some academics and professionals have proposed requiring that certain algorithms gain government approval before they can be used. In 2017, University of Maryland computer science professor Ben Shneiderman proposed the creation of a “National Algorithms Safety Board” to independently oversee the use of “major” algorithms through functions such as auditing, monitoring, and licensing. Under Shneiderman’s plan, any agency or organization that wanted to deploy an AI algorithm would first need to check with the safety board.

Attorney Andrew Tutt has made a similar proposal, calling for an agency that would have the power to “prevent the introduction of certain algorithms into the market until their safety and efficacy has been proven through evidence-based, premarket trials.”

However, these views ignore the fact that many agencies already have measures in place to assess the safety of AI algorithms, according to the report. The Food and Drug Administration “is already providing oversight of algorithms in medical devices, including a device that uses AI to analyze images of the eye to detect if diabetes patients may be developing diabetic retinopathy, which causes vision loss,” the ITIF report states.

“Even on a state level, we have seen examples of states start to regulate systems that use artificial intelligence, such as autonomous vehicles,” ITIF research assistant Michael McLaughlin, a co-author of the report, told MeriTalk. For instance, in 2014, California required a human driver to be behind the wheel of any autonomous vehicle, or self-driving car, tested on public roads. By 2018, as the technology improved, the state dropped that requirement.

Jordan Smith is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.