Making artificial intelligence (AI) explainable to the general public has proved challenging in recent years, and a natural starting point is identifying the high-consequence sectors that warrant future research and policymaker consideration.
Dr. Timothy Person, Chief Scientist and Managing Director for the Government Accountability Office (GAO), identified four high-consequence sectors at FCW’s Preparing Your Agency for AI and Automation forum on Wednesday. Those sectors include:
- Cybersecurity;
- Automated Vehicles;
- Criminal Justice; and
- Financial Services.
GAO is looking more closely at how AI systems can be made more secure without stifling innovation. AI is not only a potential target of cyberattacks but also a tool for detecting and defending against them.
Automated vehicles, of course, come with their own concerns. Dr. Person raised the question of who would be liable if an autonomous vehicle struck a pedestrian. These are ethical questions already being weighed in both the private and public sectors.
AI can improve the allocation of law enforcement resources in the criminal justice system but, according to GAO, it can also raise concerns about privacy and civil rights violations. Ranjeev Mittu, head of the Information Management and Decision Architectures Branch in the IT Division of the U.S. Naval Research Laboratory, noted that algorithms can be trained with the same biases as the people training them. Designers, he said, need to ask whether they are considering the complete distribution of data that the algorithm is meant to learn. Determining the options for assessing accuracy and the potential for bias will be essential moving forward.
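Mittu's point about training on the complete distribution of data can be made concrete with a minimal sketch. The function below compares each group's share of a training sample against its share of a reference population and flags underrepresented groups; the group names, shares, and threshold here are all hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter

def representation_gap(samples, population_shares):
    """For each group, return (population share - sample share).
    A positive gap means the group is underrepresented in training data."""
    counts = Counter(samples)
    total = len(samples)
    return {group: expected - counts.get(group, 0) / total
            for group, expected in population_shares.items()}

# Hypothetical training records labeled by a demographic attribute.
training_groups = ["A"] * 80 + ["B"] * 20
# Hypothetical shares of each group in the real population.
population = {"A": 0.6, "B": 0.4}

gaps = representation_gap(training_groups, population)
# Flag any group more than 5 percentage points underrepresented.
flagged = [group for group, gap in gaps.items() if gap > 0.05]
# Here group "B" is 20% of the training data but 40% of the population.
```

A check like this is only a first step; real bias assessments also examine model outcomes per group, not just input representation.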
Similarly, AI can improve the efficiency of the financial sector by improving client services and enhancing surveillance monitoring, but bias can arise here as well, for example in ensuring fair lending. Identifying the mechanisms to address ethical considerations, tradeoffs, and protections is key.