The bias and ethics of artificial intelligence (AI) applications are still being worked out on a case-by-case basis, and National Institute of Standards and Technology (NIST) IT Lab Chief of Staff Elham Tabassi said yesterday that a standard assessment process to minimize harm is not necessarily coming in the near future.

“One-size-fits-all is not going to work,” she said at an October 1 Brookings event. An assessment solution would have to demonstrate that specific requirements are fulfilled and that algorithmic processes are carefully crafted to reduce harm. As Tabassi explained, defining AI requirements is difficult because issues such as how bias is defined vary by use case.

Even if requirements can be defined, “You need some sort of testing mechanism, inspection, or audit, some way of knowing that it does what it says,” Tabassi added. This creates another roadblock, as testing mechanisms for algorithms and AI are lacking and technologists are unsure how to approach testing in a standardized way.

“The intent is really good but it’s a really difficult problem to address,” Tabassi said.

She continued that even efforts to label types of AI as “high risk” can be problematic. Blanket labels are difficult to apply correctly in a space such as emerging technologies, where the landscape varies and shifts.

“You may think that a face recognition algorithm is high risk,” Tabassi said as an example, “certainly it’s high risk for use of face recognition in law enforcement, but if I’m using face recognition to unlock my phone, maybe it’s not high risk.”

Assessments and labeling can also create a false sense of confidence in algorithms, she added. If an algorithm is not labeled as high risk, organizations may not take the necessary precautions to keep AI ethical and bias-free.

NIST is still accepting comments on its draft explainable AI principles, released in August. The document is NIST’s effort to build trust in AI systems by understanding the theoretical capabilities and limitations of AI, and by improving the accuracy, reliability, security, robustness, and explainability of the technology.

Katie Malone is a MeriTalk Staff Reporter covering the intersection of government and technology.