
Legislation introduced in the Senate on Thursday would require artificial intelligence systems used in healthcare settings to include an option to override AI-made decisions, preserving human judgment in health decisions.
The bill would help healthcare workers correct incorrect AI decisions; many such systems currently offer no option for human override, according to Sen. Ed Markey, D-Mass., who introduced the Right to Override Act during a Senate Health, Education, Labor, and Pensions (HELP) Committee hearing.
“Patients like humans making decisions about their healthcare, not robots or AI models,” Sen. Markey said during the hearing.
“When I speak to healthcare workers about AI infiltrating their workplaces, that is the fear and often the reality that I hear from doctors who see that untested AI systems can supercharge discrimination by putting out racially-biased results, nurses who cannot override the recommendations of a clinical decision software without the permission of a supervisor who is three floors away, [and] healthcare workers who are sanctioned for exercising judgment that differs from an algorithm,” he continued.
Healthcare employers and institutions that use AI-driven clinical decision tools must ensure that feedback mechanisms exist to report incorrect or biased AI outputs, and that training is provided on AI use, bias awareness, and override procedures, according to the bill’s text.
It also requires that the identity of clinicians who use the override option be kept confidential, establishes whistleblower protections for those who report violations of the law, and protects clinicians who decide to override the AI.
Dr. Russ Altman, professor of bioengineering, genetics, medicine, and biomedical data science at Stanford University, and a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence, told lawmakers during the HELP hearing that beyond assisting with clinical diagnoses, AI scribes help document notes during patient visits. They also help patients make sense of complex medical terminology so they can better understand their diagnoses, he said.
For AI that helps make clinical decisions – which Sen. Markey’s bill covers – Dr. Altman explained that AI analyzes patient data and then identifies emerging conditions, such as cancer, that may not otherwise be apparent.
“While I’m excited about these and other applications, we will only realize their full benefits if healthcare systems build teams that thoroughly evaluate these tools for clinical effectiveness, fairness, and safety,” Dr. Altman warned lawmakers. “Every healthcare organization should have a process for vetting AI software and integrating [it] appropriately into clinical settings.”
Sen. Markey’s bill also requires every hospital or clinic that uses AI to make patient-related decisions to set up an internal committee to oversee how those systems are used.
There is currently no federal legislation passed by Congress that specifically regulates the use of AI in healthcare settings. However, use of the technology is expanding, with several federal agencies – including the Department of Veterans Affairs and the National Institutes of Health – developing and deploying their own AI systems.