In a letter sent to Department of Homeland Security (DHS) Secretary Alejandro Mayorkas last week, a group of 142 organizations is calling on DHS to suspend, by Dec. 1, its use of AI technologies for immigration enforcement that do not comply with Federal requirements for responsible AI.

The Office of Management and Budget (OMB) requires agencies to comply with risk management practices for AI use cases that have been identified as rights-impacting or safety-impacting. These practices include having a process for testing and monitoring the AI's performance, consulting groups impacted by the AI during its development, and mitigating discrimination arising from the use of AI, among other requirements.

The immigrant rights, racial justice, government accountability, human rights, and privacy organizations argue in their letter that “DHS’s use of AI appears to violate federal policies governing the responsible use of AI, particularly when it comes to AI used to make life-impacting decisions on immigration enforcement and adjudications.”

“The impact and potential harm of DHS use of artificial intelligence on U.S. communities is not theoretical,” the letter reads. “According to reports including the agency’s own AI inventory and Privacy Impact Assessments, DHS and its subagencies use AI technologies to make critical decisions – from whether to deport, detain, and separate families, whether to naturalize someone, to whether to protect someone from persecution or torture.”

The letter lists examples of AI tools being leveraged by three of DHS’s components: U.S. Citizenship and Immigration Services (USCIS); Immigration and Customs Enforcement (ICE); and Customs and Border Protection (CBP).

According to the letter, USCIS has a “Predicted to Naturalize” AI tool that helps make decisions on citizenship eligibility. Its “Asylum Text Analytics” AI tool screens asylum and withholding applications and flags them as potentially fraudulent. Additionally, USCIS’s Fraud Detection and National Security Directorate plans to develop an AI tool to classify individuals as fraud, public safety, or national security threats in the immigration adjudication process.

The groups said that ICE uses “secretive” AI technologies for decisions on detention, deportation, and surveillance. CBP uses AI for biometric surveillance and, according to the groups, “has rapidly expanded its network of AI-enabled surveillance towers, sensors, and systems to track migrants at the border.”

“The stakes are high – DHS’s latest AI tools impact millions of people in the U.S.,” the letter concludes. “Given the historical discrimination, inaccuracies, and complexities of the immigration system, we have serious concerns that DHS’s AI products could exacerbate existing biases or be abused in the future to supercharge detention and deportation.”

OMB set a Dec. 16 deadline for all agencies to make their AI inventories publicly available.

According to OMB’s AI inventory guidelines, an agency chief AI officer (CAIO) may waive one or more of OMB’s required minimum risk management practices. A waiver must provide detailed justification, based upon a system-specific and context-specific risk assessment, that fulfilling the requirement would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations.

CAIOs must make a summary of each waiver and its corresponding justification publicly available on their websites.

In their letter, the 142 organizations urge DHS Chief AI Officer Eric Hysen not to issue waivers for these tools.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.