Members of the House Financial Services Committee voiced concern over the dangers of AI-fueled bias in housing services during a July 23 hearing. 

Committee members – including members of its bipartisan Working Group on Artificial Intelligence – heard testimony from six witnesses in a discussion of the benefits and risks of AI technology, with some warning about the dangers of “rushing legislation” on use of the technology.  

Committee Chairman Patrick McHenry, R-N.C., said he believes that AI cannot be put “back in the box,” and that it will become “embedded in our everyday lives.” The chairman said that Congress needs to carefully look at regulation to be at the “forefront” of AI development. 

Other committee members voiced concerns over whether AI can address bias in data that may affect prospective lessees’ housing outcomes and carry other financial consequences, as well as over the relationship between AI systems and existing consumer protection and anti-discrimination laws. 

Rep. Steven Horsford, D-Nev., said that predictive AI systems could result in “automated redlining” based on biased or outdated data. Rep. Brad Sherman, D-Calif., warned that using AI for tenant screening also eliminates the “human appeal” of “dealing face-to-face” with potential landlords over prior criminal records or evictions, leaving “the computer” to decide. 

Witnesses offered up potential solutions and areas where further regulation could be examined in response to the lawmakers’ concerns.  

Ondrej Linda, the senior director of personalization AI at Zillow, said that AI has the potential to decrease bias and be “more objective” if “used responsibly.” 

Lisa Rice, the president and chief executive officer of the National Fair Housing Alliance (NFHA), said that AI has the potential to help fight discrimination if the data it uses is accurate and up to date, calling AI the “new human civil rights frontier.”  

Rice explained that tenant screening often uses data “highly correlated with race,” which should be “restrained and limited” and held to “higher standards and protocols.” Biased data used by AI systems can also impact dynamic pricing models, potentially denying housing opportunities to those using housing vouchers.  

“Bad data in means bad data out,” said Linda when discussing the need to improve data quality and accuracy.  

Some of the witnesses agreed on a “principles-based” legislative approach to ensure that future AI development adheres to the Fair Housing Act and other anti-discrimination laws.  

Vijay Karunamurthy, the chief technology officer of Scale AI, recommended using AI systems that incorporate “robust and diverse perspectives,” and implementing “testing” protocols to address and understand system bias. He also highlighted the importance of human oversight. 

This week’s hearing followed a bipartisan report from a dozen committee members released last week on AI applications in financial services and housing. The report concluded that AI systems fall under anti-discrimination laws and other existing regulations. 

Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.