A new report from the Treasury Department on how the financial services sector is managing AI-driven cybersecurity risks reveals challenges familiar to many sectors – think workforce, data quality, and funding. But one challenge stands out from the rest because it gets down to the most basic level of understanding those risks: reaching broad agreement on what artificial intelligence means, and adopting common terms that allow for greater understanding of the technology.

Treasury’s report released today – Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector – was ordered up by the Biden administration’s AI executive order published last October. The report was prepared by the agency’s Office of Cybersecurity and Critical Infrastructure Protection, which carries out Treasury’s sector risk management agency responsibilities for the financial services sector.

The agency pledged more to come on the AI front, as it plans to “work with the private sector, other federal agencies, federal and state financial sector regulators, and international partners on key initiatives to address the challenges surrounding AI in the financial sector.”

“While this report focuses on operational risk, cybersecurity, and fraud issues, Treasury will continue to examine a range of AI-related matters, including the impact of AI on consumers and marginalized communities,” the agency said.

The new report offers up ten points that it calls “significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector.”

Among them are growing AI capability gaps between large and small financial institutions, including in the ability to develop in-house systems and to gather enough data to train models, particularly data related to financial fraud. Another is the AI workforce talent gap generally, along with a “technical competency gap” on teams that manage AI risks, including those in legal and compliance fields.

On top of that, the Treasury report points to “a lack of consistency across the sector in defining what ‘artificial intelligence’ is.”

“Financial institutions, regulators, and consumers would all benefit greatly from a common AI-specific lexicon,” the report says.

Treasury said the new report is based on interviews with 42 financial services and tech companies and reveals that “many participants stated that ‘artificial intelligence’ itself is an imprecise term and could mean many different things. There was little agreement among participants about what the label meant.”

“What everyone did agree on, however, was the need for a common lexicon to promote common understanding,” the report says. “A common lexicon would not only facilitate appropriate discussion with third parties and regulators but could help improve understanding of the capabilities AI systems may have to improve risk management or to amplify new risks.”

“Careful consideration of terminology may help address the current lack of clarity around measuring and identifying risks, especially with the rapid adoption of Generative AI,” the report says.

“For instance, one firm uses ‘augmented intelligence’ to shift the responsibility to the user by emphasizing that the system is augmenting the user’s intelligence, rather than having its own intelligence,” the report says. “Similarly, the use of ‘hallucination’ to describe false outputs by Generative AI suggests these Generative AI systems intend meaning in their outputs when what they are supplying is probabilistic semantics. This anthropomorphism may misleadingly imply intention and create a false sense of trust in a system.”

As a first step toward addressing the need for a common lexicon, the new report includes a glossary based on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.

In a related finding, the report also flags the need for explainability for “black box” AI solutions.

“Explainability of advanced machine learning models, particularly generative AI, continues to be a challenge for many financial institutions,” the report says. “The sector would benefit from additional research and development on explainability solutions for black-box systems like generative AI, considering the data used to train the models and the outputs and robust testing and auditing of these models.”

“In the absence of these solutions, the financial sector should adopt best practices for using generative AI systems that lack explainability,” the report offers.
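The report stops short of endorsing any particular explainability technique. As a purely illustrative sketch of the kind of “explainability solutions” it calls for, the snippet below applies permutation importance, a common model-agnostic method, to a hypothetical classifier; the model, features, and data are invented stand-ins and are not drawn from the report.

```python
# Illustrative only: permutation importance on a hypothetical classifier.
# Nothing here comes from the Treasury report; the model and data are
# stand-ins for, say, transaction features scored by a fraud model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical training data.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance treats the model as a black box: it shuffles one
# feature at a time and measures how much accuracy degrades, attributing
# influence to each input without inspecting model internals.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop = {score:.4f}")
```

Model-agnostic methods like this one only approximate why a model behaves as it does, which is part of the reason the report argues that black-box systems such as generative AI still need dedicated research, testing, and auditing.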

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden Administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Under Secretary for Domestic Finance Nellie Liang upon release of the new report.

“Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud,” Liang said.

John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.