New research from MeriTalk and RSA Conference reveals that while 80 percent of cybersecurity decision-makers say accelerating AI adoption is critical to their organization’s resilience against evolving threats, just 31 percent say their organization is using AI for cybersecurity today.

The study – which compiled quantitative data from 100 Federal and 100 private sector cybersecurity decision-makers, as well as qualitative data from five in-depth interviews with senior cyber leaders – finds that 50 percent are actively working toward AI adoption.

“AI technologies, like machine learning (ML) and natural language processing (NLP), are not new to cybersecurity, but their applications are quickly evolving from optional enhancements to strategic necessities,” said Nicole Burdette, a principal at MeriTalk.

“We are already seeing AI users improve vulnerability detection and accelerate incident response times. The challenge for the next six to 12 months will be putting the right guardrails in place so organizations can maximize AI adoption and benefits while minimizing additional risk,” Burdette added.

The good news is that a majority of cybersecurity leaders implementing or utilizing AI – 54 percent – say they’ve accelerated incident response times. Meanwhile, 52 percent say they have successfully detected a vulnerability, and 50 percent have proactively responded to a threat.

When it comes to the division of responsibilities, cybersecurity leaders would prefer humans keep majority ownership of strategic planning, innovation, and governance. On the other hand, they say AI can take the lead on cyber risk assessments and threat detection and response.

However, autonomy may still be years away – just one in five say they fully trust AI to automate cybersecurity decisions.

“The impact of AI across our industry is seismic, and we’re clearly in the early days of understanding and adapting to the impact on teams and tools,” said Britta Glade, vice president of content and curation at RSA Conference.

“As evidenced in this research, the days and months ahead will be critical, and organizations must clearly and purposefully evaluate and define the boundaries and guardrails for how AI will be infused into workstreams and processes,” Glade added.

Notably, the research finds that critical policy gaps still exist in the AI space. Only 28 percent of cyber leaders describe their organization’s AI governance as robust.

Additionally, fewer than half have documented policies for decision-making models or formal ethical or program testing guidelines, and just 40 percent report policies specific to critical infrastructure.

To get the most out of AI efforts for cybersecurity, the report offers several recommendations for organizations. These include building in AI security from inception, starting with increased human communication and collaboration, and embracing change.

The Art of Human and AI Teaming in Cybersecurity report is underwritten by Fortinet Federal and Maximus.
