As artificial intelligence-assisted cyberattacks grow and deepfakes, data breaches, and other cybersecurity risks loom, AI will remain an asset to cyber defenders but will also expand the capabilities of threat actors, according to a new report on the future of the technology.

AI’s ability to sift through data noise, predict attacks, and identify threat actor profiles has advanced cybersecurity significantly. In a study surveying organizations’ use of AI, the Institute for Security and Technology (IST) found that AI offers defenders an advantage – if they stay ahead of the curve through continuous investment, innovation, and integration.

“As automation becomes increasingly embedded in essential government functions and life-sustaining infrastructure services – such as food supply chains, healthcare systems, and public safety mechanisms – global leaders are recognizing that it is more imperative than ever to harness AI for protection, better understand how it may be weaponized by bad actors, and address digital security threats that contribute to systemic global risk,” said IST.  

AI’s impact on cyber defense includes content analysis, software security, and big data management. However, challenges are also growing, as AI enables adversaries to identify targets, supercharge reconnaissance, and create deepfakes. 

“Bad actors’ reconnaissance and probing efforts will be increasingly automated with AI capabilities that can provide near ubiquitous and near real-time coverage of every device exposed to the public internet,” said IST, adding that organizations should minimize attack surfaces. 

“AI-enabled reconnaissance – which is always active, and does not tire nor require real-time human supervision – requires a full reckoning of an organization’s external attack surface,” the report says.   
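To make that reckoning concrete, a minimal defensive sketch in Python might check whether a handful of commonly exposed TCP ports answer on hosts an organization owns. The hostnames and port list below are hypothetical placeholders, and probes like this should only be run against infrastructure one is authorized to test.

```python
# Minimal external attack-surface audit sketch: check which common TCP
# ports accept connections on hosts an organization owns. Hostnames are
# placeholders; run this only against systems you are authorized to test.
import socket

HOSTS = ["www.example.com", "mail.example.com"]  # hypothetical assets
COMMON_PORTS = [22, 25, 80, 443, 3389, 8080]     # frequently exposed services

def open_ports(host: str, ports: list[int], timeout: float = 2.0) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

for host in HOSTS:
    exposed = open_ports(host, COMMON_PORTS)
    print(f"{host}: exposed ports {exposed or 'none detected'}")
```

Real attack-surface management tools go far beyond a port sweep, but the point stands: anything this trivial to enumerate is also trivial for an always-on, AI-driven scanner to find.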

Other pressing concerns include the weaponization of agentic AI – systems that autonomously pursue complex goals and workflows with limited human intervention – as well as complex code obfuscation and deobfuscation.

Newer threats include AI-generated polymorphic malware – code that mutates with each run while maintaining its function – which makes it “exceedingly difficult” to track. Malware such as BlackMamba and Deeplocker can “dynamically modify benign code” and “slip past current automated security systems that are attuned to look out for this type of behavior,” the report says.

“BlackMamba and Deeplocker illustrate that developing AI-enabled malware proofs of concept is not out of reach,” said IST. “While current open-source evidence does not indicate widespread use of AI for malware generation by adversaries, the potential for such threats looms large.” 
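To see why signature-based tracking struggles here, consider a deliberately benign Python sketch – not the techniques used by BlackMamba or Deeplocker themselves – that re-emits a trivial doubling function with fresh identifiers on every pass. A hash over the source, standing in for a static signature, never repeats, even though the behavior is unchanged.

```python
# Benign illustration of polymorphism: the same behavior is regenerated
# with randomized identifiers each time, so a static fingerprint of the
# code changes on every pass while its function stays identical.
import hashlib
import random
import string

# A trivial, harmless payload: a function that doubles its input.
TEMPLATE = "def {fn}({arg}):\n    return {arg} * 2\n"

def random_name() -> str:
    """Generate a random eight-letter identifier."""
    return "".join(random.choices(string.ascii_lowercase, k=8))

def generate_variant() -> str:
    """Emit functionally identical source with randomized identifiers."""
    return TEMPLATE.format(fn=random_name(), arg=random_name())

for _ in range(3):
    src = generate_variant()
    sig = hashlib.sha256(src.encode()).hexdigest()[:16]  # stand-in for a static signature
    scope: dict = {}
    exec(src, scope)  # define the freshly generated function
    fn = next(v for k, v in scope.items() if not k.startswith("__"))
    print(f"signature={sig}  output={fn(21)}")  # signature varies, output is always 42
```

A defense matching on static fingerprints misses every variant; the report’s deeper concern is that such malware can also slip past systems attuned to behavioral cues.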

Despite such risks, IST said the first-mover advantage from AI “seems to be squarely with western and likeminded governments and technology firms.”

“While the longer-term outlook is uncertain and might be dismal for poorly defended enterprises, by fully capitalizing on first-mover advantage, key ecosystem enablers and organizations they serve might establish a defensible posture that is prepared for whatever the future might bring,” the report concludes. 

Weslan Hansen is a MeriTalk Staff Reporter covering the intersection of government and technology.