With generative AI (GenAI), civilian agencies and the military have myriad opportunities to transform how they approach innovation, efficiency, and situational awareness. But GenAI is a double-edged sword: it also gives aggressive adversaries a prime opportunity to attack the massive Federal IT space. In response, Federal organizations are leveraging AI to supplement their human cybersecurity talent.

MeriTalk recently sat down with Cisco’s Christina Hausman, who has more than 20 years of cybersecurity product expertise, to discuss how GenAI is changing the cybersecurity landscape and how AI-enabled security solutions not only protect agency networks and data, but also ensure the safe and responsible use of GenAI tools.

MeriTalk: GenAI is enhancing the way many of us approach work – automating tasks, boosting creativity, and improving communication – and it is also changing how threat actors operate. What attack methods have been most influenced by GenAI? What do you expect to see in the future?

Hausman: From a cybersecurity perspective, GenAI creates massive opportunities for both novice and sophisticated attackers. It lowers the barrier to entry for novices and lets both groups work faster and develop more effective attack campaigns. Attackers can use AI to generate new malware types based on zero-day vulnerabilities and bypass traditional detection methods. For sophisticated attackers, AI can make it easier and faster to do reconnaissance and examine exfiltrated data – and use that data to tailor their next attack.

In the future, GenAI and large language models will make it much harder to detect phishing attacks. At one time, phishing emails were often easy to spot because of grammar or spelling errors, but AI tools can now craft flawless emails, boosting the chances that an attempt will be successful. Deepfake audio adds another layer – imagine an employee getting a realistic-sounding audio clip from a traveling senior executive who “urgently” needs network access. AI is going to make it challenging for all of us to determine what’s legitimate and what’s not.

AI will also help attackers quickly find systems that haven’t been patched. When a new vulnerability is announced, administrators may have limited time to qualify the software patch. They always want to balance security risks against the potential for a patch to disrupt production, but with vulnerable systems, they may need to patch and pray – and quickly roll out a fix before they are fully comfortable that it won’t affect a key production resource.

MeriTalk: As you noted, attackers are using AI for faster, more powerful attacks. How do agencies adapt their own AI strategies to identify and mitigate these threats proactively?

Hausman: The Federal government’s vast attack surface makes it a prime target for not only financially motivated attacks, but also sophisticated nation-state-sponsored cyberattacks where the goal is to disrupt operations and steal classified data. Cybersecurity staffing shortages, currently seen in the public and private sectors, make AI an essential tool for managing overwhelming data volumes in the security operations center and hardening defenses against new AI-enabled attack and exfiltration methods.

Cybersecurity vendors play a crucial role in educating organizations on how AI can augment security teams and reduce noise – the huge amounts of irrelevant or low-level security alerts and data that create alert fatigue and distract analysts from detecting actual malicious activity.

AI-infused solutions can automate analysis and pinpoint attack patterns, allowing agencies to focus their resources on critical vulnerabilities and areas of highest risk. Machine learning can be implemented to proactively identify weaknesses in systems and networks, enabling rapid patching to reduce breach risks. AI can also automate risk assessments, helping IT teams prioritize security efforts based on the likelihood and potential impact of attacks.
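
To make the last point concrete, here is a minimal sketch of automated risk prioritization. The asset names, likelihood estimates, and impact weights are hypothetical; a real assessment would derive them from scan data and threat intelligence rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str          # hypothetical asset name
    likelihood: float   # estimated probability of exploitation, 0.0-1.0
    impact: int         # business impact if exploited, 1 (low) - 5 (critical)

    @property
    def risk_score(self) -> float:
        # Classic risk model: risk = likelihood x impact.
        return self.likelihood * self.impact

findings = [
    Finding("public-web-server", likelihood=0.9, impact=4),
    Finding("internal-wiki", likelihood=0.4, impact=2),
    Finding("payroll-database", likelihood=0.3, impact=5),
]

# Patch the highest-risk findings first.
for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.asset}: risk {f.risk_score:.1f}")
```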

MeriTalk: How can AI be used to streamline the implementation of security policies?

Hausman: Machine learning algorithms can run continuously to analyze network traffic, user behavior, and system logs to identify suspicious activities and automatically block malicious IP addresses. In addition, AI tools can immediately shut down compromised systems or user accounts.
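
As a rough illustration of that kind of pipeline – assuming scikit-learn and invented flow features, not any vendor’s actual detection logic – an unsupervised model can flag outlier traffic and trigger a block action:

```python
# Minimal sketch: flag anomalous network flows and block source IPs.
# Feature choices and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, connection_count, distinct_ports]
baseline_flows = np.random.default_rng(0).normal(
    loc=[5_000, 8_000, 20, 3], scale=[1_000, 2_000, 5, 1], size=(500, 4)
)

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline_flows)

new_flows = {
    "10.0.0.15": [5_200, 7_900, 22, 3],      # looks like baseline traffic
    "10.0.0.99": [900_000, 1_200, 400, 60],  # large, exfiltration-like outlier
}

for src_ip, features in new_flows.items():
    if model.predict([features])[0] == -1:   # -1 marks an anomaly
        print(f"blocking {src_ip}")          # stand-in for a firewall API call
```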

We’re also seeing greater use of AI in intent-based security, which enables administrators to define security goals in plain language. The system then analyzes this intent and automatically generates configurations that enforce the desired security policies.
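
A toy sketch of that intent-to-policy step might look like the following. The keyword matching is a stand-in for the trained language model a production system would use, and the group and category names are hypothetical:

```python
# Toy illustration of intent-based security: translate a plain-language goal
# into a policy object. All names below are hypothetical.
def intent_to_policy(intent: str) -> dict:
    intent = intent.lower()
    policy = {"action": "allow", "scope": "all-users", "log": True}
    if "block" in intent or "deny" in intent:
        policy["action"] = "block"
    if "contractor" in intent:
        policy["scope"] = "contractor-accounts"   # hypothetical user group
    if "generative ai" in intent or "genai" in intent:
        policy["app_category"] = "generative-ai"
    return policy

print(intent_to_policy("Block generative AI apps for contractor accounts"))
# {'action': 'block', 'scope': 'contractor-accounts', 'log': True,
#  'app_category': 'generative-ai'}
```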

MeriTalk: The rapid adoption of GenAI tools like ChatGPT presents exciting opportunities for government agencies to streamline workflows and improve efficiency. However, these tools also raise concerns about potential security breaches and unintended consequences. What challenges will agencies face as Federal employees adopt and use these tools?

Hausman: Government organizations must protect massive amounts of intellectual property and sensitive information. This requires effective data loss prevention (DLP) and application visibility and control capabilities to enable administrators to discover the GenAI tools used in their environments and assess the risks these tools may pose to data security and privacy.
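
For illustration, a heavily simplified DLP check on text bound for a GenAI tool could look like this sketch. The two patterns shown are assumptions for the example; real DLP engines combine many detectors with contextual and ML-based scoring:

```python
# Simplified DLP check: scan an outbound GenAI prompt for patterns that
# suggest sensitive data. Patterns and labels are illustrative only.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "classification_marking": re.compile(r"\b(SECRET|TOP SECRET|CUI)\b"),
}

def dlp_violations(text: str) -> list[str]:
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

prompt = "Summarize this CUI memo for me..."
if hits := dlp_violations(prompt):
    print(f"blocked outbound prompt; matched: {hits}")
```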

IT teams must also consider the limited transparency of GenAI tools – the lack of insight into how these tools reach conclusions, the data they analyze to derive a particular conclusion, and whether that data is valid. They must evaluate whether they can trust the tools’ decisions.

It is extremely important for IT and cybersecurity teams to evaluate the pros and cons of GenAI and put technology policies and processes in place to ensure its safe usage. Most organizations that deal with sensitive information have data lifecycle models; this same type of model needs to be developed around the use of GenAI.

MeriTalk: As more agencies incorporate GenAI, how does Cisco help agencies enforce policies around GenAI and prevent data loss?

Hausman: With Cisco Umbrella for Government, security administrators gain visibility and granular control over GenAI app usage in their environment. They can monitor who uses GenAI tools and how frequently. With that understanding in place, an administrator can assess risk to the agency by monitoring that use across all data classifications of concern. From there, Umbrella policy controls can govern GenAI application access and usage – for example, a DLP policy that prevents sensitive data from leaking through GenAI. If an agency has significant concerns about data leakage, blocking all AI usage may make sense.

Security teams can discover, block, allow, or control more than 180 GenAI applications through Umbrella for Government’s domain name system (DNS)-layer security, secure web gateway, and DLP policies.
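
Conceptually, DNS-layer enforcement of a GenAI policy can be pictured as in the sketch below. The domain-to-category mapping and policy table are hypothetical placeholders, not Umbrella for Government’s actual feeds or API:

```python
# Illustration of DNS-layer enforcement: map the requested domain to an
# application category, then apply the agency's policy for that category.
APP_CATEGORIES = {
    "chat.openai.com": "generative-ai",
    "example-genai-tool.com": "generative-ai",   # hypothetical domain
    "weather.gov": "government",
}

POLICY = {"generative-ai": "block"}              # default for others: allow

def dns_decision(domain: str) -> str:
    category = APP_CATEGORIES.get(domain, "uncategorized")
    return POLICY.get(category, "allow")

print(dns_decision("chat.openai.com"))   # block
print(dns_decision("weather.gov"))       # allow
```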

MeriTalk: How does Umbrella for Government incorporate AI and machine learning to uncover new threats and attacks?

Hausman: Early identification of attack precursors is crucial for rapid response and attack prevention. DNS-layer security leverages machine learning to proactively identify malicious domains, acting as a first line of defense that filters threats before they reach the network or endpoints. And by analyzing internet activity patterns, Umbrella for Government automatically identifies attacker infrastructure being staged for the next threat and blocks those domains.
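
One commonly cited signal in this kind of domain analysis is character entropy: algorithmically generated domains tend to look random. The sketch below shows that single heuristic only; production systems combine many features – domain age, hosting infrastructure, co-occurrence patterns – in trained models.

```python
# Sketch of one signal used in ML-based domain reputation: algorithmically
# generated domains (DGAs) tend to have high character entropy. The two
# domains below are examples chosen for contrast.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

for domain in ["meritalk.com", "xk3j9qzv7h2plm.com"]:
    ent = shannon_entropy(domain.split(".")[0])
    print(f"{domain}: entropy {ent:.2f}")
# The higher entropy of the second label is the kind of feature a trained
# classifier would weigh alongside other reputation signals.
```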

Umbrella for Government security protections are powered by Cisco Talos, one of the largest non-government threat research and intelligence teams. Talos provides continuously updated protections by using machine learning and deep learning models to analyze the breadth and depth of data and threat intelligence it receives from the suite of Cisco security products and third-party partnerships.

MeriTalk: With GenAI constantly evolving, how can Federal agencies navigate the changing risk environment and scale their security posture accordingly?

Hausman: First, it’s important to develop an AI strategy that’s aligned to the agency’s mission and objectives, especially around security. Don’t adopt AI just because everyone else is doing it. Determine the use case for leveraging AI and the desired outcome the agency is trying to achieve.

Second, make sure that cybersecurity professionals are in place who understand AI and can evaluate AI use, implement policy controls for it, and continually refine those policies. Agencies also need to research vendors and solutions diligently to make sure their AI tools provide robust DLP capabilities and application visibility and controls. Continuous monitoring and multiple layers of security defenses will minimize the risks and maximize the effectiveness of GenAI tools.
