
The 2024 White House memorandum on advancing U.S. leadership in artificial intelligence (AI) underscores the vital role AI now plays in national security. Written for the vice president, cabinet officials, and agency heads, the document stresses the need to strengthen AI capabilities while ensuring the safety and security of AI systems. Protecting sensitive data, ensuring the ethical use of AI, and fostering innovation are central to maintaining a competitive edge in the global arena as U.S. adversaries also advance their AI knowledge.
In a recent interview with MeriTalk, Aaron Mulgrew, solutions architect at Everfox, examines the governmental imperative for safeguarding sensitive data amid the widespread AI adoption explored in the White House memo. Mulgrew, who works with governments around the world to secure their systems, explores AI’s transformative impact and discusses potential use cases, while highlighting adoption challenges and outlining best practices for successful AI integration.
MeriTalk: How is AI changing the technology landscape?
Mulgrew: AI is changing not only the technology landscape but the world as we know it. It can make our lives better in every conceivable way, from helping to develop new vaccines faster to making our journeys to work quicker and safer. In human terms, it’s as if someone had read all the material on a subject and could use it to give consistently good answers to virtually any question. That’s what makes AI great at spotting patterns in images or driving a car without hitting anything. It uses all available information and – unlike people – never tires.
Yet AI also introduces potential security risks. To work well, it needs vast amounts of “good” data. If trained on bad data, AI systems will give bad results, and even small amounts of carefully selected bad data can skew those results.
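As a rough illustration of that point – not any particular system's implementation – the following sketch uses scikit-learn on synthetic data to show how a small, carefully selected set of mislabeled training examples can degrade a model, in line with Mulgrew's warning:

```python
# Minimal sketch: a small fraction of carefully chosen poisoned labels
# degrading a classifier. Illustrative only, on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Carefully selected" bad data: flip the labels of the ~5% of training
# points the clean model is most confident about, rather than random ones.
conf = clean.predict_proba(X_train)[:, 1]
n_poison = int(0.05 * len(y_train))
idx = np.argsort(conf)[-n_poison:]          # most confidently class 1
y_poisoned = y_train.copy()
y_poisoned[idx] = 0                         # flip those labels

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```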
MeriTalk: Why is it so urgent, for national security and other purposes, for Federal agencies to increase their adoption of AI?
Mulgrew: I think we are now in the “race for AI” – meaning that AI can be an extremely powerful technology for both adversaries and allies. The Ukraine conflict has demonstrated how AI can be leveraged in national security contexts, such as analyzing imagery to support situational awareness. These use cases highlight the broader potential for AI in defense applications while also underscoring the importance of responsible adoption and governance.
As time goes on, there will inevitably be winners and losers in this race. The winners will adopt AI more quickly and efficiently than others; the losers either won't adopt the technology at all, or won't do so as quickly or as efficiently.
As more organizations within the civilian and defense sectors adopt AI, we will see an arms race between AI systems – and the systems that are more granular and more efficient will win.
MeriTalk: What are some important government use cases for AI?
Mulgrew: This interview could be at risk of becoming “War and Peace” if I outlined them all. But I can boil them down into three categories: defense, intelligence, and civilian government.
In defense, AI can transform situational awareness on the battlefield. A modern battlefield has a huge number of sensors, and there's a real technological problem in dealing with that much data. AI will help work out the operational picture in near real time, whether at a command post or headquarters, or even at the dismounted-soldier level.
Another defense use case is at the edge, where a mesh network of AI systems would provide operational insights, even within a disconnected hostile environment where normal communications aren’t possible.
Intelligence collection could become much more granular with the help of AI. Using a graph-model approach, AI can provide additional context and make connections between entities that may not be obvious to a human analyst.
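As a toy sketch of that graph-model idea – with invented entities, assuming the networkx library – a graph can surface an indirect chain of connections that no single record makes obvious:

```python
# Toy entity graph: all names and relationships are invented for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Person A", "Shell Co 1"),
    ("Shell Co 1", "Bank Account X"),
    ("Bank Account X", "Shell Co 2"),
    ("Shell Co 2", "Person B"),
    ("Person A", "Phone 555-0101"),
    ("Phone 555-0101", "Person C"),
])

# No record links Person A and Person B directly; the graph reveals the chain.
path = nx.shortest_path(G, "Person A", "Person B")
print(" -> ".join(path))
# Person A -> Shell Co 1 -> Bank Account X -> Shell Co 2 -> Person B
```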
And for civilian government, AI can help automate the "boring stuff," such as filing your taxes or renewing your passport. Background checks can become much more accurate through generative AI, and citations embedded within large language model (LLM) responses dramatically reduce the risk of hallucinations. Modern agentic workflows mean that people working in these departments no longer need to spend significant amounts of time on manual, time-consuming tasks like finding documents and working out which parts are relevant.
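A rough sketch of that citation idea – with an invented three-document corpus and TF-IDF retrieval standing in for a production search stack – shows how each snippet handed to an LLM can carry a source ID the answer must cite:

```python
# Citation-grounded retrieval sketch: every snippet carries a document ID,
# so claims in the generated answer can be traced back and verified.
# Corpus and IDs are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "doc-001": "Passport renewals require form DS-82 and a recent photo.",
    "doc-002": "Background checks must verify employment for the past seven years.",
    "doc-003": "Tax filings are due April 15 unless an extension is granted.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(corpus.values())

def retrieve_with_citations(query: str, top_k: int = 2):
    """Return (citation_id, snippet) pairs to embed in the LLM prompt."""
    scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    ranked = sorted(zip(corpus.keys(), corpus.values(), scores),
                    key=lambda t: t[2], reverse=True)
    return [(doc_id, text) for doc_id, text, _ in ranked[:top_k]]

for doc_id, text in retrieve_with_citations("how do I renew a passport?"):
    print(f"[{doc_id}] {text}")
```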
MeriTalk: What challenges do government organizations face as they adopt AI?
Mulgrew: There are critical ethical, moral, and operational issues alongside standard legal issues, especially in defense applications. The most extreme example I can think of is warfare, where decision-making in autonomous systems requires careful safeguards to prevent unintended consequences, ensuring that AI enhances rather than replaces human oversight.
In addition, several legal questions arise regarding the use of AI and intellectual property rights. For example, can content created by an AI system be protected under IP laws? If so, who owns those rights – the human users or operators of the AI tool? Can AI models be trained on copyrighted works? What if an AI system generates content that is substantially similar to copyrighted material in its training data? In short, the legal landscape surrounding AI and intellectual property remains dynamic and continues to evolve as courts and policymakers address these emerging issues.
MeriTalk: What are some best practices government should look at as they adopt AI?
Mulgrew: Responsible adoption of AI, overseen by a dedicated ethics committee with the power to veto any new deployment, is a good place to start. A few companies lead the way with dedicated ethics teams, such as OpenAI and Google.
Governments should also retrain staff – those in technical and non-technical roles alike – to make the best use of AI. I think there may be an over-emphasis on training only technical engineers. Non-technical employees will increasingly be exposed to AI systems, and they will benefit hugely from learning how to write better prompts, for example.
MeriTalk: How does Everfox help protect AI systems?
Mulgrew: Everfox protects AI systems across a wide range of use cases. Within the machine learning and AI community, there has been a lot of emphasis on ensuring that AI models don't become compromised. Everfox solutions are designed to enhance the integrity of AI tools and methodologies by implementing security measures that reduce the risk that ingested bad data could corrupt their analytic engines. Everfox also provides scalable, secure access to multiple networks, which enables users to leverage AI capabilities at multiple classification levels from a single device.
For AI systems, especially those within a government cross domain context, securing the AI system itself is accomplished in several ways.
First, data feeds need to be adequately cleaned on entry to the AI system to reduce its attack surface. A data feed may be a repository of images or a continuous stream, such as video from a drone. Everfox Cross Domain Solutions (CDS), such as High Speed Guard, Trusted Gateway System, and High Speed Verifier (HSV), can support any of the use cases I mentioned.
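Everfox's guard technology is proprietary, but as a generic illustration of the "clean on entry" idea, content disarm and reconstruction re-renders an inbound image from raw pixels only, discarding anything else embedded in the file. A minimal sketch, assuming the Pillow library:

```python
# Generic content-disarm-and-reconstruction sketch for an image feed.
# Not the Everfox implementation -- just the idea: re-render from pixels only,
# so embedded metadata, appended payloads, or malformed structures are dropped.
from io import BytesIO
from PIL import Image

def sanitize_image(raw_bytes: bytes) -> bytes:
    """Decode an inbound image and re-encode a fresh copy from raw pixels."""
    src = Image.open(BytesIO(raw_bytes))
    src.load()                             # force full decode; raises on corrupt data
    clean = Image.new(src.mode, src.size)
    clean.putdata(list(src.getdata()))     # copy pixel data only -- no metadata
    out = BytesIO()
    clean.save(out, format="PNG")          # re-encode in a single known-good format
    return out.getvalue()
```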
Second, CDS solutions help ensure secure access and transfer to and from generative AI at any classification level, so that decision-makers have access to all the information required to make informed, timely decisions – regardless of the classification level at which that data and the AI engine reside.
The AI model itself also needs to be cleaned. Models are inherently insecure, with more than 80 percent containing some sort of active code. Everfox can support modern model formats such as Safetensors and NumPy to automatically clean the model as it crosses a CDS boundary.
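To make the model-cleaning idea concrete – as a generic sketch, not the Everfox CDS mechanism – a pickle-based PyTorch checkpoint can carry executable code, whereas a safetensors file stores only tensors and metadata. Re-serializing the weights strips anything else:

```python
# Sketch: strip executable content from a PyTorch checkpoint by
# re-serializing the raw tensors as safetensors. Illustrative only.
import torch
from safetensors.torch import save_file

def clean_checkpoint(src_path: str, dst_path: str) -> None:
    # weights_only=True (recent PyTorch) refuses to unpickle arbitrary objects
    state_dict = torch.load(src_path, map_location="cpu", weights_only=True)
    # safetensors holds tensors and metadata only; no code can ride along
    # (models with tied weights may need safetensors.torch.save_model instead)
    save_file(state_dict, dst_path)

# clean_checkpoint("model.pt", "model.safetensors")
```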
Then, using Everfox solutions, the model can exist at a higher classification than the human prompting it. In this scenario, it's important to consider a CDS that can provide contextual checks on the model response – one that doesn't just do a syntactic evaluation of the answer.
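The difference between syntactic and contextual checks can be sketched as follows – with an invented keyword list and training examples, and nothing like the sophistication of a real guard's policy engine:

```python
# Syntactic vs. contextual response checks at a cross-domain boundary.
# Keyword list and training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BLOCKLIST = {"codeword-alpha", "site-location"}

def syntactic_check(response: str) -> bool:
    """Pass/fail on literal keywords only -- easy to evade by paraphrasing."""
    return not any(term in response.lower() for term in BLOCKLIST)

# Contextual check: score a response against labeled releasable/sensitive
# examples rather than a fixed keyword list.
train_texts = [
    "The cafeteria menu was updated on Monday.",                   # releasable
    "Quarterly IT maintenance is scheduled for Friday.",           # releasable
    "The asset is operating near the northern facility.",          # sensitive
    "Movement of personnel to the forward site begins at dawn.",   # sensitive
]
train_labels = [0, 0, 1, 1]

contextual_check = make_pipeline(TfidfVectorizer(), LogisticRegression())
contextual_check.fit(train_texts, train_labels)

response = "Staff will relocate to the forward site before sunrise."
print("syntactic pass:", syntactic_check(response))     # passes -- no keywords hit
print("contextual sensitive score:",
      contextual_check.predict_proba([response])[0][1])  # scored against examples
```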
Additionally, in a world where new threats are frequently powered by AI, robust insider risk programs act as a crucial final line of defense, proactively identifying and mitigating internal generative AI misuse before it evolves into a significant security breach. The Everfox Evershield solution is a comprehensive insider risk management platform that monitors user activity, identifies suspicious behaviors, and gives analysts and investigators the tools they need to collect, explore, and gain insight into risky behavior.

Everfox solutions are designed to improve your organization's security posture and readiness with risk scoring, anomaly detection, and risk-adaptive protection. As the threat landscape continues to evolve, our AI-enhanced technology – which leverages machine learning and sentiment analysis to detect potential risks before they materialize – empowers agencies to proactively manage insider threats with precision, ensuring secure and productive environments and safeguarding organizational integrity.
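As a generic illustration of anomaly-based risk scoring – invented feature values, and not how Evershield works internally – an unsupervised model learns a user's baseline and flags days that deviate sharply from it:

```python
# Generic anomaly-based user risk scoring sketch using scikit-learn.
# Feature values are invented; not any product's internal implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-user daily features: [files downloaded, MB uploaded, after-hours logins]
baseline = np.array([
    [12, 5, 0], [9, 3, 0], [15, 8, 1], [11, 4, 0], [10, 6, 0],
    [14, 7, 1], [8, 2, 0], [13, 5, 0], [12, 4, 1], [9, 3, 0],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

today = np.array([[240, 900, 6]])   # sudden bulk download plus off-hours activity
print("anomaly score:", model.decision_function(today)[0])  # lower = more anomalous
print("flagged:", model.predict(today)[0] == -1)
```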