
Artificial intelligence (AI) has moved beyond a theoretical concept to an operational imperative for the public sector. In the run-up to its recent conf.25 event, Cisco – through Splunk – underscored a pragmatic path forward: Put robust controls and oversight around AI systems, treat machine data as a strategic national asset, and deploy intelligent assistants that enhance efficiency and reduce operational burdens. The result, Splunk Senior Vice President and General Manager Kamal Hathi articulated, will be a profound transformation – a clear shift from human-centric workflows to agent-assisted operations across government and education.
The trust imperative: Ensuring accountable AI in public service
Public trust is paramount, and in the context of AI, it begins with comprehensive observability. Hathi emphasized that public sector organizations require deep visibility into the evolving AI technology stack – encompassing vector databases, GPU-intensive infrastructure, and large language models – alongside traditional performance baselines and stringent cost controls.
He highlighted the need for a “supervisory function for agents,” which is essentially “observability for agents.” Observability includes critical monitoring for AI hallucinations (incorrect or fabricated outputs) and optimizing token usage (“tokenomics”) to manage operational costs effectively.
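To make "tokenomics" concrete, the sketch below shows one way an agency might aggregate token counts and estimated spend across agent calls. This is a minimal illustration, not a Splunk feature: the per-token prices and call data are hypothetical, and real rates vary by model and contract.

```python
# Hypothetical per-1K-token prices (USD); actual rates vary by model and vendor.
PRICES = {"prompt": 0.003, "completion": 0.015}

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of a single LLM call from its token counts."""
    return (prompt_tokens / 1000) * PRICES["prompt"] + \
           (completion_tokens / 1000) * PRICES["completion"]

def tokenomics_summary(calls: list[tuple[int, int]]) -> dict:
    """Aggregate token usage and estimated spend across a batch of agent calls."""
    total_prompt = sum(p for p, _ in calls)
    total_completion = sum(c for _, c in calls)
    return {
        "calls": len(calls),
        "prompt_tokens": total_prompt,
        "completion_tokens": total_completion,
        "est_cost_usd": round(sum(request_cost(p, c) for p, c in calls), 4),
    }

# Illustrative batch: (prompt_tokens, completion_tokens) per agent call.
summary = tokenomics_summary([(1200, 300), (800, 150), (2500, 600)])
print(summary)
```

Feeding summaries like this into an observability pipeline is what turns token usage from an opaque line item into a monitorable operational signal.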
For the public sector, this reframes trust as a core discipline within site reliability engineering (SRE) and SecOps. AI pipelines must be meticulously instrumented, quality and model drift continuously tracked, and service levels rigorously monitored to ensure fairness, accuracy, and compliance.
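One common way to track model drift continuously is the Population Stability Index (PSI), which compares the distribution of a model's inputs or outputs against a baseline. The sketch below is a self-contained illustration using standard PSI thresholds; the sample data and bin count are arbitrary.

```python
import math

def psi(baseline: list[float], current: list[float], bins: int = 5) -> float:
    """Population Stability Index between a baseline and a current sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    lo, hi = min(baseline), max(baseline)

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Clamp into [0, bins-1] so out-of-range values land in the edge bins.
            i = min(max(int((x - lo) / (hi - lo) * bins), 0), bins - 1)
            counts[i] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    return sum((c - b) * math.log(c / b)
               for b, c in zip(bin_fractions(baseline), bin_fractions(current)))

baseline = [i / 100 for i in range(100)]        # evenly spread scores
stable = psi(baseline, baseline)                # identical distribution
shifted = psi(baseline, [0.8 + i / 500 for i in range(100)])  # concentrated high
print(f"stable={stable:.3f} shifted={shifted:.3f}")
```

An SRE team would alert when PSI for a production model crosses the drift threshold, treating it like any other service-level indicator.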
As new AI architectures are integrated into sensitive public services, Hathi’s message was clear: “enterprises [and public sector agencies] … need to know what exactly is happening,” from system performance to resource consumption, to maintain accountability and citizen confidence. This level of transparency is vital for adhering to regulatory frameworks – such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Cybersecurity Maturity Model Certification (CMMC), and FedRAMP – and to ethical AI guidelines, ensuring that AI-driven decisions are auditable and trustworthy, Hathi noted.
Machine data: The new strategic asset for public sector operations
Hathi observed that the most significant untapped AI resource isn’t text; it’s the vast ocean of machine data: the logs, metrics, traces, and telemetry generated by applications, networks, and devices across public infrastructure. Rather than attempting to consolidate petabytes of diverse data into a single, monolithic lake – a costly and often impractical endeavor for the public sector – Hathi outlined a federated approach: “What we’re talking about is taking Splunk to the data. Going where the data lives.”
With this approach, data is processed at the edge to distill critical signals from noise, then queried across disparate “ponds and puddles” in cloud storage (e.g., S3, Snowflake), on-premises systems, and specialized agency databases – without massive, disruptive data migrations. Under this model, Hathi previewed a “machine data lake” – a virtual, unified view that spans these federated sources. He also announced a forthcoming time-series foundation model designed for multivariate operational questions, slated for availability in the fall and intended to be fine-tuned with an agency’s private data. The Splunk-Snowflake partnership exemplifies this approach, allowing agencies to correlate operational context with machine telemetry via distributed queries – answering critical questions like “What’s the business impact of this IT incident on citizen services?” or “How will this infrastructure change affect public safety?” – without duplicating data. The approach is particularly valuable for public sector organizations managing sensitive data across varied jurisdictions and compliance requirements, Splunk said.
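The federated pattern – query each source where it lives, then correlate the results – can be sketched in a few lines. Everything below is a toy stand-in for illustration: the source functions, service names, and record fields are hypothetical, and in practice each function would be a live connection to a cloud store, warehouse, or on-prem index rather than in-memory data.

```python
# Toy stand-ins for federated sources. In a real deployment these would be
# in-place queries against remote systems, not local lists/dicts.
def query_cloud_logs(service: str) -> list[dict]:
    """Pretend cloud object-store telemetry (e.g., error counts per service)."""
    logs = [{"service": "permits", "errors": 42}, {"service": "dmv", "errors": 3}]
    return [r for r in logs if r["service"] == service]

def query_onprem_cmdb(service: str) -> dict:
    """Pretend on-premises CMDB holding business context for each service."""
    cmdb = {"permits": {"owner": "Licensing Dept", "tier": "citizen-facing"}}
    return cmdb.get(service, {})

def federated_impact(service: str) -> dict:
    """Correlate telemetry with business context without moving either dataset."""
    telemetry = query_cloud_logs(service)
    context = query_onprem_cmdb(service)
    return {
        "service": service,
        "errors": sum(r["errors"] for r in telemetry),
        "owner": context.get("owner", "unknown"),
        "tier": context.get("tier", "unknown"),
    }

print(federated_impact("permits"))
```

The key design point is that only the small, correlated result leaves each source – the raw petabytes stay put.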
Practical AI: Empowerment for public sector professionals
The agentic AI revolution is poised to significantly boost efficiency in public sector IT operations by reducing alert fatigue and accelerating incident remediation. Hathi noted that with AI agents, “We take away the trivia, and we provide the professional the ability to really add value.” This means AI can auto-triage alerts, propose safe and routine fixes for common issues, and free up human experts to focus on complex strategic challenges, such as cybersecurity threats to critical infrastructure or optimizing public service delivery, all while maintaining essential human oversight and control.
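The triage pattern described above – auto-remediate the routine, escalate the novel or critical – can be sketched as a simple policy. This is an illustrative rule-based stand-in, not Splunk's implementation: the alert kinds, fixes, and severity labels are invented, and a production agent would draw on incident history rather than a hardcoded table.

```python
# Illustrative table of alert kinds with safe, routine fixes (hypothetical).
AUTO_REMEDIABLE = {"disk_full": "rotate logs", "cert_expiring": "renew certificate"}

def triage(alert: dict) -> dict:
    """Auto-resolve routine alerts; escalate anything novel or critical
    so a human stays in the loop for high-stakes decisions."""
    kind, severity = alert["kind"], alert["severity"]
    if severity == "critical":
        return {**alert, "action": "escalate", "reason": "human review required"}
    if kind in AUTO_REMEDIABLE:
        return {**alert, "action": "auto-remediate", "fix": AUTO_REMEDIABLE[kind]}
    return {**alert, "action": "escalate", "reason": "no safe known fix"}

alerts = [
    {"kind": "disk_full", "severity": "warning"},
    {"kind": "auth_anomaly", "severity": "critical"},
]
results = [triage(a) for a in alerts]
print([r["action"] for r in results])
```

Even this trivial policy shows where the efficiency gain comes from: the routine disk alert never reaches a person, while the critical authentication anomaly always does.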
Hathi described how AI assistants can create unified security operations experiences, aiding morning triage, generating comprehensive runbooks, and proactively defusing malware threats across government networks. For incident response spanning the entire network-to-application chain, he highlighted the integration between Cisco ThousandEyes and Splunk Observability Cloud. This provides public sector teams with a single, holistic view across agency-owned networks, the public internet, and critical applications, speeding identification of root causes during major service disruptions or cyberattacks affecting public services.
Crucially, Splunk maintains its multi-vendor commitment, a vital consideration for the public sector, Hathi emphasized. Many government entities operate complex hybrid estates with diverse legacy and modern systems. Splunk’s federation strategy is designed to meet them precisely where they are, accommodating various clouds, data formats, and tools, rather than forcing costly and disruptive consolidation into a single stack. This flexibility is key to maximizing existing investments and ensuring operational continuity, Hathi noted.
The bottom line: A blueprint for modern public sector operations
The conf.25 preview was not merely a list of product launches; it presented a three-pronged, strategic blueprint for making AI truly operational and beneficial for the public sector:
- Building trust: Establishing robust AI observability and governance frameworks to ensure accountability, fairness, and compliance with public sector regulations.
- Unlocking data value: Federating machine telemetry with operational and business context, rather than migrating all data, to gain actionable insights across diverse public sector data landscapes.
- Deploying practical assistants: Implementing intelligent agents that reduce operational toil and enhance the effectiveness of security, network operations centers, and SRE teams, thereby improving public service delivery and resilience.
Hathi summarized the trajectory of AI: The “entire product, from [the] ground up is AI driven,” and soon we may cease to call it “AI” at all. Instead, it will simply be recognized as the inherent way modern, resilient, and trusted public sector operations function.