Three Steps Agencies Can Take to Meet Government’s AI Requirements
By John Dvorak, chief technology officer, public sector, Red Hat
Using AI safely and effectively is an open-ended challenge, and government agencies can easily become overwhelmed trying to develop AI strategies, hire or contract teams of data scientists, onboard chief AI officers and more. But building an AI strategy doesn’t have to be overly complicated. Here’s how agencies can begin creating an effective and compliant AI program in three steps, using many of the same processes and principles they’ve been using for application development.
Consider Open Source Large Language Models
Commercially available large language models (LLMs) can pose challenges for government agencies. For example, proprietary models make it difficult to know what data was used to train the model and what biases and decisions were injected into the training process. They may also require transferring data to a hosted service, which raises data privacy and security concerns. Meanwhile, restrictive licensing and limited customization options can constrain how government organizations use LLMs in production systems.
Government agencies should consider using fully supported, indemnified, open source-licensed LLMs as a foundation for building and tuning their own models. LLM families like IBM’s Granite, for example, are released under an open source license and can help agencies meet their data sovereignty and privacy requirements: they avoid lock-in to proprietary models and can be hosted within the agency’s own environment. Further, agencies can use open source community projects such as InstructLab to capture agency skills and knowledge in multiple, focused language models that can serve as building blocks for agentic or compound AI systems or at the network edge.
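For illustration, here is a minimal sketch of what self-hosting an open source model inside an agency environment might look like, using the Hugging Face transformers library. The Granite model identifier shown is an assumption; check IBM’s current model cards for the exact name.

```python
# A minimal sketch of hosting an open source LLM locally with the Hugging
# Face transformers library. The model ID below is an assumption; verify
# it against the current Granite releases. device_map="auto" also assumes
# the accelerate package is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ibm-granite/granite-3.0-8b-instruct"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# The weights are downloaded once and run locally, so prompts and agency
# data never leave the environment.
prompt = "Summarize the key requirements of our records retention policy."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```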
Additionally, agencies should consider bringing their AI-driven applications and models to their data, employing cloud, on-prem and/or edge hosting solutions that aim to reduce transmission time and improve time-to-value. According to Gartner, 55 percent of all data analysis by deep neural networks will occur at the point of capture in an edge system by 2025. By bringing AI to the data, agencies can perform analysis at the point of collection while maintaining sovereignty over their data collections.
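As a simple illustration of analysis at the point of capture, the sketch below uses ONNX Runtime to score sensor readings locally, so only small results, never the raw data, leave the edge device. The model file name, input shape and classification task are hypothetical placeholders for an agency’s own exported model.

```python
# A minimal sketch of edge inference with ONNX Runtime. The file
# "sensor_classifier.onnx" and the (1, 16) input shape are hypothetical
# stand-ins for an agency's own exported model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("sensor_classifier.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

def classify_at_edge(reading: np.ndarray) -> int:
    """Score a sensor reading locally; the raw data never leaves the device."""
    scores = session.run(None, {input_name: reading.astype(np.float32)})[0]
    return int(scores.argmax())

# Only this small classification result would be transmitted upstream.
result = classify_at_edge(np.random.rand(1, 16))
print(result)
```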
Focus on Reliability and Data Quality
Reliability and data quality are core components of the Government Accountability Office’s AI Accountability Framework. Reliability and quality are achieved by using accurate, current data and by placing parameters around how models are trained on that data, filtering out biases that could lead to incorrect or misleading outputs (e.g., AI hallucinations).
To ensure data integrity and accuracy, agencies must consistently perform data lifecycle management across data generation, collection, cleaning, analysis and model building. Effective data lifecycle management protects data throughout this process, making sure that data collection and analysis remain consistent and dependable.
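As one example of what the cleaning stage can look like in practice, the sketch below runs automated quality checks over each incoming batch. The column names, thresholds and file name are hypothetical and would be replaced by an agency’s own schema and policies.

```python
# A minimal sketch of automated data quality checks for the cleaning stage
# of a data lifecycle pipeline. Column names and thresholds are hypothetical.
import pandas as pd

REQUIRED_COLUMNS = {"record_id", "agency_code", "submitted_at", "amount"}

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality problems found in one batch."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    problems = []
    if df["record_id"].duplicated().any():
        problems.append("duplicate record_id values")
    if df["amount"].lt(0).any():
        problems.append("negative amounts")
    age = pd.Timestamp.now() - pd.to_datetime(df["submitted_at"]).max()
    if age > pd.Timedelta(days=30):
        problems.append(f"newest record is {age.days} days old")
    return problems

issues = validate_batch(pd.read_csv("intake_batch.csv"))  # hypothetical input file
if issues:
    raise ValueError(f"Batch failed quality checks: {issues}")
```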
It’s also important to continuously monitor model performance in production. Continuous monitoring allows agencies to catch degradation early and refine their models so they remain trustworthy, accurate and useful.
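One common monitoring technique is the population stability index (PSI), which flags when the distribution of live inputs drifts away from the training baseline. The sketch below is a minimal illustration; the 0.2 alert threshold is a conventional rule of thumb, not an agency standard.

```python
# A minimal sketch of one production monitoring check: the population
# stability index (PSI), comparing live inputs against a training baseline.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """Measure how far a live feature distribution has drifted from baseline."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # catch out-of-range live values
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

baseline = np.random.normal(0, 1, 10_000)  # stand-in for training data
live = np.random.normal(0.5, 1, 1_000)     # stand-in for production traffic
if population_stability_index(baseline, live) > 0.2:  # conventional threshold
    print("Input drift detected; schedule model review and retraining.")
```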
Apply DevSecOps Principles to AI Data Pipelines
DevSecOps application development has become a standard practice within government agencies. Developers, security professionals and operations managers work together to ensure code goes from concept to production securely and efficiently.
The goal is to apply the same processes and principles to AI data pipelines to facilitate intelligent application development and delivery. Fortunately, AIOps and MLOps practices continue to evolve, making this an achievable objective. Agencies can already integrate models into their application development processes to bring AI-driven applications into production faster. They can also work within the same development platform, which supports collaboration and reduces the need for additional tools or training.
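To make this concrete, the sketch below shows pytest-style quality gates that could run as a test stage in an existing pipeline, promoting a candidate model only when it passes, just as unit tests gate application code. The accuracy floor is an assumed policy value, and the loader functions are hypothetical stand-ins for an agency’s model registry and evaluation data.

```python
# A minimal sketch of model quality gates run as a pytest stage in a
# DevSecOps pipeline. ACCURACY_FLOOR and the load_* helpers are
# hypothetical stand-ins for an agency's registry and holdout data.
import numpy as np
import pytest

ACCURACY_FLOOR = 0.90  # assumed agency policy threshold

def load_candidate_model():
    """Hypothetical stand-in: fetch the candidate model from a registry."""
    class MajorityClassModel:
        def predict(self, features):
            if features is None:
                raise ValueError("malformed input")
            return np.ones(len(features), dtype=int)
    return MajorityClassModel()

def load_holdout():
    """Hypothetical stand-in: fetch the held-out evaluation data."""
    rng = np.random.default_rng(0)
    return rng.random((100, 4)), (rng.random(100) < 0.95).astype(int)

def test_model_meets_accuracy_floor():
    model = load_candidate_model()
    features, labels = load_holdout()
    accuracy = (model.predict(features) == labels).mean()
    assert accuracy >= ACCURACY_FLOOR, f"accuracy {accuracy:.3f} below floor"

def test_model_rejects_malformed_input():
    # Security-minded check: malformed payloads should raise, not score.
    model = load_candidate_model()
    with pytest.raises(ValueError):
        model.predict(None)
```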
Building a safe and effective AI strategy can seem daunting. However, employing an open source approach that places you in control of your data, your AI models and your application development is a practical start. Consider these steps to build an effective, practical and trustworthy AI infrastructure that delivers ROI today while protecting your future investment.