
My Cup of IT: Cupcake for Kushner?

As we tilt into this new year in the Federal IT events community, I wanted to point to the absurdity of Federal gift-giving regulations.

We all know that industry’s not allowed to give a Fed food and drink worth more than $20 at one time – and no more than $50 in a year. The General Services Administration says so in its ethics guidance. Crossing that threshold violates Federal gift rules.

Try buying any meal for less than $25 in D.C. right now – even without the cupcakes, a simple meal has become expensive, don’t you know…?

I had to chuckle last year when we learned that Jared Kushner, the former president’s son-in-law and a key advisor during his administration, racked up a $2 billion investment from the Saudi Sovereign Wealth Fund.

That’s one helluva cupcake.

Is it not absurd that regular Federal civil servants are held to one standard, while appointed officials, when they step out of office, can accept whatever payments from whoever deems it in their interest to shower largesse?

Maybe it’s time to reform that $20 food and beverage limit to get in line with inflation – and maybe it’s also time to put appointees and their family members on a stricter diet?

Launching a New Era of Government Cloud Security

By Dave Levy, Vice President, Amazon Web Services

The FedRAMP Authorization Act was recently signed into law as part of the defense authorization bill, a signal that cloud technologies continue to have a permanent place in helping U.S. government agencies deploy secure and innovative solutions to accomplish their missions.

Through this legislation, policy leaders on Capitol Hill and in the Biden administration further recognize the important role that industry partners play in improving the security and resilience of government services.

Government cloud security begins with the Federal Risk and Authorization Management Program, or FedRAMP. FedRAMP is a program that standardizes security assessment, authorization, and monitoring for the use of cloud services throughout the U.S. federal government. The program was authorized in 2011 through a memorandum from the Office of Management and Budget (OMB), and the General Services Administration (GSA) established the program office for it in 2012.

Though in existence for ten years, FedRAMP had not been formally codified in legislation. Over that time, we’ve seen meaningful improvements in the ways government agencies leverage cloud technology to improve how they deliver services and achieve their missions. From adoption by the Intelligence Community to use in space missions, government agencies have demonstrated that cloud technologies allow them to rapidly deploy systems that are secure, resilient, and agile. Cloud technologies also allow them to do more, for less, and at a faster pace than imagined possible ten years ago.

Amazon Web Services (AWS) applauds Congress and the White House for bolstering cloud adoption and security package reuse through the FedRAMP Authorization Act, a piece of legislation led by U.S. Congressman Gerry Connolly, D-Va., to codify the FedRAMP program. With this bill signed into law as part of the National Defense Authorization Act, there is recognition of the important role that the cloud plays in securing federal systems – and the role FedRAMP plays in ensuring that security.

Safeguarding the security of our federal systems is more important now than ever. With the volume and sophistication of cybersecurity attacks increasing, coupled with evolving geopolitical security threats around the world, the U.S. Government must ensure that it is leveraging best-in-class security services to deliver its critical missions. Further, the “do once, reuse many times” ethos of FedRAMP will save money for mission teams across government as teams optimize security by leveraging existing system security packages.

Industry has a key role to play in this equation. For example, the FedRAMP Authorization Act creates the Federal Secure Cloud Advisory Committee, which is tasked with ensuring coordination of agency acquisition, authorization, adoption, and use of cloud computing technologies. The committee will serve as a new method of formally engaging with industry partners to improve the way cloud accreditations are managed in government, and align the use of those services with agency missions and priorities. A joint group of government and industry partners such as this committee will help the FedRAMP program evolve to solve the toughest security challenges facing the U.S. government today.

Security is our top priority, and AWS has been architected to be the most flexible and secure cloud computing environment available today. Both the AWS GovCloud region, which is a region specifically designed to meet the U.S. Government’s security and compliance needs, and AWS US East-West regions have been granted FedRAMP authorizations.

AWS supports FedRAMP, as we have from the very beginning. U.S. government agencies are embracing cloud in existing programs and missions, and they are building new services with cloud technologies. Formally codifying the FedRAMP program through legislation ensures the U.S. government can leverage industry-leading cloud services, safeguard federal systems, and better support the delivery of critical citizen services in an evolving security landscape.

Dave Levy is Vice President at Amazon Web Services, where he leads its U.S. government, nonprofit and public sector healthcare businesses.

Managing IT Complexity in Federal Agencies

Despite recent progress, IT-related problems continue to hinder work at government agencies. These include data in silos across locations that complicate information-gathering and decision-making, rising cyber threats targeting employee logins, and legacy systems that don’t adapt easily to mission changes or remote work environments.

Accordingly, a recent study by Gartner found that while 72 percent of programs aimed at government IT modernization saw gains in response to the pandemic, fewer than half (45 percent) have scaled across the organizations they serve.

Fortunately, best practices and managed services can alleviate the problem. Trusted Artificial Intelligence (AI) for operations, zero-trust cybersecurity frameworks, and managed systems integration can all help, according to Aruna Mathuranayagam, chief technology officer at Leidos.

Managing IT Complexity

Mathuranayagam identifies three critical areas for federal agencies to address to streamline operations and reduce costs while increasing employee productivity and safeguarding sensitive data. These are systems integration, zero-trust cybersecurity practices, and digital user experiences.

  • Systems integration helps bridge divides across data silos without compromising security, according to Mathuranayagam. “Some of our customers are building common DevSecOps platforms so they can adopt algorithms or cloud practices quickly,” Mathuranayagam says. “The platforms are common across the different classified environments. The activities may vary within each secret classification, but they have a unified practice.” An example of where such integration can help is the National Nuclear Security Administration (NNSA), with headquarters in Washington, D.C., and eight sites across the country. Those sites have tended to manage their IT and implement cybersecurity measures in their own ways, creating silos, according to Mathuranayagam.
  • Zero trust cybersecurity represents more of an ongoing journey than a problem to be solved with off-the-shelf solutions, according to Mathuranayagam. But it is essential for safeguarding systems and data from today’s sophisticated and relentless attacks. “You have to look at how your networks are configured, how your employee authorizations are configured, how your architecture is laid out,” Mathuranayagam says. “It’s a complete transformation of your philosophy and how you have been providing security to your users, to your customers, to your stakeholders.”
  • Digital user experiences are often given less attention in IT transformations. However, they are vital for streamlined operations and productivity, according to Mathuranayagam. That’s because well-designed interfaces and workflows reduce the burden on users so they can work with minimal friction.

Bringing these focus areas together in a managed enterprise cybersecurity model will result in safer, more efficient, and less costly IT, according to Mathuranayagam. She cites Leidos as a vendor providing a unique toolset and deep experience for meeting the challenge.

Managed IT and Security from Leidos

AI plays a starring role in IT managed by Leidos. “What Leidos has built in the last ten years of research and development from supporting a wide range of customers across the Department of Defense, Department of Energy, federal civilian agencies, and the intelligence community, is something called trusted AI,” Mathuranayagam explains.

Trusted AI developed by Leidos depends on a framework known as the Framework for AI Resilience and Security, or FAIRS. The system lets organizations gather data across systems for deep analysis while enhancing security.

The benefits of using FAIRS include reduced cognitive workload through automation and the surfacing of insights from data across systems. “It can identify patterns that we as humans cannot identify in terabytes or petabytes of data across different sources,” Mathuranayagam says.

For example, AI-driven data analysis can spot trends in help tickets. Analysis of ticket types, frequency of tickets, and peak times for tickets can lighten the load for tech support teams. “You can start observing the tickets and extract patterns,” Mathuranayagam says. “And then you can write rules-based algorithms to autonomously respond to certain tickets or implement practices to eliminate a percentage of them.”
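A minimal sketch of that extract-patterns-then-automate loop might look like the following. It is illustrative only – not the Leidos implementation – and the ticket categories, volumes, and canned responses are invented for the example:

```python
from collections import Counter

# Hypothetical help-desk tickets: (category, hour_opened) pairs.
tickets = [
    ("password_reset", 9), ("password_reset", 10), ("vpn_error", 8),
    ("password_reset", 9), ("vpn_error", 9), ("password_reset", 14),
]

# Extract patterns: which ticket types dominate, and when they peak.
by_category = Counter(category for category, _ in tickets)
by_hour = Counter(hour for _, hour in tickets)
print("peak hour for tickets:", by_hour.most_common(1)[0][0])

# Rules-based automation: auto-respond to high-volume, well-understood
# categories; route everything else to a human.
AUTO_RESPONSES = {"password_reset": "sent self-service reset link"}

for category, count in by_category.most_common():
    action = AUTO_RESPONSES.get(category, "route to support staff")
    print(f"{category} ({count} tickets): {action}")
```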

In the realm of security, although no single solution can check the “zero trust” box, trusted AI and managed services from Leidos can give agency IT leaders confidence on the journey.

Mathuranayagam explains that Leidos helps organizations understand their IT environments through complete visibility of all assets, identifying any security gaps. From there, Leidos experts help teams build multi-year roadmaps and acquire the expertise and technologies they need, all of which can aid agencies in reducing digital complexity and risk while advancing their missions.

To learn more, visit leidos.com/enabling-technologies/artificial-intelligence-machine-learning.

Agencies Must Modernize Zero Trust Approaches to Achieve Optimal Protection

By Petko Stoyanov, Global Chief Technology Officer, Forcepoint

Many Federal agencies are considering investing in zero trust network access (ZTNA) solutions. But not all ZTNA applications are equal, and it’s important that agencies invest in ZTNA solutions that will allow them to align with and meet the “Optimal” stage outlined in the Cybersecurity and Infrastructure Security Agency’s (CISA) Zero Trust Maturity Model guidelines.

In CISA’s view, Optimal protection includes continuous validation and inline data protection. Traditional ZTNA architectures do neither; rather, traditional ZTNA provides encrypted tunnels, just like Virtual Private Networks (VPNs), but on an application-specific level. They do not incorporate essential elements like machine learning (ML), data-centric encryption, or real-time risk analysis, which significantly elevate agencies’ protection of the data they transfer.

To achieve Optimal status, agencies need more than just a renamed VPN. Just as they modernize their cybersecurity approaches, they must modernize their zero trust programs to be more dynamic, intelligent, and responsive with identity and data monitoring across all pillars.

To illustrate, let’s look at three of the five pillars of the CISA Maturity Model: Identity, Device, and Data.

Ultimately, zero trust is about enabling and controlling access from an individual to the data, continuously. The device, network, and application are the middleware enabling user-to-data access.

Identity

Everything starts with an identity, the “who” of the equation. We must prove who we are before we enter a building or sign on to a computer or phone. Agencies need to have a centralized identity solution that validates users’ credentials against a central identity directory – across both on-premises and cloud environments.

Optimal identity validation can only be achieved with ML built into the zero trust architecture. ML enables real-time analysis of the user or system attempting to access an application. It collects various bits of information – when’s the last time this person signed on, how often do they use the application, where are they signing on from, etc. – and progressively learns about users’ security postures. It continuously validates those postures to determine if a person poses a risk the minute they attempt to sign on.

Agencies with Optimal identity validation are continuously evaluating the identity across the full lifecycle of creation, permission management, and retirement.
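As a rough sketch of how those signals can combine into a continuously evaluated risk score, consider the toy example below. A production system would use a trained ML model rather than hand-set weights; every profile field, threshold, and weight here is an assumption for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical learned profile for one user (all values illustrative).
profile = {
    "usual_locations": {"Washington, DC", "Arlington, VA"},
    "last_sign_on": datetime.now() - timedelta(days=2),
    "avg_sessions_per_week": 20,
}

def risk_score(location: str, sessions_this_week: int) -> float:
    """Combine simple behavioral signals into a 0-1 risk score."""
    score = 0.0
    if location not in profile["usual_locations"]:
        score += 0.5                      # unfamiliar sign-on location
    if sessions_this_week > 2 * profile["avg_sessions_per_week"]:
        score += 0.3                      # unusually heavy usage
    if datetime.now() - profile["last_sign_on"] > timedelta(days=90):
        score += 0.2                      # dormant account suddenly active
    return min(score, 1.0)

# Evaluate the minute the user attempts to sign on; step up or deny
# authentication when the score crosses a threshold.
if risk_score("Kyiv", sessions_this_week=55) > 0.6:
    print("High risk: require step-up authentication")
```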

Agencies should ask the following questions when evaluating their identity validation capabilities:

  • Do my users have a single identity across on-premises and cloud?
  • Do I continuously monitor to ensure users have the right access and not too much?
  • Do I have the ability to identify individuals that are demonstrating abnormal behavior?

Device

Agencies cannot just monitor the “who”; they must also consider the “what” – meaning, what device is being used to access data. They must be able to trust the devices that employees are using, particularly as employees continue to work remotely and complement the use of agency-issued devices with their own personal tools.

This reality requires an advanced zero trust architecture that constantly monitors the devices that are touching the network for potential threats. The architecture must be able to immediately discern whether the devices are authorized for network access, up to date on the latest virus protection and operating system software, and as security-hardened as possible. If not, the architecture must be nimble enough to block access in the moment, before the unsecured or unsanctioned device has a chance to exfiltrate data.

Agencies should ask the following questions when evaluating their device capabilities:

  • Do I check for device posture on initial connection to the agency application/data? Device posture includes hardware details, OS level, patch level, running applications, hardened configuration, and location. (A sketch of such a check follows this list.)
  • Can I identify all identities used on a single device?
  • Can I detect a device posture change after connection to an agency application?
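The sketch below, referenced in the first question above, shows what a minimal automated posture check might look like. Every field name, required value, and threshold is an illustrative assumption; a real architecture would gather posture from an endpoint agent and re-evaluate it continuously, not just at connect time:

```python
# Hypothetical posture report from an endpoint agent (fields illustrative).
posture = {
    "patch_age_days": 45,
    "av_signatures_current": True,
    "disk_encrypted": True,
}

# Minimum requirements the agency sets for network access.
REQUIRED = {
    "max_patch_age_days": 30,
    "av_signatures_current": True,
    "disk_encrypted": True,
}

def device_compliant(p: dict) -> bool:
    """Return True only if the device meets every posture requirement."""
    return (p["patch_age_days"] <= REQUIRED["max_patch_age_days"]
            and p["av_signatures_current"] == REQUIRED["av_signatures_current"]
            and p["disk_encrypted"] == REQUIRED["disk_encrypted"])

# Block in the moment: evaluate on every connection and on posture change.
print("allow" if device_compliant(posture) else "block")  # block: 45 > 30
```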

Data

CISA states that for an agency to achieve Optimal ZTNA levels, it’s not enough to just store data in encrypted clouds or remote environments. The data itself must be encrypted and protected.

Furthermore, agencies must be able to inventory data, analyze it, and categorize it based on certain characteristics – continuously and automatically. Some data might be highly confidential, for example, and should only be accessible to certain members of an organization. The ZTNA must be intelligent enough to learn and process changes to data classification. It must also be able to rapidly identify not only who is accessing the data, but the type of data that person is accessing – and then match the two.
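As a toy illustration of “matching the two” – the user and the data classification – consider the sketch below. The labels and ordering are invented for the example; a real deployment would plug a classification engine and a policy decision point into this check and run it on every request:

```python
# Illustrative sensitivity ordering: higher number = more sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def may_access(user_clearance: str, data_label: str) -> bool:
    """Grant access only when the user's clearance covers the data's label."""
    return LEVELS[user_clearance] >= LEVELS[data_label]

print(may_access("internal", "confidential"))    # False: deny and log
print(may_access("restricted", "confidential"))  # True: allow
```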

Agencies should ask the following questions when evaluating their data capabilities:

  • Can I continuously discover the on-premises and cloud environments storing my data and create an inventory?
  • Do I know the category and classification of the discovered data?
  • How do I control access to the data?
  • Do I encrypt the data within my environment and when it leaves my control?

The CISA Zero Trust Maturity Model indirectly acknowledges that networks have gotten smaller and more fragmented. As network perimeters become blurrier, organizations must focus their defenses on specific users, devices, and data points. Traditional ZTNA architectures that have barely evolved from VPNs won’t be enough. Agencies need a more modern ZTNA model, replete with machine learning, to achieve Optimal protection.

Five Essential Metrics for Measuring Federal Government CX

By Willie Hicks, Federal Chief Technologist at Dynatrace

Whether users are shopping on Amazon or another top company’s website, they expect certain levels of website performance and feature availability to ensure a positive customer experience (CX).

Users expect to access product links and conduct transactions seamlessly and swiftly. They expect to navigate to comprehensive product specifications or details instantly, and to enlarge images with a simple mouse click. If they have any questions or concerns, they want effective chat support in the form of a live human being or an artificial intelligence (AI)-enabled avatar.

These advancements will continue to expand as customer demands intensify for their online and in-person experiences. For example, I recently swung by a fast-food drive-through, and an AI bot took my order. I intentionally changed certain words when repeating the order to see whether I could throw off this “employee.” But I failed. The bot got my order exactly right.

Given how far private industry has advanced, one might think we should harbor the same expectations when we seek services from the federal government. But agencies cannot deliver this level of CX yet.

Fortunately, the current administration recognizes this and has taken action. In December 2021, the White House released its “Executive Order on Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government.” The order states – in light of complex, 21st-century challenges – that the government must be held accountable for the experiences of its citizens. Additionally, the order notes, agencies impose an annual paperwork burden of more than nine billion hours on the public through forms and other tasks.

Department leaders, therefore, must reduce this burden while delivering services that people of all abilities can navigate. Human-centered design, customer research, behavioral science, user testing, and additional engagement efforts should come together to drive CX, according to the order.

The order arrived at a time when citizen satisfaction with U.S. government services had declined for four straight years. Satisfaction is now at a historic low – scoring 63.4 out of 100, according to research from the American Customer Satisfaction Index (ACSI). In categorizing satisfaction levels, the ACSI found the government’s “ease of accessing and clarity of information” score declined from 71 out of 100 to 67 in two years. Website quality declined from 75 out of 100 to 70 within the same period.

Beyond the order, the Office of Management and Budget (OMB) published additional guidance that identifies key CX drivers, including service effectiveness, ease and simplicity, and efficiency and speed. As agencies focus on these criteria, they need to find a way to monitor and measure progress – especially as the vast majority of them are making a significant transition to the cloud: Nine of ten federal IT leaders say their agency now uses the cloud in some form, with 51 percent opting for a hybrid cloud model. These leaders indicate that they will migrate 58 percent of their applications to the cloud within the next 12 to 18 months.

Observability tools – which capture all interactions and transactions in an automated, intelligent manner – will enable effective monitoring whether in the cloud or on-premises. This includes user session replays, which can pinpoint the causes of negative CX. When combined, key metrics can generate an accurate CX index score; a sketch of one such calculation follows the list below.

But which metrics should your agency consider? Here are five essential ones:

  1. Transaction completion time. The unavoidable truth is all users are busy. If citizens apply for a passport, they know what they want and they want to get it done now. If they encounter delays during the form-filling process or the final “Click here to submit application or purchase” stage, it will lead to a negative CX.
  2. Page load time. This clearly influences transaction time and CX. If it takes more than five seconds to load a page, users often become frustrated and may leave the site.
  3. Rage clicks. When a website loads slowly or freezes, users sometimes click a button five or six times – i.e., “rage clicks.” Tools such as session recordings can measure this, too.
  4. Abandonment rate and data. Agencies need to track how often users abandon a transaction. Then, they should investigate the source of the abandoned task: Which stages of the process are slow or confusing for users, to the point where they quit in frustration? Once IT pros have identified those stages, they can prioritize what to address first.
  5. Conversion rate. This goes up as the abandonment rate goes down. How often do users cross the finish line (with a completed transaction or query) when they interact with an agency? The higher the conversion rate, the more satisfied the customer.
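As referenced above, here is one minimal sketch of how the five metrics might be normalized and combined into a single CX index. The targets, weights, and field names are illustrative assumptions, not a Dynatrace formula:

```python
# Hypothetical weekly metrics pulled from an observability platform.
metrics = {
    "avg_transaction_seconds": 95.0,
    "avg_page_load_seconds": 3.2,
    "rage_click_rate": 0.04,     # share of sessions with rapid repeat clicks
    "abandonment_rate": 0.18,
    "conversion_rate": 0.82,
}

# Normalize each metric to 0-1 (1 = best) against illustrative targets;
# the five-second page-load threshold mirrors the rule of thumb above.
scores = {
    "transaction": max(0.0, 1 - metrics["avg_transaction_seconds"] / 300),
    "page_load": max(0.0, 1 - metrics["avg_page_load_seconds"] / 5),
    "rage_clicks": 1 - metrics["rage_click_rate"],
    "abandonment": 1 - metrics["abandonment_rate"],
    "conversion": metrics["conversion_rate"],
}

# Equal weights here; an agency would tune the weights to its mission.
cx_index = 100 * sum(scores.values()) / len(scores)
print(f"CX index: {cx_index:.1f} / 100")
```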

Observability tools provide AI-enabled monitoring, which automatically tracks and provides visibility into these five metrics, among many others. AI will also auto-remediate issues that metrics expose. This allows agencies to measure progress as they focus on better CX.

The federal government doesn’t sell hamburgers, smartphones, or streaming content. It delivers noble services for citizens daily. But, as with major companies in private industry, we should always seek to measure and continuously improve public-sector CX.

Ensuring positive experiences is all about awareness, proactivity, and instinct regarding user needs. The federal government’s increased commitment to improving the digital CX is heartening, but agencies will never know how they are doing if they can’t measure system performance.

That’s why agencies should consider the five metrics summarized here as a starting point for measuring the success of users’ online interactions. They must arrive at a logical breakdown of what customers need and use comprehensive metrics to improve systems at every transaction level. With automatic and intelligent observability, agencies gain a holistic view of their technology environments that’s required to deliver seamless customer experiences.

Unlocking the Benefits of 5G and Beyond

By Michael Zurat, Senior Solutions Architect, General Dynamics Information Technology (GDIT)

5G technology has the potential to be transformative for businesses and consumers, as well as for the U.S. government. 5G can provide the enabling connectivity to drive digital transformation – from a paperless, digital Internal Revenue Service, to a Postal Service that can outperform Amazon, to zero-latency situational awareness for our first responders and warfighters at the point of need.

The Department of Defense (DoD) understands that leveraging both purpose-built private 5G (P5G) networks and commercial wireless networks managed by mobile network operators (MNOs) is critical to keeping pace and providing our warfighters with technical superiority on the battlefield. This is evidenced by recent new projects as part of its Innovate Beyond 5G program and recent DARPA programs.

These include a new industry-university partnership to jumpstart 6G systems; research on open radio access networks (ORAN); security and scalability for spectrum sharing; efforts to increase resiliency and throughput for wireless tactical communications for the warfighter; and DARPA’s efforts to research securely operating over MNO networks globally.

As 5G technology becomes more widely available and as we look ahead to what’s next, it’s important for agencies of all types to consider both the risks and opportunities associated with 5G and other new wireless networking technologies – and the drastic changes the technology introduces at all levels of wireless architecture from network core, to edge compute, to radio, and RF spectrum.

First, the Risk

Spectrum is a finite resource. Like land, they’re not making more of it, and new players emerging in the space have only increased demand for it. The recent announcement that the Federal Communications Commission intends to continue to release spectrum capacity for commercial uses will put the DoD and its early-stage experimentation at the mercy of commercial mobile network operators unless the department invests heavily in next-generation wireless technology. In the meantime, it will be important for the government to adequately manage spectrum capacity for critical civilian and defense needs and work closely with industry and radio manufacturers to ensure all devices meet the same standards for interference or RF filtering and shielding.

Another inherent risk: Playing catch up. Many other countries – including some near peers – are farther ahead than we are when it comes to implementing commercial 5G. The window of opportunity for the U.S. defense industrial and commercial base to catch up, or to leapfrog those nations’ advancements, is rapidly closing.

Now, the Opportunities

5G is being used today across a variety of industries and agencies. You’ve no doubt seen cell phone companies’ commercials touting the benefits of 5G and making claims about what it can offer. And a few sectors, agencies, and companies are rolling out their own private 5G networks. These networks, however, are only the tip of the 5G “iceberg.” 5G networks available commercially today in the U.S. are non-standalone networks – basically 4.5G – leveraging new radio spectrum but, at the core, leveraging legacy LTE equipment. Once fully standalone 5G-compliant services are available, we will see the full potential of the 5G feature set.

In the Federal space, 5G is being used to support experiments at the DoD as well as initial P5G production pilots at air, sea, and land bases across the U.S. The Department of Veterans Affairs has leveraged MNO-provided 5G to support virtual reality (VR) and augmented reality (AR) surgery training at the Palo Alto Medical Center. The Federal Bureau of Prisons has deployed private wireless at correctional facilities across the country to provide inmates with video and audio chat with those outside, as well as on-demand network and training access.

At GDIT, we see 5G as an enabler for digital transformation and Industry 4.0 – for everyone from warfighters in theater, to first responders, to scientists in remote locations using edge devices, to engineers collaborating on digital twin simulations. In our 5G Emerge Lab, we test and demonstrate 5G technologies alongside our customers. The lab is a place to convene and train skilled talent, assess emerging 5G solutions, and work with partners to innovate and rapidly develop new ones. It allows us to prototype and prove capabilities in response to customer requirements, to test and compare vendor technology, and to interact with the latest 5G technologies and then use that knowledge in support of all our customers.

We look forward to continuing to conduct our research and develop 5G innovations in partnership with our customers so that we can help accelerate progress on 5G and next generation wireless across the Federal government, enabling agencies to look ahead to what’s next and to expand their capacity to deliver on their missions.

The Federal Factory of the Future: How AI is Transforming Manufacturing

By Bob Venero, President & CEO, Future Tech Enterprise, Inc., Ftei.com

Manufacturing is all about operational efficiency – make it quicker, cheaper, and ship it for less. For this reason, the industry has long been at the forefront of the application of new technologies, finding creative solutions to increase production and decrease costs.

Basically, how can we innovate faster while still prioritizing safety?

There is a mood of cautious optimism in the industry, and companies still wary of ongoing turbulence are using this time to invest in the future. In manufacturing – including manufacturing for Federal organizations and by Federal contractors – that future lies in artificial intelligence (AI).

Intelligent Design

A recent study from MeriTalk found that almost all – 95 percent – of Federal technology leaders feel that the appropriate use of artificial intelligence (AI) could supercharge the effectiveness of government and benefit the American people. Michael Shepherd, a senior distinguished engineer at Dell Technologies, says increased adoption of AI represents a “tremendous amount of opportunity” for Federal agencies, despite the workforce challenges.

Investing in AI “is going to make a difference,” Shepherd said in a recent interview with MeriTV. “I guarantee you, it’s happening in other countries, and we need to have that same level of investment here in the U.S. as well, especially within the armed forces and the Federal government.”

One of the many areas where that impact is significant is manufacturing – from smart factories that can adjust production to meet evolving needs, to predictive maintenance that reduces equipment downtime and maximizes fleet readiness.

Manufacturing has been going digital for about a decade, leading some to christen this period the “Third Industrial Revolution”.

Direct automation, reduced downtime, 24/7 production, lower operational costs, greater efficiency, and faster decision making are just some of the rewards on offer to organizations that embrace the transformation and master the implementation of AI throughout their entire business.

The process of introducing AI is not without its challenges – it’s highly complex, costly, time-consuming, and requires a systematic approach. Just four in ten Federal IT leaders say they feel completely prepared for AI project implementation, with the lack of resources and available talent noted as the biggest roadblocks – ahead of budget.

But those that jump in earliest will gain a competitive edge. For example, John Deere debuted a fully autonomous tractor during CES 2022, powered by artificial intelligence and in development for over 20 years. The technology is now advancing so rapidly that organizations that don’t make their move into AI soon will find themselves falling behind.

Three Areas of Transformation

AI in manufacturing is often associated with futuristic robots, and for good reason. According to Global Market Insights, the industrial robotics market is forecasted to be worth more than $80 billion by 2024. But most (if not all) AI applications are software, and can improve a wide variety of functions for a manufacturer.

  1. Maintenance – In manufacturing, the greatest value from AI can be created by using it for predictive maintenance (generating more than $0.5 trillion across the world’s businesses). AI’s ability to process massive amounts of data means it can quickly identify anomalies to prevent breakdowns or malfunctions. The problem is getting that data. To scale requires more data, which requires more computing power to process. In fact, data preparation for AI systems is still 80-90% of the work needed to make AI successful.

One workaround might be the use of synthetic data, created algorithmically rather than collected from the real world. Manufacturers are able to use synthetic data to build “digital twins” of their own datasets to test performance, improve functionality, and speed up development so they can scale faster.
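As a toy illustration of both points – algorithmically generated data standing in for scarce real telemetry, and anomaly detection driving predictive maintenance – the sketch below flags outliers in synthetic vibration readings. The distribution, injected faults, and threshold are assumptions for the example:

```python
import random
import statistics

random.seed(0)

# Synthetic sensor data standing in for real telemetry: normal vibration
# readings with a few injected faults (values illustrative).
readings = [random.gauss(mu=5.0, sigma=0.5) for _ in range(500)]
readings[200] = 9.0   # injected bearing fault
readings[404] = 8.5   # injected bearing fault

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag readings far outside normal behavior before a breakdown occurs.
anomalies = [i for i, r in enumerate(readings) if abs(r - mean) > 4 * stdev]
print(f"Inspect equipment at sample indices: {anomalies}")
```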

Enabling users to create precise digital twins is one thing Future Tech’s partner, NVIDIA, is making easier with its NVIDIA Omniverse Enterprise.

NVIDIA Omniverse Enterprise is a virtual environment enabling creators, designers, and engineers to connect major design tools, assets, and projects to collaborate and iterate in a shared virtual space.

Omniverse Enterprise is built on NVIDIA’s entire body of work, allowing users to simulate shared virtual 3D worlds. Here is the key part: these shared virtual worlds obey the laws of physics.

And, by doing that, Omniverse Enterprise enables photorealistic 3D simulation and collaboration. This in turn allows users to simulate things from the real world that cannot – and in many cases should not – be first tested in the real world.

To date, NVIDIA has demonstrated great success for Omniverse Enterprise in numerous industries, including aerospace, architecture, automotive, construction and design, manufacturing, media, and sensors.

  2. Safety – Optimizing safety on the factory floor is a critical consideration for any manufacturer. Advanced technologies are now focused on improving safety and efficiency at the same time.

Recent advances in AI can help catch compliance violations, enhance plant processes, and support better design and process flows.

Other AI-powered safety measures include being able to immediately detect whether employees are wearing the right type of gloves or safety goggles for a specific situation. Background process analytics can also be run to estimate potential for fatigue, reminding people when to take breaks.

  3. Quality Control – Within the manufacturing industry, quality control is one of the most important use cases for AI. Everybody makes mistakes, even robots. Defective products and shipping errors don’t just cost companies millions, they also damage reputations and jeopardize safety. Now, AI can inspect the products for us.

Using special cameras and IIoT (industrial internet of things) sensors, products can be analyzed by AI software to detect defects automatically. The computer can then make decisions on what to do with the defective products, cutting down on waste. Better yet, the AI will learn from the experience so it doesn’t happen again.

Futureproof

As advances in AI take place over time, one day we might see fully automated factories, product designs created with limited human oversight, and innovations we have not yet considered. Smart manufacturing environments will be there to help us build them.

The Quantum Impact on Cyber

By Dr. Jim Matney, Vice President and General Manager, DISA and Enterprise Services, at General Dynamics Information Technology (GDIT)

As almost any cybersecurity professional will tell you, you can’t reliably know which vulnerability a hacker will find and exploit. To avoid an attack, your defenses must be right 100 percent of the time. The hacker must be right only once.

Quantum computing turns that all on its head. Why? Two reasons.

First, quantum computers are exponentially more efficient than classical computers for certain problems and can support more advanced and complex compute applications. The emergence of quantum as a compute resource for solving specific computing challenges is at the same time full of promise and peril.

Second, some encryption algorithms being used today can be broken with quantum algorithms that already exist, such as Shor’s algorithm. For a long time, that was fine because no quantum computers were big enough or fast enough to crack them (i.e., cryptanalytically relevant quantum computers, or CRQCs). But that’s changing. American adversaries are investing heavily in the quantum computing space, making the threat of quantum-based attacks against our encryption algorithms much more compelling.

Moreover, we are reliant on encryption because the networks our data travel on can be intercepted. A persistent fear is that – even if bad actors can’t yet do anything with our data – they can harvest it now, store it, and play it back later.

So, what are Federal agencies to do in the face of this reality?

As the National Institute of Standards and Technology (NIST), the Cybersecurity and Infrastructure Security Agency (CISA), and the National Security Agency (NSA) all advise, agencies should continue to conduct good cyber hygiene practices and not yet purchase enterprise quantum-resistant algorithm solutions, except for piloting and testing. NSA expects a transition to quantum resistant algorithms will take place by 2035 once a national standard is adopted. In the meantime, however, CNSA (Commercial National Security Algorithm) 1.0 and 2.0 algorithms offer the benchmarks for national security systems.

Additionally, agencies should take stock of their cryptological assets and ensure they have confidence that their scans are not missing particular devices or data stores and connections to internet of things (IoT) and peripheral devices. Agencies should also ensure they have a clear understanding and rating of the criticality and sensitivity of the various sets of data within their organization. While all data is important, bandwidth and resource limitations do exist, so agencies must develop a road map that ensures the highest-sensitivity data is secured first.
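As one small first step toward such an inventory, the sketch below records the TLS posture of a few endpoints using only Python’s standard library. The hostnames are placeholders, and a real inventory would also have to cover data at rest, IoT and peripheral devices, and internal protocols that a surface scan like this would miss:

```python
import socket
import ssl

# Placeholder endpoints; substitute an agency's real asset list.
HOSTS = ["example.gov", "intranet.example.gov"]

context = ssl.create_default_context()

for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                version = tls.version()         # e.g., "TLSv1.3"
                cipher, _, bits = tls.cipher()  # negotiated cipher suite
                # RSA and ECDH key exchange are quantum-vulnerable; record
                # findings so the most sensitive systems migrate first.
                print(f"{host}: {version}, {cipher} ({bits}-bit)")
    except OSError as err:
        print(f"{host}: unreachable ({err})")
```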

These are good practices in line with a zero trust approach, but the quantum threat provides extra impetus for getting this done as soon as possible. In addition, the White House issued a national security memorandum that sets the requirements for Federal agencies to prepare for this threat.

GDIT has developed a framework to help agencies prepare for the quantum threat and navigate the new standards that will ultimately be issued for agencies to implement, which are expected from NIST by 2024. The GDIT Quantum Resilience Framework is a risk-based, post-quantum cryptography implementation approach. It includes these steps:

Assess Risk

Begin by looking at your overall encryption risk profile. Some algorithms will be severely impacted by quantum computing, and some will be less impacted. Agencies should know where they are in relation to the latest NIST encryption standards and understand the risks of not being up to date.

Analyze Impact

Examine which encryption algorithms are used throughout your organization and what services they support. This can include web browsing, email, digital signatures, message digests, key exchanges, VPNs, enterprise data center transport, and data at rest. Determine which ones are most important to protect and understand the impact of a potential quantum-enabled attack.

Prioritize Actions

With the impact assessment complete, create a risk response strategy (e.g., accept, avoid, transfer, or mitigate). Then, prioritize your risk categories (critical, high, medium, low) and define risk tolerance statements for each. In these statements, articulate what actions you’ll take, in what order, who will perform them, and on what time horizon.
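One illustrative way to make tolerance statements concrete and trackable is a structured record per risk, as sketched below. The fields, categories, and example actions are assumptions for the example, not part of the GDIT framework:

```python
from dataclasses import dataclass

@dataclass
class RiskToleranceStatement:
    """One hypothetical shape for a tolerance statement."""
    category: str        # critical / high / medium / low
    response: str        # accept / avoid / transfer / mitigate
    actions: list        # what will be done, in order
    owner: str           # who performs the actions
    horizon_months: int  # on what time horizon

statements = [
    RiskToleranceStatement("critical", "mitigate",
                           ["replace RSA-2048 key exchange on external VPN"],
                           "CISO office", 12),
    RiskToleranceStatement("low", "accept",
                           ["document residual risk on internal test systems"],
                           "system owner", 36),
]

# Work the plan in priority order: critical first, then high, medium, low.
ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}
for s in sorted(statements, key=lambda s: ORDER[s.category]):
    print(s.category, s.response, s.owner, f"{s.horizon_months} months")
```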

Examine Solutions

Once you’ve prioritized the actions to take to protect critical services, examine the available solutions to address them. There should be no appetite for accepting risk where there is an approved quantum-resistant solution available. Agencies should explore viable near-term solutions to counter the “harvest and replay later” threat. Long-term solutions should be based on approved NIST and/or NSA standards.

Implement

Implementing quantum-resistant algorithms that drive resiliency is a logical next step once NIST has fully vetted them and provided guidance. Part of the implementation process involves following the risk prioritization schedule and being clear about what solutions will be implemented in what sequence.

Track to Completion

Agencies should be sure to track and document their solution implementation. This will provide a roadmap for future updates when new standards are released.

Monitor Continuously

As with anything cybersecurity-related, agencies should continuously monitor their encryption risk. Expect standards and solutions to be updated frequently in line with quantum advancements, as well as the advancements in the sophistication of hacking techniques.

To be quantum resilient across the enterprise, agencies should plan and budget for these activities now so that they can be prepared to implement new solutions as soon as the new standards are released. The goal is to conduct proactive planning that drives future security; to improve trust in data confidentiality and integrity; to lower the risk of the pending quantum threat to current encryption algorithms; and to consistently broaden awareness of quantum’s impact to cybersecurity across the enterprise.

How Next-Gen Computers Will Transform What’s Possible for Federal Government

The Accenture Federal Technology Vision 2022 highlights four technology trends that will have significant impact on how government operates in the near future. Today we look at Trend #4, Computing the Impossible: New Machines, New Possibilities.

When Intel launched the Intel® 4004 processor in 1971, the first general-purpose programmable processor was the size of a small fingernail and held 2,300 transistors. Today, state-of-the-art microprocessors pack in 60 billion – even 80 billion – transistors.

For federal agencies, the trend towards ever-more-powerful computers has driven significant new efficiencies and discoveries. Yet, we are now approaching the physical and engineering limits of Moore’s Law, the guiding framework for computing advancements over the past five decades. In its place, new computing approaches are emerging, fueling a range of new federal use cases.

Quantum computers, high-performance computers (HPC), and even bio-inspired computing all promise to accelerate innovation and expand upon existing capabilities in the federal space.

Federal leaders are bracing for impact. In a survey by Accenture, 97 percent of U.S. federal executives say their organization is pivoting in response to the unprecedented computational power that is becoming available. The same percentage report that their organization’s long-term success will depend on leveraging next-generation computing to solve seemingly unsolvable problems.

With ever-more-powerful machines coming to the fore, agencies need to start thinking now about how to make best use of these emerging capabilities in order to take full advantage of the opportunities they present.

What’s Coming

Among next-generation computing types, quantum is currently receiving the most attention because it promises to be so disruptive and transformative. Quantum computers use “qubits,” which can be both 1 and 0 simultaneously, rather than being restricted to one or the other. This quality of qubits enables quantum computers to run more complicated algorithms, tackle millions of computations simultaneously, and operate far faster than traditional computers.
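To make the “both 1 and 0 simultaneously” idea concrete, the short sketch below simulates a single qubit classically and shows why such simulation stops scaling: the amplitude vector a classical machine must track doubles with every added qubit. It is plain linear algebra, not a quantum programming framework:

```python
import numpy as np

# A qubit's state is a vector of two complex amplitudes.
zero = np.array([1, 0], dtype=complex)          # the |0> state

# The Hadamard gate puts it into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposed = H @ zero
print(superposed)              # ~0.707 amplitude on both outcomes
print(np.abs(superposed)**2)   # 50/50 measurement probabilities

# The classical cost of tracking n qubits grows as 2**n amplitudes,
# which is why quantum machines can explore so many states at once.
for n in (10, 30, 50):
    print(f"{n} qubits -> {2**n:,} amplitudes")
```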

Quantum machines are well-suited for solving optimization problems that incorporate large numbers of factors and criteria, giving decision-makers greater visibility into the entire landscape of possible solutions. The most immediate use cases for this type of capability include greater efficiencies in scheduling or supply chains, as well as support for the financial services or manufacturing industries.

Then there’s HPC – massively parallel processing supercomputers. The most mature of the next-gen computers, HPCs help organizations leverage large volumes of data that may be too expensive, time-consuming, or impractical for traditional computers to handle.

HPCs typically rely on different hardware and system designs – where multiple computing processors, each tackling different parts of a problem, are connected together to operate simultaneously. This enables them to solve more complex problems that involve large amounts of data.
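The same divide-and-combine pattern can be sketched at laptop scale with Python’s standard multiprocessing module; an HPC cluster applies the identical idea across thousands of processors and fast interconnects. The workload here is a trivial stand-in:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker tackles a different slice of the problem."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    # Multiple processors work on different parts of the problem
    # simultaneously, then the partial results are combined.
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```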

HPC is already having an impact for federal agencies. The Energy Department’s National Renewable Energy Laboratory is developing its Kestrel supercomputer to answer key questions needed to advance the adoption of cleaner energy sources. And three federal departments – Health and Human Services, Veterans Affairs, and Energy – have jointly leveraged HPC to accelerate COVID-19 research.

Waiting in the wings is “biocomputing,” which relies on natural biological processes to store data, solve problems, or model complex systems in fundamentally different ways. It could have implications especially for data storage: One estimate predicts DNA could store an exabyte of data in just one cubic centimeter of space, with great reliability.

A related capability, “bio-inspired computing” draws inspiration from biological processes to address challenges in areas such as chip architectures and learning algorithms. Pilots have shown this emergent field can deliver benefits like greater power efficiency, speed, and accuracy in solving more complex problems.

These examples help to demonstrate the potential for advanced computing to enhance the federal mission. In fact, 68 percent of U.S. federal executives say quantum computing will have a breakthrough or transformational positive impact on their organizations in the future, while 55 percent say the same for high-performance computing.

Forging Tomorrow’s Agencies

Next-generation computing will have ripple effects on the federal government, whether agencies act or not. The computers that will create and fuel the next generation of government and industry are already being built, and agencies need to be part of this wave or risk being swept away by it.

One of the most well-known effects is quantum computing’s forecasted impact on cybersecurity. The clock is currently counting down to Q-day, or the day when everything that runs on computer systems – our financial accounts, government secrets, power grids, transportation systems, and more – may suddenly become susceptible to quantum-powered cyberattacks.

Q-day is an event with incredibly serious implications for national security and the day-to-day operations of our digitally connected society. Comprehensive new approaches to cybersecurity – such as crypto-agility – will be needed to prepare agencies’ security architectures for this day.

For decades, computers that could efficiently solve the world’s grand challenges have been nothing more than theoretical concepts. But enterprises can’t afford to think about them in the abstract any longer. They are rapidly improving, and their impact on our most fundamental problems and parameters may be the biggest opportunity in generations.

The agencies that start anticipating a future with these machines will have the best shot at taking full advantage of the opportunities available, while preparing for the risks.

Learn more about how agencies can capitalize on next-generation computing in Trend 4 of the Accenture Federal Technology Vision 2022: Computing the Impossible.

Authors:

  • Chris Copeland: Managing Director – Chief Technology Officer
  • Chris Hagner: Managing Director – Technical Innovation and Engineering Lead, National Security Portfolio
  • Justin Shirk: Managing Director – Cloud GTM Lead, National Security Portfolio
  • Mimi Whitehouse: Emerging Technology Senior Manager
  • Garland Garris: Post-Quantum Cryptography Lead
  • Mary Lou Hall: Chief Data Scientist, Defense Portfolio

Agencies Must Take an Authentic Approach to Synthetic Data

The Accenture Federal Technology Vision 2022 highlights four technology trends that will have significant impact on how government operates in the near future. Today we look at Trend #3, The Unreal: Making Synthetic Authentic.

Artificial intelligence (AI) is one of the most strategic technologies impacting all parts of government. From protecting our nation to serving its citizens, AI has proven itself mission critical. However, at its core, there is a growing paradox.

Synthetic data is increasingly being used to fill some AI methods’ need for large amounts of data. Gartner predicts that 60 percent of the data used for AI development and analytics projects will be synthetically generated by 2024. Synthetic data is data that, while manufactured, mimics features of real-world data.

At the same time, the growing use of synthetic data presents challenges. Bad actors are using these same technologies to create deepfakes and disinformation that undermines trust. For example, social media was weaponized using a deepfake in the early days of the Russian-Ukrainian War in an unsuccessful effort to sow confusion.

In our latest research, we found that by judging data based on its authenticity – instead of its “realness” – we can begin to put in place safeguards needed to use synthetic data confidently.

Where Synthetic Data is Making a Difference Today

Government already is leveraging synthetic data to create meaningful outcomes.

During the height of the COVID crisis, for example, researchers needed extensive data about how the virus affected the human body and public health. Much of this data was being collected in patients’ electronic medical records, but researchers typically face barriers in obtaining such data due to privacy concerns.

Synthetic data informed by – though not directly derived from – actual patient data enabled a wide array of COVID research. For example, the National Institutes of Health (NIH) in 2021 partnered with the California-based startup Syntegra to generate and validate a nonidentifiable replica of the NIH’s extensive database of COVID-19 patient records, called the National COVID Cohort Collaborative (N3C) Data Enclave. Today, N3C includes records for more than 5 million COVID-positive individuals. The synthetic data set precisely duplicates the original data set’s statistical properties, but with no links to the original information, so it can be shared and used by researchers around the world trying to develop insights, treatments, and vaccines.

The U.S. Census Bureau has leveraged synthetic data as well. Its Survey of Income and Program Participation (SIPP) gives insight into national income distributions, the impacts of government assistance programs, and the complex relationships between tax policy and economic activity. But that data is highly detailed and could be used to identify specific individuals.

To make the data safe for public use, while also retaining its research value, the Census Bureau created synthetic data from the SIPP data sets.
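At toy scale, the core recipe – fit a statistical model to the sensitive records, then sample synthetic rows that match the statistics but describe no real person – looks like the sketch below. The variables and numbers are invented, and production efforts such as the Census Bureau’s rely on far more sophisticated models and formal disclosure-avoidance controls:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in for sensitive records: (income, benefit_amount) pairs.
real = rng.multivariate_normal(
    mean=[52_000, 4_200],
    cov=[[9e7, 2e6], [2e6, 4e5]],
    size=1_000,
)

# Fit a simple statistical model to the real data...
mean, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# ...then sample synthetic rows that preserve those statistics but
# correspond to no actual individual in the source data.
synthetic = rng.multivariate_normal(mean, cov, size=1_000)

print("real means:     ", real.mean(axis=0).round(0))
print("synthetic means:", synthetic.mean(axis=0).round(0))
```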

A Framework for Synthetic Data

To create a framework for when using synthetic data is appropriate, agencies can start by considering potential use cases, to see which ones align with their mission.

For example, a healthcare organization or financial institution might be particularly interested in leveraging synthetic data to protect Personally Identifiable Information.

Synthetic data could also be used to understand rare, or “edge,” events, like training a self-driving car to respond to infrequent occurrences like when debris falls on a highway at night. There won’t be much real-world data on something that happens so infrequently, but synthetic data could fill in the gaps.

Synthetic data likewise could be of interest to agencies looking to control for bias in their models. It can be used to improve fairness and remove bias in credit and loan decisions, for example, by generating training data that removes protected variables such as gender and race.

In addition, many agencies can benefit from the reduced cost of synthetic data. Rather than having to collect and/or mine vast troves of real-life information, they could turn to machine-generated data to build models quickly and more cost-effectively.

In the near future, artificial intelligence “factories” could even be used to generate synthetic data. Generative AI refers to the use of AI to create synthetic data rapidly, at great scale, and accurately. It can enable computers to learn patterns from a large amount of real-world data – including text, visual data, and multimedia – and to generate new content that mimics those underlying patterns.

One common approach to generative AI is the generative adversarial network (GAN) – a modeling architecture that pits two neural networks, a generator and a discriminator, against each other. This creates a feedback loop in which the generator constantly learns to produce more realistic data, while the discriminator gets better at differentiating fake data from real data. However, this same technology is also being used to enable deepfakes.
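For readers who want to see that feedback loop in code, here is a deliberately tiny GAN sketch in PyTorch that learns to mimic a one-dimensional Gaussian “real” dataset. The network sizes, learning rates, and target distribution are all illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0       # "real" data: N(3.0, 0.5)
    fake = generator(torch.randn(64, 8))        # generator's current output

    # The discriminator learns to tell fake data from real...
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ...while the generator learns to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # should approach 3.0, 0.5
```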

Principles of Authenticity

As this synthetic realness progresses, conversations about AI that align good and bad with real and fake will shift to focus instead on authenticity. Instead of asking “Is this real?” we’ll begin to evaluate “Is this authentic?” based on four primary tenets:

  • Provenance (what is its history?)
  • Policy (what are its restrictions?)
  • People (who is responsible?)
  • Purpose (what is it trying to do?)

Many already understand the urgency here: 98% of U.S. federal government executives say their organizations are committed to authenticating the origin of their data as it pertains to AI.

With these principles, synthetic realness can push AI to new heights. By solving for issues of data bias and data privacy, it can bring next-level improvements to AI models in terms of both fairness and innovation. And synthetic content will enable customers and employees alike to have more seamless experiences with AI, not only saving valuable time and energy but also enabling novel interactions.

As AI progresses and models improve, enterprises are building the unreal world. But whether we use synthetic data in ways to improve the world or fall victim to malicious actors is yet to be determined. Most likely, we will land somewhere in the expansive in-between, and that’s why elevating authenticity within your organization is so important. Authenticity is the compass and the framework that will guide your agency to use AI in a genuine way – across mission sectors, use cases, and time – by considering provenance, policy, people, and purpose.

Learn more about synthetic data and how federal agencies can use it successfully and authentically in Trend 3 of the Accenture Federal Technology Vision 2022: The Unreal.

Authors:

  • Nilanjan Sengupta: Managing Director – Applied Intelligence Chief Technology Officer
  • Marc Bosch Ruiz, Ph.D.: Managing Director – Computer Vision Lead
  • Viveca Pavon-Harr, Ph.D.: Applied Intelligence Discovery Lab Director
  • David Lindenbaum: Director of Machine Learning
  • Shauna Revay, Ph.D.: Machine Learning Center of Excellence Lead
  • Jennifer Sample, Ph.D.: Applied Intelligence Growth and Strategy Lead

Biometrics and Privacy: Finding the Perfect Middle Ground

By Bob Eckel, CEO, Aware

Confirming the identity of the world’s most wanted man leaves no margin for error. In fact, when Osama bin Laden was killed in 2011, he was identified through facial recognition, later confirmed by DNA analysis. Few would argue this was not a commendable use of biometrics.

Since then, biometric technology has been used in a variety of laudable initiatives designed to keep our country and citizens safe. In the past several years, U.S. Customs and Border Protection (CBP) has had tremendous success with its facial comparison system detecting criminals, terrorists and impostors trying to enter the country by using another person’s identification and travel documents. Most recently, DNA tests have helped determine if children arriving at the southern border do in fact belong to the accompanying adults or are being used as pawns.

Privacy concerns surrounding biometrics may be perceived as a gray area, but we believe the resolution lies in properly designing a system to eliminate the need for compromise. Biometrics are simply too valuable, and yet there are proven ways they can be implemented ethically, in a manner that builds the public’s trust. For instance:

Clear Opt-In/Opt-Out Procedures: Currently, the CBP is using biometrics to scan travelers at 26 seaports and 159 land ports and airports across the country. Complete privacy notices are prominently visible at locations using facial recognition, along with easy-to-understand instructions on how American citizen travelers can simply opt out of the screening. The facial recognition verification process takes less than two seconds for arrivals, and this convenience factor is a prime reason the vast majority of people opt into the system.

The CBP is often highlighted as an example among operators in terms of how to implement biometrics correctly, and this has much to do with their very clear opt-in and opt-out procedures. Everyone is fully informed of their options and as a result, people feel a sense of control.

Privacy by Design – Proper Data Storage, Processing and Protection: People often worry that the collection of biometric data in one central database makes us vulnerable to “the mother of all data breaches.” In reality, there are several ways to avoid this. First, an organization deploying biometrics may choose to delete data, such as facial images, within milliseconds after it is captured, used, and no longer needed. In addition, organizations can make sure this data is never shared with third parties or industry partners.

There are other techniques as well, such as the “cancellable biometric,” where a distorted biometric image derived from the original is used for authentication. For example, instead of enrolling with your true fingerprint (or other biometric), the print is intentionally distorted in a repeatable manner, and this new print is used. If, for some reason, your fingerprint is “stolen,” an essentially “new” fingerprint can be issued by simply changing the parameters of the distortion process.
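
One way to picture the cancellable-biometric idea is a keyed, repeatable transform over a feature vector. The Python sketch below is a toy illustration under assumed inputs – the feature values, key names, and HMAC-based distortion are invented for the example, not a scheme any vendor actually ships:

```python
import hashlib
import hmac

def cancellable_template(features: list[float], user_key: bytes) -> list[float]:
    """Derive a repeatable, revocable template from raw biometric features.

    The same key always produces the same distortion, so matching still
    works in the distorted domain; issuing a new key yields an entirely
    different template. Illustrative only -- not a production scheme.
    """
    distorted = []
    for i, value in enumerate(features):
        # Derive a deterministic per-element offset from the secret key.
        digest = hmac.new(user_key, str(i).encode(), hashlib.sha256).digest()
        offset = int.from_bytes(digest[:4], "big") / 2**32  # in [0, 1)
        distorted.append(value + offset)
    return distorted

raw = [0.12, 0.87, 0.33, 0.54]                 # stand-in fingerprint features
template_v1 = cancellable_template(raw, b"key-issued-2022")

# If the stored template is ever compromised, revoke it by changing the key:
template_v2 = cancellable_template(raw, b"key-issued-2023")
assert template_v1 != template_v2              # a "new" print, same finger
```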

Biometric data can also be stored entirely separately from other personally identifiable information (PII), meaning that even if a hacker gained access to the biometric data, it would hold no value without the accompanying PII. Finally, one of the most groundbreaking new techniques involves breaking biometric templates into anonymized bits and storing this data in different places throughout a network, making it virtually impossible for a hacker to access complete biometric templates.
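
The anonymized-bits approach can likewise be sketched in a few lines. One simple construction – assumed here purely for illustration – is XOR secret sharing, where each stored share looks like random noise and reveals nothing on its own:

```python
import secrets
from functools import reduce

def split_template(template: bytes, n_shares: int = 3) -> list[bytes]:
    """Split a template into shares via XOR secret sharing.

    Each share on its own is indistinguishable from random noise;
    all shares are required to reconstruct the original.
    """
    shares = [secrets.token_bytes(len(template)) for _ in range(n_shares - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, grp)
                 for grp in zip(template, *shares))
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, grp) for grp in zip(*shares))

template = bytes.fromhex("1a2b3c4d5e6f")       # stand-in biometric template
shares = split_template(template)              # store each share separately
assert reconstruct(shares) == template
```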

Eliminating the Potential for Bias: In the area of facial recognition, research has shown that some biometric algorithms are less accurate at matching or distinguishing the facial morphologies of certain minorities – including Asians, Blacks and Native Americans – and that accuracy can also vary by gender.

However, facial recognition has come an extremely long way in recent years, driven by advances in machine learning and the availability of massive amounts of data for algorithm training. An algorithm’s accuracy is heavily dependent on the data it’s fed, and today’s leading ones are being trained on more diverse datasets than ever before. According to the most recent evaluation, the top 150 algorithms are over 99 percent accurate across a variety of demographics. Even at these performance levels, we always recommend human involvement in any final decisions made in areas like crime investigation or border security, since in our view people and technology working together represent the strongest combination.

In closing, consider this law enforcement example. In crime investigations, eyewitness misidentifications are the leading cause of wrongful convictions, which are often resolved through DNA exoneration. It’s clear that we can’t afford to do away with the most accurate weapon in our arsenal – the benefits are far too vast. Rather, the key is leveraging the unmatched power of biometrics with the right privacy safeguards indelibly in place.

Two-Way Street: Why Officials and Constituents Are Equally Responsible for Securing the Midterms

By Melissa Trace, VP, Global Government Solutions at Forescout

As we approach the upcoming midterm elections, U.S. officials are on high alert for bad actors looking to target election networks and devices. Both state and non-state threat actors view our nation’s democratic processes as a threat to their interests and see disrupting the upcoming election as a means of advancing their own agendas.

Made up of a diverse set of networks and infrastructure controls, election systems are often older, remote, or unpatched – making them attractive targets for adversaries. Additionally, while many larger communities can invest in election security, smaller localities are often budget-restricted, leaving them vulnerable to attacks.

To combat these potential system vulnerabilities, officials at the Cybersecurity and Infrastructure Security Agency (CISA) have seen success in deterring threats with programs such as the Cybersecurity Toolkit and Shields Up, as well as guided exercises for election officials and public-private partnerships. These programs provide comprehensive guidance that helps officials and private organizations fill gaps in government policies with best practices from the private sector.

While these practices and programs help election officials handle potential threats, there are still additional steps both officials and constituents can take immediately to help ensure a free and fair U.S. election this fall.

To make the best use of the CISA Cybersecurity Toolkit, election officials must ensure they are employing basic cybersecurity hygiene practices:

  1. Gain a full understanding of the network environment – in order to quickly identify vulnerable devices, officials must have both extensive visibility and understanding of what devices are connected and what operating systems they are running;
  2. Take inventory of existing security processes – this will help ensure that they are updated and functioning properly; and
  3. Identify non-compliant devices – once these devices are identified, they should be immediately quarantined and investigated.

These three steps should be repeated continuously, so the network is assessed in real time and officials receive the most accurate and comprehensive risk assessment possible. Once these basic hygiene steps are part of officials’ cybersecurity routines, they can turn their attention to CISA’s Cybersecurity Toolkit and to building on its guidance.
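
As a rough illustration of how those three steps can run as a continuous loop, consider the Python sketch below; the discovery, compliance, and quarantine functions are hypothetical stand-ins for whatever network access control or asset-management tooling an election office actually uses:

```python
import time

# Hypothetical stand-ins for a real NAC or asset-management integration.
def discover_devices() -> list[dict]:
    """Step 1: enumerate connected devices and their operating systems."""
    return [{"id": "epollbook-07", "os": "Windows 10", "patched": True},
            {"id": "printer-02", "os": "Windows 7", "patched": False}]

def is_compliant(device: dict) -> bool:
    """Step 2: check each device against the existing security baseline."""
    return device["patched"] and device["os"] != "Windows 7"

def quarantine(device: dict) -> None:
    """Step 3: isolate a non-compliant device pending investigation."""
    print(f"Quarantining {device['id']} for investigation")

# Repeat continuously so the network is assessed in (near) real time.
while True:
    for device in discover_devices():
        if not is_compliant(device):
            quarantine(device)
    time.sleep(60)
```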

Rather than doing just a weekly scan of the network, officials can take this recommendation to the next level by implementing real-time monitoring of their network and assets. Remote work has changed how election information is controlled, so the ability to immediately identify vulnerable devices and isolate them until they are patched is vital to securing that data. Like workers in the many other industries that now offer work-from-home policies, election workers operating remotely are a prime target for hackers – and a compromise can feed misinformation campaigns that deliver incorrect information about voting locations, candidate policy positions, and more. Configuration management databases (CMDBs) should also be updated in real time, and continuous monitoring can help ensure that libraries and patches stay current. Given the shortage of election workers, automating these processes helps ensure they are followed without the need for a human to initiate each update.

The responsibility of securing the upcoming election does not fall on election officials alone, but also on constituents. The average person most likely doesn’t view their home network as vulnerable, let alone as a hunting ground for bad actors seeking access to election networks – yet each household is a gateway to personal and community data. By following preventative practices, constituents can help ensure they do not become a vector for an attack on the upcoming elections:

  1. Change the default passwords on your home network and devices;
  2. Deploy multi-factor authentication (MFA) whenever offered; and
  3. Inspect the home network, looking for unknown connected devices or users.

Everyone from election officials to volunteers to constituents must do their part to secure election networks and data ahead of the midterm elections. By deploying basic cybersecurity hygiene practices to all networks, both in the home and in election devices, and utilizing the comprehensive tools already available, everyone can make the 2022 midterm elections the most secure yet.

The “Programmable World” Will Bring the Best of the Virtual World Into the Physical One

The Accenture Federal Technology Vision highlights four technology trends that will have significant impact on how government operates in the near future. Today we look at Trend #2, Programmable World: Our Planet, Personalized.

What is a “programmable world?” Consider what’s going on at Tyndall Air Force Base, near Panama City, Fla. In line with the base’s ambition to be the “Installation of the Future,” the Air Force has created a digital twin of Tyndall. In March 2022, the base unveiled its new Hololab, the portal through which users access Tyndall’s digital twin, run “what if” scenarios, and explore how new designs might look.

Using the digital twin, base planners and engineers can locate and design new flight line facilities, or better understand security vulnerabilities and conduct resilience planning. For example, planners and engineers can use the digital twin to perform storm surge modeling and simulate the effects of a big storm on the base’s critical infrastructure or conduct a range of active-shooter scenarios to optimize preparation and response planning.

This impressive technology is just one example of our increasingly programmable world – a world in which the control, customization, and automation of software is being embedded throughout our physical environments. And it will give Federal agencies greater intelligence and adaptability to tackle complex issues, including climate change, public safety, geopolitical tensions, and population health.

By overlaying integrated digital capabilities across our physical surroundings, agencies can increase the efficiency and effectiveness of Federal operations. Many see promise here: 94 percent of Federal leaders believe that programming the physical environment will be a strategic competency in the future.

And most take this even further, seeing this as not just an opportunity but a necessity. Almost all (98 percent) Federal leaders believe that leading organizations will push the boundaries of the virtual world to make it more real. That means there will be a greater need for persistent and seamless navigation between the digital and physical worlds.

We’ve been building toward the programmable world for years, with proliferating IoT networks. The global 5G rollout adds more fuel to the fire, setting the stage for more adoption of low-power, low-latency connected devices. And researchers across enterprise and academia alike are working on transformative technologies, like augmented reality glasses, new methods of manufacturing, and new kinds of smart materials. These trends – alongside advances in natural language processing, computer vision, and edge computing – will help embed digital interactions as an ambient and persistent layer across our physical environments.

Practical Outcomes

What will a programmable world mean to Federal leaders in practical terms? Consider three key use cases: smart workers, smart environments, and smart materials.

  • Smart workers — Federal workers perform all kinds of highly specialized tasks, from surgery and fixing complex machinery to securing our borders and flying jet airplanes. Technologies like augmented reality can give them superpowers to create a more agile, insight-driven workforce. In some cases, AR may be used to guide remote workers. In other instances, access to real-time information is the key to higher performance. Already, AR is being used to train both fighter pilots and case workers to better address unique situations.
  • Smart environments — A fully programmable world offers the ability to create life-like digital models of facilities or equipment. These models can then be used to explore options and run planning scenarios across a complex environment. Jet engine manufacturer Rolls-Royce, for example, is collecting real-time engine data from its airline customers to model performance in the cloud using digital twins. Using the digital twins, the company hopes to reduce unnecessary maintenance and ground time, as well as develop more sustainable flying and maintenance practices that could lower carbon emissions.
  • Smart materials — Even the materials we use to manufacture objects can be programmed to respond to interactions, and to provide deeper and more immediate insight. A Veterans Affairs Department research team at the Advanced Platform Technology Center in Cleveland, for example, developed a smart bandage that applies electrical stimulation to treat chronic wounds, also known as pressure injuries, that would otherwise struggle to heal on their own. It also records temperature readings and impedance across the wound, which inform the clinician how well the wound is healing.

Next Steps for Federal Agencies

For Federal leaders, taking advantage of the programmable world will require exploration, experimentation, and development. Agencies can start by developing a deeper understanding of the three layers that comprise the programmable world – the connected, the experiential, and the material.

Many are already investing in and deploying the foundational, connected layer; consider the push toward 5G, which is poised to be game-changing in terms of its speed and low latency.

Then, the experiential layer is about creating natural computer interfaces linking the physical and digital worlds. In the absence of keyboards and microphones, a focus on human-centered design can help create these connections by exploring how users approach and learn how to interact with new experiences, such as through the trial-and-error process of using gestures to direct complex systems.

The final layer requires understanding of how new generations of manufacturing and materials will bring greater programmability into our physical environments. Agencies can look to partner with startups and universities to stay at the forefront of real-world technology innovation in the material realm.

Cybersecurity must stay a priority at every layer. The programmable world holds tremendous promise, but it also vastly expands the attack surface available to cyber threats, putting critical networks and infrastructures further at risk. Holistic, smartly architected security frameworks, such as zero trust, will be critical to protecting these hyper-connected environments.

The increasing programmability of the material world promises to reshape Federal operations. We’re about to live in environments that can physically transform on command, that can be customized and controlled to an unprecedented degree, and that can change faster and more often than we have ever seen before. With these environments, a new arena for government innovation will be born.

Read Trend 2 of the Accenture Federal Technology Vision 2022 to explore how agencies can further prepare for the programmable world.

Authors:

  • Bill Marion – Managing Director – Defense Portfolio Growth & Strategy and Air & Space Force Lead
  • Jessica Bannasch – Digital Platforms Technology, Solutioning & Delivery Excellence Lead
  • Rick Driggers – Critical Infrastructure Cybersecurity Lead
  • Jessica Powell – Managing Director
  • Scott Van Velsor – Managing Director – DevSecOps Practice Lead

Cyberattacks are a Common Occurrence and the Costs are Higher Than Ever

By: Terry Halvorsen, General Manager, U.S. Federal Market, IBM

The pandemic accelerated digital transformation, amplifying both opportunities and risks. Remote workers, new devices, partners, and integrations open organizations in ways that can radically increase their threat surface – making it less a question of if a cyberattack will happen and more a question of when. The well-being of organizations today therefore depends not only on protecting against and preventing cyber incidents, but also on rapidly detecting, responding to, and recovering from them – and the costs prove it.

IBM recently released its annual Cost of a Data Breach Report, which found that the average financial cost of a data breach reached an all-time high of $4.35 million in 2022. One of the key revelations in this year’s report is that the financial impact of breaches is starting to extend well beyond the individual organization itself. We’re now beginning to see a hidden “cyber tax” paid by consumers because of the growing number of breaches. In fact, IBM found that for 60 percent of organizations, breaches led to price increases passed on to consumers. A prime example: in the wake of the 2021 Colonial Pipeline ransomware attack, gas prices temporarily rose 10 percent, and some of that increase can be attributed to the attack.

While certain factors can exacerbate breach costs, such as focusing on responding to data breaches versus preventing them, there are other factors, including a zero trust strategy, that can help mitigate the financial and mission impacts of a breach.

  • Slow down bad actors with zero trust. The study found that organizations that adopt zero trust strategies pay on average $1 million less in breach costs than those that don’t. Instead of trusting that security defenses will succeed, zero trust assumes that an attacker will eventually get through. To put a twist on an old Washingtonian phrase: don’t trust, but still verify. Taking this approach helps organizations buy more time and slow down bad actors. It eliminates the element of surprise and moves away from patrolling a perimeter 24×7 – a strategy that has already crumbled at the feet of today’s digital revolution. In the year since the White House issued its cybersecurity executive order outlining a mandatory zero trust security strategy for the federal government, agencies have been making progress toward their zero trust security goals. However, there’s still more work to be done, specifically related to implementation. Assessing your current environment and properly defining what you’re trying to achieve will make for a higher probability of success. Zero trust is a journey, and patience is key.
  • Reduce the data breach lifecycle with security AI and automation. A zero trust approach helps slow down bad actors, which ultimately helps reduce costs. Security AI and automation go hand in hand with it, shortening the total breach lifecycle and, in turn, the cost of a data breach. With 62 percent of organizations stating that they are not sufficiently staffed to meet their security needs, using AI to automate repetitive tasks can help address today’s security skills shortage while also improving response times and security outcomes. This year’s report found that organizations with fully deployed security AI and automation pay an average of $3.05 million less in breach costs than those without – the biggest cost saver observed in the study. Organizations with fully deployed AI and automation also took an average of 74 fewer days to identify and contain a breach (known as the breach lifecycle) than those with no security AI or automation deployed.
  • Enhance preparedness by testing, creating and evolving incident response playbooks. Zero trust makes it harder for attackers to gain access, but it doesn’t make it impossible. Incident response planning and capabilities can supplement it by helping organizations respond quickly and effectively to security incidents, ultimately saving costs associated with data breaches. In fact, the study found that data breaches cost an average of $2.66 million more for organizations that don’t have an incident response team or test their incident response plan compared to those that have both ($3.26 million vs. $5.92 million). That represents a 58 percent cost savings, compared to 2020, when the cost difference was only $1.77 million.

With the cost of a data breach higher than ever, it’s clear that the pressure on chief information security officers (CISOs) is not likely to let up anytime soon. The right strategies and technologies can help organizations across industry and government get their cybersecurity houses in order and may hold the key to reducing breach costs.

Increasing Equity Through Data and Customer Experience

By: Santiago Milian, Principal, Booz Allen; and Jenna Petersen, Senior Lead Technologist, Booz Allen

From housing assistance to community health to closing the digital divide, equity is on the agenda of multiple Federal agencies. But making Federal services more equitable demands a close look at how constituents experience them.

Do services address the full breadth of a constituent’s challenges and needs? Are people able to get what they need with minimal delays and frustration? And are programs being tracked and held accountable for results?

Progress – and Gaps – in Customer Experience

Federal agencies have been improving in these areas. For the second year in a row – and even during a global pandemic and political turbulence – Federal agencies have improved their collective scores for customer experience (CX), according to a 2021 Forrester report. But the average Federal CX score – 62.6 out of 100 points – still lags behind the private sector average by a full 10.7 points.

These metrics matter. When private sector companies fall behind, they know immediately through their revenues and stock prices. Public sector organizations must also track progress to gain and retain trust among the public, maintain accountability, and shape future funding decisions related to CX and equity.

This is where data collection and analysis come in, increasing program accountability and transparency in order to improve the public’s trust and confidence. But tracking CX progress is just one way in which data can lead to greater equity in Federal services.

Data Ensures Services Reflect the Full Complexity of Needs

Many factors affect how a person experiences inequity. Consider, for example, the homebuying experience. What’s holding underserved homebuyers back from their goals, and how can a Federal program help?

Potential homebuyers may lack the kind of support network that helps establish credit history or overcome weaker credit ratings. They may experience racism or unconscious bias from lenders, real estate professionals, and appraisers, or may simply fall victim to the systemic biases built into the credit system itself. Based on their personal experiences or those of their close personal networks, they may not see homeownership as an achievable goal or a means toward financial security, or they may not trust in the underlying systems that support the homeownership ecosystem.

Data, both qualitative and quantitative, helps Federal agencies understand how inequities persist, how they affect people’s lives, and how they can be overcome. Among these complexities, Black women may face different challenges than Black men due to the intersectionality of race and gender. People with multiple underserved identities often face multiple burdens. Analyzing intersectional data ensures that constituents with multiple identities and challenges have their unique needs met, too.

In short, data helps Federal agencies center underserved constituents in their efforts—one of the core tenets of equity.

Lowering the ‘Time Tax’ With Data and Trust

Equitable services are easy for people to access. To find out how a program rates in this area, consider the following questions. Are eligibility criteria transparent? Is maintaining eligibility burdensome? Do those in most need know what’s available?

Difficulty navigating the issues above results in a “time tax” of delays and frustrations. This time tax is often highest for those with the fewest resources.

Federal agencies can use data to reduce the time tax by creating a comprehensive picture of constituents’ challenges and needs, the services they already participate in, the services they’ve applied for, and the services they are eligible for. With all of this information in one place – such as a cross-government CRM – the government can proactively serve customers with all the benefits for which they are eligible.

Meanwhile, staff can use pre-existing data and cross-agency collaboration to process applications and approvals more swiftly. Take, for example, the rapid mailing of millions of stimulus checks during the COVID-19 pandemic. Thanks to IRS data, the government already had income information on file – saving administrative staff and recipients time and hassle.

Making this work requires trust. Agencies must engage more directly and meaningfully with traditionally underserved populations. They will need to understand their perspectives, challenges, and drivers for their behavior, and they will need to rebuild trust such that the traditionally underserved are willing to engage once again. Finally, agencies must be transparent about data collection and use: allaying fears of discrimination and communicating the benefits of data sharing.

With these tactics and caveats in mind, Federal agencies can turn knowledge into power, to deliver the American people more equitable experiences and more responsive services.

The AI Edge: Why Edge Computing and AI Strategies Must Be Complementary

By: John Dvorak, Chief Architect, North America Public Sector, Red Hat

If you’re like many decision-makers in Federal agencies, you’ve begun to explore edge computing use cases. There are many reasons to push compute capability closer to the source of data and closer to end users.

At the same time, you may be exploring or implementing artificial intelligence (AI) or machine learning (ML). You’ve recognized the promise of automating discovery and gaining data-driven insights while bypassing the constraints of traditional datacenters.

But if you aren’t actively combining your edge and AI strategies, you’re missing out on transformative possibilities.

Edging Into AI at Federal Agencies

There are clear indicators that edge and data analytics are converging. For starters, IDC projects that data creation at the edge will grow at a compound annual rate of 33 percent through 2025, by which point it will account for more than one-fifth of all data. By 2023, data analytics professionals will focus more than 50 percent of their efforts on data created and analyzed at the edge, Gartner predicts.

It’s no wonder that 9 in 10 Federal technology leaders say edge solutions are very or extremely important to meeting their agency’s mission, according to Accenture. And 78 percent of those decision-makers expect edge to have the greatest effect on AI and ML.

Traditionally, agencies have needed to transmit remote data to a datacenter or commercial cloud to perform analytics and extract value. This is increasingly challenging in edge environments because of growing data volumes, limited or nonexistent network access, and rising demand for real-time decision-making.

But today, the availability of enhanced small-footprint chipsets, high-density compute and storage, and mesh-network technologies is laying the groundwork for agencies to deploy AI workloads closer to the source of data production.

Getting Started With Edge AI

To enable edge AI use cases, identify where near-real-time data decisions can significantly enhance user experiences and achieve mission objectives. More and more, we’re seeing edge use cases focused on next-generation flyaway kits to support law enforcement, cybersecurity and health investigations. Where investigators once collected data for later processing, newer deployment kits include the advanced tools to process and explore data onsite.

Next, identify where you’re transmitting large volumes of edge data. If you can process data at the remote location, then you only need to transmit the results. By moving only a small fraction of the data, you can free up bandwidth, reduce costs and make decisions faster.
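
As a toy illustration of that pattern, the Python sketch below analyzes readings where they are produced and transmits only the findings; the sensor values and transmit function are placeholders:

```python
def transmit(payload: dict) -> None:
    """Placeholder for sending data over a constrained uplink."""
    print(f"sent {payload}")

# Raw sensor readings collected at the remote site (placeholder values)...
readings = [0.97, 1.02, 4.8, 1.01, 0.99, 5.2, 1.00]

# ...are analyzed locally, and only the findings leave the site.
threshold = 3.0
anomalies = [r for r in readings if r > threshold]
transmit({
    "count": len(readings),
    "mean": round(sum(readings) / len(readings), 3),
    "anomalies": anomalies,   # a small fraction of the raw data
})
```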

Take advantage of loosely coupled edge components to achieve necessary compute power. A single sensor can’t perform robust processing. But high-speed mesh networks allow you to link nodes, with some handling data collection, others processing, and so on. You can even retrain ML models at the edge to ensure continued prediction accuracy.

Infrastructure as Code for Far-Flung AI

A best practice for AI at the edge is infrastructure as code (IaC). IaC allows you to manage network and security configurations through configuration files rather than through physical hardware. With IaC, configuration files include infrastructure specifications, making it easier to change and distribute configurations, and ensuring that your environment is provisioned consistently.
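
The core IaC idea – declare the desired state in a version-controlled file, and let software reconcile reality against it – can be sketched in a few lines of Python. The configuration keys and the apply step below are invented for illustration; real deployments would use an established IaC tool:

```python
# Desired state, as it might be declared in a version-controlled file.
desired = {
    "firewall": {"default": "deny", "allow": ["10.0.0.0/24:443"]},
    "ntp_server": "time.example.gov",
}

# Current state, as reported by an edge node (normally queried remotely).
current = {
    "firewall": {"default": "allow", "allow": []},
    "ntp_server": "time.example.gov",
}

def reconcile(desired: dict, current: dict) -> None:
    """Apply only the settings that have drifted from the declared state."""
    for key, want in desired.items():
        if current.get(key) != want:
            print(f"drift in {key!r}: applying {want!r}")
            current[key] = want  # stand-in for a real configuration push

reconcile(desired, current)  # first run corrects the firewall drift
reconcile(desired, current)  # second run is a no-op: idempotent by design
```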

Also consider using microservices, running them within containers, and automating the iterative deployment of ML models into production at the edge with DevSecOps capabilities such as CI/CD pipelines and GitOps. Containers provide the flexibility to write code once and run it anywhere.

You should seek to use consistent technologies and tools at the edge and core. That way, you don’t need specialized expertise, you avoid one-off problems, and you can scale more easily.

Edge AI in the Real World (and Beyond)

Agencies from military to law enforcement to those managing critical infrastructure are performing AI at the edge. One exciting and extreme-edge example is space and, specifically, the International Space Station (ISS).

The ISS includes an onsite laboratory for performing research and running experiments. In one example, scientists are focused on the DNA genome sequencing of microbes found on the ISS. Genome sequencing produces tremendous amounts of data, but scientists need to analyze only a portion of it.

In the past, the ISS transmitted all data to ground stations for centralized processing – typically many terabytes of data with each sequence. With traditional transmission rates, the data could take weeks to reach earth-based scientists. But using the power of containers and AI, research is now completed directly on the ISS, with only the results transmitted to the ground. Analysis can be performed the same day.

The system is simple to manage in an environment where space and power are limited. Software updates are pushed to the edge as necessary, and ML model training takes place onsite. And the system is flexible enough to handle other types of ML-based analysis in the future.

Combining AI and edge can enable your agency to perform analytics in any footprint and any location. With a common framework from core to edge, you can extend and scale AI in remote locations. By placing analytics close to where data is generated and users interact, you can make faster decisions, deliver services more rapidly and extend your mission wherever it needs to go.

How Metaverses and Web3 can Reshape Government

The Accenture Federal Technology Vision 2022 analyzes four emerging technology trends that will have significant impact on how government operates in the near future. Today we look at Trend #1, WebMe: Putting the Me in Metaverse.

In the wake of the pandemic, people’s digital and “real world” lives are melding. Accenture research found that 70% of consumers globally report spending substantially more time online, and 38% agree that their digital life is increasingly becoming their “real life.”

Alongside this, two distinct technology shifts are taking place: the rise of metaverses, and the arrival of Web3. Together they are driving a shift towards a more decentralized and human-centric internet. Federal leaders will need to prepare for this profound shift – and some agencies are already starting to dip their toes into this new future.

For example, the U.S. Army is building a Synthetic Training Environment (STE), which aims to revolutionize the Army’s entire training paradigm by allowing units and soldiers to conduct realistic, multi-echelon, collective training anywhere in the world. Currently scheduled to be fully operational in 2023, the STE will combine live, virtual, constructive, and gaming training environments that simulate real-world terrain in its full complexity.

Together, the metaverse and Web3 create tremendous opportunities for federal agencies, most notably around modeling complex interactions in real-time, whether they be on the battlefield, in major cities, in warehouses and other large facilities, or on public lands. At the same time, they enable more powerful collaboration, whether that be a training scenario or engaging with an increasingly digitally-minded audience. Together, these two developments are building a more immersive, more impactful digital world for government to explore and use to further its missions.

Breaking Down the Metaverse and Web3

Because these concepts are still emerging, it’s important to define our terms. We see the metaverse as enabling users to move beyond browsing, toward inhabiting and participating in a persistent, shared digital experience. Web3 refers to the use of technologies like blockchain and tokenization to build a more distributed data layer into the internet.

In practical terms, we can think of metaverses as 3D digital environments where people can explore, play, socialize, experience, train, collaborate, create, and interact with others. A range of emerging creation tools, platforms, and technologies are taking this beyond the realm of video games to provide immersive experiences for consumers shopping, employees training, and more.

Web3 deepens that experience, introducing a data framework that generates the veracity, trust, and consensus that the virtual world has often lacked. By building services and applications atop often permissionless blockchains outfitted with open protocols and open standards, Web3 will allow for more freedom, decentralization, and democracy for individual users, content creators, and projects.

Many in government are already looking ahead toward the rise of the metaverse and Web3. Nearly two-thirds of U.S. federal government executives (64%) say that the metaverse will have a positive impact on their agency, with 25% calling it breakthrough or transformational. Of those anticipating the most significant impact, 94% believe it will happen in the next four years.

While both the metaverse and Web3 present interesting possibilities on their own, federal leaders should be especially attentive to the coming together of these two trends. Specifically, Web3 infuses the context, ownership and interoperability needed to transform the metaverse into a thriving community.

Federal Agencies are Helping Build the Metaverse

While this may all sound a bit fantastical, federal agencies are already exploring the possibilities.

The Veterans Health Administration Innovation Ecosystem Extended Reality Network is exploring new care models using extended reality for veterans suffering from post-traumatic stress disorder (PTSD), anxiety, depression, chronic pain, and other challenges. And the Department of Agriculture’s Forest Service is relying on AI and digital twin simulation technology in the metaverse to better understand wildfires and stop their spread. Lastly, the U.S. Air Force is pondering the creation of a space-themed metaverse and has even trademarked a name for it: SpaceVerse.

Perhaps one of the most compelling cases of a federal agency employing WebMe capabilities to design and create its own virtual environment is occurring at NASA’s Jet Propulsion Laboratory (JPL) near Los Angeles. The project team began scanning workspaces and rooms at JPL and then digitally reconstructing them. JPL employees can then wear Oculus Quest 2 headsets to attend virtual meetings in those scanned locations. The virtual space enables JPL employees to replicate the dynamic communication and collaboration needed for their complex engineering projects – even while working remotely.

Federal agencies can also leverage metaverse and Web3 technologies to optimize warehouse and logistics operations; mitigate vulnerabilities and improve resilience in industrial processes; better manage infrastructure elements; and deliver improved maintenance to weapons systems, equipment, and fleet vehicles — to name but a few of the possibilities.

Federal agencies need to start thinking now about what use cases might benefit from the immersive experience of a metaverse, and the verifiable data framework of Web3. Already, there are some standard metaverse use cases that agencies can leverage without high levels of risk. For instance, immersive technologies for training or productivity have been tested and experimented with for years.

Alongside use cases, they need to think about how they will do this: which human, technical, data, and other resources will they need — and which outside partners are best positioned to assist them — to move forward. Agencies will need to ensure they have both a solid technical foundation, as well as the skills and capabilities within their workforce to act on these opportunities.

Read Trend 1 of the Accenture Federal Technology Vision 2022: WebMe to learn more about the steps federal leaders can take to capitalize on the shift toward Web3 and the metaverse.

Authors:

  • Alejandro Lira Volpi: Managing Director – Accenture Federal Services, Financial Services Strategy Lead
  • EJ Dougherty III: U.S. Federal and Defense Extended Reality Lead, Accenture Federal Services
  • Kyle Michl: Managing Director – Accenture Federal Services, Chief Innovation Officer
  • Christina Bone: Senior Innovation Architect, Accenture Federal Services
  • Dave Dalling: Cybersecurity Chief Technology Officer, Accenture Federal Services
  • Terrianne Lord: Managing Director – Accenture Federal Services, Salesforce Delivery Practice Lead

Four Emerging Technology Trends set to Impact Government Most

By Chris Copeland, Chief Technology Officer, Accenture Federal Services; and Kyle Michl, Chief Innovation Officer, Accenture Federal Services

Both the public and private sector embraced technology to an unprecedented level in their response to the COVID-19 pandemic. This created a period of “compressed transformation” that dramatically disrupted the technology landscape. For example, wholesale adoption of the cloud is now far more common. In the pandemic’s wake, a number of emerging technologies – previously thought to be on the distant horizon – came into clearer focus. Early adopters, including federal agencies, are already deploying them for enterprise impact.

In our latest Accenture Federal Technology Vision, we explore the four most prominent of these technology trends, as they are poised to up-end federal operations over the next three years. We also look at the convergence of these trends into a “Metaverse Continuum,” a spectrum of digitally enhanced worlds and business models rapidly taking shape. And in many cases, we find that the public sector is leading the way.

Through the Metaverse Continuum, the real and the virtual will merge in ways we’ve never seen before. Consider: Military pilots are already enhancing training in the metaverse, connecting live aircraft to a common augmented reality environment, up in the sky, to perform a refueling training exercise with a virtual tanker.

Using augmented reality, caseworkers are developing new interviewing skills and learning to manage stressful situations by immersing themselves in virtual experiences – complete with frantic parents, distracted children, and ringing phones.

The Metaverse Continuum will transform how the federal government operates, and its building blocks are being laid today. Extended reality, artificial intelligence, blockchain, quantum computing, advanced sensors, 5G, the Internet of Things, digital twins – when combined, these technologies create incredible new spaces, rich in breakthrough capabilities. We expect the Metaverse Continuum to eventually play a key role in missions ranging from training and education to logistics and maintenance to citizen service, and more.

And federal agencies are more equipped to adopt and scale these transformative technologies than ever before. Post-pandemic, the federal government now has a deeper understanding of what technology can do to further missions, and how to effectively integrate it.

Yet, navigating the Metaverse Continuum will still require a careful approach. There are opportunities ahead but also significant challenges, particularly in the areas of cybersecurity, privacy, trust, regulation, and more.

In this era, Accenture’s Federal Technology Vision 2022: Government Enters the Metaverse provides a map for assessing and organizing the technologies of the Metaverse Continuum into a meaningful structure for your agency.

The report dives into four trends:

  1. In WebMe, we explore how the internet is being reimagined, specifically through the intersection of metaverses and Web3. Metaverses are 3D environments, imbued with a sense of place and presence, where moving from an enterprise system to a social platform is as simple as walking from the office to the movie theater across the street. Web3 can underpin these environments with a data framework that generates veracity, trust, and consensus — things we’ve long had conventions for in the physical world, but which have often eluded us in the virtual world.
  2. Programmable World tracks the increasingly sophisticated ways in which technology is being threaded through our physical environments. The convergence of 5G, ambient computing, augmented reality, smart materials, and more is paving the way for agencies to reshape how they interact with the physical world, unlocking an unprecedented fidelity of control, automation, and personalization.
  3. The Unreal examines the paradox of synthetic data. Synthetic data – manufactured data meant to mimic real-world qualities – is increasingly needed to train artificial intelligence and power the algorithms embedded in our day-to-day experiences. At the same time, synthetic content can be turned against the public, with the rapid growth of deepfakes and misinformation leading people to increasingly question what’s real and what’s not. In this world, authenticity – not “realness” – must be the north star for federal agencies.
  4. Finally, in Computing the Impossible, we look at the rise of quantum, biologically-inspired, and high-performance computing. Next-gen computing promises to reset the boundaries of what is computationally possible and will empower federal agencies to tackle problems once considered unsolvable.

The Federal Technology Vision 2022 incorporates insights from a global survey of 4,660 executives spanning 23 industries, along with a survey of 24,000 consumers worldwide. It applies these findings to the unique challenges and demands facing the U.S. federal government, featuring in-depth analysis from more than 20 Accenture Federal Services experts and the results of a survey of 200 U.S. federal government executives.

Learn more about how the Metaverse Continuum can transform your agency in the report.

5G Enables AI at the Edge

By Michael Zurat, Senior Solutions Architect, GDIT

5G, Artificial Intelligence and Edge Computing have been areas of focus in the IT space for some time. What’s been given less attention, until recently, is how these three powerful technologies interact with one another and how, as one example, 5G is precisely what enables AI capabilities at the edge.

The proliferation of data-creating devices – whether smartphones, Internet of Things (IoT) devices, or drones – means that the sheer volume of data that exists is too large for humans to examine, and it has been for a while now. AI algorithms enable us to identify patterns and outliers and to either address a problem or pinpoint the things that require human intervention. 5G enables us to deploy those algorithms faster and more securely to devices at the edge, making them more powerful and capable than ever before.

Massive Machine-To-Machine Connectivity

Here’s why: 5G enables massive machine-to-machine connectivity. This allows for connected sensors and devices to “speak” to each other – sometimes without connecting to an enterprise or hub, such as in a mesh network. This allows for devices to manage themselves and react to data inputs from other devices, even without connecting to a cloud or centralized server. By leveraging embedded algorithms and even machine learning at the edge, we’re at the beginning of what will become a smart and independent mesh network of devices.

One example is how smart city traffic lights can automatically adjust to control or regulate the flow of traffic without communicating to a central server. As edge computing like this becomes ubiquitous, it will start to compete with the cloud as a compute and storage commodity. This may provide alternative physical locations for storage and compute for existing cloud vendors, and it could disrupt them altogether with potential new offerings from cellular network operators.
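
A highly simplified Python sketch of that traffic-light scenario: each intersection adjusts its own signal timing from data its neighbors broadcast, with no central server in the loop. The names, queue lengths, and timing rule are invented for illustration:

```python
class Intersection:
    """An edge node that tunes itself from peer broadcasts, not a server."""

    def __init__(self, name: str, base_green_s: float = 30.0):
        self.name = name
        self.base_green_s = base_green_s
        self.peer_queues: dict[str, int] = {}  # cars queued at neighbors

    def receive(self, peer: str, queue_len: int) -> None:
        """Ingest a broadcast from a neighboring intersection."""
        self.peer_queues[peer] = queue_len

    def green_duration(self) -> float:
        """Lengthen the green phase when upstream neighbors back up."""
        inbound = sum(self.peer_queues.values())
        return self.base_green_s + min(inbound * 0.5, 30.0)  # capped at +30s

main_and_5th = Intersection("Main & 5th")
main_and_5th.receive("Main & 4th", queue_len=22)
main_and_5th.receive("Main & 6th", queue_len=3)
print(main_and_5th.green_duration())  # 42.5 seconds instead of 30.0
```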

We’re already seeing the beginning of this from content distribution services. Netflix stores data in massive data centers in Virginia and California, but locally they’re using Akamai relays to cache data at the neighborhood level.

Blurred Lines Demand Advanced Cybersecurity

In this example and hundreds like it, as data moves from place to place with encryption at both ends, 5G and edge are, in effect, blurring the lines and creating a larger, more diverse attack surface in need of advanced cybersecurity. On top of that, more complex system architectures are built on more complex Infrastructure as a Service (IaaS), Software as a Service (SaaS) and Platform as a Service (PaaS) offerings that abstract away the cloud and edge compute and storage locations, along with more complex layers of containerization. No longer will cyber teams be able to think about security in purely local, cloud, or on-prem contexts.

In environments where everything is virtualized, where it’s not exactly clear where the data is at any given moment, or where serverless systems are being leveraged, cybersecurity becomes more important than ever. So, too, does having a secure software supply chain. Look no further than the recent Log4j vulnerability.

As ever-more complex systems enable ever-more complex capabilities, it can be easy to focus on the delivery of functionality without looking back at how and what software is being used and how secure it is. The stakes are even higher for government and Department of Defense customers and their mission partners. As a systems integrator, GDIT has broad and deep capabilities that address these complexities and the stakes – both today and for the future. This includes our 5G Emerge Lab that proves out 5G solutions and demonstrates how we can securely connect and protect 5G-enabled devices, data, and applications across the broader enterprise.

Underpinning Technologies Paved the Way

Paving the way for what’s to come with AI at the edge are a host of underpinning technologies. Containerization, as mentioned above, helps quickly move applications into the cloud. Virtualization abstracts applications from physical hardware. Graphics Processing Units, or GPUs, allow us to accelerate workloads. Gaming engines gave us the blueprint for reusing software components and porting applications more quickly to other platforms.

Today, GDIT is leveraging these advances, as well as the capabilities offered by 5G, to enable AI at the edge across a diverse array of use cases – from training and simulations that leverage virtual reality on edge devices, to medical training with smart mannequins. We are also exploring how to use AI and guided assistance within Denied, Disrupted, Intermittent, and Limited bandwidth (DDIL) environments. Another use case involves deploying smart warehouse technology on edge devices for logistics and supply chain management.

What is clearer than ever is that teams are just scratching the surface of the enormous potential of AI at the edge. 5G, and the continued bandwidth enhancements that will come after it, will expand that potential exponentially. And we stand ready to capitalize on it, exploring the art of the possible, for customers.

Plugging Cyber Holes in Federal Acquisition

By Ken Walker, President & Chief Executive Officer, Owl Cyber Defense

Government agencies are under siege from ransomware and increasingly sophisticated cybersecurity threats, such as the 2020 SolarWinds supply chain attack. To help fight back, lawmakers are introducing measures to broaden defenses through non-traditional approaches. The Supply Chain Security Training Act (SCSTA), a bill recently passed in the U.S. Senate, would extend cybersecurity responsibilities to federal employees who manage supply chain risk, such as program managers and procurement professionals.

This is a much-needed step. SCSTA directs the General Services Administration (GSA) to develop a training program for federal employees that will help them identify and reduce agencies’ supply chain risks. Extending security responsibilities in this way is practical and necessary to widen the resource pool for tackling cyber risks, particularly given the shortage of people with hard technical skills who are battling supply chain threats. At this point, everyone needs to stay vigilant and not expect security to be someone else’s responsibility.

While SCSTA would obligate another element among current job training requirements, it is vital – even for non-technical employees – to understand the security angles of technologies they are acquiring. Focusing on specific vendor practices for both physical and digital supply chains will drive a thorough assessment of cybersecurity across a vendor’s entire process, sub-supplier requirements, and risk mitigation policies.

To get started, agencies can frame the discussion around a few simple but strategic questions: What are the supply chain security risks in what you’re offering me? In what ways could your product be compromised? How could the product be installed or integrated incorrectly in ways that might cause or increase cyber risk? Then drill down into specifics. Here are some ideas of what to probe:

For physical systems:

  • Production. Ask about steps taken to ensure that the vendor’s bill of materials for the supply chain includes known, trusted entities that can prove they adhere to stated security procedures. That can include physically checking products through high-visibility scanning to make sure there was no unauthorized substitution of components, and comparing the exact build kit against scrap analysis. Ongoing global supply chain issues have exacerbated production problems that can lead to the acquisition of gray-market components, which can make it much easier for an adversary to introduce counterfeit or maliciously modified parts.
  • Critical software. While downloading has become the predominant software delivery method, the most secure way to deliver software is still on physical media. For systems that will be deployed in highly sensitive missions, vendors should provide both disks and independently supplied validation codes to verify that the software meets a specific signature and profile. If those don’t match, it’s an indication to not move forward.
  • Delivery. Packaging should be verifiably tamper-resistant, such as with tamper-evident tape on all seams. Vendors should minimize the length of time a package is in transport – that means it has less time to be compromised. If deliveries are delayed, ask why to see if there is an associated risk. Once shipments are received, verify that any devices contained within also have tamper defenses, and that those are not compromised.

For software applications:

  • Vendor verification. Vendors should be able to verify how they secure their software so that it is not altered along the chain of custody. Find out whether any entity or manufacturer has access to the entire build. If so, what steps are taken to validate the full build’s integrity? This includes using an isolated build and test environment that requires multi-factor authentication for access and multi-tunnel VPNs for remote access. Source code should never be released outside of the isolated environment, and all packages (open-source or otherwise) imported into the isolated environment must be verified.
  • Assess the source of the download. Given that most software is downloaded, risk is created if the source code is compromised and then proliferated through an organization – such as what happened to SolarWinds in 2020. Where is the download platform housed? Who has access to it? Be able to independently check where any validation codes come from, to verify that they are authentic and as expected – as illustrated in the sketch below.
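
The validation-code checks described above boil down to comparing a cryptographic digest of what was delivered against a value obtained through a separate, trusted channel. A minimal Python sketch of that comparison follows; the file name and payload are placeholders, and in practice the expected digest would arrive from the vendor out of band:

```python
import hashlib
import sys

def verify_download(path: str, expected_sha256: str) -> bool:
    """Hash the delivered file and compare it to the validation code
    obtained through a separate, trusted channel."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()

# Demo setup: stand-in for a downloaded installer.
with open("update.bin", "wb") as f:
    f.write(b"vendor payload")

# In practice this value arrives out of band (e.g., in a vendor's signed
# advisory), never alongside the download itself.
EXPECTED = hashlib.sha256(b"vendor payload").hexdigest()

if not verify_download("update.bin", EXPECTED):
    sys.exit("Validation code mismatch: do not move forward")
print("Validation code matches: safe to proceed")
```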

Even with elevated training, it can be complicated for program managers and procurement professionals to interpret vendor input for particularly technical situations. Consulting with internal cyber experts can help when vetting a vendor’s response. Still, in the face of supply chain shortages that are already straining the acquisition process, and the growing number of cyber threats, these roles will certainly get harder.

SCSTA represents a big task. Yet taking such steps is vital in the modern threat environment. Cybersecurity is not an endpoint; it is a journey. Legislation like the SCSTA is a call to action for strengthening the layers of national cyber defense with the resources we already have. More people understanding the risks, asking the right questions, and knowing what to look for will go a long way toward making agency systems – and the country – more secure.

Resilient Critical Infrastructure Starts with Zero Trust

By: Raghu Nandakumara, Senior Director, Head of Industry Solutions, Illumio

From the Colonial Pipeline breach to the JBS ransomware attack, the past year has shown us that cyberattacks on U.S. critical infrastructure are more relentless, sophisticated, and impactful than ever before – and all too often threaten the economic stability and wellbeing of U.S. citizens.

Because of this, critical infrastructure protection remains a top focus for the Federal government. The Biden Administration’s 2021 Executive Order on Improving the Nation’s Cybersecurity (EO) laid out specific security mandates and requirements that agencies must meet before Fiscal Year 2024 in order to bolster organizational and supply chain resilience. One critical component the EO specifically articulated is the advancement toward a Zero Trust architecture – a cybersecurity methodology first introduced more than a decade ago, predicated on the principles of “least privilege” and “assume breach.”

In March 2022, President Biden reaffirmed the 2021 EO with his “Statement… on our Nation’s Cybersecurity”, again, pointing to Zero Trust as a cybersecurity best practice as the U.S. looks to improve domestic cybersecurity and bolster national resilience in the wake of an emerging global conflict. Further, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 signed into law in March 2022 will require private sector infrastructure operators to report cyber incidents and ransomware payments to the government – boosting the U.S. focus on protecting critical infrastructure.

Embracing ‘Assume Breach’

In order to bolster ongoing resilience efforts, organizations across the Federal government and private industry alike must start taking a proactive approach to cybersecurity. This starts with rethinking the way we fundamentally approach security.

Digital transformation has dramatically expanded the attack surface. Today, modern IT architecture is increasingly a hybrid mix of on-prem, public clouds and multi-clouds – opening up new doors for attackers to not just gain access, but also move across environments with ease. As the frequency and severity of breaches continue to increase, our industry is rapidly adopting an “assume breach” mindset – an understanding that even with the best preventative and rapid detection technologies, breaches are going to happen.

Think of the recent cybersecurity industry shifts this way: The first security era was solely focused on protection. In a walled-in, on-prem data center, the focus was on perimeter security – build a digital wall and keep the bad guys out. About a decade ago, a wave of high-profile breaches woke us up to the fact that a wall can’t keep the bad guys out entirely. From there, the focus shifted from perimeter-only security to the second security era of rapid detection and response – find the bad guy quickly after they scale the wall.

Now we are in the third wave of security: focus on containment and mitigation. This is where Zero Trust capabilities like Zero Trust Segmentation (i.e., microsegmentation) can help. For example, in the event that bad actors gain access to a Federal agency, Zero Trust Segmentation can help limit their impact by containing the intrusion to a single compromised system – vastly limiting access to sensitive data.
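
In policy terms, Zero Trust Segmentation reduces to a default-deny allowlist between workloads. A simplified Python sketch of that idea appears below; the workload names and flows are invented for illustration:

```python
# Explicitly allowed flows between workloads; everything else is denied.
ALLOWED_FLOWS = {
    ("web-frontend", "app-server", 8443),
    ("app-server", "records-db", 5432),
}

def permit(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow passes only if it is explicitly allowed."""
    return (src, dst, port) in ALLOWED_FLOWS

# A compromised web server cannot reach the database directly...
assert not permit("web-frontend", "records-db", 5432)
# ...while declared, legitimate traffic still flows.
assert permit("app-server", "records-db", 5432)
```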

In fact, according to a recent study from ESG, organizations leveraging Zero Trust Segmentation are 2.1X more likely to have avoided a critical outage during an attack over the last 24 months, have saved $20.1M in the annual cost of downtime, and have averted five cyber disasters annually.

Going Back to Basics

As harrowing cyberattacks remain the norm, it’s never been more essential for critical infrastructure organizations to prioritize practicing and maintaining proper cybersecurity hygiene. Cyber hygiene is nothing revolutionary – it’s about adopting and putting the basics into practice, day in and day out.

In 2021, the White House issued a memo outlining key best practices for organizations looking to safeguard against ongoing ransomware attacks: make sure you’re backing up your data, patch when you’re told to patch, test your incident response plans, double check your team’s work (i.e., account for human error), and segment your networks, workloads and applications accordingly.

With proper cybersecurity basics in place, Federal agencies are better positioned to expand upon ongoing resilience efforts – like accelerating their Zero Trust journeys.

Building Resilience Starts Now

In the end, prioritizing proactive, preventative cybersecurity approaches like Zero Trust, and mandating them at a national level, will have positive long-term benefits for the nation’s security posture and overall resilience. But good cybersecurity hygiene and building real resilience are ongoing efforts. It’s important to start small. For example, start by segmenting your most critical assets away from legacy systems. That way, if a breach occurs, it can’t spread across your hybrid architecture to reach mission-critical information. From there, you can move to larger, wider resilience undertakings.

But as with any goal, it’s important to not make “perfect” the enemy of good. In other words, not having a perfect plan shouldn’t be a barrier to starting somewhere. What is important is getting started today. Bad actors are evolving, emerging and now rebranding – and any cybersecurity hygiene practice (big or small) helps uplift organizational resilience. In the end, especially when it comes to public sector operations, we’re all only as strong as the weakest link in our supply chain.

Remember, “assume breach,” put the basics into practice, and prioritize securing your most critical infrastructure with Zero Trust security controls first.

The Evolution of Government Tech Procurement Under CMMC 2.0

By: Kyle Dimitt, Principal Engineer, Compliance Research at LogRhythm

Supply chain attacks have been on the rise across the globe, as we saw with targeted attacks against SolarWinds and Kaseya. The spike has created significant risk for the Federal government, since industry supply chains – including those serving agencies like the Department of Defense (DoD) – have not necessarily had to adhere to a set level of cybersecurity standards. To combat this, the DoD has attempted to minimize the risk by increasing the security of the Defense Industrial Base (DIB) through the introduction of the Cybersecurity Maturity Model Certification (CMMC) in 2019.

CMMC requires contractors to obtain third-party certification to ensure appropriate levels of cybersecurity practices are in place to meet basic cyber hygiene standards, as well as to protect controlled unclassified information (CUI) that resides on partner systems. While this won’t answer all the government’s cybersecurity woes, it addresses an increasingly common attack vector.

Challenges with CMMC 1.0

CMMC 1.0 was a significant change for the DIB. While contractors may have been required to adhere to NIST standards prior to its introduction, there was no requirement to prove adherence to those standards for systems handling CUI. CMMC’s requirement to be audited and demonstrate compliance could become very costly, depending on where CUI resided in a contractor’s environment.

Many DIB contractors were also not confident in what CUI they had in their environments, further adding to the complexity of CMMC requirements. Because they couldn’t fully identify the CUI, they couldn’t fully scope what needed to be protected by the controls and what they would be audited against.

Additionally, there was no allowance for plans of action and milestones (PoAMs). Certifications were a firm pass/fail, which meant organizations could lose out on an opportunity for a contract if they weren’t certified and would have to be audited again once they remediated any deficiencies noted in the audit.

Introducing CMMC 2.0

In November 2021, the DoD announced CMMC 2.0, which came with an updated program structure and requirements. The key changes in CMMC 2.0 address some of the grievances shared above regarding CMMC 1.0, but other challenges remain.

CMMC 2.0 is more flexible, allowing for PoAMs and waivers to CMMC requirements under certain circumstances. This enables contractors who do not yet meet the security requirements to continue to bid on DoD contracts. The five-tier system of security levels has also been consolidated into three, simplifying and streamlining requirements to focus on the most critical.

Third-party assessment requirements have also changed for version 2.0, reducing the number of government contractors that require a third-party assessment. Level 1 no longer requires third-party assessments but requires organizations to perform annual self-assessments. Level 2 requires triennial third-party assessments and annual self-assessments for select programs. Level 3 requires triennial government-led assessments by the Defense Industrial Base Cybersecurity Assessment Center.
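
For contractors tracking these obligations, the assessment cadence can be captured as simple data – for instance, in an internal compliance tracker. The Python sketch below is purely illustrative; the structure and names are hypothetical and not part of any official CMMC tooling.

    # Illustrative summary of CMMC 2.0 assessment cadence as a data structure.
    CMMC_2_0_ASSESSMENTS = {
        1: "annual self-assessment",
        2: "triennial third-party assessment; annual self-assessment for select programs",
        3: "triennial government-led assessment (DIB Cybersecurity Assessment Center)",
    }

    for level, cadence in CMMC_2_0_ASSESSMENTS.items():
        print(f"Level {level}: {cadence}")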

The biggest challenge is still the depth of understanding in CUI. Organizations must continue working with their DoD and Federal partners to understand what CUI needs to be protected and where it is in order to properly perform third-party and self-assessments.

Impact on State and Local Governments

CMMC 2.0 will aid Federal agencies’ ability to buy and implement new technologies by giving contractors more flexibility in gaining CMMC certification. By making the certification process simpler and more affordable, contractors can find a quicker path to certification and ultimately a smoother procurement process. While CMMC does not impact state and local government agencies for now, it’s reasonable to expect similar mandates to extend to the local level in the future. State and local governments will likely take their cue from Federal agencies and favor contractors who have earned certification and federal contracts, even though they are not currently required to work with CMMC-certified vendors.

State and local governments should keep a watchful eye on the continued rollout of CMMC as they look to incorporate similar requirements in their supply chain. More importantly, these governments should stay up to date on President Biden’s executive orders around cybersecurity as they could have cascading effects for those entities. There is nothing as broad and sweeping as CMMC at the state level but with the increased focus on cybersecurity, many states are introducing new cybersecurity legislation.

Looking Ahead

The latest changes to the Cybersecurity Maturity Model Certification are a great move by the Office of the Under Secretary of Defense for Acquisition and Sustainment and the DoD to continue to hold the DIB accountable while providing a more flexible standard. CMMC 2.0 will allow more contractors to obtain the certification and adhere to the standards but, most importantly, the second iteration is creating greater public buy-in.

The U.S. government has taken great first steps to modernize its own cybersecurity efforts before extending a set of audited requirements to the entire government supply chain. A government-wide implementation of CMMC or something very similar is not out of the question, but don’t expect to see it before CMMC demonstrates some success with its new model or before the DoD’s full phase-in by 2026.

Zero Trust Requires Continuous, Tested Security for Federal Agencies

By Scott Ormiston, Federal Solutions Architect, Synack

Within a single week in late March, the Biden administration both reissued the call for American companies to shore up their cybersecurity efforts in the wake of the Russia-Ukraine war, and requested nearly $11 billion in cybersecurity funding from Congress for the Federal government and its agencies for fiscal 2023 – a billion dollars more than the year prior.

Record numbers of Common Vulnerabilities and Exposures (CVEs) and zero-day exploits also contribute to the urgency felt across the cybersecurity industry, which is being squeezed by a lack of talent and a hot labor market. Meanwhile, the Federal government and its agencies are in the middle of an effort to modernize their technology – a herculean task that has the potential to widen attack surfaces and further burden cybersecurity professionals.

Adopting an adversarial, offensive cybersecurity strategy that aligns with the Federal government’s mandate to move to zero trust architecture can release some of that pressure by working proactively to harden your agency’s existing security program.

Zero trust architecture, as outlined in the Federal zero trust strategy memorandum M-22-09, aligns with the five pillars of the Cybersecurity and Infrastructure Security Agency’s (CISA) Zero Trust Maturity Model. Those five pillars are: Identity, Devices, Networks, Application Workload, and Data. Each pillar requires different kinds of tools and services to adhere to zero trust principles, which all coalesce around preventing unauthorized access by making access granular and as-needed.

Taking a closer look at the Application Workload pillar, optimal functionality means designing for continuous testing. When an application is in development, security testing for Federal agencies should happen routinely throughout; once applications are deployed, CISA recommends continuous, external monitoring.
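
A minimal sketch of what continuous, external monitoring can look like follows. The endpoint URL, polling interval, and header checks are illustrative assumptions, and a production monitor would cover far more than response headers.

    # Sketch of lightweight, continuous external monitoring of a deployed app:
    # poll the public endpoint and flag missing security headers.
    import time
    import requests  # third-party HTTP client

    EXPECTED_HEADERS = ["Strict-Transport-Security", "Content-Security-Policy"]

    def missing_security_headers(url: str) -> list:
        """Return the expected security headers absent from the response."""
        resp = requests.get(url, timeout=10)
        return [h for h in EXPECTED_HEADERS if h not in resp.headers]

    while True:
        missing = missing_security_headers("https://app.example.gov")  # hypothetical URL
        if missing:
            print(f"ALERT: missing headers: {missing}")  # in practice, route to the SOC
        time.sleep(3600)  # adversaries scan around the clock; re-check hourly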

The common themes for all five of the pillars include continuity and externality. Why? Because that is the manner in which adversaries are scanning attack surfaces for potential threat vectors; they are continually learning from organizations’ security measures and augmenting their own approaches. The adversary is on-the-clock 24/7, looking for a way in, so security teams must rebuild their efforts to match.

To make the move toward zero trust, security teams need to establish whether their existing security systems and processes are working as designed. Conducting outside-in testing and gaining an adversarial perspective on current security implementations will show where to prioritize remediation efforts.

Synack provides dedicated application security testing that enables federal agencies to adhere to mandates, advancing their move to zero trust principles. Agencies that select Synack will also benefit from its FedRAMP Moderate In Process designation, indicating that 325 security controls were met to enhance security for users working in Synack’s FedRAMP environment.

As former National Security Agency and Defense Department technical security experts, Synack’s founders know intimately the importance of securing federal operations and technologies in cyberspace.

CEO Jay Kaplan and CTO Dr. Mark Kuhr saw firsthand how difficult it was to unite thousands of government employees and acquire the necessary security expertise to proactively, and effectively, protect against today’s cyber attacks and threat actors. That view led them to create Synack, the premier on-demand security testing platform backed by a vetted community of ethical researchers for continuous penetration testing and vulnerability management.

“Helping defend the U.S. against cyberattacks is in our DNA. It’s why my co-founder Jay and I started Synack in the first place and it’s what our network of trusted ethical hackers do every day on the platform,” said Dr. Kuhr. “Synack’s FedRAMP designation is a powerful accelerant for even more Federal customers to benefit from continuous, crowdsourced security testing, which is an essential best practice especially in light of recent vulnerabilities like Log4j. The Synack offering can aid organizations by rapidly responding to the most urgent CVEs.”

Synack has worked with more than 30 government organizations on application security testing capabilities with capacity to deliver better results at scale than traditional methods, and is committed to helping agencies protect citizens and their data. Addressing the Biden administration’s call to make now the time to progress with security efforts, Synack can provide organizations with on-demand access to the most trusted worldwide network of security researchers.

How Multi-INT Fusion Accelerates Mission Intelligence for Real-Time Decision Advantage

Intelligence analysts are drowning in data amid ever-increasing numbers of communication channels and devices, and ever-growing volumes of open-source intelligence.

Intelligence professionals often apply an adaptation of the Pareto Principle (aka the 80/20 rule) to the challenges created by all this data. Analysts anecdotally spend 80 percent of their time looking for data and information, leaving only 20 percent to analyze, render judgments, and articulate their analyses in a way that’s useful for decision makers. In this age of fast-evolving threats, it’s time to flip that number.

For the intelligence community (IC), access to near real-time information and actionable analysis is mission critical. That’s where multi-INT fusion comes in. Multi-INT fusion is the fusing of multiple data types to convert data to information more easily, enabling analysts to understand context and render intelligence assessments faster. It also involves converting manual tradecraft to more automated processes, empowering analysts to accelerate and improve their analyses and gain insights more quickly.

While multi-INT fusion is already a long-established practice, intelligence agencies can now apply new and emerging technologies to achieve it in faster, more efficient ways, in turn freeing analysts’ time for the problems that require more critical thinking. Here are two things IC leaders should consider as we drive toward a large-scale revolution in multi-INT fusion.

Thinking Big, Starting Small

Advancing multi-INT fusion requires a multi-prong approach. Not everything can change overnight, but the IC can seek quick wins that work with legacy systems, while remaining on the path toward enterprise transformation. For legacy systems, automation tools can be as simple as recommenders or real-time bidding – the same idea as seeing suggestions in a Google search or on your Waze app. Based on the user’s search patterns or location, this type of automation can often narrow down (and accelerate) analysts’ research efforts.

Ultimately, algorithms like those used in recommenders refine results based on what an analyst is looking for or is interested in, thus creating significant time savings. Similarly, we recently worked with a partner in the IC to develop an automated source extractor capability, which reduced the time it takes an analyst to properly cite sources by 75 percent. Not only do micro applications like this save analysts time, but they can also be rapidly integrated into legacy systems because they are low-cost and don’t take up much bandwidth.
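
As a toy illustration of the recommender idea – not the source extractor itself, and not any specific IC system – the following sketch ranks a small, made-up corpus against an analyst’s query using TF-IDF similarity.

    # Toy recommender: rank documents against a query by TF-IDF cosine similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [  # illustrative snippets an analyst might search across
        "port activity report: vessel movements and AIS gaps",
        "open-source reporting on regional communications outages",
        "imagery notes: new construction observed at airfield",
    ]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(corpus)

    def recommend(query: str, top_k: int = 2) -> list:
        scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
        ranked = sorted(zip(scores, corpus), reverse=True)
        return [doc for _, doc in ranked[:top_k]]

    print(recommend("vessel activity near the port"))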

More sweeping transformation might look like integrating advanced capabilities such as artificial intelligence (AI) and machine learning (ML) into new systems from the get-go. AI/ML can draw connections much more quickly than a human analyst can – connections that even more advanced AI can then analyze to predict outcomes and recommend action. In turn, AI/ML can correlate target activities to enable faster decisions. This isn’t replacing the role of the human analyst, but rather increasing the pace of analytics to machine speed. This is also achieved by integrating AI/ML with advanced data science techniques, such as probabilistic and predictive modeling and automation of data pipelines.

Understanding Where Open Architecture Can Help

As intelligence organizations modernize, it’s important to create more sophisticated cross-domain pipelines. Fusing new, classified streams with commercially available and open-source data at scale requires layered levels of data engineering and analysis.

The data must be indexed, fused, and analyzed at appropriate security levels for multiple stakeholders, not just in the IC but also military, government, and international partners. Then, those recommendations need to be distilled to send to leaders at their level of access, whenever needed. Such large-scale, diverse fusion requires open architectures that move operations to the cloud for bandwidth and scale. To work within allotted budgets, many intelligence agencies should approach large-scale modernization incrementally, making small changes that build toward a larger vision.

Taking a modular open architecture approach simplifies transformation, while also allowing agencies to make decisions according to their funding resources. Often the answer lies in tools and frameworks created from commercial off-the-shelf or open-source software – ideally, tools that are already cleared through the accreditation process, have intelligence-level cybersecurity built in, and are proven in operation. The right technologies and solutions accelerate decision-making while providing capacity, all working together seamlessly, for faster, smarter analysis.

By Rob Goodson and Saurin Shah, Vice Presidents in Booz Allen Hamilton’s national security business

Three Things to Consider for Responsible AI in Government

The use of AI and analytics is crucial for government agencies, which are among the largest owners of data. The benefits of AI are often discussed – operational efficiencies, intelligent automation possibilities, and the ability to gain deeper insights from massive amounts of data.

Claire Walsh, VP, Engineering and Services, Excella

With the intense interest and proliferation of AI, governance of machine intelligence is getting more attention and appropriately so. Absent legislation, organizations must anticipate and adopt voluntary practices to minimize risk and avoid undesirable outcomes.

Here are three areas of focus recommended as part of a comprehensive responsible AI strategy.

Plan Ahead

Responsible AI solutions start with planning. Some key questions to ask (and then answer) during the initiation of an AI project are:

  • What is the intended use case and are there other unintended uses that may need to be mitigated against?
  • What are the expected outcomes of the AI solution and are there possible unintentional impacts on individuals or community welfare? Are these positive or negative impacts?
  • How is the data used in the AI solution monitored and managed? Are data governance policies defined and applied consistently? Is data quality consistent and at an appropriate level of completeness?
  • Where is there potential for bias with the AI solution, and how can this be monitored and managed? (A minimal monitoring sketch follows this list.)
  • What considerations are needed to create model transparency and explainability?
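
To make the bias question above concrete, here is one minimal, hypothetical monitoring sketch: compare a model’s positive-outcome rates across groups and flag large gaps using the common “80 percent rule” heuristic. A real fairness audit would go much further.

    # Hypothetical bias monitor: compare positive-outcome rates across groups.
    def selection_rates(outcomes, groups):
        """Map each group to its share of positive outcomes (1s)."""
        by_group = {}
        for y, g in zip(outcomes, groups):
            by_group.setdefault(g, []).append(y)
        return {g: sum(ys) / len(ys) for g, ys in by_group.items()}

    rates = selection_rates([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
    ratio = min(rates.values()) / max(rates.values())
    if ratio < 0.8:  # heuristic threshold; the right bar is a policy decision
        print(f"Potential disparate impact detected: {rates}")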

Similar to applying the DevSecOps mindset, where teams “shift left” on security planning and execution to include these activities from the start, the same is recommended for risk planning in AI projects. Identify potential challenges and risks early and commit to maintaining a plan to assess and address them.

Be Transparent

AI models are complex, and transparency into how machine intelligence is making decisions and taking action is becoming increasingly critical. AI models now help us drive more safely through real-time alerts, or in some cases drive for us. AI is being incorporated into medical research and treatment plans. This level of complexity can be difficult to decipher when systems don’t operate with expected outcomes. What went wrong? Why was a decision made or an action taken?

Explainable AI (or XAI) advocates for fully transparent AI solutions. This means that all code and workflows can be interpreted and understood without advanced technical knowledge. It often requires additional steps in the design and build of the solution to ensure explainability is achieved and maintained.

Think of explainability as a two-step process – first, interpretability, the ability to interpret an AI model; and second, explainability, the ability to explain it in a way humans can comprehend. Explainable models provide transparency – so organizations stay accountable to users or customers and build trust over time. A black box solution that cannot be interpreted when things go awry is a high-risk investment that is potentially damaging and unexpectedly expensive.
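
As a hedged sketch of the interpretability step, the snippet below uses permutation importance on a synthetic dataset to surface which inputs drive a trained model’s predictions – raw material a team would then explain in plain language.

    # Interpretability sketch: which features drive the model? (synthetic data)
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance {score:.3f}")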

Enable Humans in the Loop

The Toyota production line Andon Cord is famous for its ability to stop the production line in the pursuit of quality. A physical rope was used to halt all work when a defect was suspected, enabling assessment and resolution of the issue before it could proliferate further.

What is the equivalent in the build and use of possibly high-stakes automated AI solutions? A human in the loop – enabling the ability for a person to oversee and have the option to override the system outputs. This can include data labeling by humans to support the model training process, human involvement in validating model results to support model “learning,” and implementing monitoring and alerts that require human review when specific or unexpected conditions are detected.
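
A minimal, illustrative version of that digital Andon Cord might route low-confidence predictions to a human review queue instead of acting automatically. The threshold below is a policy choice, not a universal value.

    # Hypothetical human-in-the-loop gate for an automated AI pipeline.
    CONFIDENCE_FLOOR = 0.90  # policy choice; tune per mission and risk tolerance

    human_review_queue = []

    def dispatch(prediction: str, confidence: float) -> str:
        if confidence < CONFIDENCE_FLOOR:
            human_review_queue.append({"prediction": prediction, "confidence": confidence})
            return "queued for human review"  # the "cord" has been pulled
        return f"auto-approved: {prediction}"

    print(dispatch("approve claim", 0.97))  # automated path
    print(dispatch("deny claim", 0.62))     # stops the line for a person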

The combination of human and machine intelligence is a powerful one that expands possibilities while enacting safeguards.

By implementing governance guidelines and adopting approaches that specifically address the challenges and risks associated with AI solutions, federal organizations can proactively act to protect the interests of the public and Federal employees.

Legislation, White House Orders Show Agencies Opportunity for Hybrid Cloud

From pandemic relief bills to the cybersecurity executive order and the bipartisan infrastructure bill, Federal agencies have a wealth of mandates and opportunities to create new programs.

While each of these executive and legislative actions features varying priorities, funding methods, and delivery objectives, in a larger sense they are unified in requiring Federal agencies to leverage advanced technologies like big data analytics, artificial intelligence, edge computing, and enhanced cybersecurity frameworks.

To meet these goals and ongoing program needs, agencies should consider increased investments in hybrid cloud technologies. Hybrid cloud infrastructure can provide the computing power, scalability, and security that on-prem or single cloud environments traditionally lack.

Where Hybrid Cloud Can Provide Value

Let’s look more closely at each piece of legislation and each mandate to see how hybrid cloud improves on traditional systems.

  • The Infrastructure Bill: The $1.2 trillion infrastructure bill was signed in November of 2021 and features funding for public sector cybersecurity enhancements, broadband access, and electric vehicle expansion. Beyond the direct impact of improving technology infrastructure, agencies need to ensure they have the technological capability to properly administer the countless programs outlined in the new law.
  • The American Rescue Plan: Passed in March of 2021, the American Rescue Plan was best known for providing many Americans with stimulus payments. As with the Infrastructure Bill, this plan increased the need for Federal agencies to act quickly and provide a wealth of new and enhanced services to citizens in need, which put stress on outdated technology systems. The bill also created several short- and long-term programs aimed at helping citizens, businesses, and schools return to everyday life.
  • The Cybersecurity Executive Order: The order calls for removing barriers to threat information sharing between government and the private sector and modernizing and implementing more robust cybersecurity standards in the federal government. Leveraging a hybrid cloud will allow for solid cybersecurity analytics and create a more modular architecture that can be harder to attack.

Pushing Further Toward Hybrid

The COVID-19 pandemic began to push Federal agencies toward hybrid cloud and moved technology leaders closer to that idea. With more employees working remotely, agencies needed the power of hybrid cloud platforms to allow remote employees to access applications.

While traditional cloud architectures offer the capability to scale, many agencies have created siloed cloud environments. Instead of having an application on an individual on-prem rack, as before, they now follow the same practice with the cloud. This goes against the ingrained benefits of a cloud model, something technology leaders understand.

In speaking with Federal customers, many have voiced how the pandemic amplified the importance of migrating to hybrid cloud – some even made the move before they originally planned. However, sometimes they moved too fast without proper planning.

What Agencies Need to Know About Hybrid

Faced with this increase in Federal mandates and programs, agencies need to ensure the proper technology infrastructure is in place to operate them. This means a hybrid cloud infrastructure with a flexible architecture that allows for more modular use.

For technology leaders, a switch to hybrid and how to better leverage cloud requires a culture shift. Agencies must move beyond the idea of siloed applications – even in the cloud – and utilize a hybrid structure that allows for better scaling, quicker time to action, and more adaptability for new applications.

Too often, agencies attempt to fit a square peg into a round hole when it comes to applications. They need to look at hybrid methodologies – even if it means having a hybrid architecture for 20 percent, 30 percent, or 50 percent of the overall infrastructure. Simply taking steps in this direction can have a dramatic impact on the ability to take advantage of new and emerging Federal initiatives.

Creating an Effective Framework for DoD’s Software Factories

The Department of Defense (DoD) last month released its Software Modernization Strategy, an important step to unifying existing technology and directing a more joint approach to systems of the future. It notes that our competitive posture is “reliant on strategic insight, proactive innovation, and effective technology integration enabled through software capabilities.” Further, it asks DoD entities that are driving software development to address management and governance of the 29 software factories spread across the services.

The promise of software factories is significant. The construct has quickly demonstrated agile development of critical capabilities, while also delivering the speed and flexibility that DoD requires. This has been achieved through commitment to automation, modular and open architecture, and continuous authority to operate (cATO). Given how much progress has been achieved, software factories will undoubtedly play an important role in modernizing DoD technology – through a common framework that will drive efficiencies and provide oversight, best practices, and baseline software for reuse.

Industry’s Role  

Industry can offer lessons learned and best practices from supporting software factories over the past five years, and the Software Modernization Strategy makes several mentions of the role of industry within the acquisition process. Booz Allen has worked in partnership with the DoD to rapidly develop, integrate, and field cutting-edge mission capabilities. We know what works, and what it will take to mature the software factory ecosystem to bring more efficient, innovative software to the DoD.

The software factory ecosystem cannot be a one-size-fits-all approach. Given different mission requirements (ranging from IT to OT), acquisition strategies, and levels of program maturity (from inception to sustainment), consolidating all of these software factories into a single factory – or even a single entity – would likely be both inefficient and ineffective.

DoD leaders would benefit from establishing an ecosystem that streamlines across today’s software factories and those that will exist in the future. We would offer that the DoD Chief Information Officer should consider the following construct for aligning software factories within a governance framework:

Mission Purpose. Each software factory was purposefully designed for a particular slice of a broad spectrum of mission requirements. Some entities support complex combat requirements and require deep hardware integration for use on the battlefield, driving additional layers of policy, security, and technical specifications. Others focus on IT system development that requires less hardware integration but instead demands standards for open architecture and system interoperability. Aligning against a core set of mission categories will help streamline baseline software and regulations.

Developmental Maturity of System. DoD must also consider the maturity of the software that’s being developed in applying governance to the software factories. For instance, a software factory for a system as mature as the F-35 program has very different needs (e.g., cyber regulations) than an entity that is incubating new software for prototyping and testing capabilities with more pipeline requirements (e.g., Navy’s Rapid Autonomy Integration Lab (RAIL) software factory).

Operations Model. Finally, the framework should address the variety of different models of ownership and accountability. Depending on how the DoD and industry partner together, the risks and benefits can vary significantly. In our experience, there are three broad models for operations that work effectively for managing cost, risk, and innovation: Government-Owned, Government-Operated (GoGo); Government-Owned, Contractor-Operated (GoCo); and Contractor-Owned, Contractor-Operated (CoCo). We have supported all of these models across the federal government and have lessons learned from each, including retaining government data rights, ensuring accountability and driving a culture of collaboration.

As the DoD works on the implementation plan for the Software Modernization Strategy and creates a framework for how to manage and govern its software factory ecosystem, aligning and categorizing its existing resources will be critical for delivering agile, mission critical software to current and future weapons systems.

Further, it is incumbent on industry to continue to streamline and drive efficiencies in its software development. Across the Defense Industrial Base, we must commit to building open, reusable baseline software that can be extended and augmented across multiple use cases. Whether we are developing software on corporate research and development budgets or for government use cases, creating software that can be refined to meet mission needs is absolutely critical. From our own experience, we are committed to taking our investments and integrating them to build custom mission solutions. Our AI/ML software factories and cyber software factories are built consistently from a horizontal perspective so that they can be leveraged and reapplied for more vertical mission sets.

Executing this new strategy – through both a more consistent government framework for software factories built on real use cases and an industry commitment to re-baselining – will be critical for continuing to modernize technology to support today’s and tomorrow’s warfighters.

Realizing Upsides for Digital Security in the Hybrid Workplace

If and when the COVID pandemic fades into history, the shift toward remote and hybrid work is poised to persist. In an April 2021 Forrester Consulting survey of more than 1,300 security leaders, business executives, and remote workers, 70 percent said their organizations will have employees working from home one or more days a week during the next 12 to 24 months.

Amid its challenges, hybrid and remote work represents a significant opportunity in terms of human capital development. Many employees welcome the flexibility associated with hybrid work, and firms that allow remote roles can recruit without regard to location, increasing their potential applicant pools.

The problem is, we’ve only just begun to grapple with the digital security challenges ushered in by remote and hybrid work. Sixty-seven percent of respondents to the Forrester survey reported they had experienced “business-impacting” cyberattacks that specifically targeted remote workers.

Privacy and security challenges lie at the intersection of technology, human behavior, and policy. For example, as more and more workers are logged in to corporate networks from their homes, workers’ smart speakers, thermostats, or other “smart” devices — and their vulnerabilities — are now part of the virtual work environment.

There are also more opportunities for workers to inadvertently reveal proprietary or sensitive corporate data to others in their household, whether family members or roommates. And in an era of video conferencing, they may also risk revealing protected characteristics of themselves and their household. As a result, hybrid work could lead to a range of novel equity and liability concerns. For businesses operating across jurisdictions, the multitude of policy regimes that govern data make these privacy considerations even more complex.

For all these challenges, though, the shift to hybrid has plenty of potential upsides. The University of California, Berkeley’s Center for Long-Term Cybersecurity recently published a paper, Security and Privacy Risks in an Era of Hybrid Work, that spells out recommendations for managing many of the emerging privacy and security issues attached to hybrid work environments, based on interviews with security, policy, human resources, and other leaders from private firms and government agencies.

The good news is that the shift to hybrid offers a rare opportunity to break through many of the long-standing habits and assumptions that have negatively impacted privacy and security.

First, firms now have more incentive than ever to move toward so-called “zero trust” architectures, which promise a seamless experience for employees and state-of-the-art digital security for employers. The zero trust model uses both multi-factor authentication and continuous authentication of the users and devices on a network, regardless of where they are located. Until now, many firms have been slow to adopt zero trust given its complexity and the investment required.

But we must do better to bring down the cost and simplify implementation for businesses of all sizes – hybrid work makes even more clear that the old password-based model is no longer a sustainable solution. Industry and government must work together to invest in zero trust and build awareness of its benefits.

Another habit that needs to be broken: conversations about security and privacy between firms and employees need to occur at a deeper level than boilerplate consent agreements or an annual compliance-based cybersecurity training. Employees are uncertain about expectations concerning their own privacy in the hybrid workplace, as well as how they might protect firm data.

Solutions include investing in fresh approaches to employee training, creating mechanisms to make a firm’s security and privacy commitments visible in the context of an employee’s hybrid workday, and building coalitions of firms to establish a consensus on expectations for security and privacy. Firms that do engage in a robust discussion around privacy and data protection expectations with employees will reshape norms and improve security while strengthening their relationships with workers.

At a higher level, government investment should be allocated to improve home network security. Through the recently passed bipartisan infrastructure deal (Infrastructure Investment and Jobs Act), the U.S. Federal Government is set to invest approximately $65 billion in broadband to improve internet access, speeds, and pricing, with two-thirds of this funding to be allocated to the Department of Commerce Broadband Equity, Access, and Deployment Program. Broadband is a necessary ingredient for workers in less privileged circumstances to participate effectively in the hybrid labor market, but connectivity alone is not sufficient.

A more refined policy should repurpose some of these funds (or expand the overall pool of investment) to subsidize other parts of the hybrid work environment, including, for example, secure routers and other home network equipment. “The last mile” for internet connection (such as the coaxial cable from the street to the home router) should now extend fully into the home network and reflect the security and privacy requirements associated with hybrid work, regardless of whether the home is rented or owned.

Realizing all these potential upsides — and breaking through past habits and assumptions — will require a combination of legislative and regulatory action, roles for industry associations, and new tools and technologies. Security and privacy in the hybrid work environment will be tied tightly to productivity, equity, and innovation in the next decade. How firms and policymakers converge around new privacy and security considerations will determine whether hybrid work lives up to its promise.

A Future With AI and ML: The Power of Workforce Education

In today’s digital age, it’s clear that artificial intelligence (AI) and machine learning (ML) will be a part of all digital transformation journeys – including those of government.

In fact, the Federal government was projected to spend $3 billion on AI and ML technologies in 2021 and planned to invest more than $6 billion in AI-related research and development projects, according to a report by Bloomberg Government. We are even seeing the General Services Administration’s AI Center of Excellence and the Pentagon’s Joint AI Center (JAIC) speed the adoption of AI technologies by civilian and defense agencies.

However, a critical part of the implementation strategy for these technologies is frequently overlooked: workforce training and education. With new technology comes the transformation of employee roles, which requires new skills and training.

AI/ML in Government 

AI and ML have the power to help government agencies become more effective and efficient.

The technologies can be used to automate processes, such as the management and optimization of cloud usage. The ability to scale capacity and workloads in response to changes in demand is a significant advantage of cloud computing. AI/ML also simplifies the adoption of multi-cloud strategies that span more than one public or private cloud. A multi-cloud approach gives agencies more flexibility in which cloud services to use and the ability to innovate across clouds.
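
As a hedged sketch of that automation idea, the snippet below forecasts demand from recent load with a simple regression and sizes capacity ahead of it. A real system would use richer models and the cloud provider’s scaling APIs; every name and number here is illustrative.

    # Illustrative predictive-scaling sketch: forecast load, then size capacity.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    recent_load = np.array([40, 45, 52, 60, 68, 77])  # requests/sec, last 6 intervals
    t = np.arange(len(recent_load)).reshape(-1, 1)

    model = LinearRegression().fit(t, recent_load)
    forecast = model.predict([[len(recent_load)]])[0]  # next interval's expected load

    CAPACITY_PER_INSTANCE = 25  # requests/sec one instance can serve (assumed)
    instances = int(np.ceil(forecast / CAPACITY_PER_INSTANCE))
    print(f"forecast {forecast:.0f} req/s -> scale to {instances} instances")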

AI and ML can also be used to enhance the role of workers. IDC researchers predict that 85 percent of enterprises will combine human expertise with AI, ML, natural language processing and pattern recognition to augment foresight, making workers 25 percent more productive and effective by 2026.

In addition, automation technologies and predictive analytics are increasingly viewed as a matter of national security. The Senate’s FY2022 appropriations framework, released in October 2021, includes $500 million for AI programs across all military branches, plus $100 million for the Department of Defense to help recruit, retain and develop talent to advance use of AI.

Communication is Key

When it comes to implementing technologies like AI and ML, workforce training and education are critical. As with any new technology, people need training to understand the power of AI and ML and to rapidly adopt the technology for low-level, repeatable tasks.

To assuage worker fears, government IT leaders must include people, processes, and technology when considering innovation across the agency. By communicating across all levels, leaders can more effectively help employees understand how their day-to-day tasks will be impacted by new technologies, while freeing the workforce to focus on more complex tasks.

Before and throughout the rollout of new technologies, IT leaders must ask themselves questions like: “how can we communicate the benefit of this technology” and “how does it improve the employee experience?” This will not only help people understand what these new technologies mean for them and their jobs – it will speed up the adoption of AI and ML in government so that agencies can more quickly deliver their benefits to taxpayers.

It’s Time Federal IT Leaders Pave the Way

Artificial intelligence and machine learning are poised to transform how the government operates and delivers services to citizens. But it won’t happen if IT leaders don’t take steps now to pave the way. The Senate’s recent passage of the AI Training Act, a bipartisan bill aimed at strengthening Federal employees’ AI knowledge and skills, is a significant step toward helping the public sector discern which systems are helpful and understand basic AI/ML functionality.

Now is the time to make sustained investments to create public sector models that promote innovation and support the upskilling and reskilling of workers across levels to understand the technology and use it efficiently. Improving outcomes for future acquisition solutions requires collaboration and communication between the private and public sectors. With the right communication in place, government employees can more quickly adopt AI and ML to enhance their roles and deliver better services to citizens, faster.

Five Tips to Begin MFA Integration and Embrace Zero Trust

The Federal government has recently taken new steps towards creating a zero trust security environment, building on last May’s Executive Order on Improving the Nation’s Cybersecurity (EO) aimed at advancing the standards by which we protect our federal information system.

On January 19, the President issued a National Security Memorandum extending the EO to National Security Systems (NSS), stating that NSS has 180 days to adopt Multi-Factor Authentication (MFA). On January 26, the Office of Management and Budget released a memorandum creating a Federal zero trust architecture, requiring that all agencies achieve zero trust security goals by FY2024 and referencing MFA as a critical part of the government’s security baseline.

The key foundation of all of this work is the integration of MFA agency by agency. As new measures are undertaken to protect our government’s cybersecurity systems, the government must ensure that MFA solutions are widely adopted across agencies and that lessons learned are shared. While we currently don’t have specific data (for understandable security reasons) describing where agencies are in the adoption of MFA, at Akamai, we know from our own MFA journey how much time, effort, and resources it takes for organizations to implement MFA solutions, and the struggles faced when doing so.

With this experience in mind, here are five tips for federal agencies, and others, looking to adopt MFA technology and begin a zero trust security journey:

Start With a Quick Win

It is daunting and, frankly, impractical, to migrate all systems and applications to MFA immediately.  So begin your journey with a quick and impactful win – for example, implement MFA for your Single Sign-On (SSO). Likely, you already have many applications behind the SSO, so this point of integration gets you MFA for all of those applications in one step. In addition, this step will get your teams familiar with implementing your chosen MFA solution and start getting your end-users into the habit of using MFA.
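
For illustration only, here is a minimal second-factor check using time-based one-time passwords (TOTP) via the third-party pyotp package – simpler to sketch than the FIDO2 flow recommended below. In practice the secret comes from a secure enrollment process, and an SSO platform would typically provide this integration.

    # Hypothetical TOTP check; a production SSO handles enrollment and storage.
    import pyotp  # third-party library

    secret = pyotp.random_base32()  # generated once, at user enrollment
    totp = pyotp.TOTP(secret)

    # The user scans this URI into an authenticator app at enrollment time:
    print(totp.provisioning_uri(name="user@agency.gov", issuer_name="Agency SSO"))

    code = input("Enter the 6-digit code: ")
    print("MFA passed" if totp.verify(code) else "MFA failed")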

Prioritize MFA Integrations by Impact

Once you have a quick win under your belt, evaluate your environment to prioritize the remaining necessary MFA integrations. At the top of your list should be integrations that will have the greatest impact – either by volume of applications and systems protected or by criticality to your agency. After SSO, implement MFA for your virtual private network (VPN) (and better yet, replace your VPN with a Zero Trust Access solution), since numerous attacks have started by exploiting weak authentication on the VPN. This prioritization exercise will help you break your migration into manageable increments and ensure your most valuable assets are protected first. 

Leverage FIDO2 With Mobile Devices Versus Physical Tokens 

If you can use mobile devices for MFA instead of physical tokens, the MFA implementation and enrollment is greatly simplified both for your end users and your helpdesk. Everybody already has a mobile device, so by using these devices, you avoid the headache of rolling out and maintaining physical tokens. Moreover, push-based MFA for mobile devices is incredibly easy to use – your users will be delighted – and modern solutions make it very easy for users to enroll their devices, so almost no effort is needed from your helpdesk.

As long as your MFA solutions leverage the newer FIDO2 MFA security technology, you will both improve your security defenses and provide greater convenience to users with frictionless mobile push notifications. Of course, in some cases, physical tokens may be a necessity. In those cases, it’s important to have an MFA solution that is flexible enough to adapt to your agency’s requirements.

Piggyback on Other Cyber Initiatives

As with any IT or security initiative, there is no success without end-user awareness and adoption. To help speed up adoption, we recommend combining an MFA rollout with other cybersecurity training or awareness campaigns whenever possible. By introducing (and then reminding) your employees about how to use MFA and explaining its role in a broader zero trust architecture as part of your regular cadence of training, you help prevent training fatigue and integrate MFA into the day-to-day technology landscape.

Invest in a Strong Identity Solution

While it’s a separate initiative from the MFA implementation, I’d be remiss if I didn’t mention the importance of your other Identity and Access Management (IAM) systems, like Identity Management (IdM). These systems provide the framework to link authenticated users with the policies that control what they are able to access. Consider when and how you can focus on your IdM solution, either in parallel with your MFA implementation or shortly thereafter. Strong identity and access management with FIDO2-based MFA is the foundational technology upon which additional security technologies can be most effective.

With these key steps in mind, Federal agencies can ease the transition to an MFA solution and work to improve their cybersecurity defenses. These steps also satisfy the requirements of the Administration’s Executive Order for improving the government’s cybersecurity posture, and help agencies move towards a true zero trust approach to security.