
The Vital Intersection Between Equity and Digital Transformation

Congressional and White House mandates have put digital transformation at the top of to-do lists across the Federal government for years now – from the Cloud Smart strategy to the recent Executive Order on Improving the Nation’s Cybersecurity. These mandates, as well as agency-driven modernization, articulate critical goals such as improving cybersecurity and increasing efficiency.

But as we reflect on the momentum of the Biden administration’s Executive Order on Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government (EO), we have the opportunity to flip the script on digital transformation:

If a modern and secure system is successfully deployed but doesn’t reach critical stakeholders and historically underserved communities – it’s like having no system at all.

By elevating equity as a reason for modernization and a key measurement of success, we can drive change at a higher level across government. Let’s explore how the intersection of equity and digital transformation can lead to a more fair, just, and impactful future for the American public.

Equity as a Mission Imperative

Our nation is navigating complex and dynamic challenges, within our borders and across the globe. And there’s one clear thing we’ve learned from the pandemic: We can’t do things the way they have been done in the past. The nature of what we face as a country requires a new paradigm for modernization that is focused on bringing people along – and not leaving some behind.

That, of course, requires understanding the needs and barriers of end users, which our colleagues have discussed in a previous piece. But in large part, an inclusive future depends on asking the right questions from the start.

Here’s what we mean: In digital transformation efforts, Federal agencies need to acknowledge the limits of a technology-first approach. That sounds counterintuitive in the context of IT, so let’s discuss why it matters. Consider an organization developing a new, modern platform for grants distribution. From the beginning, it’s essential to examine issues that include:

  • How to access communities that lack broadband connectivity
  • What infrastructure can be built to support target populations that need the platform
  • What technical assistance should be deployed for sustained and long-term impact

These types of questions can interrupt the self-perpetuating breakdowns in our systems. Across the Federal government, organizations are already unpacking these issues and innovating on solutions that challenge historical inequities.

One of those organizations is the U.S. Forest Service, along with an interagency coalition that built Recreation.gov – an e-commerce platform that modernizes how the public explores the nation’s lands, waters, and cultural destinations. Using data from facilities and feedback collected from visitors, they’re at work identifying accessibility gaps and investing in human-centered improvements. Those include ensuring that people of all physical abilities can safely and comfortably access a campground – and continuing to optimize the Recreation.gov platform so it serves people and communities who may not have access to high-speed internet.

Building the Future Workforce

More than ever, and as reflected in the recent executive order, our nation needs to prioritize trust in government. A big part of that is demonstrating to the public that Federal leadership and its workforce represent them – speaking their language, sharing their backgrounds, and offering channels that allow every person to participate.

In the Federal workforce of the future, sameness is not an asset. While the public eye is often on civilian agencies, it’s clear that the issue of inclusivity and diversity is experiencing momentum in areas that impact defense and national security.

“Ensuring that we have an IC [Intelligence community] workforce made up of people who think differently, see problems differently, and overcome challenges differently is a prerequisite to our success,” said Avril Haines, director of national intelligence, in congressional testimony late last year. “Their creativity makes us smarter, more innovative, and more successful. And that makes our nation safer and more secure against the array of adversaries and the foreign threats we face.”

Her office has established an intelligence community accessibility executive to oversee initiatives that ensure people with different abilities have fair and equal access to opportunities within the organization. It has also created a diversity, equity, and inclusion (DEI) office dedicated to driving representation across the organization and empowering all people to have a seat at the table.

Ultimately, this future workforce is fundamental to inclusive modernization. And as we move forward, there are legacy barriers to break between technology and mission offices. High-impact decisions – whether that’s upgrading a public-facing system or deploying new mission capabilities – should be evaluated by multidisciplinary leaders who represent the layered and complex world we live in.

Perpetuating a Cycle of Equitable Practice

We know it’s critical to diversify the organizations responsible for improving American health, safety, and welfare – bringing all thoughts and minds to solve some of the toughest challenges we face. And now, bolstered by the Biden administration’s actions, we have a prime opportunity to deeply embed equity into the lifecycle of transformation.

Often, that starts with the supply chain – making sure agency procurements are bringing in strong and diverse industry partners to solve mission challenges. From small and minority-owned businesses to academia, there is momentum across government to level the playing field, transform legacy procurement strategies, and increase transparency around future contracting opportunities.

With more voices at the table – from suppliers to the federal workforce – we can better envision solutions that prioritize fair and just outcomes. However, those solutions need to be sustained and improved over time, so the cycle of equity continues. To that end, we need to embed new questions into the modernization process, such as: What personnel should be on the ground to provide technical assistance to communities? What do certain populations need in terms of capacity building to fully access new programs and services? What upskilling and retooling do staff need to keep up with technology updates and continue to be impactful leaders?

As we look beyond the mandates and beyond what tactically needs to be done in response to the executive orders, equity is ultimately a whole-of-government challenge. Agencies are actively working to bring more voices to the table and to change the paradigm around digital transformation. They’re also coming together in new ways so they can accelerate change as a collective. For instance, several agencies, including the Defense Intelligence Agency and the National Geospatial-Intelligence Agency, have been actively engaging with non-partisan, non-profit organizations such as the Intelligence and National Security Alliance (INSA) to exchange ideas around topics like equity and digital transformation – bridging the gap between private and public organizations.

Whether for matters of national security or for serving individuals and families in moments of need – advancing equity is imperative to the mission. Experimentation and impact will most easily begin within agencies and their programs, but we can envision a future where data, collaboration, and best practices are shared across offices and organizations. With that, we’ll have the ability to identify gaps between organizations and extend our reach into historically underserved communities.

With the continued funding of technology modernization across government, including $1 billion as part of the American Rescue Plan, organizations are already strengthening IT systems and adapting to emerging digital infrastructures. Embedding issues of inclusive and fair access from the start will ensure platforms and systems are holistically designed to achieve mission outcomes.

Continue the equity conversation by learning more about applying a new mindset to scale innovation using equity as a platform, another topic in the Booz Allen series on advancing equity across Federal government programs.

In this series, “Equity as a National Priority: An Interagency Perspective,” Booz Allen discusses the topic of advancing equity across Federal government programs – offering perspectives for a framework that prioritizes fair and inclusive service delivery to the public.

Equity as a Platform: Applying a New Mindset to Scale Innovation

Equitable service delivery is at the center of today’s government mission, with executive orders spurring a holistic evaluation of agency practices, programs, and policies. Leaders across Federal agencies are committed to providing value to all people when they need it most, but they also recognize that the journey requires new approaches and mechanisms to make a sustained difference.

As we look ahead, progress in equity cannot be measured by traditional indicators of program success, such as whether an earmarked budget was spent or whether a set of tasks was completed on time. Traditional programmatic wins tend to create value for segments of the population but may inadvertently exacerbate challenges for other, more underserved communities. For example, website and mobile upgrades can improve certain customer experience (CX) metrics while simultaneously leaving behind people with limited broadband or access to a device.

The hard question agencies are then asking is: How can we build and enhance solutions to provide the most value to the most people, including those who have been historically underserved?

Achieving the broad equity aims of the Biden administration requires an empowered Federal workforce that can disrupt patterns of behavior and improve service delivery on a continuous loop. And it requires goalposts that are inherently ambitious and a culture that harnesses both the “hits” and “misses.” In many ways, that starts with embracing a product mindset.

Achieving Outcomes through Product Centricity

Private sector companies often focus their services on a particular customer segment, but agencies can’t take a narrow view – and it’s highly complex to build solutions that are tailored for all people. However, changes in service development are necessary; the global pandemic has highlighted that government cannot conduct business as usual.

Encouragingly, we’re starting to see Federal organizations embrace what’s called a product mindset. Let’s explain this concept.

Within both the public and private sectors it’s common to manage progress around projects – initiatives that have start and end dates, center around producing something (such as a tool or service), and measure success by whether milestones and budget were hit. Often, this approach is satisfactory and aligns to the way contracts or programs are funded.

A product mindset, however, flips the script. Instead of development teams tasked to build solutions and then move on to the next project, multi-disciplinary and longstanding product teams are established to:

  • Build a deep understanding of their customers’ needs and emotions, and gain an on-the-ground view of the real barriers to access
  • Rapidly build solutions that can be tested and reviewed with customers to determine their ability to make lasting impact – while balancing the need to address urgent relief and support
  • Create a virtuous cycle of customer engagement, prototypes, and product innovation

Over time and with experience, product teams become deep experts on the customers they support, as well as the mission and services they deliver. They are then more capable of producing targeted value and more inclusive, equitable services. This agility – the ability to continuously improve, rather than just roll out, new ways to serve people and communities – helps us get closer to a future vision: one where, at every juncture from birth, a person has ready access to the services they qualify for.

Going Directly to the Source

In a previous piece, our colleagues talked about the limitations of customer experience data and how traditional feedback mechanisms only account for a small slice of the public. Surveys and focus groups tend to miss populations that don’t have access to those services at all, due to challenges such as digital literacy, language barriers, or simply a lack of time to navigate a complex system.

Focusing on the dynamic needs, challenges, and perspectives of users requires a newly empowered and skilled workforce. Enterprises like Target and Amazon have sets of tools, organizational structures, and teams to assess performance and enhance products, as well as multiple channels for customers to provide feedback. While the Federal government must account for inherently more significant and complex mission objectives, there are things we can learn from the private sector and the way customers participate in the process.

When building a product to improve equity and access, it’s fundamental to quickly engage with – not just read or interpret metadata about – real people accessing services and assess how they’re receiving and experiencing them in real time. Going directly to the source and into the field to understand unique perspectives and important distinctions within communities helps create a richer set of insights to shape continuous improvements and inform decisions around technical assistance.

It may sound simple at first, but engaging with customers often requires zooming out to see a larger web of stakeholders. We witnessed this in action when we helped a major Federal agency – the Centers for Medicare & Medicaid Services (CMS) – understand the diverse inequities and perspectives of their beneficiaries and reduce longstanding burdens and barriers to access. By engaging with broad stakeholders and populations – from patients, families, and caregivers to providers, facilities, and data support vendors – it was possible to build a rich and layered set of needs to address. Through the power of engaging across the customer ecosystem, the organization continues to eliminate burdens for all, promote interoperability, and pinpoint opportunities that help providers spend more time with their patients.

Envisioning a Future Marketplace for Equity

While rapid discovery often starts within individual organizations and programs, sustained progress for equitable service delivery needs to be addressed at the interagency level.

Through the leadership of the Office of Management and Budget, the government is already taking crucial steps toward improving equity across agencies and interconnected experiences. We’re also seeing models emerge such as the General Services Administration’s CX Center of Excellence and the Veterans Experience Office, which are creating new value by orienting innovation around customer needs, centralizing best practices, and encouraging collaboration among agencies and the private sector.

To advance equity at scale into the future, mature product organizations can help enable widespread progress beyond a single program to maximize outcomes for more people and more communities.

Imagine the possibilities if there were a marketplace of tested customer products that agencies could access, integrate, and build on to meet specific requirements. This vision isn’t far off, and the concept of “government as a platform” is already changing product development and delivery in key mission areas. With examples like Healthcare.gov and the Biden administration’s transition to USA.gov, we’re in a prime position to start thinking about equity through a similar lens and to create a reusable repository for platform-driven services and products.

We know that no single technology or packaged solution can address complex service gaps for segments of the population. But as we envision ways to overcome these gaps, embracing human-centered product development allows the government to continuously improve so it can better support individuals, families, and communities in the moments that matter. New innovations and partnerships will require goalposts that are inherently ambitious. But in this context, it’s better to miss an aspirational target, understand why it was missed, and continuously improve than to change or lower the goalposts along the journey.

Continue the equity conversation by learning more about the vital intersection between equity and digital transformation, another topic in the Booz Allen series on advancing equity across Federal government programs.


Harnessing the Right Data for Evidence-Based Equity

The Biden administration is determined to improve trust in government, and as part of that goal has made equity a top priority. Agencies are at work responding to recent mandates with assessments about their programs and policies to uncover barriers to serving all demographics and communities.

This collective movement will help rethink longstanding agency operations for program development, implementation, and transparency. But this commitment to evidence-based and continuous improvement requires a thorough examination of the data that will drive our decision making: What information matters most to equity? How can we assess it from the customer’s – rather than the agency’s – point of view? How do we start to understand and address the root causes of service gaps?

Federal leaders know that improving equity of services for the American people will take sustainable, actionable, and long-term plans to understand what’s impacting their customers across their government journey. And they’ll need to be equipped with the tools to meaningfully measure their impact along the way.

Here are some core areas of focus for the journey ahead.

Enabling Cross-Agency Insights

It’s critically important for each government organization to assess individual service and equity metrics. But the lived experiences of real people include a myriad of interactions that often span different organizations all at once. To reflect this journey, transparency and insights need to be considered from the customer’s point of view.

This means being able to look at complex issues, like housing or disaster recovery, and examine the intersections and equity gaps within and across missions. Of course, that clear-eyed picture requires data collected centrally – across divisions – for a transparent and holistic assessment.

Fortunately, we have a blueprint and a foundation for collective reporting. The Digital Accountability and Transparency Act (DATA Act) of 2014 requires agencies to publish spending information on USASpending.gov. With a single data schema and strategy, this unified approach to data collection from different organizations creates a transparent, simplified means of reporting and accessing information. While the DATA Act allows the public to see where funds flowed, we can apply this same unified mindset to answer the question: How do we know if our spending made an impact for more people, and more communities?
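To make the idea concrete, here is a minimal sketch – the field names and figures are entirely hypothetical, and the outcome metric is not something USASpending collects today – of what a unified, cross-agency record extended with an equity outcome dimension might look like:

```python
# Minimal sketch (hypothetical fields): a single cross-agency record schema in
# the spirit of the DATA Act's unified reporting, extended with an outcome
# dimension so spending can be tied to who was actually reached.
from dataclasses import dataclass

@dataclass
class EquityOutcomeRecord:
    agency: str             # reporting organization
    program: str            # program or award identifier
    fiscal_year: int
    dollars_obligated: float
    community: str          # hypothetical: population or geography served
    people_reached: int     # hypothetical outcome metric

# Because every agency reports against the same schema, records can be pooled
# and compared horizontally across programs.
records = [
    EquityOutcomeRecord("USDA", "Rural Broadband Grants", 2022, 5_000_000.0, "Rural county A", 1_200),
    EquityOutcomeRecord("HUD", "Housing Vouchers", 2022, 8_000_000.0, "Urban district B", 3_400),
]
for r in records:
    print(f"{r.agency}/{r.program}: ${r.dollars_obligated:,.0f} reached {r.people_reached:,} people")
```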

The real power of the equity assessments will be in the insights they uncover across the board. If every program assesses its small slice, we can then look horizontally across programs and use the collective data to identify underserved populations, think differently about who’s being left out or left behind, share information across programs, and work collaboratively on solutions.

But centralized data collection isn’t meaningful if we don’t have the right metrics on hand. That gets us to our next issue.

Focusing on Root Causes

When examining data and the flow of funding, it’s common to look at the symptoms of inequities. For example, say there’s data showing that certain minority small business owners are falling behind on getting contracts. Is it enough to increase the flow of contracts? That’s a reasonable decision, but it’s just tackling the symptoms and not the underlying causes of the gap. In other words, we need to uncover the “why” behind the data.

As we start to dig deeper, equity requires a focus on “humanizing” data with indicators that are not always quantitative in nature; these can include metrics about respect, inclusion, freedom of identity, empowerment, being heard, equality of opportunity, and accessibility. This information grounds the numbers in the context of lived experiences so we can start to uncover systemic barriers that may be impeding equitable access or outcomes.

Increasingly, we are seeing agencies embrace new methodologies to better understand their diverse customers and populations – which can yield unexpected results and previously unidentified metrics. For example, organizations like the U.S. Department of Agriculture (USDA) are examining areas of improvement through rich ethnographic research and journey maps that explore the intersections and points where collaboration can be improved across programs. Those types of efforts can serve as a roadmap for cross-agency research in the future as we uncover gaps in areas like veteran employment and disaster recovery, where a person needs to engage with multiple agencies at once.

Knowing the Limitations of Data

It’s common for agencies to collect customer experience data through surveys, and they’re able to use that feedback for short-term service recovery and ongoing improvements to policies, programs, and offerings. But while standard data collection reveals whether services are trusted, impactful, and easy to use – it’s just the tip of the iceberg when trying to understand barriers to service delivery.

However, it’s very difficult to quantify the part of the iceberg that’s under the water to get the whole story. For example, there are the people who:

  • Don’t know that the services are available or that they qualify for a benefit
  • Don’t have the proximity, resources, or faculties to access the services
  • Don’t trust the providers who offer the services

The absence of data about these types of people is in some cases more meaningful than the data on hand. It’s important to acknowledge the limitations of data that we have and to navigate ways to address those limitations. Given the restrictions of the digital divide, exploring alternative research methodologies is essential to capture a full picture.

Agencies can partner with state and local governments or community-based institutions to understand, quantify, and fill in gaps around awareness and accessibility. We can also collect this “under the water” data from social listening to help understand customer sentiments and respond to concerns outside of standard survey methodologies.

Following Existing Models for Success

The Home Mortgage Disclosure Act (HMDA) is a notable example of using data to address root causes of an issue and incorporate new measurements of equity into programming and decision making.

The HMDA requires that financial institutions maintain and publicly disclose information about mortgage loans to determine if they are serving communities’ housing needs and to identify possible discriminatory lending patterns. While the lending institutions are outside the immediate control of the Federal government, the accountability to report lending data enables the government to track loan approvals across racial demographics, examine root causes of discriminatory patterns, and track equity indicators across a broad ecosystem.
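As a simple illustration of how disclosure data of this kind supports equity analysis, here is a minimal sketch – using toy records rather than real HMDA data – that computes approval rates by demographic group so persistent gaps can be flagged for root-cause review:

```python
# Minimal sketch (toy data): computing loan approval rates by demographic group
# from HMDA-style disclosure records to surface possible disparities.
from collections import defaultdict

# Each record: (demographic_group, approved). Values here are illustrative.
loans = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in loans:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"{group}: {rate:.0%} approval rate ({approvals[group]}/{totals[group]})")

# A persistent gap between groups is the signal to dig into the "why" -
# the criteria, outreach, and barriers behind the numbers.
```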

This example – using data from outside the government’s immediate sphere of influence to set meaningful and impactful metrics – highlights the power of collaboration and what is possible for the Federal government to effect. So, on a broader scale, how can we modernize for the future of evidence-based equity?

Our colleagues have previously discussed the importance of establishing accountability and shared services to transform the service delivery paradigm for equity. Centralized responsibility would allow for standard procedures, sharing of best practices, and unified infrastructure to collect data from across agencies. It would alleviate the cost and burden of setting up mechanisms in each individual organization and would establish the strong muscle memory required to comply with data privacy regulations, streamline data collection, analyze data sets, and protect the data in storage. And ultimately, it would create sustainability for long-term equity initiatives.

If we can bring that kind of centralized approach to customer experience data – and learn from lighthouse agencies such as the Veterans Affairs’ Veterans Experience Office or the USDA’s Office of Customer Experience – we’ll be able to minimize inefficiencies and gain a more comprehensive understanding of how customers are utilizing the totality of the government’s services, address gaps, and better serve all Americans.

Continue the equity conversation by learning more about changing the paradigm to achieve equity in government services, another topic in the Booz Allen series on advancing equity across Federal government programs.


From EO to Action: Human Factors of Enabling a Cyber Safety Review Board

President Biden’s executive order (EO) on improving the nation’s cybersecurity was a call to action to prioritize cyber safeguards in both the public and private sectors.

A key component of the EO is Section 5, which mandates a Cyber Safety Review Board (CSRB) to systematically review significant cyber incidents. Creation of the CSRB falls to stakeholders from organizations such as the Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA), as well as private industry.

Recent media commentary has noted potential challenges in operating a review board. It has also highlighted the value of differentiating between common and rare events, and the importance of recognizing issues specific to cybersecurity compared to, say, aviation and public health.

But discourse on the CSRB hasn’t addressed a vital element: human factors. The following three imperatives highlight the value of prioritizing people, both to better serve those directly involved in cybersecurity incidents and to ensure a balanced approach within the CSRB.

Effectively Balance People, Process, and Technology

The CSRB must include individuals who are subject matter experts in cognitive science and human performance. Members of the review board with expertise in “people” will be critical to providing balanced assessments of incidents, and for creating effective mitigation plans.

The People, Process, Technology (PPT) operating model identifies three central factors that must be balanced to optimize organizational performance. To achieve balance, sufficient attention must be dedicated to each component. However, technology-focused domains such as cybersecurity often leave the human component as an afterthought. While the right technology is needed to protect information assets, and the right processes are necessary for implementing good security practices, the strengths and weaknesses of the people working with the technology and following the processes are rarely considered. If the “people” part of PPT is underrepresented, or not represented on the CSRB, there is a risk that assessments will be inadequate and proposed solutions will fail.

Recognize Cybersecurity as a Human-Machine System

Recognizing cybersecurity as a human-machine system will promote an interdisciplinary approach to incident reviews and recommendations that include readily adoptable criteria. It is unlikely that human beings will be redesigned in the near future, so CSRB recommendations must highlight opportunities to build resilient technologies that bolster human performance and protect against known human weaknesses.

The relationship between people and technology has never been stronger, and one could argue that the boundaries between people and technology are increasingly difficult to define. However, in cybersecurity incidents, people are frequently called the “weakest link.” What often goes unmentioned is that in many situations resulting in failures or breaches, people were asked to perform unrealistic tasks, or to perform with unrealistically high levels of consistency.

Technology research and development processes include stringent performance tests, along with myriad additional assessments. However, we often overlook parallel strategies for assessing human performance, including processes to better understand how people interact with technology. This oversight propagates a vicious cycle in which responsibility for human performance issues, such as clicking a bad link or misconfiguring a system, falls solely on the person or people who made the observable mistake.

Make Robust Recommendations

Cybersecurity incidents vary dramatically, and building a comprehensive understanding of the factors that contributed to an incident – or that could protect against its recurrence – will be a continuous challenge for the CSRB. To create and communicate robust recommendations, the CSRB must focus on people.

Recommendations prompt changes that often have unanticipated consequences. The CSRB’s ability to plan for, and mitigate the impact of, such consequences requires an assessment of how recommended actions might impact human performance as well as the performance of cybersecurity technologies.

In addition, the types and strength of the recommendations made by the CSRB will play a major role in their effectiveness. The CSRB can leverage findings from other industries such as aviation and healthcare, and adapt lessons learned elsewhere to improve recommendations for updating security strategies and requirements. For instance, adapting concepts from existing frameworks such as the hierarchy of intervention effectiveness (Institute for Safe Medication Practices, 1999) could promote the adoption of stronger, system-focused changes.

Consider cybersecurity awareness training: most organizations require it for all employees and offer additional training for those working directly on building or maintaining IT systems. While training will always be a valuable component of improving cybersecurity, it does not serve as a strong standalone solution. In the hierarchy of intervention effectiveness, training is categorized as a weak, people-focused intervention. In contrast, interventions such as reducing complexity, developing standards, and using automation are categorized as stronger, system-oriented interventions.
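To make the hierarchy concrete, here is a minimal sketch – the strength categories are adapted from the ISMP hierarchy of intervention effectiveness, and the sample recommendations are hypothetical – that sorts candidate actions so stronger, system-focused interventions surface first:

```python
# Minimal sketch: ranking candidate recommendations so stronger, system-focused
# interventions are surfaced ahead of weaker, people-focused ones. Categories
# adapted from the ISMP hierarchy; the sample actions are hypothetical.
from enum import IntEnum

class Strength(IntEnum):
    # Higher value = stronger, more system-oriented intervention.
    TRAINING = 1          # people-focused: education and awareness
    RULES_POLICIES = 2    # people-focused: rules and policies
    REMINDERS = 3         # checklists and warnings
    SIMPLIFICATION = 4    # system-focused: reduce complexity, standardize
    AUTOMATION = 5        # system-focused: automation and computerization
    FORCING_FUNCTION = 6  # system-focused: make the unsafe action impossible

recommendations = [
    ("Annual phishing-awareness refresher", Strength.TRAINING),
    ("Standardize a hardened baseline configuration", Strength.SIMPLIFICATION),
    ("Automate patching of known exploited CVEs", Strength.AUTOMATION),
    ("Block unsigned executables at the endpoint", Strength.FORCING_FUNCTION),
]

# Present the strongest, system-oriented interventions first.
for action, strength in sorted(recommendations, key=lambda r: -r[1]):
    print(f"[{strength.name}] {action}")
```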

In sum, recommendations must address both human-focused and systemic factors to promote holistic advancement towards a more resilient cybersecurity domain.

It’s important to note that, both at the organizational level and across government and industry, efforts to improve cybersecurity will be incremental. By building interdisciplinary teams and engaging with experts in human behavior and performance to consciously integrate human factors into our security assessments and responses, we can identify the solutions with the greatest potential for improving our cyber safeguards.

For Equity in Government Services, It’s Time to Change the Paradigm

Almost two years ago, life fundamentally changed overnight. People of all backgrounds and communities found themselves needing government services to make it through challenging times, and the Federal government responded by authorizing initiatives like the Paycheck Protection Program (PPP) and economic impact payments.

It was an urgent and visible exercise of how agencies can and must come together and coordinate to provide critical services.

Individuals trying to access pandemic programs and other government services for the first time experienced what government employees and those living in and working with underserved communities have known for years: One customer is simultaneously served by more than a single program, or even a single agency. But for many who are in a state of emergency, or who can’t navigate a complex web of services, unintentional government silos can lead to disparities in the distribution of support.

Equity in service delivery is now in the spotlight for many government leaders, who face an important question: Is it time to change the paradigm for how Federal services are developed, implemented, and delivered?

Driving Equity in Government Services With a Customer-Centric Approach

For many years, there has been a significant and worthy focus on customer experience (CX) in government. From streamlining public-facing websites to developing AI-powered chatbots, CX efforts have steadily improved daily digital interactions with government platforms. But unfortunately, they can’t account for people and communities without regular access to digital platforms, or awareness about how and when to seek help. Put simply, the government services that are supposed to lift people and communities up so they can achieve equality and realize their full potential often don’t reach those who need them the most.

The Biden administration responded to the call for equity by issuing the Executive Order on Advancing Racial Equity and Support for Underserved Communities, which lays out a whole-of-government equity agenda, and the Executive Order on Transforming Federal Customer Experience and Service Delivery to Rebuild Trust in Government. This clear and renewed commitment is a continued step toward large-scale change, and will empower agencies to look internally, identify programs for improvement, and prioritize equity for mission value. But as we look ahead, a sustained and sweeping paradigm shift requires fundamental changes to service delivery.

Everyone has a role to play in addressing the needs of diverse populations and underserved communities – including our own organization as a partner to government. Here are four areas of focus that decouple this issue from any one agency or organization and help guide our actions forward.

  1. Establish the Connective Tissue for Advancing Equity

Operating for equity ultimately requires shared accountability across Federal agencies rather than individual organizations assessing their own progress. Establishing centralized accountability and shared services, for example through OMB, would help agencies band together, tap into collective resources, and support equity program implementation. This would enable Federal programs to collaborate on solutions for the public, disseminate best practices, and benefit from standardized accountability measures.

We’re seeing government move toward standardization and shared accountability in other areas, such as the creation of the Cybersecurity and Infrastructure Security Agency (CISA), which oversees cybersecurity initiatives to defend Federal networks. Similarly, as we mature our understanding of how equity can be embedded across mission areas, an overseeing body for government services would ensure that programs and benefits are developed and administered in a standard, data-driven manner regardless of agency. Such a body would foster a truly customer-oriented approach and provide a richer understanding of diverse populations and underserved communities.

  2. Take a Holistic, Customer-Oriented Approach to Services

Standardizing a whole-of-government approach to services puts the spotlight on the customer – not the agency – and it fosters a better understanding of the needs of diverse populations and underserved communities. While each agency has its own mission and requires unique systems, as a collective they can holistically examine and mitigate complexities and inequities that occur as people navigate government.

For example, consider a family impacted by a natural disaster. They may be seeking immediate services from the Federal Emergency Management Agency, housing relief through the Department of Housing and Urban Development, and unemployment insurance through the Department of Labor. Mapping services across organizations can help uncover critical gaps between them that are acutely felt in moments of need, to create solutions that are oriented around customers and communities – not around agencies.

  3. Address the Differences Between Federal Programs and Local Experiences

While new policies and programs can emerge at the Federal level, implementation may play out differently at the local level – where operations can depend on a multitude of factors, including capacity and feasibility.

Local communities may not have the technical resources or infrastructure to access critical services and benefits – or to deploy Federal resources – in the intended way. For that reason, it’s imperative to account for capacity building as part of operationalizing equity initiatives.

For example, consider the process to apply for a Federal grant: It’s a formal system to navigate, with criteria for solicitations developed at the Federal level. Certain communities are not well prepared to compete in this system, may have struggled to access funding in the past, or perhaps experienced disinvestment because of top-down criteria. How can we reduce technical barriers – from the application complexity to reporting requirements – to increase opportunities for more people and communities? And how can we rewrite criteria and outcomes to account for more diverse challenges being addressed through grants?

Grant funding is just an example. As agencies start to assess programs, small changes like these at the Federal level can improve equitable access to resources in the field. And over time, there’s an opportunity to invest in capacity building in a program-agnostic manner so that more programs and services, from health to education, can access fundamental infrastructure and resources.

  4. Think Boldly About What We Can Innovate in the Future

In the current environment, people need to seek out their own Federal resources – and they can’t benefit from programs they don’t know to ask about. Ultimately, government services that could make a real difference are often hidden from the people they are designed to help, from veterans who need healthcare to small businesses affected by COVID-19.

Earlier, we discussed a family in the aftermath of a natural disaster. In the future, what if that family proactively received information from government about all of the support they qualify for in their time of need? What if they didn’t have to independently navigate a multitude of different organizations, find out what services are available, and submit duplicative information? A customer-oriented and proactive model has the power to upend experiences and improve equitable access to critical services.

We can start to imagine benefit delivery and eligibility in a holistic way, where government has a platform to collect and assess data from across Federal organizations. That would allow agencies to work together and flip the experience – actively reaching out to people based on the data government already has, instead of waiting for people to come on their own. With a rich understanding of the full customer journey across the government continuum, and of where agencies can have the most meaningful mission impact together, we can start to create a framework for this data-driven future.
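As a rough illustration of that flipped model, here is a minimal sketch – the programs, data fields, and eligibility rules are entirely hypothetical – of a proactive matching step that checks data the government already holds against program criteria:

```python
# Minimal sketch (hypothetical programs and rules): proactively matching a
# household's existing record against program eligibility criteria, so the
# government can reach out rather than wait for separate applications.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Household:
    annual_income: int
    in_disaster_area: bool
    unemployed_members: int

# Each rule is a predicate over data already on file.
PROGRAMS: Dict[str, Callable[[Household], bool]] = {
    "Disaster housing relief": lambda h: h.in_disaster_area,
    "Unemployment insurance": lambda h: h.unemployed_members > 0,
    "Income-based assistance": lambda h: h.annual_income < 40_000,
}

def proactive_matches(h: Household) -> List[str]:
    """Return every program this household appears to qualify for."""
    return [name for name, rule in PROGRAMS.items() if rule(h)]

family = Household(annual_income=35_000, in_disaster_area=True, unemployed_members=1)
print(proactive_matches(family))  # all three hypothetical programs match
```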

Change is happening across government, and now is the time to harness this momentum and challenge ourselves to consider a new paradigm. We’re seeing Federal organizations engage with customer segments in new ways to fully understand what needs are not being met or what services are not being delivered fairly. The next opportunity will build the connective tissue between these efforts, focus on incremental success, and create the conditions for agencies to collaborate over the long term.

Continue the equity conversation by learning more about harnessing the right data for evidence-based equity, another topic in the Booz Allen series on advancing equity across Federal government programs.


Critical Questions to Ask When Considering Explainable AI (XAI) for Your Federal Agency

By now, Federal IT decision-makers are very familiar with machine learning (ML) and artificial intelligence (AI).

They know that – especially when augmented by artificial intelligence for IT operations (AIOps) to automate IT functions – ML and AI can expand key capabilities almost without limit.

Examples of these capabilities include: helping agencies better design weapon systems; anticipating mission-hindering weather conditions; responding to disasters; predicting equipment maintenance needs; managing supply chains and inventory; distributing vaccines; and much more. Seeing the vast potential for improved efficiency and efficacy, 91 percent of agencies are either piloting or adopting AI in some form, compared to 73 percent of global organizations overall.

And while the government continues to explore what ML/AI/AIOps can do, it is missing out on an essential knowledge component: It doesn’t know enough about how and why these intelligent machines do what they do.

This leads to trust gaps that are hindering progress. It is often difficult even for IT practitioners to understand an ML/AI/AIOps module’s decision-making and resulting actions, much less business and operations-side users. The inner workings of these systems are so complex, they’re commonly called “black boxes” within tech circles. Given their impact on agencies’ daily tasks and long-term strategies, IT teams must drive toward a greater awareness of these innovations.

This is where the concept of explainable AI (XAI) enters the equation, as a means of enabling humans to better comprehend these technologies. It provides the so-far elusive “how” and “why” answers in a way that users both understand and, more importantly, trust. As a result, XAI is poised for major global market growth, increasing from $3.55 billion two years ago, to nearly $21.8 billion by 2030.

Within the government, however, less than one-fifth of agency executives say they are preparing their workforces for explainable AI systems.

To illustrate XAI’s value, let’s say a Navy officer on a ship is at her workstation, and her computer screen indicates that an AI system has come up with a fix for a critical application that is not performing as needed. Without XAI, the officer struggles to figure out how the machine arrived at its root-cause and remediation recommendation. She sorts through an onslaught of data and, after a half hour, concludes (albeit with a hint of doubt) that the machine’s read on the situation is likely correct, and she puts its remediation steps in play.

But the half hour represents lost time. In the case of an application that is down while supporting key mission functions such as intelligence, surveillance, and reconnaissance (ISR), the outcomes could prove crippling.

With XAI, however, the officer gains an immediate sense of how the machine “thinks” and why it is recommending this specific remediation. She can see all of the relevant data upfront – presented in an end-to-end structure that allows her to review and replay intermediate results – gaining the required trust in the machine and authorizing remediation without losing time.
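As a rough illustration of the difference explainability makes, here is a minimal sketch – the telemetry features, training data, and labels are toy values, not any specific Navy or vendor system – that pairs a root-cause prediction with per-feature importances so an operator can review the “why” at a glance:

```python
# Minimal sketch (toy data): pairing an AIOps-style root-cause prediction with
# a per-feature explanation, so an operator sees *why* the model recommends a
# remediation, not just *what* it recommends.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical telemetry features for an application health check.
FEATURES = ["cpu_pct", "mem_pct", "disk_io_wait_ms", "net_errors_per_min"]

# Toy training rows of telemetry; labels are the observed root cause.
X = [
    [95, 40, 5, 0],    # CPU saturation
    [30, 92, 4, 1],    # memory exhaustion
    [25, 35, 180, 0],  # storage bottleneck
    [28, 38, 6, 220],  # network fault
]
y = ["cpu", "memory", "storage", "network"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

incident = [[33, 37, 210, 2]]  # live telemetry for the degraded application
prediction = model.predict(incident)[0]

# The tree's feature importances give a first-order "why" the operator can
# review at a glance instead of sorting through raw data for half an hour.
explanation = sorted(zip(FEATURES, model.feature_importances_), key=lambda p: -p[1])

print(f"Predicted root cause: {prediction}")
for name, weight in explanation:
    print(f"  {name}: {weight:.2f}")
```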

These, and other scenarios, make a compelling case for XAI adoption. In pursuing this course and seeking buy-in, agency IT leaders need to carefully consider the following critical questions and responses:

What exactly can AI do for us?

AI is a central driver of digital transformation, empowering government teams with commercial-like efficiencies, reliability, productivity, security, and speed. It applies predictive analytics to mission-related tasks and objectives, so users can make informed, data-backed decisions for future initiatives. Automation plays a lead role in this, bringing all of these benefits, along with the reduction of errors.

The federal government is committed to this transformation, as 82 percent of agency officials believe their organization needs to become more technologically advanced. Three of five feel that COVID-19 – which ushered in the work from home (WFH) era, creating a demand for new technology arrangements and collaboration/communications tools – has expedited their digital transformation.

How do I illustrate XAI’s value to gain budget support for it?

It begins with trust. In healthcare, for example, Veterans Affairs (VA) doctors are using XAI to make better decisions about treatment. Standard AI may tell them that a patient is at high risk and recommend a treatment plan. But XAI will tell them why the patient is at high risk, and why it is recommending the treatment plan. This establishment of trust will lead to additional buy-in for future XAI initiatives.

There are also compliance considerations. Regulatory policies often mandate the capturing of “explanations” for auditing purposes. That’s where XAI steps in, to satisfy these requirements.

Do I need to deploy XAI for all ML/AI/AIOps functions, or should we proceed more selectively?

XAI adds value to all AI/ML/AIOps functions. As indicated, it directly addresses the trust component. In addition, it enhances the predictive power of AI. If you’re working strictly with a black box, you won’t understand its answers – and you’ll be at a loss as to how to act on them in ways that could improve future outcomes. You also can’t improve existing AI algorithms if you can’t comprehend the program. XAI fills in these clarity gaps.

It’s perfectly reasonable to feel somewhat skeptical about yet another “new” technology that comes along. But let’s not think of XAI as an entirely new concept. Instead, we should treat it as a logical extension of existing AI, because it is. After all, everything else in business and life – R&D, sales/marketing initiatives, entertainment platforms, automobiles, appliances, etc. – perpetually drives toward the next improvement.

AI should be no exception. It has proven its value in providing the “what” in the answers we seek. XAI gives us the essential “how” and “why,” enabling us to fully leverage the power of this innovation. As a result, agencies can more readily and effectively achieve mission goals, because XAI gives them greater confidence in their decisions.

The Telework Model for Government: COVID Lessons for Building an Effective Workforce

Federal agencies continue to confront the challenge of enabling and expanding telework as the need to protect against the spread of COVID-19 affects employees in a variety of roles nationwide. Last year, through technology and ingenuity, agencies rose to the challenge, deploying telework for many functions that previously did not seem suited to it. 2020 created a laboratory for experimenting with technical solutions, personnel management techniques, and security practices—and 2021 is pushing those same limits.

The lesson learned? For many government organizations, telework can be a mission enabler that serves customers – the public – well when it’s implemented strategically and thoughtfully. In other cases, a hybrid model that includes remote and in-person working environments can best serve the mission. Most recently, Federal policy discussions have focused on weighing the most equitable and effective approach that agencies can and should take, based on lessons from 2020 when a huge shift occurred in moving most of the Federal workforce to remote operations.

With the pandemic’s impact on the nation changing almost daily, Federal agencies continue to consider what their workplaces will look like in the future. Agencies are still evaluating their employee re-entry plans using the Biden administration’s guidance outlined in OMB M-21-21, and realizing that a one-size-fits-all approach does not work for their organizations. Instead, new ways of working are needed to best deliver on their missions, operate efficiently, and meet their talent needs – including non-traditional options such as telework, hybrid models, and co-working.

During this period of agencies’ self-reflection and evaluation, their leaders have an opportunity to learn from the experiences that private sector organizations have had in successfully managing telework and hybrid employee models. Many industry organizations that help government deliver on their missions, such as Cognosante, have been operating with both models for more than a year and have “lessons learned” to share about a successful telework strategy.

To aid our government partners in their evaluation of their own situations, in this article we highlight technological and security considerations, as well as the workforce, organizational, and cultural aspects that must be addressed for a successful teleworking strategy.

Cognosante is headquartered in metropolitan Washington, DC, with employees located throughout the U.S. Although most employees have worked remotely from home during the COVID-19 pandemic, we typically function in a hybrid model, with both onsite and remote employees.

As a result of working this way, we’ve learned that – with a sound technology strategy, a robust employee engagement plan, and a determination that location-based working doesn’t impact the mission – many organizations can maximize the benefit of multiple workforce arrangements, all while mitigating risk and retaining the flexibility to adapt to changing needs.

Here are a few of our lessons learned from the experience:

A Sound Technology Strategy Mitigates Risk, Removes Barriers, and Facilitates Collaboration

  • Successful telework or hybrid models begin with a solid cloud foundation. Many agencies have already embraced cloud technology for its ability to provide scalable infrastructure for a variety of IT needs. During COVID-19, cloud technology really enabled our own paradigm shift to remote work. With the majority of Cognosante employees working remotely at the start of the pandemic, we had to ensure that our workforce could access the applications and data necessary to perform work from home with no disruption to our customer deliverables. Our forward-thinking cloud strategy enabled us to be fully prepared for remote and hybrid work when the pandemic hit in early 2020.
  • Cloud infrastructure is as secure as, or more secure than, on-premises data centers. By utilizing services like Microsoft Azure or Amazon Web Services that are already certified at a FedRAMP Moderate or High level, agencies can be assured that their most stringent requirements for non-classified work are met. Cloud IaaS, PaaS, SaaS, and hybrid cloud are all options that can be leveraged to meet organizational needs.
  • With the expanded surface area for attacks, cybersecurity tools can be used to address any new risks. Recent cybersecurity attacks have heightened the focus on the security of telework and remote collaboration tools. Technology can be used to create a telework environment that mirrors the physical controls that exist in an office, even for roles requiring access to sensitive information. For example, Cognosante employs Zscaler to maintain and enforce a “clean site” list to ensure that an individual can’t access prohibited internet sites from a company laptop. Cognosante makes efficient use of Absolute Resilience to enforce geo-fencing and to turn off access to a machine in the event of loss or theft, making a laptop useless to anyone outside the organization. And malware defenses can prevent information loss at an individual level.
  • For customer service functions, technologies like Cognosante’s eSante Aware can assist with supervision of individuals working remotely with access to sensitive information to notice irregular behaviors (such as an individual attempting to capture personal information using a cell phone) that would not otherwise be visible in a remote work environment. Tools like TeamViewer can allow supervisory or support staff to access an employee’s computer remotely to provide technical assistance when it is needed.
  • Applications can confirm connectivity before telework begins. Telework is only possible if an employee can reliably connect to an organization’s network. Apps or websites that collect information about an employee’s bandwidth can be used to determine whether the employee’s home network and device meet the minimum connectivity requirements for telework (a rough sketch of such a check follows this list). This is particularly useful for organizations that allow employees to work on personal devices.
  • End-user controls remain critical. Even with a robust technology strategy, employers must recognize that end-users remain the first line of defense against information loss or privacy breaches. Technical solutions must be paired with end-user controls, including multifactor authentication; password protections; security, compliance, and privacy training; and USB or print controls. In addition, those controls must be combined with a personnel strategy that sets and enforces rules of behavior.
  • Collaboration tools can facilitate meetings and employee collaboration, and even foster organizational identity. Tools like Microsoft 365 and SharePoint provide secure and scalable document sharing and storage, with rules to control access appropriately. Microsoft Teams or even Zoom for Government can maintain employee engagement and facilitate collaboration with individuals in different geographic areas. Transitioning from audio to video communication can promote employee engagement, communication, and inclusion by ensuring that remote employees can contribute equally to discussions. Federal CIO Clare Martorana cited the importance of collaboration tools in her address to ACT-IAC’s Emerging Technology and Innovation virtual conference.
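Here is the connectivity check referenced above, as a minimal sketch – the test URL and thresholds are placeholders, and a production check would fetch a dedicated, agency-hosted test file of known size:

```python
# Minimal sketch (placeholder URL and thresholds): a pre-telework connectivity
# check that estimates download throughput before a remote session begins.
import time
import urllib.request

TEST_URL = "https://example.com/"  # placeholder; use an agency-hosted test file
MIN_MBPS = 5.0                     # hypothetical minimum download throughput
MAX_SECONDS = 2.0                  # hypothetical maximum time for the test fetch

def check_connectivity() -> bool:
    """Fetch the test resource and estimate throughput from elapsed time."""
    start = time.perf_counter()
    with urllib.request.urlopen(TEST_URL, timeout=10) as resp:
        payload = resp.read()
    elapsed = time.perf_counter() - start

    # Crude estimate: bits transferred divided by elapsed time. Small payloads
    # understate true bandwidth, which is why a sized test file helps.
    mbps = (len(payload) * 8 / 1_000_000) / elapsed
    print(f"fetched {len(payload):,} bytes in {elapsed:.2f}s (~{mbps:.2f} Mbps)")
    return mbps >= MIN_MBPS and elapsed <= MAX_SECONDS

if __name__ == "__main__":
    ok = check_connectivity()
    print("Meets minimum telework requirements" if ok else "Below minimum requirements")
```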

Creating a Culture of Engaged and Productive Teleworking Employees is Crucial to Success

While the tools and technologies exist to support telework, management challenges remain. Managers must ensure that on and offsite employees have equal access to information, resources, and opportunities, and that they are treated equitably in performance evaluations. Managers also need to meet employees’ need for personal connections by prioritizing employee engagement and morale.

In the early days of the COVID-19 pandemic, we transitioned our hybrid workforce to a fully remote environment in fewer than five days. Our staff has remained largely remote during the COVID-19 pandemic, and, when appropriate, will return to a hybrid model informed by customer and program needs.

By focusing on inclusion, communication, and fairness, organizations can adopt a management strategy that fosters employee engagement and morale without sacrificing accountability or productivity. With thoughtful planning, agencies and government contracting companies can navigate this transition in a way that is seamless to constituents and customers – and maintain employee morale as we continue to move toward national and global containment.

Successful management of employees who work in a remote environment requires four components:

  • Proactive Management is Key: Managers must be aware not only of employee outputs and results, but also of employee connectedness and satisfaction. Identifying and implementing technologies that increase collaboration, such as video conferencing, helps managers remain engaged with their teams, particularly when they are accustomed to frequent face-to-face interaction. Incorporating change management techniques may be necessary to support a new way of working.
  • Train for the Remote Environment: Remote work requires a different set of soft skills for both employees and managers. Providing web-based training on organizational expectations and successful telework strategies, along with clear guidance on telework and security policies, helps employees and managers adapt to changing organizational norms and minimizes confusion about expectations.
  • Communicate, Communicate, Communicate: Frequent and transparent communication about policies, procedures, and expectations maintains employee satisfaction and boosts morale. Regular employee newsletters and leadership briefings can foster greater mission-focus and provide a sense of context, particularly for employees in large organizations with a diverse set of functions. Virtual brown bag lunches are an excellent opportunity to promote a sense of employee connection and situational awareness. To ensure widespread awareness and understanding of organizational priorities, use a range of communication channels, including email, teleconferencing, video messaging, internal websites, FAQ documents, and more.
  • Engage Your Employees: Employee engagement becomes more critical when employees are remote, not less. Every employee has varying needs for connection and belonging, so presenting options for how teams should continue to engage with each other, with their leaders, and with the organizational culture is critical.

The COVID-19 pandemic has forced a paradigm shift in how people work. By leveraging the lessons learned over the past year and a half, as well as the experiences of organizations that previously implemented remote and hybrid work models, Federal agencies can create a future workforce that is flexible, secure, and well positioned to take on the challenges of today, tomorrow, and beyond.

DevSecOps: 4 Steps for Mitigating the Next Cyber Attack in Your Federal IT Environment

Across the U.S. government, Federal CISOs and CIOs are working to address potential vulnerabilities on the new “front lines” of defense: cybersecurity and the software supply chain. The SolarWinds and Colonial Pipeline cyberattacks raised more widespread visibility and understanding of the impact of these threats, and in the months that have followed, a cadre of new mandates, draft legislation, and operational directives are taking aim at solving existing vulnerabilities and preventing new ones.

In recent news, the Cybersecurity and Infrastructure Security Agency (CISA) issued a new binding operational directive that gives Federal civilian agencies a six-month clock to remediate known vulnerabilities in software and hardware used in Federal information systems, including both on-premises and cloud-hosted systems.

In addition, the White House’s cyber Executive Order (EO) issued earlier this year specifically highlights current issues with software supply chain security and lays out stringent requirements for improvement, stating, “The development of commercial software often lacks transparency, sufficient focus on the ability of the software to resist attack, and adequate controls to prevent tampering by malicious actors. There is a pressing need to implement more rigorous and predictable mechanisms for ensuring that products function securely, and as intended. The security and integrity of ‘critical software’ — software that performs functions critical to trust… Accordingly, the Federal Government must take action to rapidly improve the security and integrity of the software supply chain, with a priority on addressing critical software.”

President Biden has asked the National Institute of Standards and Technology (NIST) to work with industry organizations and vendors to create a new framework for improving software supply chain security.

Software developers are truly on the front lines, and while more and more Federal agencies are embracing the DevOps principles of “release often and quickly,” the ability to also release securely is essential. Software updates in some application environments can take months or even years to deliver – which is problematic if the update is addressing a vulnerability or an urgent mission requirement.

So what steps can Federal agencies and their mission partners take to secure the full development lifecycle and software supply chain – including development and updates/patching – while still enabling rapid software delivery? Here are a few recommendations to consider:

Step 1: Unite Security Teams with DevOps Teams

While developers recognize that security is important, oftentimes it’s not their top priority. More typically, DevOps teams prioritize delivering new capabilities and features to the business and customers, often as part of larger digital transformation initiatives. And developers often view security as something that will slow down deployments.

When developing software, it’s important for security teams and DevOps teams to work closely to “shift left” and effectively bake security into every stage of the development process.

The security team should communicate new, needed features and security guidelines to the DevOps team. In turn, the DevOps team must have some security knowledge, should get in the habit of “thinking like a hacker,” and should provide a realistic deployment plan for updates to ensure they are secure.

Step 2: Write Once, Use Many

Under the Department of Defense’s (DoD) Platform One initiative, developers working within command operations can access a central repository of secure, tested, and validated software components that have been hardened to the DoD’s specifications. The program – known as Iron Bank – empowers developers to deliver custom mission applications rapidly and securely. It also gives vendors, including JFrog, Continuous Authority to Operate (C-ATO) with government defense organizations.

Step 3: Implement a Multi-Pronged Testing Approach

Following initial development, or prior to significant updates, all software must go through rigorous multi-pronged security testing. The tests should be performed by the third-party developer, and, ideally, also by an external security auditor. The external audit should include manual research, automated static analysis, and automated dynamic testing.
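
As one small piece of such a regimen, the automated static analysis pass can be wired directly into a build pipeline. Below is a minimal sketch using Bandit, an open-source static analysis scanner for Python code; the source directory and the fail-on-high-severity policy are assumptions to adapt to your environment, and Bandit must be installed for the script to run:

    # Static-analysis gate: run Bandit over the codebase and block the
    # deployment if any HIGH-severity finding is reported.
    import json
    import subprocess
    import sys

    SOURCE_DIR = "src"  # assumed location of the code under test

    def run_bandit(source_dir: str) -> list:
        # -r scans recursively; -f json produces machine-readable output.
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-f", "json"],
            capture_output=True, text=True,
        )
        return json.loads(result.stdout).get("results", [])

    if __name__ == "__main__":
        high = [f for f in run_bandit(SOURCE_DIR)
                if f.get("issue_severity") == "HIGH"]
        for finding in high:
            print(f"{finding['filename']}:{finding['line_number']} "
                  f"{finding['issue_text']}")
        sys.exit(1 if high else 0)  # non-zero exit fails the pipeline stage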

Another key testing process is red teaming, where a team of specialists mounts a simulated attack on an agency’s systems. It is a valuable tool that agencies should use more often, and it can be employed during the design process as well as the qualification cycle.

Red teaming has traditionally been used to test critical software. But, today, the risks are greater – driven by the risks associated with software supply chain attacks and by the tremendous proliferation of connected devices. Projections estimate there will be 24 billion IoT devices in use worldwide by 2026. According to Gartner, spending on the worldwide government IoT market is expected to jump 22 percent in the next year. This means millions more entry points for threats.

With an exponential rise in the number of supply chain attacks over the last year, now every system is critical and can provide a point of entry for malicious code. For example, the Army might not test software for a payroll system as rigorously as software for a weapons system. The payroll system, however, could be the entry point to attack the entire network. Red teaming identifies weaknesses and vulnerabilities in systems that can be mitigated prior to an attack, protecting the entire network.

Step 4: Implement Vulnerability Disclosure; Leverage the CVE Program

Establishing a vulnerability disclosure program (VDP) for your own organization is essential. This process involves two steps – sharing and receiving information. Organizations can tap into existing VDPs for an added layer of protection – specifically to deal with unforeseen situations that might include unusual or creative attack vectors. A VDP creates a system for organizations and researchers to easily communicate, identify vulnerabilities before they can be exploited, protect essential data from being accessed, and stay a step ahead of cybercriminals. As a best practice, use the knowledge base in VulnDB, and contribute newly discovered vulnerabilities to help make the cyberworld a better, safer place.

Another valuable resource is the CVE program. Their mission is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities. There is one CVE Record for each vulnerability in the catalog. Partners made up of public and private sector tech organizations (including JFrog, Red Hat, Google, and Microsoft) publish CVE Records to communicate consistent descriptions of vulnerabilities. Cybersecurity and IT professionals worldwide use CVE records to coordinate their efforts for addressing critical software vulnerabilities.
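
Because CVE Records are republished through NIST’s National Vulnerability Database (NVD), they can also be pulled programmatically. The minimal sketch below queries the NVD’s public CVE API; the endpoint version and response layout reflect the publicly documented v2.0 interface, so field names may need adjusting if the API changes:

    # Look up a single CVE Record via NIST's NVD API, which republishes
    # the CVE catalog. Assumes the v2.0 endpoint and JSON layout.
    import json
    import urllib.parse
    import urllib.request

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def fetch_cve(cve_id: str) -> dict:
        url = f"{NVD_API}?{urllib.parse.urlencode({'cveId': cve_id})}"
        with urllib.request.urlopen(url) as response:
            return json.load(response)["vulnerabilities"][0]["cve"]

    if __name__ == "__main__":
        cve = fetch_cve("CVE-2021-34527")  # the "PrintNightmare" flaw
        summary = next(d["value"] for d in cve["descriptions"]
                       if d["lang"] == "en")
        print(cve["id"], "-", summary[:200])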

Taken together, these measures will provide Federal agencies and military organizations with the best opportunity to innovate and deliver rapid custom applications and updates, securely.

Better Cyber Hygiene Helps, but Federal Security Needs SASE Lift

The recent Binding Operational Directive issued through the Cybersecurity and Infrastructure Security Agency (CISA) requiring Federal agencies to immediately patch hundreds of cybersecurity vulnerabilities affirms the Biden administration’s prioritization of securing Federal government networks. It also reinforces that improved cyber hygiene is critical to protect against malicious adversaries seeking to infiltrate government systems and compromise data.

Earlier this year, the Senate Committee on Homeland Security and Government Affairs issued a bipartisan report entitled, “Federal Cybersecurity: America’s Data Still at Risk” that outlines the challenge the government has with a mountain of technical debt. The report highlights that seven of the eight agencies audited used unsupported applications and technologies, and in doing so neglected to implement basic cybersecurity standards necessary to protect America’s sensitive data.

Although Federal cybersecurity spending has increased significantly over the past 10 years, the problem of securing our critical information and protecting our employees against adversaries hasn’t gotten any easier. And many would agree that the new, hybrid government workforce presents an additional set of challenges.

Bottom line: the proliferation of telework and the complexities associated with legacy government networks – in conjunction with the enduring shortage of cybersecurity professionals to perform these critical services – illuminates the need for the Federal government to modernize existing network security architectures and leverage cloud-native services for today’s network security functions.

Getting Going

The challenge for many agencies is knowing where to start. To securely connect today’s hybrid Federal employee and alleviate the burden on an already taxed cyber workforce, government agencies should adapt their network and security plans and focus their digital transformation efforts on moving to a Secure Access Service Edge (SASE) platform that provides the fundamental zero trust principles for secure connectivity outlined in NIST SP 800-207.

Moving to a SASE platform enables agencies to begin to implement a zero trust model, as required by the Biden administration’s Cybersecurity Executive Order issued in May.

When looking for a SASE platform, agencies should prioritize what networking and security capabilities they require to strengthen their cybersecurity posture and understand the impact their near-term decisions will have on their longer-term goals. Implementing the correct SASE service – one that provides native IPv6 support throughout the entire platform, for example – is critical for the Federal government to achieve its zero trust goals.

Some agencies will also start deploying Zero Trust Network Access (ZTNA), replacing their VPNs as a first step. Having ZTNA as an integrated security microservice within a SASE platform – and not as a standalone product – will help simplify and streamline the end-state architecture.

Insight into the expected end state – not only from a technology perspective, but also including doctrine, people, and process – will allow Federal agencies to achieve the outcomes they’re looking to accomplish through modernization:  increased security, reduced cost and complexity, improved performance, ability to deliver on mission, and assured compliance.

DoD, Feds Plot Top Cyber, Cloud Priorities for 2022

Top cybersecurity officials from the Defense Department (DoD), Federal civilian agencies, and the private sector laid out their developing strategies for zero trust security migration, cloud adoption, and meeting requirements of the Biden administration’s Cybersecurity Executive Order at an October meeting of the Foundation for American Science and Technology (FAST).

Emerging from the meeting was a much-needed dialogue between the public and private sectors for better collaboration, and a realization that while each Federal agency has its own mission and unique challenges, many share a similar focus.

Zero Trust and the Cyber EO

Executive Order 14028 on Improving the Nation’s Cybersecurity was released in May with nine sections outlining specific focus areas for security improvements. The EO places significant emphasis on zero trust security adoption – mentioning it eleven times. But six months after the order’s release, and despite several guidance documents from the Office of Management and Budget (OMB), the Cybersecurity and Infrastructure Security Agency (CISA), and the National Security Agency (NSA), Federal agencies are in many ways still grappling with how to best incorporate zero trust concepts into their overall security strategy.

While zero trust guidance provides a common roadmap, each agency faces the challenge of charting an effective course for adoption and layering zero trust onto its existing security strategy without disrupting mission sustainment. Despite the EO – and without a strong, proven use case as precedent – it can be difficult to make the first move, especially without dedicated funding.

Agencies are hoping that the criticality of zero trust, however, may provide an opportunity to break the traditional mold for procurement and implementation. They are pushing for changes to the requirements process with things like a lightweight or continuous Authority to Operate (ATO) – reducing the number of controls from hundreds to a few dozen core controls, and reducing the duration of the overall process. Sometimes referred to as a rapid ATO, this continuous authorization can allow software to be authorized once and used many times, providing the opportunity for security solutions to not just be used and shared across a single agency, but across multiple agencies as well.

Better security doesn’t just require modern solutions, it requires a modern approach for procurement, authorization, and adoption. Just as legacy tech can introduce security risks, legacy processes can allow pervasive security risks and threats to persist.

Cloud Adoption and Cyber in the Cloud

While many agencies were already leveraging the cloud in some capacity, the pandemic served as a forcing function that has propelled further adoption to satisfy requirements of accessing data and applications remotely. What’s top of mind now is consolidating and converging cloud instances for better security and visibility. Absent specific requirements for cloud adoption, and spurred by the need to maintain the mission, cloud management and security now frequently fall to individual organizations to sustain independently. This has created a massive gap in visibility and increased risk for these organizations.

In addition to visibility, Federal organizations require a hybrid cloud model and cloud portability – the ability to move applications and data from one cloud provider to another, and to keep some critical data and applications on premise. Limited budgets are a key driver for the government’s requirement for portability and flexibility. Much like consumers shop around for the best value for goods or services, agencies have to use the service that represents the best value within their budget – and sometimes that means changing services.

Federal agencies at the October FAST meeting agreed that the notion that moving to the cloud will save money is a misnomer. While there may be some long-term cost savings and opportunities for improved efficiencies, the top drivers behind cloud adoption are mission requirements and the need for modernization and better security.

Modernization, Integration, and Continuous Authorization

IT modernization has been an ongoing effort across government for years, but in many cases, modernization really just means catching up as opposed to getting ahead. Government systems and networks weren’t architected for the cloud. Those that haven’t yet been modernized were built to support an on-premises environment, both in terms of IT operations and security.

While cloud adoption is but one facet of an overall modernization strategy, it’s a big one. From data transfer and data center consolidation to application and tools rationalization and retraining and retooling personnel, it’s a time-consuming and resource-intensive process. And, because of the time required, the best-laid strategy for modernization and adoption might prove outdated by the time it’s fully funded and executed.

Federal and industry participants agreed that just as government needs to streamline procurement and ATO processes, industry can help reduce stove-piped solutions by providing integrated solution offerings. While Federal agency participants acknowledged the need to retire legacy tech, they also said they are looking for integrated solutions that augment what they already have, while complementing other new investments.

Solution providers selling to the government, of course, face the challenge of trying to provide Federal-specific solutions for a federated government that’s comprised of hundreds of individual organizations and sub-organizations.

What’s next?

While there are certainly some significant obstacles to implementing the necessary changes to meet the requirements of the Cyber EO, there are two clear actions that must remain in focus for both government and the private sector.

First, the mutual acknowledgement that legacy structures aren’t just limiting, but actually increase risk – not only in terms of technology, requirements, strategy, and processes, but also in terms of security expectations. Decisions in each of these areas that were made in years past may have been the best decisions at the time, but that doesn’t mean they are the right decisions for today’s environment. It’s never been more critical that the public and private sectors determine ways to overcome long-standing limitations brought about by precedent and political inertia, and demand improvements that exceed the current security status quo.

Second, there must be a willingness to assemble and speak candidly across the public and private sectors. Without transparent communication and a sincere desire to collaborate for the betterment of our nation’s security, progress will be difficult to realize for either sector. To that end, FAST will reconvene on Jan. 13, 2022 to continue the conversation and chart a course for tackling the hardest problems facing government.

Cloud-Native Government: How to Transform With Intention

We’re experiencing an evolution towards a cloud-native government, where capabilities are viewed as modular and shared like a commodity. To meet mission requirements into the future, this evolution will allow agencies to continuously adapt – flexing these modular, shared cloud-based capabilities to their changing needs.

While there’s pressure to fast-track this modernization, IT leaders must take the time to build an intelligent foundation that ensures resilient cloud solutions for the long haul. This requires an intentional framework across the full cloud lifecycle: determining what goes to the cloud; how to migrate; and how to achieve long-term value once you’re there.

Let’s walk through each of these areas.

What to prioritize for the cloud: Strategically select top candidates

While organizations have a myriad of systems that can be moved into the cloud, IT leaders require an intentional business case for determining cloud eligibility. There are significant factors to consider for a purposeful migration strategy, including:

  • Costs, particularly with legacy systems;
  • Risks, including potential disruption to essential activities; and
  • Levels of control gained or lost.

Rather than replacing every legacy system, agencies should focus on the systems and applications that will significantly impact mission and business outcomes with enhanced scale and flexibility. IT organizations should use continuous feedback to inform progress and leverage incremental successes to demonstrate the long-term value of a purposeful approach.

How to migrate to the cloud: Use “pathfinding” for streamlining and speed

Once an organization determines its eligible candidates for cloud migration, it needs to grapple with the next set of decisions: How should applications and systems be sequenced for migration? And how exactly should they be configured?

From on-premises applications that are containerized, to those that have associated service level agreements – organizations need a strategic, repeatable mechanism to handle wildly different scenarios. It’s tempting to migrate similar applications first or containerize dissimilar applications for convenience’s sake. But a smart migration involves pathfinding principles.

With this approach, teams of IT, mission, and business stakeholders can work together to identify common migration patterns across a portfolio, developing lists of applications with similar characteristics and selecting one application from each to serve as a “pathfinder.” This informs the development of a common migration process across the mix of patterns, enabling teams to gather early lessons and create repeatable pathways for accelerating and streamlining future migrations.
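
In miniature, pathfinding is a grouping exercise. The sketch below works against a hypothetical application inventory with illustrative fields, bucketing applications by migration-relevant traits and nominating one pathfinder per bucket:

    # Group applications by shared migration traits, then nominate one
    # representative per group as the "pathfinder." Inventory fields and
    # the grouping key are illustrative assumptions.
    from collections import defaultdict

    inventory = [
        {"name": "GrantsPortal",  "containerized": True,  "has_sla": True},
        {"name": "HRSelfService", "containerized": True,  "has_sla": True},
        {"name": "LegacyLedger",  "containerized": False, "has_sla": True},
        {"name": "FieldSurvey",   "containerized": False, "has_sla": False},
    ]

    def group_by_pattern(apps):
        groups = defaultdict(list)
        for app in apps:
            groups[(app["containerized"], app["has_sla"])].append(app)
        return groups

    if __name__ == "__main__":
        for pattern, apps in group_by_pattern(inventory).items():
            pathfinder = apps[0]  # in practice, pick the lowest-risk candidate
            followers = [a["name"] for a in apps[1:]]
            print(f"pattern={pattern}: pathfinder={pathfinder['name']}, "
                  f"then={followers}")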

It’s not always the easiest approach – often you’ll encounter tough problems along with the quick wins that build momentum and excitement – but pathfinding is strategic and focuses on enterprise-wide scalability. You’re laying the groundwork up front, then applying the lessons learned and best practices to future applications.

Where to find value in the cloud: Centralize and share resources

While agencies can look to policies such as Cloud Smart to provide a technical framework for cloud adoption, long-term value at scale requires a new operating model.

To this end, agencies are consolidating cloud expertise into Centers of Excellence, creating enterprise shared services and environments, and even exploring how to extend these services to other agencies. This approach enables IT leaders to manage cloud platforms at scale and allows technical teams to focus on mission transformation rather than on business processes.

Consider the cloud offerings being developed by agencies for agencies, like the Treasury Department’s Workplace Community Cloud, and GSA’s Cloud Marketplace. Centrally operated cloud services connect development teams to an ecosystem of foundational and enabling elements, from virtual machines to low-code platforms, so they can focus their time and energy on the agency’s mission.

Think of moving to the cloud like moving to a new house. You must understand the resources required today – to know what to buy, leave behind, or bring with you – and have a strategy for making the space work as needs evolve. We’re on a journey towards a cloud-native government that accelerates capabilities through commodity IT services – from low-code platforms to serverless architectures. Designing a purposeful cloud strategy today enables agencies to create the resilient but flexible foundation to continuously improve mission delivery at the pace of technological advancement.

DoD and VA Health Networks Face Growing Threat From Medical-Device Vulnerabilities

When it comes to the financial impact of data breaches, the healthcare sector consistently tops the list of industries. Dollars and cents, however, represent only a fraction of the damage, especially when it comes to military healthcare networks and programs, which have an immediate and direct impact on national security.

The Military Health System (MHS) and the Department of Veterans Affairs (VA), which together serve upwards of 20 million veterans, members of the military, and their families, are particularly attractive targets due to their massive scale, valuable data assets, and vital role in national security.

The risk is real and well documented. In July 2021, a U.S. Government Accountability Office report stated that the “lack of key cybersecurity management elements” at the VA is “concerning given that agencies’ systems are increasingly susceptible to the multitude of cyber-related threats that exist.”

Ransomware, in particular, has stolen the spotlight in recent years, but it is only one of a growing number of insidious threat vectors. This summer, for example, Armis researchers identified a set of nine critical vulnerabilities in the leading solution for pneumatic tube systems (PTS) in North America – the Translogic PTS system, which is used in over 80 percent of hospitals in the region. PTS devices play a crucial role in patient care and are in use nearly 100 percent of the time.

Threats Abound

The threat landscape in the medical sector – including the VA and the Defense Health Agency (DHA) – is massive and expanding daily with the exponential growth in connected medical devices, which can make up as much as three-quarters of the devices connected to a hospital’s network. They are also an attractive entry point into a healthcare organization’s network.

“We’re connecting devices we’ve never connected before,” said Lt. Col. Luigi Rao, MHS Genesis Liaison Officer at the U.S. Army, at a military health summit in July 2021. “With more and more episodes of ransomware – there’s growing understanding and appreciation of the need to protect not just the patient’s data, but also safeguard it from malicious attacks, whether ransomware or other nefarious purposes. Other state actors are highly interested in high ranking personnel and patients we’ve seen.”

Traditional healthcare networks lack security controls such as segmentation, resulting in virtually all devices – including vulnerable medical devices – sitting on a relatively flat network. Because vendors certify devices with very specific configuration and operational parameters, it is very difficult for teams to secure these devices, whether by upgrading end-of-life operating systems, installing critical security patches, or deploying agents such as asset management or endpoint security tools.

For example, let’s consider a patient monitoring system, a critical system that tracks and reports vitals and cannot experience performance issues. A typical patient monitoring system includes patient monitors, central workstations for keeping an eye on numerous patients from a single location, multiple tiers of servers, and network equipment provided by the vendor. A delay, disruption, or downtime of these devices can directly impact patient care if nurses have reduced or no visibility into monitoring of patient vitals or there is a lag in updating the vitals shown in the central workstations.

To account for this, vendors often place monitoring systems on their own dedicated networks behind vendor-provided gateways. This separates near real-time critical traffic from less critical traffic and completely segregates patient monitor traffic from the hospital’s production traffic, minimizing any disruption that might arise from production network changes or latency issues. This segmentation, however, can completely isolate such devices from the hospital network and thus create an additional blind spot.

Operational Disruption 

Traditional device vulnerability management programs use a scanner that actively and aggressively probes the network for assets and executes dated scanning methodology. While traditional scanners perform well against standard non-clinical endpoints, such as laptops and servers, these types of devices only account for a subset of the devices on a healthcare organization network.

As security teams try to expand the scope of existing vulnerability scanners to include medical devices, they face several challenges, including personnel resources. The resource implications go beyond the IT security and biomed teams to include clinical staff, and scanning can interrupt the clinical workflow and impede patient care delivery. For medical devices scanned on a regular cadence, information security personnel, biomed, and clinical staff must coordinate each time a scan is conducted to ensure the devices are online, out of clinical use for the duration of the scan, and functionally tested afterward – a process that is not sustainable for a successful vulnerability management program.

New Threats Call for New Approach to Device Vulnerability Management

Healthcare organizations, including military healthcare programs and facilities, require a new approach that can assess risk continuously and unobtrusively, and that also encompasses the contextual behavior of the devices. To transition from the legacy approach to a continuous-monitoring style of vulnerability management, organizations need to leverage capabilities that exist in legacy platforms and add innovations that enable:

Network behavior visibility

Healthcare organizations require visibility into everything in the enterprise airspace, including devices that communicate via Wi-Fi and many other peer-to-peer protocols that are invisible to traditional security tools. This capability enables visibility into potential network intrusion and data exfiltration points in the environment.

Real-time passive event-based vs. scheduled scanning

Healthcare organizations require real-time monitoring that does not impact device performance. An agentless passive architecture can create a foundation to automatically discover and support visibility into the behavior of every connected device in an environment – managed and unmanaged, medical and IT, wired and wireless, on or off the network, including IaaS environments and vendor managed network segments.

Baselined device behavioral telemetry

To effectively manage vulnerabilities, healthcare organizations need to monitor a wide range of device characteristics. These include manufacturer name, model, OS version, serial number, location, connections, FDA classification, and more. When organizations correlate valuable baseline data with real-time event-based scanning data, they can identify anomalous device behaviors that deviate from the normal profile of the device, such as an MRI machine connecting to social media sites.
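
A drastically simplified sketch of that baseline-versus-observed comparison appears below. The device profile and telemetry are hypothetical, and a real platform would learn baselines from weeks of passive monitoring rather than a hard-coded allow list:

    # Compare observed device connections against a baseline profile and
    # flag deviations. Baseline and telemetry are hypothetical examples.
    BASELINE = {
        "mri-scanner-07": {"pacs.hospital.local", "vendor-update.example.com"},
    }

    observed = [
        ("mri-scanner-07", "pacs.hospital.local"),
        ("mri-scanner-07", "social-media-site.example"),  # deviation
    ]

    def flag_anomalies(connections, baseline):
        return [(device, dest) for device, dest in connections
                if device in baseline and dest not in baseline[device]]

    if __name__ == "__main__":
        for device, dest in flag_anomalies(observed, BASELINE):
            print(f"ALERT: {device} contacted unexpected destination {dest}")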

Utilizing these approaches allows for the creation of an architecture that takes into account not only the technology footprint but also the workflow impacts in an operational setting. It also provides security and operations teams with appropriate, contextualized data that is already prioritized. The end result is significant improvements in security and team efficiency for incident response and recovery operations.

New Federal Cybersecurity Requirements: How Agencies Should Implement a Zero Trust Architecture

With this year’s release of a major cybersecurity strategy, the White House is sending a clear message to agencies: We must move toward the implementation of a zero trust architecture (ZTA) government-wide – and swiftly.

The draft version of the Federal Zero Trust Strategy supports the Executive Order on Improving the Nation’s Cybersecurity by clarifying ZTA priorities, identifying needed outcomes and setting baseline policies/technical requirements for agencies.

As defined by the Zero Trust Reference Architecture published by the Department of Defense (DoD) earlier this year, agencies with an effective ZTA enforce rules and controls so “no actor, system, network or service operating outside or within the security perimeter is trusted. Instead, (agencies) must verify anything and everything attempting to establish access. It is a dramatic paradigm shift in philosophy of how we secure our infrastructure, networks and data, from verify once at the perimeter to continual verification of each user, device, application and transaction.”

Fortunately, this transition is well underway: Four of five federal IT decision-makers and other government tech leaders and executives say they are including or defining zero trust within their cybersecurity strategy. But only 55 percent are “very” confident in their agency’s ability to deliver on a zero trust framework.

To hopefully boost this confidence, the White House strategy directs agencies to achieve five goals by the end of Fiscal Year 2024. All five are closely aligned to five pillars of the Zero Trust Maturity Model published by the Cybersecurity and Infrastructure Security Agency (CISA) in June. Here are the goals, along with our recommended best practices as to how to implement them:

1) The establishment of a single sign-on service (SSO) for users that is integrated into applications and common platforms, along with multi-factor authentication (MFA) at the application level with enterprise SSO whenever feasible.

Best practices for implementation: The government has widely adopted MFA, such as the DoD’s Common Access Card (CAC) and Personal Identity Verification (PIV) credentials, but not all systems can accommodate these controls. It is essential to have a variety of authentication techniques that can be applied across the wide range of applications in government. Agencies should therefore rank systems according to mission-criticality, sensitivity, and likelihood of breach, implement MFA for the systems deemed most critical, and then work down from there.
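
For a sense of what application-level MFA looks like in code, here is a minimal sketch of time-based one-time password (TOTP) verification using the open-source pyotp library. The enrollment flow and secret handling shown are illustrative assumptions, not a production design:

    # Application-level MFA sketch using time-based one-time passwords
    # (TOTP, RFC 6238) via the open-source pyotp library. Secret handling
    # is illustrative; real systems need secure enrollment and storage.
    import pyotp

    # Generated once at enrollment, stored server-side, and loaded into
    # the user's authenticator app.
    user_secret = pyotp.random_base32()
    totp = pyotp.TOTP(user_secret)

    def verify_second_factor(submitted_code: str) -> bool:
        # valid_window=1 tolerates one 30-second step of clock drift.
        return totp.verify(submitted_code, valid_window=1)

    if __name__ == "__main__":
        current_code = totp.now()  # what the authenticator app would show
        print("Accepted:", verify_second_factor(current_code))  # True
        print("Accepted:", verify_second_factor("000000"))      # almost certainly False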

In addition, agencies cannot overlook privileged access management (PAM) as part of this. While PAM isn’t addressed in depth in the strategy, 74 percent of IT decision-makers whose organizations have been breached indicate that the incident was linked to the accessing of a privileged account. Therefore, agencies need to establish effective, proven PAM controls.

2) The completion of an inventory of every device operated and authorized for government use, with the capability to detect and respond to incidents on these devices.

Best practices for implementation: Security teams should make sure that every device is covered, including Internet of Things (IoT), operational technology (OT) and cyber physical system (CPS) devices. A comprehensive ZTA plan will incorporate all of these into a monitoring, detection and protection program.

To increase the effectiveness of threat hunting with government-wide endpoint detection and response, the data collected on endpoints needs to be correlated, enriched, analyzed, and acted upon in a timely manner. Security orchestration, automation, and analytics are essential to accomplish these goals.

3) The encryption of all DNS requests and HTTP traffic, and the segmentation of networks around their applications.

Best practices for implementation: Continued use of shared services such as CISA’s Protective DNS allows agencies to focus their efforts on other – and more challenging – aspects of zero trust strategy, particularly application segmentation. The strategy indicates that agencies must run every distinct application in its own separate network environment. “Multiple applications may rely on specific shared services for security or other purposes,” it states, “but should not rely on being co-located within a network with those services and should be prepared to create secure connections between them across untrusted networks.”

Using software-defined networks and security to create these micro-perimeters provides the speed, flexibility, and scalability needed to create these zero trust network segments. Segmentation can be enforced using various techniques applied at the network, application, user, or data layer. Therefore, it is essential to first understand the use cases and requirements prior to implementation.
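
To illustrate the encrypted-DNS half of this goal, here is a minimal sketch of a DNS-over-HTTPS (DoH) lookup. It uses Cloudflare’s public DoH JSON endpoint purely as an example resolver; an agency would point at its protective DNS service instead:

    # Encrypted DNS lookup over HTTPS (DoH). The resolver endpoint is an
    # example; the query and answer travel inside TLS, unlike UDP/53 DNS.
    import json
    import urllib.parse
    import urllib.request

    DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

    def resolve_over_https(hostname: str, record_type: str = "A") -> list:
        query = urllib.parse.urlencode({"name": hostname, "type": record_type})
        request = urllib.request.Request(
            f"{DOH_ENDPOINT}?{query}",
            headers={"accept": "application/dns-json"},
        )
        with urllib.request.urlopen(request) as response:
            answer = json.load(response)
        return [record["data"] for record in answer.get("Answer", [])]

    if __name__ == "__main__":
        print(resolve_over_https("example.com"))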

4) The treatment of all applications as internet-connected while routinely subjecting these tools to rigorous testing and external vulnerability reports.

Best practices for implementation: This represents a major shift for the government – the acceptance, and even embrace, of a perimeter-less architecture in which all applications (including Federal Information Security Modernization Act-regulated ones) are connected to the internet. While the strategy states that agencies must “create minimum viable monitoring infrastructure and policy enforcement to safely allow internet access,” it doesn’t offer many specifics on how to accomplish this. Security teams will have to determine what level of monitoring and controls (firewalls, packet capture, network detection and response, etc.) will effectively enforce the security standards required for these applications. Recent breaches stemming from SolarWinds and Microsoft Exchange highlight the need to improve software supply chain and application security capabilities, particularly with performing continuous analysis and continuous monitoring.

5) The deployment of protections that make use of thorough data categorization and access monitoring, and the implementation of enterprise-wide logging and information-sharing.

Best practices for implementation: This goal describes the automation of security monitoring and enforcement – or security orchestration, automation and response (SOAR) – as a “practical necessity.” But agencies will do themselves a disservice if they deploy SOAR solely to address the data goals. They must deploy SOAR throughout their entire IT environment as part of their ZTA program, and ensure that SOAR plays a lead role in achieving the five goals summarized here. In the process, agencies will benefit from a wealth of actionable intelligence to enrich their cybersecurity posture throughout the enterprise.
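
As a toy illustration of the SOAR pattern – enrich an alert, decide, act – consider the sketch below. The alert format, threat-intelligence lookup, and response actions are hypothetical stand-ins for an agency’s real integrations:

    # SOAR-style playbook sketch: enrich an incoming alert with threat
    # intelligence, then choose an automated response. All inputs and
    # actions are hypothetical stand-ins.
    KNOWN_BAD_IPS = {"198.51.100.23"}  # stand-in for a threat-intel feed

    def enrich(alert: dict) -> dict:
        alert["known_bad"] = alert["source_ip"] in KNOWN_BAD_IPS
        return alert

    def respond(alert: dict) -> str:
        if alert["known_bad"] and alert["severity"] == "high":
            # e.g., call a firewall API to quarantine, then open a ticket
            return f"ISOLATE host {alert['host']}; page the on-call analyst"
        return f"LOG alert from {alert['source_ip']} for weekly review"

    if __name__ == "__main__":
        alert = {"host": "ws-042", "source_ip": "198.51.100.23",
                 "severity": "high"}
        print(respond(enrich(alert)))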

It is very encouraging to see the administration call for a comprehensive strategy. Security leaders and their teams are increasingly recognizing that zero trust brings a vigilant level of oversight and controls which modern times require. However, agencies should carefully consider what is needed in terms of resources and execution to sufficiently satisfy each goal – and even surpass what is “on paper” in the strategy to include SOAR, PAM and additional measures – to best protect themselves for now and the indefinite future.

Protecting Our Nation Through Big Data Analytics

The past decade has seen significant data generation around the globe. Market research firm International Data Corporation (IDC) has predicted that the amount of data generated globally will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025 – a compound annual growth rate of roughly 27 percent.

End result: Data will continue to move faster and grow more quickly than data users can handle.

Billions of location-rich data sets stream every day from satellites, drones, ships, aircraft, sensors, the Internet of Things (IoT), and other sources. In short, nearly every conceivable aspect of work and life generates digital traffic: endpoint and network devices, servers, applications, and cloud infrastructure in the form of system logs and other telemetry data.

To interpret and visualize this data in near real time, companies have turned to Artificial Intelligence (AI) focused initiatives. But often, there is a lack of clarity on the elements of analytics – business intelligence, data science, machine learning and AI – and how they interact with one another.

In tandem with this ever-increasing amount of data, we see the convergence of cyber and physical systems (CPS). Over the years, we have become reliant on industrial control systems such as supervisory control and data acquisition (SCADA), programmable logic controllers (PLCs), and distributed control systems (DCS) for monitoring processes and controlling physical devices. CPS generates large amounts of data that are at once a tremendously useful resource and an attack vector.

Hackers are increasingly becoming more interested in operational technology, the physical connected devices that support industrial processes. We have seen serious attacks on industrial control systems and networks that have disrupted operations and denied critical services to society. The Colonial Pipeline ransomware attack is just one example. Since the onset of the pandemic, ransomware attacks have increased more than 500 percent.

ManTech’s 53-year history provides us with an excellent understanding of our customers’ problems, and access to the most complex – and informative – technological use cases. Our approach to staying ahead of the curve of technology evolution and offering our customers innovative ideas is manifest in ManTech’s Innovation and Capabilities Office (ICO), which embodies our commitment to “Bringing Digital to the Mission.”

Leveraging ManTech’s domain knowledge advantage in related technologies such as AI, big data analytics and Deep Neural Networks (DNN), our experts work with our customers to keep adversaries out of the government’s networks.

Among the many ways that ManTech helps protect our nation’s networks: significant investments in analytics for cyber physical systems over the last 10 years. Two of those investments are ACRE™, a cyber analytics platform, and Archimedes™, a big data platform.

ACRE provides a high-fidelity modeling, emulation, and training environment. This hybrid physical-virtual platform, driven through a software architecture that can run on-premises or in the cloud, is fully self-contained to simulate benign and malicious host- and network-based traffic.

ACRE enables ManTech’s experts to model complex IT environments and run complex analytics on a safe digital twin, where we can inject malware, ransomware attacks, and other hostile actions to reveal previously unknown vulnerabilities. Our experts run thousands of such scenarios on this virtual model to see how the system responds. Upon successful completion of these cyber training exercises, ManTech’s team works with the customer to apply lessons learned to their real-world, physical enterprise networks, hardening their networks and systems to deter hostile attacks.

Archimedes is a big-data analytics platform that can run on any cloud platform or be hosted on-premises. It provides the ability to ingest huge volumes of data, process and curate that data, and deliver analytics results at the scale and speed needed to support dynamic mission challenges.

ManTech supports the Department of Homeland Security with big data analytics, automation, and AI solutions. Among the many key use cases: At our nation’s borders, ManTech solutions rapidly analyze volumes of data on incoming people, cargo and transportation, and provide analysts with relevant real-time information for critical decisions.

ManTech stays ahead of the curve with its continued investments in advanced cybersecurity, and is a market leader in understanding how security analytics and intelligence work together to support national security – key differentiators that make us the trusted partner of government.


Three Ways COVID-19 Altered Federal, State IT Budget Allocations

Amid a rising tide of ransomware attacks against governments and schools nationwide accelerated by the COVID-19 pandemic, tech pros are prioritizing investments in core technologies to manage risk, including security and compliance, network infrastructure, and cloud computing.

But implementation is hampered by dwindling resources and limited access to personnel training, according to a new survey by SolarWinds. The report finds that a lack of budget and resources is the top challenge to using technology to mitigate and manage cyber risk.

Undoubtedly, the pandemic and resulting economic disruption have reshaped state economies and Federal budgets. What does this mean for agencies as they seek to build organizations that can withstand risk while transforming the delivery of government services?

Here are three interesting ways the pandemic altered Federal and state IT budget allocations and the corresponding implications for security and service delivery.

Nuanced Pandemic Budget Landscape

Many public sector organizations faced deep budget cuts during the pandemic, especially cities and counties that lost revenue sources such as parking fines, restaurant taxes, and tourism dollars while simultaneously incurring higher public health expenses.

For states, the picture was more nuanced. According to the Urban Institute, state tax revenue changes varied significantly during the pandemic depending on the prevalence of the coronavirus. Overall, 18 of 50 states reported lower revenue collections year-over-year, while 22 states saw tax revenues increase during the same period, some generating a budget surplus.

Technology Investments Recalibrated

Despite budget uncertainty and after a year on the frontlines of pandemic-driven crisis mode, in many cases, a lack of resources was not a hindrance to public sector innovation.

Indeed, necessity was the mother of invention. A study by Deloitte found the pandemic was more of an accelerator than an obstacle, with many proactive organizations using the crisis to recalibrate their technology investments – to get to a “safer and better normal.”

Areas transformed by pandemic technology spending include remote work and learning, automation, and new modes of service delivery and constituent interaction. Think online driver’s license renewals, vaccine sign-up portals, and chatbots connecting citizens to the services they need.

Federal IT Modernization Imperative

Although COVID-19 forced many agencies to pivot digitally, it also shone a spotlight on inefficiencies and deficiencies. For instance, the Federal government’s under-performing IT infrastructure led to delays in delivering essential services and information. In July 2020, the House Budget Committee issued a scathing report stating that despite the creation of emergency assistance programs through the CARES Act, “many citizens were in limbo for weeks” waiting on promised relief – delays caused by antiquated information technology (IT) systems.

Funding Challenges Remain

In response to budget shortfalls and recovery efforts, the American Rescue Plan provides much-needed funds for state and local government technology programs. The challenge for these agencies will be how to prioritize and allocate the enormous rescue package’s $350 billion while addressing backlogged technology infrastructure needs such as remote learning, cybersecurity, and deliberate cloud investments.

For instance, the rush to the cloud during the pandemic led many states and municipalities to cut corners and introduce risk into their deployments. To put this right, IT teams must have visibility into cloud performance and security – even in hybrid environments (nearly 40 percent of those surveyed by SolarWinds say they lack this capability).

Federal government agencies face a similar challenge – although their spending priorities are more prescribed and are intended to address the Federal IT modernization deficit. For instance, the American Rescue Plan allocates an additional $1 billion to the Technology Modernization Fund to bolster cybersecurity maturity, improve public-facing digital services, and modernize high-priority systems with a significant impact on longstanding security issues.

Smart Spending Decisions

To get the full benefit of new funding, smart decisions must be made about how taxpayer dollars are allocated.

Federal, state, and local officials have different missions and priorities. But most agencies agree it’s a case of “when” not “if” they will fall victim to a cyberattack, and it’s the IT team’s job to know exactly where risk management investments should go.

Technology alone is not the answer. Investments must also be made in upskilling and training, especially in the face of competition for cyber talent from the private sector.

As the SolarWinds survey shows, tech skills development is front and center as a key focus area for public sector technology practitioners, managers, and directors – particularly as it pertains to managing and mitigating security risks.

Furthermore, agencies shouldn’t prioritize skills development just for training’s sake. Too often, tech pros are required to complete numerous certifications by their organizations each year, many of which don’t align with or support larger strategies and initiatives. To succeed in a future built for risk, agencies must prioritize training that maps to the agency’s priorities and brings value to the mission.

2022 and Beyond

Looking ahead, there’s much work to be done. But government IT leaders have the budgets and the opportunity to make lasting changes in how they approach technology investments, optimize operations, and prioritize skills development. These changes will enable them to better manage, mitigate, and prevent risk in the future – while rethinking service delivery.

Ransomware is More Than a Cybersecurity Issue

Ransomware attacks are filling headlines. Now reaching unprecedented levels, the ransomware crescendo is part of the surge in cyber-attacks that became a side effect of the COVID-19 pandemic.

The rise of ransomware attacks has understandably been treated as a cybersecurity challenge, and prevention is the focus of many current conversations.

But there are operational disruptions to consider – especially as agencies rely on the integrity of files for big data analysis. Even with air gaps and secure networks, the odds are increasing that a government agency will be hit at some point – necessitating a strategy to minimize the disruption to data integrity, and to maintain cyber resilience, in the wake of a successful attack.

There’s got to be a better way to minimize the impact of ransomware attacks, and object storage technology is one approach to consider.

Attack Anatomy: Timing is Crucial

In a ransomware attack, an executable script or program runs and encrypts your data, and a ransom is demanded for the decryption key.

There are two possible ways this happens: a user inside the network opens a bad file or link that immediately executes a harmful payload, or a malicious file that’s been lying in wait for months to bypass restore capabilities executes upon a trigger event. There are significantly different operational impacts resulting from each.

Although shocking and disruptive, the first situation often has limited impact that can be quickly remedied. Modern IT infrastructure is designed with lots of redundancies for just these kinds of events. A reliable backup scheme and disaster recovery program will allow data to be restored to a specific recovery point in time – for instance, if an attack hit on Tuesday at noon, you can restore the backup from 11:58 a.m. You may lose two minutes of work, but the event is essentially erased and the mission can move on.
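
Selecting that recovery point is straightforward to mechanize. Here is a minimal sketch, with hypothetical snapshot timestamps, that picks the latest backup taken before a known attack time:

    # Point-in-time restore selection: choose the latest snapshot taken
    # strictly before the attack. Timestamps are hypothetical.
    from datetime import datetime

    snapshots = [
        datetime(2021, 11, 16, 11, 54),
        datetime(2021, 11, 16, 11, 56),
        datetime(2021, 11, 16, 11, 58),
        datetime(2021, 11, 16, 12, 0),  # captured after the attack began
    ]
    attack_time = datetime(2021, 11, 16, 12, 0)

    def pick_restore_point(snapshots, attack_time):
        candidates = [s for s in snapshots if s < attack_time]
        return max(candidates) if candidates else None

    if __name__ == "__main__":
        # Prints 11:58 -- about two minutes of work lost, as in the example.
        print("Restore to:", pick_restore_point(snapshots, attack_time))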

The second situation poses a much more challenging predicament. Someone inside the firewall knowingly or unknowingly loads a trojan file that was improperly scanned or otherwise not detected. Later, a trigger event will cause that file to execute its payload, encrypting a larger block of files. The attacker then demands payment for the decryption key to unlock the data.

Triggers are usually timed to go off beyond the backup window, which is typically limited to three to six months given the costs of storing today’s massive datasets. Also, because of how backup management works, over that time the restore point granularity expands (to an hour, to a week, to a month, etc.), limiting restoration to fewer and coarser options. By the end of the backup window, there is no restore capability at all because the data is simply gone.

That is a big reason why time-delayed ransomware is becoming more dominant. Skilled attackers – whether a disgruntled insider, an organized crime operation, or a nation-state level actor – understand the backup window vulnerability and manipulate it to their advantage. As we’ve seen, without restore points, the victim’s choice is to pay the ransom or lose their data for good. Their recourse has been to expand backup and restore capabilities to a bigger time window, and at greater expense. But adopting a different kind of technology can render this danger moot.

WORM: Effectively Manage the Data You Can’t Lose

WORM-based (write once, read many) object storage technology executes no files, preventing any corrupted file from running while stored and nullifying the triggers. All files are rendered immutable and cannot be modified. When a file is retrieved for use, it is accessed as read-only and transits a file share gateway to the user. In that process, should a corrupted file still manage to execute, the impact is limited to that gateway point of access. Once in use, if the file is modified, it is stored as a new file version that in turn becomes immutable.

Object store’s file-level deduplication capabilities also help contain data growth. For instance, if an email attachment is sent to fifteen recipients, object storage keeps only one copy. That helps with the management of long-term data preservation – particularly valuable in an era when users hold onto their files seemingly forever.
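
To see how write-once semantics and file-level deduplication reinforce each other, consider the toy, in-memory sketch of a content-addressed object store below. Production WORM storage enforces immutability in the storage layer itself, so treat this purely as an illustration of the concept:

    # Toy content-addressed WORM store: objects are keyed by the SHA-256
    # of their contents, so identical files deduplicate to one copy and a
    # stored object is never modified in place -- a change is a new key.
    import hashlib

    class WormStore:
        def __init__(self):
            self._objects = {}  # content hash -> immutable bytes

        def put(self, data: bytes) -> str:
            key = hashlib.sha256(data).hexdigest()
            self._objects.setdefault(key, bytes(data))  # rewrite is a no-op
            return key

        def get(self, key: str) -> bytes:
            return self._objects[key]  # read-only; no update path exists

    if __name__ == "__main__":
        store = WormStore()
        attachment = b"quarterly-report.pdf contents"
        keys = {store.put(attachment) for _ in range(15)}  # 15 recipients
        print("Copies stored:", len(keys))                 # 1 -- deduplicated
        revised = store.put(attachment + b" (revised)")    # new version
        print("Versions now:", len(keys | {revised}))      # 2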

Large data lakes and Hadoop environments present prime opportunities for hiding time-delayed ransomware files. Because object store internal services ensure that files cannot be corrupted in place, integrity is preserved and files remain usable for big data analysis and other operational purposes.

Preserving Data Preserves Mission Viability

Object store technology has come a long way from its traditional roots. Significant technological advances have transformed object store into a high-performance alternative to Network Attached Storage (NAS) – and one that is far more secure.

Given the proliferation of ransomware and other cyber threats, it is not a question of if, but when, an agency will be hit. Rendering stored files immutable and inoperative will provide agencies with a unique and valuable option to securely manage their data – the lifeblood of their mission – stemming the operational disruption of a cyberattack for as long as that data is needed.


From Me to We: Take the Mission Further With Multiparty Systems

The Accenture Federal Technology Vision addresses the five technology trends poised to have the biggest impact on how government operates over the next three years. Today, we look at Trend 5: From Me to We, which promises to place the ecosystem at the core of government operations.

The breakdown of ordinary systems during the COVID-19 pandemic brought to the forefront the need for multiparty systems. Accenture found, for example, that 75 percent of Federal executives reported their organization faced a moderate to complete supply chain disruption due to the pandemic. Multiparty systems point toward a way forward amid pandemic-inspired upheaval.

Specifically, enterprises needed to build trusted relationships on the fly but often lacked a means for verifying authenticity within an increasingly virtual world.  Multiparty systems can fill this void by using shared data and shared data infrastructure to take collaboration to a new level.

Blockchain, distributed ledger, distributed database, tokenization, and similar technologies make this possible. These tools can drive greater efficiency, transparency, accountability, security, interoperability, and confidence in transactions and processes for Federal agencies.

Trust and Collaboration

The multiparty systems model enables trust and collaboration. To make this a reality, agencies need to consider new approaches that effectively pool the resources and contributions of many organizations.

If they can make that shift, positive outcomes emerge. In multiparty systems, agencies have the opportunity to institutionalize trust in their data and processes, presenting all parties involved with a single source of truth.

They also have the chance to pare back the time, energy, and expense presently devoted to maintaining these relationships. Multiparty systems offer to spread the burden of collecting, validating, storing, managing, adjudicating, and maintaining all the data required to manage complex processes. Trust, transparency, accountability — all are byproducts of the technology underlying multiparty systems.

Rather than managing a complex process on their own, agencies can shift to orchestrating an ecosystem that executes the process within a shared, trusted, transparent environment. It’s a potentially powerful new way of doing business.

Meeting Federal Challenges

Federal agencies are making initial moves in this direction. Our Federal Technology Vision reports that 18 percent of Federal executives say their organizations are scaling their multiparty systems this year, with another 15 percent beginning to experiment.

Ensuring the integrity and safety of products is one focus for Federal agencies.  Take, for example, the Defense Logistics Agency, which needs to counter the threat of counterfeit and nonconforming parts entering the Defense Department’s supply chain. Along with outside partners, DLA has formed a Trusted Working Group to explore multiparty systems approaches to the problem.

Digital identity is another area ripe for innovation, especially in the wake of the COVID-19 pandemic.  Singapore, for instance, introduced a blockchain-based medical record system during the pandemic, enabling individuals to store medical documents in a secure digital wallet.  The International Air Transport Association has developed the IATA Travel Pass, which employs blockchain technology as a tool for travelers to share verified information between governments, airlines, test centers, and vaccination providers safely and securely.

And then there’s money.  The Treasury Department has been working with the National Science Foundation and a consortium of universities to use blockchain to streamline reporting and automate transactions within grants management.  As part of the Digital Dollar Project, Accenture is working to assess potential designs for a U.S. central bank digital currency (CBDC), or “digital dollar,” which has the potential to create a more inclusive financial system.

The biggest takeaway from Trend 5: From Me to We is that each of these examples builds on multiparty systems’ key strengths: Tracking assets, exchanging data, and automating processes. These could impact Federal use cases in various areas, from accounting and data provenance to supply chain management and digital identity. The technology underlying multiparty systems promises to make all these more efficient and transparent, with trusted partnerships driving fundamental improvements across a wide variety of government responsibilities.

Anywhere, Everywhere: Integrating Your Virtual Workplace

The Accenture Federal Technology Vision looks at five technology trends likely to have the most significant impact on how government operates over the next three years. Today, we look at Trend 4: Anywhere, Everywhere, considering the implications of virtual work patterns across the Federal space.

The COVID-inspired rush to remote work was a vast upheaval of traditional work patterns. An Accenture survey within the report found that 79 percent of Federal executives called it the largest and fastest human behavioral change in history. While some viewed it as a short-term situation, forward-looking agencies see this disruption as an opportunity to drive lasting change, empowering employees in new ways while creating a more agile workforce overall.

The reality is that the hybrid workplace with virtual workers is here to stay. Employees have spent a year experiencing the flexibility and benefits of working from home and elsewhere; many are now reluctant to return to traditional office environments. Likewise, many agencies have discovered that large-scale remote work can reduce energy, facility, and commute costs, and boost employee productivity. For some employees, the future workplace may look like the pre-COVID office; for others, it may be 100 percent remote. Still others may want a mix of options, supporting a new “work-from-anywhere” culture.

We are moving into a future where work can be done from anywhere. To make the most of the opportunities this presents, government agencies need to rethink their organizational structures. They need to consider what can be achieved with a virtualized workforce model and pursue cultural change to bring that vision to life.

The Rise of BYOE

When telework and related “bring your own device” policies were first introduced to the working world, employees could connect remotely with their work, but their personal lives stayed essentially private and independent. In today’s world, the integration is more immersive, stripping away layers of that privacy and independence but also providing new freedoms and opportunities.

What we found in Trend 4: Anywhere, Everywhere is that many employees are combining their personal and professional lives into a single environment that supports both their home and work life – what we’re calling “bring your own environment,” or BYOE. Within this environment, children’s homework may encroach on the traditional workday, and the vulnerabilities of home networks are becoming enterprise concerns. Still, employees can be – and are becoming – more productive, engaged, and committed to their careers.

The Accenture Federal Technology Vision found that 87 percent of Federal executives agree that leading organizations are shifting toward BYOE, leveraging cultural changes to drive productivity and enhance employee satisfaction and engagement. To start, agency leaders may need to reassess the size and function of the physical office. Successful organizations will resist the urge to race everyone back to the office, taking the time instead to rethink their workforce model in alignment with new ways of working. Here are some keys:

  • Big, rapid change is doable. Agencies have found they are nimbler than they might have thought. COVID showed that workers equipped with the right tools could quickly and effectively determine how to achieve their mission objectives in a virtual work setting.
  • It’s time to re-imagine traditional work structures. News flash: In many cases, a physical presence isn’t necessarily required. And the standard eight-hour shift may not be optimal for everyone. In fact, many Federal leaders found that moving to a remote work model yielded improved productivity compared to pre-pandemic outputs.
  • Long-term success requires stakeholder engagement. The long-term rise of virtual work will affect everything from job descriptions and performance appraisals to interoffice communications and recruiting. Agency managers will need to engage their many stakeholders, including Federal unions, to re-calibrate workplace policies and practices to support a culture shift that recognizes and embraces alternate work modes.

New Model, New Culture

The implications of shifting to a hybrid workplace with more virtual workers run deep. For example, BYOE promises to be a boon for recruitment, with a more national focus widening the available labor pool. Accenture found that 87 percent of Federal executives believe the remote workforce opens the market for difficult-to-find talent.

Agencies will also need to reconsider the employee experience.  At minimum, workers will need to be provisioned with the technology they require to be productive and successful remotely.  Private networks, telework tools, and training, for example, all will need to be reassessed and upgraded in support of virtual work. Many IT services and capabilities will have to be made available in a self-service model.

When workers were in the office, it was also easier to spot emerging problems. With BYOE as the new future, the employee experience is more important than ever, but it can be obscured behind miles of distance, shifted schedules, and disparate time zones. Agencies will need to enable managers to lead in this environment, learning to trust remote workers while maintaining a level playing field regardless of location. At the same time, managers should take intentional steps to foster collaboration, team building, and trust across dispersed workers and virtual work teams.

Perhaps most significant, the rise and acceptance of virtual work in the private sector means government will have to pivot towards BYOE and a flexible workplace to remain an employer of choice. On the upside: If agencies can shift, the government will continue to attract and retain top talent in the increasingly competitive virtual worker environment.

‘I, Technologist’: Empowering Innovators in the Federal Workforce

The Accenture Federal Technology Vision examines the five technology trends poised to have the broadest impact on how government operates over the next three years. Today, we consider Trend 3: I, Technologist, which looks at how technology can unlock human potential.

What does it mean for technology to be democratized and accessible across the agency?

It’s become increasingly clear that everyone in government – not just the IT department – needs to appreciate technology’s vast potential and be empowered to use it in support of government’s varied missions. But how is this to be achieved? Our Federal Technology Vision paints a picture of what a technologically empowered Federal workforce might look like.

Current Landscape

The shift toward a more democratized IT vision is already underway.

Accenture found that 89 percent of Federal executives believe technology democratization is becoming critical to their ability to ignite innovation. And 81 percent say government must train people to think like technologists – to use and customize technology solutions individually and without highly technical skills.

We already see Federal employees of all stripes employing a wide variety of emerging tools. They’re leveraging cloud-based platforms to create custom dashboards, run data analytics, and even introduce automation and AI into their workstreams.

An undeniable shift is underway, as powerful technology puts new capabilities into people’s hands. Natural language processing, low-code platforms, and robotic process automation (RPA) – all these make technology more accessible, empowering Federal workers to innovate in support of mission goals.
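
To make the idea concrete, consider the kind of small self-service automation a program analyst – not a developer – might assemble today. The file and column names here are invented for illustration:

    # Illustrative only: the kind of small self-service automation a program
    # analyst (not a developer) might build. The file and column names are
    # invented for this example.
    import csv
    from datetime import date

    overdue = []
    with open("case_tracker.csv", newline="") as f:
        for row in csv.DictReader(f):   # assumed columns: case_id, due_date, status
            if row["status"] != "closed" and date.fromisoformat(row["due_date"]) < date.today():
                overdue.append(row["case_id"])

    print(f"{len(overdue)} overdue cases: {', '.join(overdue)}")

Low-code and RPA platforms package exactly this kind of logic behind visual builders, which is what puts it within reach of non-programmers.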

Supporting Innovation

As do-it-yourself technology becomes pervasive, agency leaders face an urgent need to figure out how best to manage their personnel and resources in a highly empowered IT environment.

Government needs consistent leadership, planning, skilling, and governance to capitalize on the promise of the “I, Technologist” environment.

Agency leaders will need to strike a careful balance here. On the one hand, democratized tech can dramatically improve productivity and drive mission performance. At the same time, agencies must ensure that all this grassroots activity is adequately secured, understood, and integrated into an overall enterprise framework.

Rather than simply saying no to grassroots IT initiatives, leaders can learn to manage these efforts and channel individual initiatives toward common goals. In Trend 3: I, Technologist, we lay out three guiding implications that leaders can use to orient themselves and support innovation:

  • Do-it-yourself IT will accelerate as business and mission units become more comfortable with building their own applications.
  • The role of IT will shift as business and mission units assume more control over their own IT provisioning and development.
  • Tech skilling will take on higher importance so that employees can be smarter about how they employ these new tools and capabilities.

Technology can empower individual employees to fix problems and improve processes, as they select the right tools for the task in an increasingly self-service model. For this to work, the role and function of IT will have to adjust accordingly.

In this vision, IT will no longer be the gatekeeper for all things IT. Instead, technology leaders will become the enablers, governors, collaborators, and advisors. Rather than stand in the way, they will use their expertise to empower workers to assert greater autonomy in deploying technology.

There will be an education component to all this. Federal agencies will need to help their workforces become savvy users and consumers of available technologies. Eighty-nine percent of Federal executives agree that as technology democratization unfolds, organizations will need training strategies that include a focus on security and data governance.

At the very least, all employees will need a foundational level of technical and data literacy going forward. From there, leaders will need to drive cultural change. As workers grow more comfortable employing technology tools and re-engineering their work processes, agencies can evolve toward a culture that is far more adaptable, nimble, and confident in meeting the challenges of the future.

What’s to be gained? Rapid innovation, greater worker satisfaction, and an increased ability to meet the Federal mission.

The Accenture Federal Technology Vision lays out a future in which people leverage technology to optimize their work or fix pain points independently. Meanwhile, IT professionals will still drive the big picture: They’ll collaborate with mission teams to identify new technologies and ensure those tools and platforms are deployed securely and efficiently.

By empowering those closest to a problem to create new solutions, IT will help agencies keep pace with rapidly changing needs.

Mirrored World: Digital Twins Report for Duty Across Government

The Accenture Federal Technology Vision highlights the five technology trends poised to have the most significant impact on how government operates over the next three years. Today, we look at Trend 2: Mirrored World, which promises to drive improvements across a range of Federal use cases.

The “digital twin” first came to prominence in NASA’s 1970 use of computer simulations to diagnose and repair the damaged Apollo 13 spacecraft from 200,000 miles away. Lately, the rise of cloud, AI, machine learning, 5G, and IoT has pushed such modeling to the forefront as a critical tool for managing the enterprise.

For example, how can government ensure next-generation nuclear reactors are as safe and secure as possible? The Energy Department’s Idaho National Laboratory (INL) uses digital twin technology to develop reactors that can operate with unprecedented levels of monitoring, control, and supervision.

Meanwhile, all three military departments are using or exploring digital twins to improve weapons platforms and systems’ maintenance and readiness. The Army, for instance, is modeling the UH-60 Black Hawk helicopter to enhance maintenance and assess accident or battle damage, while the Air Force is using this approach to evaluate cyber vulnerabilities within its global positioning system (GPS) satellites and systems.

Simulated Systems

A digital twin replicates physical assets in a virtual environment as a simulated model of a machine, process, or system. This sophisticated modeling can help program managers understand how these objects might behave under various circumstances.

Digital twins can simulate complex scenarios in countless new and unimagined ways – for example, using machine learning – to capture new insights and surface new possibilities.

Machine intelligence makes it possible to create a mirrored world in which complex or chaotic interactions can be reproduced, analyzed, and optimized. Advanced versions of the digital twin can deliver living models of entire workplaces, warehouses, product lifecycles, supply chains, ports, mission spaces, and even cities.
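
As a minimal sketch of the concept, consider a twin of a single pump that mirrors incoming telemetry and projects wear under “what-if” conditions. The asset, wear model, and thresholds below are invented for illustration:

    # Minimal sketch of a digital twin: a virtual model kept in sync with
    # telemetry from a physical asset and used for what-if projections.
    # The asset, wear model, and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class PumpTwin:
        rpm: float = 0.0
        bearing_temp_c: float = 20.0
        wear: float = 0.0                       # 0.0 = new, 1.0 = failure

        def ingest(self, rpm: float, bearing_temp_c: float) -> None:
            """Mirror the latest sensor readings from the physical pump."""
            self.rpm, self.bearing_temp_c = rpm, bearing_temp_c

        def project_wear(self, hours: float) -> float:
            """What-if: project wear after running at current conditions."""
            rate = 1e-5 * self.rpm * max(self.bearing_temp_c - 40, 1) / 40
            return min(self.wear + rate * hours, 1.0)

    twin = PumpTwin()
    twin.ingest(rpm=1750, bearing_temp_c=68)
    if twin.project_wear(24 * 30) > 0.8:        # a month at today's load
        print("schedule maintenance before the next deployment window")

Production twins replace the toy wear formula with physics models or machine learning trained on fleet data, but the loop – ingest telemetry, simulate forward, act – is the same.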

Federal leaders see great potential for digital twins to impact the way government meets its mission. The Accenture Federal Technology Vision found that 24 percent of Federal executives say their organization is experimenting with this approach, and 63 percent expect their organization’s investment in intelligent digital twins to increase over the next three years.

The Federal Imperative

As a decision-support tool, digital twins give government the ability to mimic the real world with unprecedented precision and accuracy. And they enable new levels of experimentation: Decision-makers can change any number of variables and conduct unlimited ‘what-if’ analyses to model likely outcomes.

Digital twins can help to remove blind spots. By layering in machine learning algorithms, these tools can model a vastly expanded range of potential scenarios, helping government leaders become more proactive in risk awareness and mitigation.

In addition, the digital twin approach can help government agencies to drive more effective partnerships. The Air Force Research Laboratory, for instance, is teaming with a Florida-based public-private effort to develop a secure digital twin to improve semiconductor production. Eighty-seven percent of Federal executives agree that digital twins strengthen their ability to collaborate in strategic ecosystem partnerships.

Intelligent digital twins promise to change how Federal agencies operate, collaborate, and innovate. In Trend 2: Mirrored World we describe a range of potential Federal use cases – from asset optimization, to remote diagnostics and troubleshooting, to predictive maintenance, to route and traffic optimization.

Agencies can start today to build intelligent twins of their assets and ecosystems. By piecing together their first mirrored environments now, they will be far better positioned to succeed in a more agile and intelligent future.

Stack Strategically: Rearchitecting Government for What’s Next

The Accenture Federal Technology Vision addresses the five technology trends poised to have the most significant impact on how government operates over the next three years. Today, we look at Trend 1: Stack Strategically as it promises to be a game-changer for Federal agencies.

If the COVID-19 pandemic proved anything, it’s that the Federal government needs to rethink its approach to technology. In fast-moving times, it’s more important than ever to approach business strategy and IT strategy as two halves of a whole.

Take, for example, the U.S. Department of Education’s Office of Federal Student Aid (FSA), which processes Federal financial aid. Rather than just upgrade, the agency recently rearchitected to deliver a more consistent user experience across multiple channels. Thanks to automation, containerization, and a flexible cloud architecture, the agency is processing vastly greater numbers of applications more efficiently than ever.

IT Drives Change 

Just as Amazon has aligned its technology architecture with its business goals to create an often-unbeatable competitive differentiator – low costs, vast selection, fast delivery – Federal technology architects can leverage modernized tools to radically alter how their agencies execute the mission.

They can expand every element of the stack – from the distribution of cloud deployments, to the types of AI models, to the integration of edge devices – to drive mission improvements.

Rather than remain encumbered by outdated infrastructures, IT leaders can transform their architecture to play a more active role in propelling the agency’s mission and business operations. They can work with business-line leaders to make critical architectural decisions to leverage both existing and emerging capabilities to the greatest effect.

Most already see the wind blowing in this direction. Accenture found that 90 percent of Federal executives agree that their organization’s business and technology strategies are becoming inseparable – even indistinguishable.

To succeed in fast-changing times, agencies need robust and versatile infrastructures. They need to shed technical debt in favor of building technical wealth.

Three Principles

Trend 1: Stack Strategically of the Accenture Federal Technology Vision proposes a three-pronged approach to modernization. To succeed, agencies need to Fortify, Extend, and Reinvent their architectures to bring them into closer alignment with business objectives.

  • Fortify: As the FSA proved, cloud-native architectures can empower government to innovate and adapt at digital speed. By making smart strategic decisions, agencies can build technical wealth, leveraging modernization to generate cost savings and further enhancements. The U.S. Department of Agriculture, for example, has implemented an API architecture that allowed it to consolidate operations within eight mission areas, maximize technology ROI through decoupling and reuse, and provide more integrated customer service. The point is that applications and data alike offer opportunities to fortify the base of technical wealth.
  • Extend: Agencies can extend the impact of their IT investments by tying technology strategies to specific business tactics and solutions. For example, by connecting agency missions to the cloud, these programs can tap the rich variety of capabilities offered by cloud service providers to deliver innovative services. And with a multi-cloud approach – such as that adopted by the Homeland Security Department, the Treasury Department, and the CIA – they can tap into multiple best-of-breed platforms and applications while avoiding potential vendor lock-in.
  • Reinvent: Agencies need more than incremental advances. They need to take proactive steps to ensure the new tools are put to the best and highest use in the current environment. Given concerns about possible misuse of AI and ML, they need to reinvent their strategies and policies to prioritize “responsible” or “ethical” use. Reinvention challenges government agencies to develop a firm understanding of the emerging technologies proliferating across virtually every industry.

Architecture as Strategy

Our Federal Technology Vision found that just 30 percent of Federal executives say that technology drives their organization’s overall strategy and goals. But there’s a big potential win here, too: 87 percent of Federal executives believe that their organization’s ability to generate business value will increasingly be based on choices made around their technology architecture.

To adapt more nimbly to changing requirements and emerging capabilities, agencies can start by reimagining the role of the enterprise architect with a focus on weaving technology and data into their organizational DNA.

With a more strategic approach to the IT stack, agencies can accelerate their innovation strategies to meet changing mission demands. The technology choices they make today will have a far-reaching impact: By focusing on the intersection of architecture and business needs, agencies can more effectively determine their futures.

The Challenge Ahead: How Federal Leaders Can Become Masters of Change

Even before the COVID-19 pandemic, technology was establishing itself as a driving force in government, with cloud computing, artificial intelligence, automation, and other advances changing the way agencies meet the mission. The pandemic accelerated all that.

For the Federal government, COVID-19’s impact was far-reaching. “We are truly in unprecedented times in our nation,” Defense Department CIO Dana Deasy said in April 2020. In the face of crisis, Federal agencies showed what they could do, pivoting on the fly to meet the moment’s needs. And they demonstrated clearly that technology is vital to the mission.

Government leaders proved that, with technology, Federal agencies could undertake significant change rapidly when they need to. For example, they supported a massive shift to telework in an incredibly short timeframe. In a single day in early April last year, the Defense Department activated more than 250,000 remote accounts – an unprecedented feat. In fact, our research found that 91 percent of Federal executives say their organizations innovated with unique urgency amidst the pandemic.

The question now becomes: How pervasive and impactful will this shift in mindset be?

Masters of Change

This year’s Accenture Federal Technology Vision lays out a blueprint for carrying forward the key learnings of the pandemic. It addresses five technology trends that are changing the way government gets things done:

  • Stack Strategically: Rearchitecting Government for What’s Next – Building and wielding the best technology stack for mission success means thinking about technology differently – making business and technology strategies indistinguishable.
  • Mirrored World: Digital Twins Report for Duty – Virtual models combine data and intelligence to improve shipyards, jet fighters, supply chains, product lifecycles, and more.
  • I, Technologist: Empowering Innovators in the Workforce – Now, every employee can be an innovator: optimizing work, fixing pain points, and keeping the business in lockstep with new and changing needs.
  • Anywhere, Everywhere: Integrating Your Virtual Workplace – Remote work will likely persist in some form. Leaders must develop “bring your own environment” strategies to address the security ramifications of remote work, drive cultural shifts, and evolve the uses of physical office space.
  • From Me to We: Take the Mission Further with Multiparty Systems – Blockchain and other powerful collaborative technologies enable agencies to leverage partnerships and trusted data to address increasingly complex challenges.

We found that the COVID crisis has given Federal leaders a renewed commitment to the power of technology. Virtually all Federal executives (97 percent) said COVID-19 created an unprecedented stress test for their organizations, while more than half (57 percent) said the pace of digital transformation for their organization is accelerating. That’s not a coincidence.

Golden Opportunity

Our Federal Technology Vision makes the case that Federal leaders now have a golden opportunity to reimagine the future. They can take to heart the past year’s lessons and hardwire their organizations to support continuous change.

Here’s what we learned from this year’s research:

  • Leaders don’t wait for a new normal, they build it – Federal agencies must pursue technology leadership urgently to keep pace with how fast the world and IT are evolving. To make the most of the present opportunity, agencies need to look to technology to support rapid transformation. You need to understand what technology is here, what’s coming around the corner, and what the impacts are likely to be.
  • Technology and business strategies are becoming blurred – Technology is so deeply immersed in our operating model that it often defines mission success. This means that deep technical expertise and competency – both individually and organizationally – are needed to sustain leadership. To achieve this, you will need to build the right innovation environments and cultures, taking down the organizational barriers that divide technology from the mission so that both are moving forward as one.

The Accenture Federal Technology Vision is a crucial read for Federal leaders looking to capitalize on lessons learned in the pandemic. With insights into what government and commercial leaders are doing in response to the major tech trends of the day, it’s a deep dive into how technology is making organizations smarter, more agile, and resilient – and more effective in addressing today’s complicated business and mission challenges.

MeriTalk Insight: FITARA, TMF, Telework, and Trust

Rather than focus on a single topic or recent news event in this column, let’s talk about a number of important Federal government management matters. Think of it as the good government version of Chris Berman’s complete NFL highlights coverage in 60 seconds…

FITARA Scorecard

In Congress’ version of an old TV soap opera where one could go away for months and come back to see that it was only later the same day in the timeline, the House Oversight Subcommittee on Government Operations released its latest edition of the FITARA Scorecard that measures agency progress in implementing the Federal Information Technology Acquisition Reform Act (FITARA).

First, kudos to the trade press reporters who could make a story out of 18 of 24 agencies receiving the same grade as on the previous scorecard. Sensing a little stagnation, Rep. Jody Hice, R-Ga., the ranking member of the subcommittee, called for reform of the scorecard.

In truth, the scorecard has undergone reform over the years from when it was first issued in 2015. These changes have built up from covering only provisions of FITARA, to adding elements of the Modernizing Government Technology (MGT) Act, and more recently efforts to transition to the General Services Administration’s Enterprise Infrastructure Solutions (EIS) contract by September 2022.

Rather than taking a “fresh look” at the scorecard, I’d suggest Congress take a look at who is given the grade. Grading categories – like agency CIO authorities, use of the MGT Act, transition to EIS, and even cybersecurity – are not solely within the purview of Federal agency CIOs. Other department/agency C-level executives either share in those decisions, or are in fact the real decision makers.

So a Capitol Hill session to rank agencies on their IT progress should be hearing from either the Deputy Secretary (COO), the Under Secretary for Management, or the “management team” at agencies – IT, acquisition, human resources and most importantly budget. And while it was important that the new Federal CIO, Clare Martorana, testified at the hearing, it would be nice in the future for her to be joined by the new Deputy Director for Management, Jason Miller.

CIO Testimony

Federal CIO Martorana made an excellent impression at the July 28 FITARA Scorecard hearing, restating the Biden administration’s position that the government needs to rethink its approach to IT in order to focus on improving citizen and customer service. She also noted that the Technology Modernization Fund (TMF) Board has received more than 100 proposals totaling over $2 billion in funding requests. They will be evaluated against four criteria – modernizing high-priority systems; cybersecurity; public-facing digital services; and cross-government collaboration and scalable services.

The number of proposals received and the investments requested seem to be the main argument for the Biden request for an additional $500 million for the TMF in FY 2022, as opposed to the $50 million currently in the House appropriations bill.

But it is hardly unusual to have a grants entity receive proposals that total multiples of the funds available; more than twice is modest, in fact. Competition is good. Making hard choices is good. The winners should be exceptional — especially in this round of awards.

I would suggest the TMF Board look at an additional criterion – steps taken to set up an agency Working Capital Fund for future IT investments/actions in the annual budget process to redirect funding from Operations and Maintenance to Development, Modernization and Enhancement.

The bottom line: the $1 billion available this year is unlikely to be a recurrent event, so agencies need to start turning their IT budgeting ship for the outyears.

Future of Work

This has been a topic of discussion for some time, and one can still find conferences and events devoted to it. But isn’t the future already here? The pandemic over these past 17 months has accelerated what looks to be permanent change, so that perhaps a better term is “modern work.”

For the government, the effects are sweeping – for employees, managers, collaboration, citizen services, technology, contractors, travel, owned and leased buildings, and on and on. The sooner the government views this as a 2021 issue and makes it an immediate priority, the better prepared it will be to deal with core missions, serve citizens, and recruit/retain a skilled workforce.

Trust

Another recurrent topic – trust.

Back in 1964, more than 75 percent of Americans said they trusted the Federal government. Today, according to the Pew Research Center, only 25 percent do. But we should note that government is not alone in this decline. Trust in the media has fallen from around 70 percent in the 1970s to around 40 percent today. Americans also report having less trust in and more animosity toward one another than they used to.

So how can we explain America’s reported declines in trust over time? A review of recent research suggests some factors. One may be economic stagnation – the poorer and less educated you are, the less trusting you tend to be. In addition, our current partisan rancor has actually made it harder to measure trust. Survey questions that have been asked for decades, such as an approval rating for the President, have become less useful because answers now hinge on political partisanship. A recent “New Yorker” article asked whether we can, in fact, trust our indicators of trust. Another explanation proposed is that advances in technology have upended the old model in which trust was transmitted from institution to individual.

I raise these issues because many suggested reforms in government – agile, evidence-based, IT modernization, citizen focused, and so on – are justified on the basis of “restoring trust”. A better understanding of the trust deficit might allow us to craft a better restorative strategy.

Summer Reading

August – if we are so lucky – is a month for the beach and reading. Once you get through a few trashy novels, I recommend WE THE POSSIBILITY: Harnessing Public Entrepreneurship to Solve Our Most Urgent Problems by Mitchell Weiss (Harvard Business Review Press). I find it readable, insightful, and thought-provoking for all those who support making government better.

CDM Dashboards Empower Threat Hunting, Require Smarter Approach to Data

As we have seen with recent security breaches – including the SolarWinds attack – it can be challenging for Federal security leaders to effectively detect cyber threats across their networks. In the last few decades, private sector organizations have established tools to help monitor for malicious activity, but until recently, the Federal government hasn’t had one centralized method.

The Continuous Diagnostics and Mitigation (CDM) program standardizes how civilian agencies monitor their networks for cyber threats while improving their cybersecurity posture. The program operates under the direction of the Department of Homeland Security (DHS) and the Cybersecurity and Infrastructure Security Agency (CISA).

Through this centralized strategy, the CDM dashboard creates an ideal threat hunting environment in which cybersecurity professionals can identify threats before they strike. DHS is leading the way in exploring advanced endpoint detection and response technologies available to the agency personnel who need them.

The May 12 release of the Executive Order on Improving the Nation’s Cybersecurity greatly accelerated these efforts. The Order requires agencies to deploy an Endpoint Detection and Response (EDR) capability. CISA is tasked with prescribing an EDR deployment initiative to support host-level visibility, attribution, and response regarding Federal Civilian Executive Branch (FCEB) Information Systems at scale. The Order also mandates the authorization of much-anticipated changes to the agreements between FCEB agencies and DHS, which now requires agencies to share detailed information about their systems through the CDM program.

For the first time, the CDM program now has both the technical ability to hunt for threats at scale, and policy authorizing it to do so. Here’s how Federal cyber professionals can leverage these updates and make the most out of the CDM program:

Standardization Makes it Possible to Hunt at Scale

Until now, the lack of normalized data formats has contributed to poor data quality and made it challenging to identify potential vulnerabilities at scale. Open technology tools and common schema specifications hold the power to unlock previously disparate data.

The CDM dashboard makes full use of common schema tools, and increased support for the dashboard means data from various sources, locations, and formats can be synced and analyzed more quickly.
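
As a simplified illustration of the normalization idea – the field names below are invented and are not the actual CDM data model – mapping vendor-specific records onto a shared schema is what lets one query run across every feed:

    # Simplified illustration of normalization onto a common schema. The
    # field names are invented and are not the actual CDM data model.
    def normalize(record: dict, source: str) -> dict:
        mappings = {
            "vendor_a": {"host": "hostname", "ipaddr": "ip", "os_ver": "os"},
            "vendor_b": {"computerName": "hostname", "address": "ip", "platform": "os"},
        }
        common = {target: record.get(src) for src, target in mappings[source].items()}
        common["source"] = source
        return common

    feeds = [
        ({"host": "hr-laptop-17", "ipaddr": "10.1.4.22", "os_ver": "Windows 10"}, "vendor_a"),
        ({"computerName": "opslab-03", "address": "10.9.0.8", "platform": "RHEL 8.4"}, "vendor_b"),
    ]
    inventory = [normalize(rec, src) for rec, src in feeds]

    # One query now works across every feed, e.g. find all Windows hosts:
    windows_hosts = [d["hostname"] for d in inventory if d["os"].startswith("Windows")]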

Under the Chief Financial Officers (CFO) Act, there are 23 agency-specific dashboards that feed into the wide-spanning Federal CDM dashboard, as well as data from more than 70 non-CFO Act agencies. As even more agencies adopt the centralized CDM dashboard, the amount of valuable intelligence will continue to grow, delivering the comprehensive government-wide threat visibility required to combat increasingly sophisticated cybercriminals.

Finding Hidden Threats Quickly Depends on Data

Successful threat hunting begins with having the right data to answer the right questions at the right time. Without that data, there is no hunt. Why? Because dwell time – the time between when a compromise first occurs and when it is detected – is on the rise.

Sophisticated state-sponsored threat actors can remain undetected for months; foreign adversaries’ average dwell time is 10-plus days, and in the SolarWinds case we saw dwell times of 300-plus days. The only way to accurately analyze these long dwell times is to retain telemetry for longer periods of time so that historical analysis can be performed.

Data consolidation opens up more insights that can be gathered quickly and cost-effectively. While this is good for threat hunters – the more data you have, the more effective your threat hunting process will be – it can easily overwhelm agency systems with its sheer volume and velocity. Further, some degree of data normalization is necessary to enable automated detection at scale and to help analysts find what they’re looking for. The CDM dashboard provides a model for government-wide data normalization and helps search through mountains of data – fast. And more data makes it possible to determine where a breach occurred, what it impacted, and how it might be related to other events.

From Reactive to Proactive Cybersecurity

Agencies need an offensive mindset in today’s security environment and must always assume they’re at risk of being compromised. It’s in this zero-trust world where threat hunters can play a critical role. They understand where and how to look for security threats and can analyze the trends the CDM dashboard generates. Threat hunters are prepared to “fight the network” to eradicate adversaries from within ever-expanding agency perimeters.

It’s not a matter of if, but when, the next cyberattack occurs. When we started writing this article, the Microsoft Exchange breach “Hafnium” was discovered; by the time this article is published, the next big breach – or several big breaches – will have already taken place.

With a more unified approach to how we consume, manage, and analyze data, the nation’s defenders can stop observing the problem and start playing offense, making CDM’s government-wide vision of proactive security a reality of our cyber defense.

The effort has gained significant support from policymakers as well. This year, the National Defense Authorization Act (NDAA) gives CISA the ability to collect data from Federal agency networks and proactively hunt for vulnerabilities without notifying the individual agencies. The Executive Order on Improving the Nation’s Cybersecurity contains dozens of new and enhanced provisions which provide CISA with the authority to make a significant impact. In order to maintain transparency across government, DHS will be required to report the findings of this proactive threat hunting to Congress. With the CDM dashboard, this information is easily shareable, but adhering to the NDAA while continuing to increase threat hunting efforts requires agencies to change the way data is stored, accessed, and analyzed.

In its continuous evolution, more features will be added to the CDM dashboard during the government’s next fiscal year, and updated capabilities are already being piloted at a handful of smaller agencies to measure potential impact. These enhancements include a CDM-enabled Threat Hunting capability, which pulls EDR and log data into the agency dashboard, and enables query across agencies from the Federal dashboard. Generating deeper insights from across the government to alert threat hunters is a giant step forward, and the CDM dashboards can enable this – identifying threats in time to make a difference.

From Buckets to Budgets – Waiting on Infrastructure Plan Details

The bipartisan infrastructure agreement announced with much fanfare by the White House and a handful of Democratic and Republican senators on June 24 remains a feel-good topic in the nation’s capital – after all, who doesn’t like the old-fashioned notion of agreement amidst partisan divides – but the longer the wait for nitty-gritty details on the deal, the more questions come to mind.

Here are just a few:

Will the coalition of Democratic and Republican senators who crafted the plan hold together to ensure its passage?

How will it be paired up and managed with the other stand-alone Biden infrastructure plan – the American Families Plan – that the Democrats hope to get through the Senate using the reconciliation process on a straight party vote?

And how will either of the plans be paid for, especially the bipartisan one?

Note: As a former Federal departmental budget officer, I would NEVER be able to get away with anything like what the Office of Management and Budget (OMB) and the Congress are proposing as offsets to the bipartisan plan to ensure it doesn’t add to the deficit.

With that caution aside, let’s start with two overarching questions that don’t really have official answers yet. First, what the heck is in the bipartisan plan? Second, if enacted, can our government actually deliver and, if so, when?

Speaking in Buckets

The only summaries of the bipartisan plan I have seen are “big buckets” and spending totals for each. By “big buckets” I mean things like highways, bridges, mass transit, broadband to rural areas, and so on. Moreover, there are whole categories of the bill – like $47 billion for “resiliency” steps to protect infrastructure – that have hardly been detailed at all.

While the details will be developed when the Senate’s bill is drafted in committee, I think for now that the ambiguity is deliberate and small “p” political.

The White House has described the deal as worth $1.2 trillion, and that sounds like a lot. But the original Biden infrastructure request was for $2.25 trillion, and that request was all for “new money.” By contrast, the bipartisan agreement of $1.2 trillion includes hundreds of billions that were already expected to be appropriated for infrastructure. When that is backed out, the bipartisan agreement totals only $579 billion in “new money” – about one quarter of what President Biden first proposed.

On the issue of how the new infrastructure investment will be paid for, the deal relies on several well-worn budget gimmicks – 5G spectrum auction proceeds, eliminating unspecified waste, fraud, and abuse in unemployment insurance programs, and so on. It will be very interesting to see what the Congressional Budget Office reports when it scores the bill, and don’t be surprised if CBO uses harder math than the White House and Senate. Until then, and maybe even afterwards, the fully paid-for bipartisan infrastructure agreement will be like the man behind the curtain in The Wizard of Oz. Pay no attention to him!

The Payoff

If the infrastructure investment agreement becomes law, when will the nation see the benefits?

In a recent Washington Post column, George Will asked “Is America Still Capable of Building Great Things?” In his opening paragraph, Will notes that construction of the San Francisco-Oakland Bay Bridge took four years in the 1930s. But after a 1989 earthquake, when one-third of the Bay Bridge had to be replaced, the project took over two decades. He went on to quote economists’ research that the inflation-adjusted costs of building a mile of the interstate highway system tripled between the 1960s and 1980s. Costs go up, and so does regulatory complexity.

On the plus side, then-Vice President Biden led the Obama-era stimulus effort and did so impressively, with few cases of fraud and abuse. But he also surely found that the promised “shovel-ready projects” were few and far between.

Today, major infrastructure projects take close to a decade just to clear the various bureaucratic hurdles before actual work can begin. Will attributes that to “activist government’s dysfunction” – government’s inability to do one thing at a time.

That phrase is from the Claremont Institute’s William Voegeli, and it means that the government can’t simply repair or replace a bridge, expand a highway, or upgrade a rail line or transit system. It must do so while complying with a web of environmental, labor, safety, and other mandates – all of which the Biden administration has indicated it will honor – while also furthering small business and social equity advancements.

If there ever was a task for OMB’s Office of Information and Regulatory Affairs, the Innovation Fellows, and the Department of Transportation and its cabinet partners to take on, it should be this ever-thickening and sometimes immobilizing web of regulations and oversight. They were all put in place, individually and separately, for good reasons. But their accretion, like barnacles on the hull of the ship of state, will only slow a rebuilding of our crumbling national infrastructure.

What to Watch For

The next chapter of the infrastructure story may not take too long to play out, as the White House and congressional Democrats have a rather limited practical window to win approval for both the bipartisan infrastructure agreement, and the American Families Plan bill, before taking up Fiscal Year 2022 regular appropriations. We will know much more about all the details in those big infrastructure buckets once committees have to create legislative language to carry them forward.

Understanding Zero Trust in the Cyber Executive Order for Federal Agencies

Like many before him, President Biden seems to recognize that a crisis presents both danger and opportunity. Facing a barrage of high-profile cyberattacks, the President’s recent Cybersecurity Executive Order also illustrates the profound opportunity in front of his administration to improve the Federal government’s cybersecurity posture by an order of magnitude.

Exploits such as SolarWinds and the DarkSide ransomware attack on the Colonial Pipeline have disrupted national critical infrastructure and put the privacy and safety of millions of individuals at risk. These attacks and others like them also encourage cyber criminals to step up their efforts given the apparent ease with which these targets can be attacked in the name of espionage and profits. Security is no longer keeping up.

The White House’s Cybersecurity EO is therefore refreshing, both in the rigor with which short-term deadlines are imposed and the clarity with which some clear-cut plans of action are described. Looking more broadly, the order highlights many specific areas of interest, not only for Federal government security, but also for how we should be thinking about security and network architecture everywhere – for every business and government agency, at every level.

Effective Zero Trust Approach Must be Data-Centric, Cloud-Smart

At the highest level, the Executive Order emphasizes that Federal agencies must migrate to cloud services and Zero Trust security concepts.

“To keep pace with today’s dynamic and increasingly sophisticated cyber threat environment, the Federal Government must … [increase] the Federal Government’s visibility into threats, while protecting privacy and civil liberties,” the order says. “The Federal Government must … advance toward Zero Trust Architecture; accelerate movement to secure cloud services… centralize and streamline access to cybersecurity data to drive analytics for identifying and managing cybersecurity risks; and invest in both technology and personnel to match these modernization goals.”

The order also makes it clear there is no time to waste. Agency heads are required to develop plans to implement Zero Trust Architecture within 60 days of the order, and then report on their progress. This is powerful, especially because it insists that Zero Trust principles be applied as part of a security architecture – exactly as our most secure business customers worldwide are already doing.

Judiciously applying Zero Trust also means we must go beyond merely controlling who has access to information, and move toward continuous, real-time access and policy controls that adapt on an ongoing basis based on a number of factors, including the users themselves, the devices they’re operating, the apps they’re accessing, the threats that are present, and the context with which they’re attempting to access data. And that must all be done in a world where users access data from where they are – working from anywhere to stay productive.

Despite the growing popularity of the term Zero Trust, the big miss in many Zero Trust security initiatives is that they aren’t focused on data protection. Data protection is ultimately about context. By monitoring traffic between users and applications, including application programming interface (API) traffic, we can exert granular control. We can both allow and prevent data access based on a deep understanding of who the user is, what they are trying to do, and why they are trying to do it.
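
Here is a hedged sketch of what such continuous, context-aware evaluation might look like in code. The factors and thresholds are invented for illustration – this is not any product's actual policy engine:

    # Hedged sketch of continuous, context-aware access evaluation: every
    # request is scored against user, device, app, and data context rather
    # than trusted once at login. Factors and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class Request:
        user_risk: float        # 0.0 low .. 1.0 high, from behavioral analytics
        device_managed: bool
        device_patched: bool
        app_sanctioned: bool
        data_sensitivity: str   # "public" | "internal" | "restricted"

    def decide(req: Request) -> str:
        if not req.app_sanctioned or req.user_risk > 0.8:
            return "block"
        if req.data_sensitivity == "restricted" and not (req.device_managed and req.device_patched):
            return "step_up_auth"       # stronger verification before access
        if not req.device_patched:
            return "allow_read_only"    # adaptive control, not all-or-nothing
        return "allow"

    print(decide(Request(0.2, True, False, True, "restricted")))   # step_up_auth

Note that the decision is re-run per request, so a device that falls out of compliance mid-session loses access without waiting for the next login.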

This data-centric approach is the only effective way to manage risk across a mix of third-party applications and a remote-heavy workforce that needs always-on access to cloud apps and data to stay productive. The Executive Order says Federal managers must deal with threats that exist both inside and outside traditional network boundaries. Yesterday’s security and network technologies won’t even start to address the threats created by these trends.

My company is in the cloud security business, focused on protecting data using the real-time context of how that data is being accessed and who is accessing it. The Executive Order provides admirable attention to cloud security concerns, which are what we’re discussing with our customers – some of the biggest and best-known organizations in the world. Importantly, the order also discusses cloud security issues as current issues; no longer is the need to secure cloud infrastructure something seen as “off in the distance.”

And I should commend some Federal CIOs – representing Commerce, the U.S. Patent and Trademark Office, and the Defense Department – who joined us this week at our headquarters in San Jose to explore commercial best practices and emerging SaaS-based cybersecurity technologies that help expedite cloud adoption. Our roundtable discussion allowed community leaders and cybersecurity vendors to hear from Federal CIOs about the pain points of the order and the specific challenges they’re facing across their agencies, and it provided agency leadership with the opportunity to witness firsthand the power behind a true security platform and the value of integration across vendors. I strongly believe this type of continued partnership across public and private sectors will be critical for agencies to successfully and effectively adopt Zero Trust and meet the requirements of the order.

Next Steps

The question now is what the rest of us can do to help the agencies realize and implement the more secure systems that our national security demands. There’s work to do for Congress, for companies like mine, and for states and localities all across the country.

Congress must do at least three things: 1) provide oversight to ensure that agencies follow through; 2) provide robust funding to strengthen and enlarge the Federal cyber workforce; and 3) work with stakeholders to modernize contract language that will identify the nature of cyber incidents that require reporting, the types of information regarding cyber incidents that require reporting, and the time periods within which contractors must report cyber incidents.

Contractors like Netskope that provide cybersecurity services need to be part of that discussion on contract language. But we also need to work with both Congress and the Biden Administration to help those policymakers and procurement officials understand relatively technical issues, such as the use of artificial intelligence or encrypted transmissions to protect data. Through collaboration, smart decisions can be made on securing federal systems while also enabling the right access for a workforce that often accesses those systems from their home computer or mobile device. In the coming weeks, we will launch a new initiative in this regard.

Some of the most important work must be done outside the Beltway. Local education systems must make cybersecurity a core piece of the curriculum so that we can effectively encourage young people to pursue cyber careers early on and to see the field as rewarding and aspirational. That can and should be a new American Dream – an inspiring combination of a well-paying career and securing the nation and its cherished freedoms. It is of utmost importance to get this right for the next generation of Americans.

Five Steps to Protect Your Agency Enterprise When Employees Return

Many of us are going back to work in person – and this includes the Federal government. The Office of Management and Budget (OMB), Office of Personnel Management (OPM), and General Services Administration (GSA) announced on June 10 that the 25 percent occupancy restriction for Federal offices has been lifted, and agencies will soon be able to increase the number of employees in their physical workplaces.

While much of the focus, and deservedly so, is on ensuring employees and the workspace meet COVID guidelines, there is another area of concern – cybersecurity. The COVID-19 pandemic forced a hurried shift to remote work in 2020, and agencies had to prioritize employee productivity and remote access. While home and public networks, along with cloud-based applications, kept everyone working, they also introduced a hidden threat.

As lockdown restrictions lift and offices prepare to reopen, we must now address the risk posed by an influx of new and returning devices that have been operating with reduced IT oversight for an extended period of time.

As we all started working remotely, enterprise network protections were often replaced by consumer-grade routers with limited security controls on home and public networks, leaving IT teams fully reliant on a handful of endpoint agents (which can break or be disabled) to ensure device hygiene. Extended periods of remote work with infrequent IT oversight and limited network security controls cause device hygiene and security posture to deteriorate. Dubbed “device decay,” this exposes devices to vulnerabilities and threats, and translates into an increased attack surface for malicious actors to target.

As agencies prepare to reopen after months of low office occupancy, devices with degraded security posture can pose a serious risk to agency networks. They provide an entry point for threat actors looking to infiltrate agency networks, exfiltrate sensitive information or wreak havoc on day-to-day operations. This comes at a time of massive increases in cyberattacks, with the FBI alone handling more than 4,000 cybercrime incidents per day, a four-fold jump from pre-pandemic days.

Device decay manifests itself in different ways across different cohorts of devices:

  • Agency-issued employee devices that began the pandemic with generally good security posture and have degraded over time – broken agents, missing security patches, unauthorized applications, and configuration drift.
  • New devices, often consumer-grade laptops, that got added into the work ecosystem during the pandemic without gold master images, and that never had the same stringent levels of device hygiene.
  • In-office or remote devices that were switched off because they weren’t needed during the work-from-home phase and haven’t been kept up to date with the latest security patches.
  • Always-on IoT and OT devices such as physical security systems, conference room smart TVs and HVAC systems that have remained idled/unused and gone unattended by IT, with potential exposure to vulnerabilities discovered in multiple TCP/IP stacks used by hundreds of vendors and billions of devices. These devices will take a long time to be patched, if they can be patched at all.

The following best practices can fortify agency network defenses to prepare for returning workers and their devices.

  1. Implement real-time inventory procedures. Managing risk starts with a continuous and accurate inventory process. You need to ensure you have full visibility and detailed insight into all devices on your network, and that you’re able to monitor their state and network interactions in real time.
  2. Assess and remediate all connecting devices. Set up a system to inspect all connecting devices, fix security issues, and continuously monitor for potential device hygiene decay. While many users are still out of the office, use this time to get a head start: first check the idle and always-on in-office systems to ensure they have the latest software releases and security patches installed and running, and assess them for vulnerabilities disclosed while they remained dormant. As degraded and non-compliant devices return to the office, initiate remediation workflows in concert with your security and IT systems.
  3. Automate zero trust policy. Adapt your zero trust policies to include device hygiene, and fix security issues such as broken security agents, unauthorized apps, and missing patches before provisioning least privilege access. Segment and contain non-compliant, vulnerable, and high-risk devices to limit their access until they’re remediated (the sketch after this list illustrates the idea).
  4. Continuously monitor and track. Devices that return to the office should be expected to leave again for extended periods. Continuously monitor all devices while they’re on your network, maintain visibility into their state while off-network, and reassess their hygiene after extended absences. Constant vigilance will allow you to adjust your approach based on the volumes and types of devices connecting to your network and the issues and risks that appear over time.
  5. Train/equip staff to help protect your network. Finally, you should ensure that these security measures are properly reflected in official agency policies. Employees should know the basics such as avoiding the use of unauthorized apps and keeping their devices up to date, so they can assist with combating device decay and help maintain high levels of device and network hygiene.
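
To make steps 1 through 3 concrete, here is a minimal Python sketch of a hygiene check feeding an automated quarantine decision. The Device fields, the 30-day patch window, and the print-based enforcement are illustrative assumptions for this sketch, not any particular product’s API.

```python
# Minimal sketch: inventory (step 1), hygiene assessment (step 2), and
# automated zero trust quarantine (step 3). All names and thresholds are
# illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta

PATCH_WINDOW = timedelta(days=30)  # assumed compliance threshold

@dataclass
class Device:
    hostname: str
    last_patched: date
    agent_healthy: bool
    unauthorized_apps: list[str] = field(default_factory=list)

def hygiene_issues(device: Device, today: date) -> list[str]:
    """Return the decay symptoms found on a device."""
    issues = []
    if today - device.last_patched > PATCH_WINDOW:
        issues.append("missing security patches")
    if not device.agent_healthy:
        issues.append("broken endpoint agent")
    if device.unauthorized_apps:
        issues.append("unauthorized apps: " + ", ".join(device.unauthorized_apps))
    return issues

def enforce_zero_trust(inventory: list[Device], today: date) -> None:
    """Grant least-privilege access only to compliant devices; contain the rest."""
    for device in inventory:
        issues = hygiene_issues(device, today)
        if issues:
            print(f"QUARANTINE {device.hostname}: {'; '.join(issues)}")
        else:
            print(f"ALLOW {device.hostname}: least-privilege access granted")

fleet = [
    Device("laptop-017", date(2021, 1, 4), agent_healthy=False),
    Device("kiosk-002", date(2021, 6, 1), agent_healthy=True),
]
enforce_zero_trust(fleet, today=date(2021, 6, 15))
```

In a real deployment, the quarantine branch would trigger the segmentation and remediation workflows described in steps 2 and 3 rather than a print statement.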

Managing device decay is not a one-time activity. In the new normal, hybrid work practices will be implemented differently across agencies and will vary among groups within agencies. What will be constant across all these work practices is that devices will remain away from the office for extended periods before returning and re-connecting, and will be prone to device decay during those away periods.

How PAM Can Protect Feds From Third Party/Service Account Cyber Attacks

For decades, Federal chief information security officers (CISOs) focused on protecting a traditional perimeter and the users within it. Today, however, they recognize that there are a seemingly endless number of third-party partner, vendor, and customer accounts, as well as service accounts – accounts that are either not directly tied to employees or are entirely non-human – any of which could result in compromises.

They need look no further than Russia’s massive hack of SolarWinds software – which led to the accessing of emails at the U.S. Treasury, Justice, Commerce, and other departments – for an Exhibit A illustration of the vulnerabilities of their agency’s entire cyber ecosystem, as opposed to strictly internal digital assets and users.

That expanded security perspective proves necessary due to modern mission requirements and the resources needed to achieve them: Within an agency, multiple external parties and service accounts support every server and system. Constantly monitoring and routinely auditing it all is extremely complex, challenging, and tedious. Hackers are well aware of the situation, and target both third-party partners (i.e., the “people” part of this equation) and service accounts (the non-human, technical component) as lucrative weak links:

The U.S. government is reporting more than 28,500 cybersecurity incidents a year, and 45 percent of breaches result from indirect attacks, according to research from Accenture. It should come as no surprise then that 85 percent of security executives say their organization needs to think beyond defending the enterprise and take steps to protect their entire ecosystem.

“Organizations should look beyond their four walls to protect their operational ecosystems and supply chains,” according to the Accenture report. “As soon as one breach avenue is foiled, attackers are quick to find other means,” it adds.

When asked to assess various technologies and methods, these executives ranked privileged access management (PAM) as one of the top approaches in reducing successful attacks, minimizing breach impact, and shrinking the attack surface. With the defense industrial base (DIB) and perhaps other Federal agencies seeking to adopt Cybersecurity Maturity Model Certification (CMMC) standards as part of their overall strategy, PAM has emerged as a highly effective means toward this goal.

As defined by Gartner, PAM solutions manage and control privileged accounts by isolating, monitoring, recording, and auditing these accounts’ sessions, commands, and actions. Most of the time, third parties and service accounts cannot do their jobs without elevated access privileges – making them a de facto part of the agency enterprise. While such arrangements play an indispensable role in terms of mission performance, productivity, and efficiency, they also expand the attack surface. That’s why CISOs must strongly consider PAM as part of their third-party/service account security strategy, to establish the following capabilities:

Comprehensive auditing. PAM ensures that all service account and privileged activity is audited. You record every session and watch it for anomalous and potentially suspicious interactions/patterns, just as if you were watching a movie.
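
As a rough illustration of what “watching the movie” can look like in practice, the sketch below replays a recorded session transcript through a simple screen for suspicious commands; the transcript format and the flagged-tool list are invented for this sketch, not features of a specific PAM product.

```python
# Minimal sketch of session auditing: a recorded privileged-session transcript
# is replayed through a simple screen for suspicious commands. The transcript
# format and flagged-tool list are illustrative assumptions.
SUSPICIOUS_TOOLS = {"scp", "curl", "nc"}  # assumed exfiltration-adjacent tools

session_transcript = [
    ("contractor-jdoe", "systemctl restart reporting"),
    ("contractor-jdoe", "scp /var/db/export.sql external-host:"),
]

for user, command in session_transcript:
    tool = command.split()[0]
    flag = "  <-- flag for review" if tool in SUSPICIOUS_TOOLS else ""
    print(f"[audit] {user}: {command}{flag}")
```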

Reduction of credential exposure. Without PAM, contractors will typically be provided elevated credentials to access a network area or database relevant to the task at hand. In the process, they may jot down “Admin 123” on a piece of paper to use as a password, or store it in some other insecure fashion. These practices increase risk, especially if the password is weak and/or never changes – and the SolarWinds attack was linked to password mismanagement. Through PAM, contractors instead log into a bastion host – a secured intermediary proxy – using standard user privileges, and a connection is then brokered without exposing the elevated credentials to the user.
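
As a rough illustration of that brokering pattern, the sketch below uses a hypothetical Vault class as a stand-in for a PAM credential store; real products expose this through their own interfaces, and every name here is an assumption for illustration only.

```python
# Minimal sketch of PAM-style credential brokering: the contractor signs in to
# the bastion with standard user credentials, and the broker checks out the
# privileged credential server-side. The Vault class is a hypothetical
# stand-in, not a real PAM product's API.
import secrets

class Vault:
    """Stores privileged credentials out of end-user reach."""
    def __init__(self):
        self._secrets: dict[str, str] = {}

    def store(self, account: str, credential: str) -> None:
        self._secrets[account] = credential

    def checkout(self, account: str) -> str:
        return self._secrets[account]

def brokered_session(vault: Vault, user: str, target_account: str) -> None:
    # The broker retrieves the elevated credential and opens the session on
    # the user's behalf; the credential is never displayed or handed over.
    credential = vault.checkout(target_account)
    print(f"{user} -> session opened as {target_account} "
          f"(credential length {len(credential)}, never shown to user)")

vault = Vault()
vault.store("db-admin", secrets.token_urlsafe(32))  # no more "Admin 123"
brokered_session(vault, user="contractor-jdoe", target_account="db-admin")
```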

Automation of password rotation. This is particularly relevant for the non-human service accounts. When a service account contacts an internal database server, for example, it will use a password to gain access. But the password often remains static – something a CISO has to address. Doing so manually, however, is logistically impractical if not impossible. PAM tools will automatically rotate passwords, as frequently as deemed necessary, sometimes even on a per-usage/session basis.
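
A minimal sketch of that rotation logic follows, assuming a hypothetical update_directory_password helper in place of whatever directory or database actually holds the account; the rotate-on-every-checkout policy mirrors the per-session rotation described above.

```python
# Minimal sketch of automated service account password rotation. The
# update_directory_password helper is a hypothetical placeholder for a real
# directory or database API.
import secrets

def update_directory_password(account: str, new_password: str) -> None:
    """Hypothetical placeholder: push the replacement credential out."""
    print(f"directory updated for {account}")

def checkout_with_rotation(account: str, store: dict) -> str:
    # Hand out the current credential, then immediately rotate it so the
    # value just used cannot be replayed in a later session.
    current = store.setdefault(account, secrets.token_urlsafe(32))
    replacement = secrets.token_urlsafe(32)
    update_directory_password(account, replacement)
    store[account] = replacement
    return current

credentials: dict = {}
first = checkout_with_rotation("svc-reporting", credentials)
second = checkout_with_rotation("svc-reporting", credentials)
assert first != second  # each session gets a fresh, unrepeatable credential
```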

It’s clear that the government can’t accomplish its mission goals without the support of third-party partners and service accounts, just as it relies upon the talents and capabilities of its own employees and internal cyber resources. But CISOs can’t ignore the risk potential of the external entities that routinely gain access to their networks and digital assets. Through PAM, they ensure every interaction is tracked and audited, while significantly strengthening password management. As a result, they greatly improve the chances that their agency won’t end up as an Exhibit A illustration of what not to do to prevent a compromise.

Identifying Cyber Blind Spots Vital to Zero Trust Progress

The old adage “consistency is key” rings especially true for Federal cybersecurity operations centers (CSOCs) today. Agencies that pay close attention to their operations centers but lack visibility into and control over cybersecurity blind spots – specifically applications and workloads – are ripe for attack.

In conducting risk management assessments of 96 agencies, the Office of Management and Budget (OMB) concluded that 71 percent were either “at-risk” or at “high risk,” according to the OMB’s 2018 Federal Cybersecurity Risk Determination Report and Action Plan. OMB indicated that a lack of visibility was creating many of the problems, as only 27 percent of agencies reported that they can detect and investigate attempts to access large volumes of data in their networks. This lack of visibility can have critical consequences for agencies long term.

Take the recent SolarWinds attack as an example. Russia-backed actors injected malware into software updates provided by the vendor, affecting up to an estimated 18,000 organizations. The malware was able to infiltrate so many organizations by moving laterally within their systems, avoiding detection for months. The attack demonstrated the dangers of a lack of visibility and control within companies and agencies, and led to increased interest in the Zero Trust security philosophy. How can you do something about your attacker if you can’t see them coming?

You Can’t Secure What You Can’t See

Increased visibility into security operations centers is no longer simply “good practice” – it’s a necessity.

Traditionally, agencies have been hyper-focused on threat intelligence to monitor for external attacks, but attacks like SolarWinds have demonstrated the importance of internal data-driven visibility. Visibility into how workloads and applications connect helps agencies determine what traffic should be allowed, and what is unnecessary (i.e., a risk).

Visibility is the first step toward protecting data centers – it’s a critical component in stopping unnecessary and nefarious movement. Agencies can monitor their environment with software that shows a real-time application dependency map to help visualize communications between workloads and applications.

With this kind of visibility, you can define which connections need to be trusted and deny the rest, automatically containing and constraining adversaries. Trusting only what’s absolutely necessary and blocking everything else by default is the most fundamental safeguard agencies can adopt – and it is what we call Zero Trust.
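
As a rough sketch of how an application dependency map can feed a default-deny policy, the Python below builds the map from observed flows and allows only operator-approved edges. The workload names, ports, and flow records are invented for illustration.

```python
# Minimal sketch: visibility first (build the dependency map from observed
# flows), then default deny (allow only explicitly trusted edges). All
# workloads, ports, and flows are invented for illustration.
from collections import defaultdict

observed_flows = [
    ("web-frontend", "app-server", 443),
    ("app-server", "payroll-db", 5432),
    ("conference-tv", "payroll-db", 5432),  # suspicious lateral movement
]

# Step 1: visibility - who actually talks to whom?
dependency_map = defaultdict(set)
for src, dst, port in observed_flows:
    dependency_map[src].add((dst, port))
for src, edges in dependency_map.items():
    print(f"{src} -> {sorted(edges)}")

# Step 2: an operator reviews the map and trusts only mission-necessary edges.
allowed = {
    ("web-frontend", "app-server", 443),
    ("app-server", "payroll-db", 5432),
}

# Step 3: default deny - anything not explicitly trusted is blocked.
for flow in observed_flows:
    verdict = "ALLOW" if flow in allowed else "DENY"
    print(f"{verdict}: {flow[0]} -> {flow[1]}:{flow[2]}")
```

Note that the conference room TV reaching for the payroll database is denied not because it matched a threat signature, but because it was never on the trusted list.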

Zero Trust Has Your Back

Zero Trust has recently become the focus for Federal agencies, and for good reason. Acting Department of Defense CIO John Sherman outlined the importance of the philosophy, saying, “One of my key areas is to really increase our focus on Zero Trust and to maintain our strong focus on cyber hygiene and cyber accountability.” Zero Trust accounts for your blind spots and is marked by a series of unique characteristics (a minimal sketch following the list shows how they can combine in a policy decision):

  • Assume the network is always hostile;
  • External and internal threats exist on the network at all times;
  • Locality is not sufficient for deciding trust in a network;
  • Every device, user, and network flow must be authenticated and authorized; and
  • Security policies must be dynamic and determined from as many data sources as possible.
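
Here is a minimal sketch of these tenets combined into a single authorization check, with invented signal names and a deliberately simple all-signals-must-pass rule:

```python
# Minimal sketch: every flow is evaluated against multiple signals, and
# locality earns no trust on its own. Signal names and the all-signals rule
# are illustrative assumptions.
def authorize_flow(user_authenticated: bool,
                   device_compliant: bool,
                   flow_expected: bool,
                   on_corporate_lan: bool) -> bool:
    # Note what is absent from the decision: on_corporate_lan is ignored,
    # because the network is assumed hostile and locality is not sufficient
    # for deciding trust.
    return user_authenticated and device_compliant and flow_expected

# An internal, on-LAN request still fails if the device is out of compliance.
print(authorize_flow(True, False, True, on_corporate_lan=True))   # False
# An off-LAN request succeeds when every required signal checks out.
print(authorize_flow(True, True, True, on_corporate_lan=False))   # True
```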

Many Zero Trust concepts are an evolution of established best practices, such as least privilege, defense-in-depth, and assume breach. Federal organizations have reached a tipping point in security, where yesterday’s best practices alone are not enough to shore up the defenses against a siege of external adversaries. With a Zero Trust architecture, agencies can contain and mitigate cyber risk effectively.

We All Have Blind Spots – It’s What We Do About Them That Matters

Accounting for cybersecurity blind spots means increasing visibility, embracing Zero Trust, and, specifically, segmenting your environment to limit the impact of a breach. Zero Trust Segmentation reduces the attack surface, making it more difficult for bad actors to move around the network. By granularly segmenting their networks, agencies can more easily protect their most sensitive data, because Zero Trust Segmentation creates a cloaked ring-fence around applications and workloads – essentially making them invisible to a would-be attacker.

Avoiding cybersecurity blind spots doesn’t need to be a shot in the dark. Building and implementing a Zero Trust architecture will ensure agencies maintain the vital security measures necessary to secure high-value assets. In a world where breaches are a certainty, a Zero Trust approach prevents a minor cyber incident from becoming a real-world disaster.

Biden FY2022 Budget – Breaking Down the PMA

Earlier this month, in a May 6 column, I offered up a President’s Management Agenda framework – PMA 46 – for the Biden-Harris Administration as we awaited the full FY 2022 budget proposal, which was publicly released today.

While the so-called “skinny budget” released in April outlined plans for the discretionary part of next year’s budget, it didn’t include a number of specifics, among them the Analytical Perspectives volume in which one would normally find policy initiatives – a chapter on serving citizens, streamlining government, modernizing technology, and so on – in other words, what we have come to call the President’s Management Agenda.

Undeterred, I pressed on, drawing on speeches, policy papers, the campaign platform, testimony in confirmation hearings, as well as what was proposed for funding in the budget outline. At that time I proposed what I thought would be several major tenets of the Biden PMA:

  • Continuing initiatives found in previous Administrations’ reform programs – acquisition reform (with a focus on agility), performance measurement, financial management, shared services, customer satisfaction, and citizen services;
  • “Management” issues mentioned in the Acting Director of the Office of Management and Budget’s April 9, 2021 transmittal letter, to include “Made in America” and “green” initiatives such as clean energy technologies, opportunities for small and minority businesses, civil rights and diversity, and bolstering Federal cybersecurity;
  • Innovation – to include key emerging technologies like quantum computing and artificial intelligence;
  • Technology Modernization to support agencies as they modernize, strengthen, and secure antiquated information systems. This was reflected not only in additional dollars for the government-wide Technology Modernization Fund but also in specific efforts at Veterans Affairs, the Internal Revenue Service, and the Social Security Administration;
  • Human Capital, with the expectation of new initiatives as well as efforts to undo a number of actions taken by the Trump Administration; and
  • Advancing a vision for a 21st Century government that is focused on improving outcomes using data and evidence, re-establishing trust, re-imagining service delivery, evaluating programs, and recruiting and retaining new talent with technical skills in critical and emerging technology areas.

As the weeks have passed, I have found reasons to be confident as well as reasons to be concerned. In just the past few days, new Federal CIO Clare Martorana has been on the circuit and laid out a technology agenda that fits nicely within the framework I suggested. Her ambitious agenda for her office and the Federal CIO Council includes innovation, technology modernization, cybersecurity, citizen services, interoperability and collaboration tools, an updated Federal Data Strategy, and telework. Perhaps even more significantly, she has spoken about overcoming resistance to change, noting that innovating involves taking risks – which means tolerating failure and looking to long-term reform as well as short-term successes.

But the administration’s management team has a number of key roles still open. Most notable is a Director for OMB, but also still vacant are such key jobs in that agency as Chief Financial Officer, head of the Office of Information and Regulatory Affairs, and Administrator of the Office of Federal Procurement Policy – not to mention the Director of the Office of Personnel Management, the Administrator of the General Services Administration, and a number of agency chief operating officers. At the current pace, we may be well into the fall before the complete array of management leaders is installed across the whole of government.

The complete budget was released just today – May 28 – quite late even for a new Administration, but understandable given the controversy over the election results and the delay in getting transition teams into place.

The Analytical Perspectives volume, where one would usually find a PMA, includes a chapter on “Management,” which is largely devoted to strengthening and rebuilding the workforce and human resources matters such as trends, pay, and benefits.

It also includes a chapter on “Information Technology and Cybersecurity.” That section presents more detail on the previously announced initiatives and more granular detail on the funding allocations to individual civilian agencies for IT (a breakout for the Department of Defense will appear separately), as well as the proposed budget for the U.S. Digital Service.

I found it significant that in the main 72-page budget document – along with a section on spending on The Pandemic and the Economy and a to-be-expected lengthy chapter on Biden’s Building Back Better initiative – was a separate six-page section entitled Delivering Results for All Americans Through Equitable, Effective and Accountable Government.  Management does matter and makes the big time!

Inclusion in this key volume does reflect the administration’s “recommitting to good government” as essential to “promoting public trust in government.” Mentioned in passing is the phrase “as the PMA takes shape,” which I read to mean expect more as other officials are nominated and confirmed.

Acquisition also gets a nod here, with a pledge to create a “modern and diverse Federal acquisition system” – joining the almost 200 studies and procurement reform commissions that have been conducted over the last 30-plus years to do this very thing.  The President’s Budget Message, which opens the transmittal, ends with this: “The Budget … will demonstrate to the American people … that their Government is able to deliver for them again.”

Overall, the Biden Management Agenda creates the “steadiness in administration” – as I mentioned previously – that is essential to bring about management change and reform in a Fortune One company, our massive Federal government.  It emphasizes the elements that are driving change in the private sector – Technology, Innovation, Diversity, and Evidence (TIDE).

Now the White House needs to get a full team on the field to execute against this set of goals. Going “big” with policy, going “big” with spending, and going “big” with speeches and promises is all good and inspiring. But managing, executing, and delivering against that policy agenda will be key to both political success and how history judges this presidency.

So how did I do? I am known for my modesty and understated excellence, so I can’t profess to be a 2021 Carnac the Magnificent (NOTE: Those under 55, please Google Johnny Carson), the great seer, soothsayer, and sage. But I would give myself a solid “B.” And to those who may differ, I say, “may the bird of paradise fly up your nose.”