Understanding Zero Trust in the Cyber Executive Order for Federal Agencies

Like many before him, President Biden seems to recognize that a crisis presents both danger and opportunity. Facing a barrage of high-profile cyberattacks, the President’s recent Cybersecurity Executive Order also illustrates the profound opportunity in front of his administration to improve the Federal government’s cybersecurity posture by an order of magnitude.

Exploits such as SolarWinds and the DarkSide ransomware attack on the Colonial Pipeline have disrupted national critical infrastructure and put the privacy and safety of millions of individuals at risk. These attacks and others like them also encourage cyber criminals to step up their efforts given the apparent ease with which these targets can be attacked in the name of espionage and profits. Security is no longer keeping up.

The White House’s Cybersecurity EO is therefore refreshing, both in the rigor with which short-term deadlines are imposed and in the clarity with which plans of action are described. Looking more broadly, the order highlights many specific areas of interest, not only for Federal government security, but also for how we should be thinking about security and network architecture everywhere – for every business and government agency, at every level.

Effective Zero Trust Approach Must Be Data-Centric, Cloud-Smart

At the highest level, the Executive Order emphasizes that Federal agencies must migrate to cloud services and Zero Trust security concepts.

“To keep pace with today’s dynamic and increasingly sophisticated cyber threat environment, the Federal Government must … [increase] the Federal Government’s visibility into threats, while protecting privacy and civil liberties,” the order says. “The Federal Government must … advance toward Zero Trust Architecture; accelerate movement to secure cloud services… centralize and streamline access to cybersecurity data to drive analytics for identifying and managing cybersecurity risks; and invest in both technology and personnel to match these modernization goals.”

The order also makes it clear there is no time to waste. Agency heads are required to develop plans to implement Zero Trust Architecture within 60 days of the order, and then report on their progress. This is powerful, especially because it insists that Zero Trust principles be applied as part of a security architecture – exactly as our most secure business customers worldwide are already doing.

Judiciously applying Zero Trust also means we must go beyond merely controlling who has access to information, and move toward continuous, real-time access and policy controls that adapt based on a number of factors, including the users themselves, the devices they’re operating, the apps they’re accessing, the threats that are present, and the context in which they’re attempting to access data. And that must all be done in a world where users access data from wherever they are – working from anywhere to stay productive.

Despite the growing popularity of the term Zero Trust, the big miss in many Zero Trust security initiatives is that they aren’t focused on data protection. Data protection is ultimately about context. By monitoring traffic between users and applications, including application programming interface (API) traffic, we can exert granular control: allowing or blocking data access based on a deep understanding of who the user is, what they are trying to do, and why they are trying to do it.
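
To make that kind of context-based control concrete, the short Python sketch below (hypothetical field names and thresholds, not any vendor's actual policy engine) shows an access decision that weighs the user, the device, the app, the data's sensitivity, and live threat signals before allowing, limiting, or blocking access.

    # A minimal sketch of a context-aware, data-centric access decision.
    # All field names and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        user_risk: float        # 0.0 (trusted) to 1.0 (high risk)
        device_managed: bool    # agency-managed vs. personal device
        app_sanctioned: bool    # approved app vs. unknown service
        data_sensitivity: str   # "public", "internal", "sensitive"
        active_threats: bool    # threat intelligence flags on this session

    def decide(req: AccessRequest) -> str:
        """Return 'allow', 'allow_read_only', or 'block' for this request."""
        if req.active_threats or req.user_risk > 0.8:
            return "block"
        if req.data_sensitivity == "sensitive":
            if req.device_managed and req.app_sanctioned and req.user_risk < 0.3:
                return "allow"
            return "allow_read_only"   # step down access, don't shut down productivity
        return "allow" if req.app_sanctioned else "allow_read_only"

    # Example: a low-risk user on an unmanaged device touching sensitive data
    print(decide(AccessRequest(0.2, False, True, "sensitive", False)))  # allow_read_only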

This data-centric approach is the only effective way to manage risk across a mix of third-party applications and a remote-heavy workforce that needs always-on access to cloud apps and data to stay productive. The Executive Order says Federal managers must deal with threats that exist both inside and outside traditional network boundaries. Yesterday’s security and network technologies won’t even start to address the threats created by these trends.

My company is in the cloud security business, focused on protecting data using the real-time context of how that data is being accessed and who is accessing it. The Executive Order provides admirable attention to cloud security concerns, which are what we’re discussing with our customers – some of the biggest and best-known organizations in the world. Importantly, the order also discusses cloud security issues as current issues; no longer is the need to secure cloud infrastructure something seen as “off in the distance.”

And I should commend some Federal CIOs – representing Commerce, the U.S. Patent and Trademark Office, and the Defense Department – who joined us this week at our headquarters in San Jose to explore commercial best practices and emerging SaaS-based cybersecurity technologies that help expedite cloud adoption. Our roundtable discussion allowed community leaders and cybersecurity vendors to hear from Federal CIOs about the pain points of the order and the specific challenges they’re facing across their agencies, and it provided agency leadership with the opportunity to witness firsthand the power behind a true security platform and the value of integration across vendors. I strongly believe this type of continued partnership across public and private sectors will be critical for agencies to successfully and effectively adopt Zero Trust and meet the requirements of the order.

Next Steps

The question now is what the rest of us can do to help the agencies realize and implement the more secure systems that our national security demands. There’s work to do for Congress, for companies like mine, and for states and localities all across the country.

Congress must do at least three things: 1) provide oversight to ensure that agencies follow through; 2) provide robust funding to strengthen and enlarge the Federal cyber workforce; and 3) work with stakeholders to modernize contract language that will identify the nature of cyber incidents that require reporting, the types of information regarding cyber incidents that require reporting, and the time periods within which contractors must report cyber incidents.

Contractors like Netskope that provide cybersecurity services need to be part of that discussion on contract language. But we also need to work with both Congress and the Biden Administration to help those policymakers and procurement officials understand relatively technical issues, such as the use of artificial intelligence or encrypted transmissions to protect data. Through collaboration, smart decisions can be made on securing federal systems while also enabling the right access for a workforce that often accesses those systems from their home computer or mobile device. In the coming weeks, we will launch a new initiative in this regard.

Some of the most important work must be done outside the Beltway. Local education systems must make cybersecurity a core piece of the curriculum so that we can effectively encourage young people to pursue cyber careers early on and to see them as a rewarding, aspirational path. That can and should be a new American Dream: an inspiring combination of a well-paying career and the mission of securing the nation and its cherished freedoms. It is of utmost importance to get this right for the next generation of Americans.

Five Steps to Protect Your Agency Enterprise When Employees Return

Many of us are going back to work in person – and this includes the Federal government. The Office of Management and Budget (OMB), Office of Personnel Management (OPM), and General Services Administration (GSA) announced on June 10 that the 25 percent occupancy restriction for Federal offices has been lifted, and agencies will soon be able to increase the number of employees in their physical workplaces.

While much of the focus, and deservedly so, is on ensuring employees and the workspace meet COVID guidelines, there is another area of concern – cybersecurity. The COVID-19 pandemic forced a hurried shift to remote work in 2020, and agencies had to prioritize employee productivity and remote access. While home and public networks, along with cloud-based applications, kept everyone working, they also introduced a hidden threat.

As lockdown restrictions lift and offices prepare to reopen, we must now address the risk posed by an influx of new and returning devices that have been operating with reduced IT oversight for an extended period of time.

When we all started working remotely, enterprise network protections were often replaced by consumer-grade routers with limited security controls on home and public networks, and IT teams became fully reliant on a handful of endpoint agents (which can break or be disabled) to ensure device hygiene. Extended periods of remote work with infrequent IT oversight and limited network security controls cause device hygiene and security posture to deteriorate. Dubbed “device decay,” this deterioration exposes devices to vulnerabilities and threats, and translates into an increased attack surface for malicious actors to target.

As agencies prepare to reopen after months of low office occupancy, devices with degraded security posture can pose a serious risk to agency networks. They provide an entry point for threat actors looking to infiltrate agency networks, exfiltrate sensitive information or wreak havoc on day-to-day operations. This comes at a time of massive increases in cyberattacks, with the FBI alone handling more than 4,000 cybercrime incidents per day, a four-fold jump from pre-pandemic days.

Device decay manifests itself in different ways across different cohorts of devices:

  • Employee agency devices that started with generally good security posture in pre-pandemic days and have degraded over time – broken agents, missing security patches, unauthorized applications, and configuration drift.
  • New devices, often consumer-grade laptops, that got added into the work ecosystem during the pandemic without gold master images, and that never had the same stringent levels of device hygiene.
  • In-office or remote devices that were switched off because they weren’t needed during the work-from-home phase and haven’t been kept up to date with the latest security patches.
  • Always-on IoT and OT devices such as physical security systems, conference room smart TVs and HVAC systems that have remained idled/unused and gone unattended by IT, with potential exposure to vulnerabilities discovered in multiple TCP/IP stacks used by hundreds of vendors and billions of devices. These devices will take a long time to be patched, if they can be patched at all.

The following best practices can fortify agency network defenses to prepare for returning workers and their devices.

  1. Implement real-time inventory procedures. Managing risk starts with a continuous and accurate inventory process. You need to ensure you have full visibility and detailed insight into all devices on your network, and that you’re able to monitor their state and network interactions in real time.
  2. Assess and remediate all connecting devices. Set up a system to inspect all connecting devices, fix security issues, and continuously monitor for potential device hygiene decay. While many users are still out of the office, use this time to get a head start. First check the idled and always-on in-office systems to ensure they have the latest software releases and security patches installed and running. Assess them for vulnerabilities disclosed while they remained dormant. As degraded and non-compliant devices return to the office, initiate remediation workflows in concert with your security and IT systems.
  3. Automate zero trust policy. Adapt your zero trust policies to include device hygiene, and fix security issues such as broken security agents, unauthorized apps, and missing patches before provisioning least privilege access (see the sketch after this list). Segment and contain non-compliant, vulnerable, and high-risk devices to limit their access until they’re remediated.
  4. Continuously monitor and track. As devices start returning to the office, many will continue to be away for extended periods. Continuously monitor all devices while they’re on your network, maintain visibility into their state while off-network, and reassess their hygiene after extended absences. Constant vigilance will allow you to adjust your approach based on the volumes and types of devices connecting to your network and the issues and risks that appear over time.
  5. Train/equip staff to help protect your network. Finally, you should ensure that these security measures are properly reflected in official agency policies. Employees should know the basics such as avoiding the use of unauthorized apps and keeping their devices up to date, so they can assist with combating device decay and help maintain high levels of device and network hygiene.
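
As a rough illustration of steps 1 through 3, the Python sketch below (hypothetical device attributes and segment names, not a specific NAC or endpoint product) classifies a returning device by its hygiene and assigns it a network segment until it is remediated.

    # A minimal sketch of a hygiene gate for returning devices.
    # Field names, thresholds, and segment names are illustrative assumptions.
    from datetime import datetime, timedelta

    MAX_PATCH_AGE = timedelta(days=30)

    def classify_device(device: dict) -> str:
        """Return the network segment a returning device should land in."""
        issues = []
        if not device.get("edr_agent_running", False):
            issues.append("broken or missing security agent")
        if datetime.utcnow() - device["last_patched"] > MAX_PATCH_AGE:
            issues.append("missing security patches")
        if device.get("unauthorized_apps"):
            issues.append("unauthorized applications present")

        if not issues:
            return "production"          # compliant: least-privilege access
        if "broken or missing security agent" in issues:
            return "quarantine"          # contain until the agent is restored
        return "remediation"             # limited segment: patch, clean, re-check

    laptop = {
        "edr_agent_running": True,
        "last_patched": datetime.utcnow() - timedelta(days=95),
        "unauthorized_apps": ["personal-vpn"],
    }
    print(classify_device(laptop))  # -> "remediation"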

Managing device decay is not a one-time activity. In the new normal, hybrid work practices will be implemented differently by different agencies and will also vary by groups within agencies. What will be constant across all these work practices is that devices will remain away from the office for extended periods before returning and reconnecting, and will be prone to device decay while they are away.

How PAM Can Protect Feds From Third Party/Service Account Cyber Attacks

For decades, Federal chief information security officers (CISOs) focused on protecting a traditional perimeter and the users within. Today, however, they recognize that there is a seemingly endless number of third-party partner, vendor, and customer accounts, as well as service accounts – accounts that either are not directly tied to employees or are non-human – any of which could result in compromise.

They need look no further than Russia’s massive hack of SolarWinds software – which led to the accessing of emails at the U.S. Treasury, Justice, and Commerce Departments, among others – for an Exhibit A illustration of how vulnerable an agency’s entire cyber ecosystem can be, as opposed to strictly its internal digital assets and users.

That expanded security perspective proves necessary due to modern mission requirements and the resources needed to achieve them: Within an agency, multiple external parties and service accounts support every server and system. Constantly monitoring and routinely auditing it all is extremely complex, challenging, and tedious. Hackers are well aware of the situation, and target both third-party partners (i.e., the “people” part of this equation) and service accounts (the non-human, technical component) as lucrative weak links:

The U.S. government is reporting more than 28,500 cybersecurity incidents a year, and 45 percent of breaches result from indirect attacks, according to research from Accenture. It should come as no surprise then that 85 percent of security executives say their organization needs to think beyond defending the enterprise and take steps to protect their entire ecosystem.

“Organizations should look beyond their four walls to protect their operational ecosystems and supply chains,” according to the Accenture report that published the research. “As soon as one breach avenue is foiled, attackers are quick to find other means,” it says.

When asked to assess various technologies and methods, these executives ranked privileged access management (PAM) as one of the top approaches in reducing successful attacks, minimizing breach impact, and shrinking the attack surface. With the defense industrial base (DIB) and perhaps other Federal agencies seeking to adopt Cybersecurity Maturity Model Certification (CMMC) standards as part of their overall strategy, PAM has emerged as a highly effective means toward this goal.

As defined by Gartner, PAM solutions manage and control privileged accounts by isolating, monitoring, recording, and auditing these account sessions, commands, and actions. Third parties and service accounts cannot do their jobs a majority of the time without elevated privileges for access – thus making them a de facto part of the agency enterprise. While such arrangements play an indispensable role in terms of mission performance, productivity, and efficiency, they also expand the attack surface. That’s why CISOs must strongly consider PAM as part of their third-party/service account security strategy, to establish the following capabilities:

Comprehensive auditing. PAM ensures that all service account and privileged activity is audited. You record every session and watch it for anomalous and potentially suspicious interactions/patterns, just as if you were watching a movie.

Reduction of credential exposure. Without PAM, contractors will typically be provided elevated credentials to access a network area or database which is relevant to the task at hand. In the process, they may jot down on a piece of paper “Admin 123” to use as a password, or store it in some other insecure fashion. But these practices increase the risk of threats, especially if the password is weak and/or never changes. The SolarWinds attack was linked to password mismanagement. Through PAM, contractors instead log into a bastion host, which is a secured intermediary proxy, using standard user privileges, and then a connection is brokered without exposing the elevated credentials to the user.
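
A rough sketch of that brokered-access pattern appears below. The vault and bastion classes are purely illustrative stand-ins, not any particular PAM product's API; the point is that the privileged secret stays server-side and the contractor only ever receives a session handle.

    # A conceptual sketch of brokered privileged access through a bastion.
    # Class names, account names, and targets are hypothetical.
    import secrets

    class CredentialVault:
        """Stands in for the PAM vault that holds privileged account secrets."""
        def __init__(self):
            self._secrets = {"db-admin": secrets.token_urlsafe(24)}

        def checkout(self, account: str) -> str:
            # A real vault would also log and time-limit the checkout.
            return self._secrets[account]

    class BastionHost:
        def __init__(self, vault: CredentialVault):
            self.vault = vault

        def broker_session(self, user: str, target: str, account: str) -> dict:
            privileged_secret = self.vault.checkout(account)
            # The secret is used here, server-side, to open the upstream
            # connection; only a session handle is returned to the user.
            assert privileged_secret  # placeholder for the actual upstream login
            return {"user": user, "target": target, "session_id": secrets.token_hex(8)}

    session = BastionHost(CredentialVault()).broker_session(
        user="contractor-jdoe", target="hr-database", account="db-admin")
    print(session)  # contains a session id, but no privileged password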

Automation of password rotation. This is particularly relevant for the non-human service accounts. When a service account contacts an internal database server, for example, it will use a password to gain access. But the password often remains static – something a CISO has to address. Doing so manually, however, is logistically impractical if not impossible. PAM tools will automatically rotate passwords, as frequently as deemed necessary, sometimes even on a per-usage/session basis.
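
Conceptually, the rotation itself is simple; the hard part is doing it everywhere, automatically. Here is a minimal Python sketch with an illustrative service account name and an in-memory stand-in for the vault.

    # A minimal sketch of per-use password rotation for a service account.
    # The vault dict and account name are illustrative stand-ins.
    import secrets

    def rotate_password(vault: dict, account: str) -> str:
        """Generate and store a fresh secret for the account; return it once."""
        new_secret = secrets.token_urlsafe(32)
        vault[account] = new_secret   # a real tool also pushes it to the target system
        return new_secret

    vault = {}
    # Rotate on every use/session so a captured password goes stale almost immediately.
    for session in range(3):
        password = rotate_password(vault, "svc-reporting-db")
        # ... the service account connects to the database with `password` ...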

It’s clear that the government can’t accomplish its mission goals without the support of third-party partners and service accounts, just as they rely upon the talents and capabilities of their own employees and internal cyber resources. But CISOs can’t ignore the risk potential of the external entities which routinely gain access to their networks and digital assets. Through PAM, they ensure every interaction is tracked and audited, while significantly strengthening password management. As a result, they greatly improve the chances that their agency won’t end up as an Exhibit A illustration of what not to do to prevent a compromise.

Identifying Cyber Blind Spots Vital to Zero Trust Progress

The old adage “consistency is key” rings especially true for Federal cybersecurity operations centers (CSOCs) today. Agencies that pay close attention to their operations centers but lack visibility into and control of cybersecurity blind spots – specifically applications and workloads – are ripe for attack.

In conducting risk management assessments of 96 agencies, the Office of Management and Budget (OMB) concluded that 71 percent were either “at-risk” or at “high risk,” according to the OMB’s 2018 Federal Cybersecurity Risk Determination Report and Action Plan. OMB indicated that a lack of visibility was creating many of the problems, as only 27 percent of agencies reported that they can detect and investigate attempts to access large volumes of data in their networks. This lack of visibility can have critical consequences for agencies long term.

Take the recent SolarWinds attack as an example. Russia-backed actors injected malware into software updates provided by the vendor, affecting up to an estimated 18,000 organizations. The malware was able to infiltrate so many organizations by moving laterally within their systems, thereby avoiding detection for months. The attack demonstrated the dangers of a lack of visibility and control within companies and agencies, and led to increased interest in the Zero Trust security philosophy. How can you do something about your attacker if you can’t see them coming?

You Can’t Secure What You Can’t See

Increased visibility into security operations centers is no longer simply “good practice.”

Traditionally, agencies have been hyper-focused on threat intelligence to monitor for external attacks, but attacks like SolarWinds have demonstrated the importance of internal data-driven visibility. Visibility into how workloads and applications connect helps agencies determine what traffic should be allowed, and what is unnecessary (i.e., a risk).

Visibility is the first step toward protecting data centers – it’s a critical component in stopping unnecessary and nefarious movement. Agencies can monitor their environment with software that shows a real-time application dependency map to help visualize communications between workloads and applications.

With this kind of visibility, you can define which connections need to be trusted, and deny the rest – this approach contains and constrains adversaries automatically. It’s this approach, trusting only what’s absolutely necessary and blocking the rest by default, that is most fundamental for agencies’ security. This approach is what we call Zero Trust.
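
As a simple illustration of that default-deny model, the Python sketch below (the workload names are made up) encodes the flows an application dependency map revealed and denies anything not on the list.

    # A minimal sketch of default-deny segmentation policy.
    # Workload names and ports are illustrative assumptions.
    ALLOWED_FLOWS = {
        ("web-frontend", "app-server", 8443),
        ("app-server", "payroll-db", 5432),
        ("app-server", "audit-log", 514),
    }

    def is_permitted(src: str, dst: str, port: int) -> bool:
        """Zero Trust default: only explicitly mapped flows are allowed."""
        return (src, dst, port) in ALLOWED_FLOWS

    # Lateral movement attempt from a compromised web tier straight to the database:
    print(is_permitted("web-frontend", "payroll-db", 5432))  # False -> denied
    print(is_permitted("app-server", "payroll-db", 5432))    # True  -> allowed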

Zero Trust Has Your Back

Zero Trust has recently become the focus for Federal agencies, and for good reason. Acting Department of Defense CIO John Sherman outlined the importance of the philosophy, saying, “One of my key areas is to really increase our focus on Zero Trust and to maintain our strong focus on cyber hygiene and cyber accountability.” Zero Trust accounts for your blind spots and is marked by a series of unique characteristics:

  • Assume the network is always hostile;
  • External and internal threats exist on the network at all times;
  • Locality is not sufficient for deciding trust in a network;
  • Every device, user, and network flow must be authenticated and authorized; and
  • Security policies must be dynamic and determined from as many data sources as possible.

Many Zero Trust concepts are an evolution of established best practices, such as least privilege, defense-in-depth, and assume breach. Federal organizations have reached a tipping point in security, where yesterday’s best practices alone are not enough to shore up the defenses against a siege of external adversaries. With a Zero Trust architecture, agencies can contain and mitigate cyber risk effectively.

We All Have Blind Spots – It’s What We Do About Them That Matters

Accounting for cybersecurity blind spots means increasing visibility, embracing Zero Trust, and, specifically, segmenting your environment to limit the impact of a breach. Zero Trust Segmentation reduces the attack surface, making it more difficult for bad actors to move around the network. Granular segmentation makes it easier to protect agencies’ most sensitive data because Zero Trust Segmentation creates a cloaked ring-fence around applications and workloads, essentially making them invisible to a would-be attacker.

Avoiding cybersecurity blind spots doesn’t need to be a shot in the dark. Building and implementing a Zero Trust architecture will ensure agencies maintain the vital security measures necessary to secure high-value assets. In a world where breaches are a certainty, a Zero Trust approach prevents a minor cyber incident from becoming a real-world disaster.

Biden FY2022 Budget – Breaking Down the PMA

Earlier this month, in a May 6 column, I offered up a President’s Management Agenda framework – PMA 46 – for the Biden-Harris Administration as we awaited the full FY 2022 budget proposal, which was publicly released today.

While the so-called “skinny budget” released in April outlined plans for the discretionary part of next year’s budget, it left out a number of specifics, among them the Analytical Perspectives volume in which one would normally find policy initiatives such as a chapter on serving citizens, streamlining government, and modernizing technology – in other words, what we have come to call the President’s Management Agenda.

Undeterred, I pressed on, drawing on speeches, policy papers, the campaign platform, testimony in confirmation hearings, as well as what was proposed for funding in the budget outline. At that time I proposed what I thought would be several major tenets of the Biden PMA:

  • Continuing initiatives found in previous Administrations’ reform programs — acquisition reform (with a focus on agility), performance measurement, financial management, shared services, customer satisfaction, and citizen services;
  • “Management” issues mentioned in the Acting Director of the Office of Management and Budget’s April 9, 2021 transmittal letter, to include “Made in America” and “green” initiatives such as clean energy technologies, opportunities for small and minority businesses, civil rights and diversity, and bolstering Federal cybersecurity;
  • Innovation – to include key emerging technologies like quantum computing and artificial intelligence;
  • Technology Modernization to support agencies as they modernize, strengthen and secure antiquated information systems. This was reflected not only in additional dollars for the government-wide Technology Modernization Fund but also in specific efforts at Veterans Affairs, the Internal Revenue Service, and the Social Security Administration;
  • Human Capital, with the expectation of new initiatives as well as efforts to undo a number of actions taken by the Trump Administration; and
  • Advancing a vision for a 21st Century government that is focused on improving outcomes using data and evidence, re-establishing trust, re-imagining service delivery, evaluating programs, and recruiting and retaining new talent with technical skills in critical and emerging technology areas.

As the weeks have passed, I have found reasons to be confident as well as reasons to be concerned. In just the past few days, new Federal CIO Clare Martorana has been on the circuit and laid out a technology agenda that fits nicely within the framework I suggested. Her ambitious agenda for her office and the Federal CIO Council includes innovation, technology modernization, cybersecurity, citizen services, interoperability and collaboration tools, an updated Federal Data Strategy, and telework. Perhaps even more significantly, she has spoken about overcoming resistance to change, noting that innovating involves taking risks, and that means tolerating failure and looking to long-term reform as well as short-term successes.

But the administration’s management team has a number of key roles still open. Most notable is a Director for OMB, but also still vacant are such key jobs in that agency as Chief Financial Officer, head of the Office of Information and Regulatory Affairs, and Administrator of the Office of Federal Procurement Policy, not to mention the Director of the Office of Personnel Management, the Administrator of the General Services Administration, and a number of agency chief operating officers. At the current pace, we may be well into the Fall before we have the complete array of management leaders installed across the whole of government.

The complete budget was released just today – May 28 – quite late even for a new Administration, but understandable given the controversy over the election results and the delay in getting transition teams into place.

The Analytical Perspectives volume, where one would usually find a PMA, includes a chapter on “Management,” which is largely devoted to strengthening and rebuilding the workforce and human resources matters such as trends, pay, and benefits.

It also includes a chapter on “Information Technology and Cybersecurity.” That section presents somewhat more detail on the initiatives previously announced and somewhat more granular detail on the funding allocations to individual civilian agencies for IT (a breakout for the Department of Defense will appear separately) as well as the proposed budget for the US Digital Service.

I found it significant that in the main 72-page budget document – along with a section on spending on The Pandemic and the Economy and a to-be-expected lengthy chapter on Biden’s Building Back Better initiative – was a separate six-page section entitled Delivering Results for All Americans Through Equitable, Effective and Accountable Government.  Management does matter and makes the big time!

Inclusion in this key volume does reflect the administration’s “recommitting to good government” as essential to “promoting public trust in government.” Mentioned in passing is the phrase “as the PMA takes shape,” which I read to mean expect more as other officials are nominated and confirmed.

Acquisition also gets a nod here, with a pledge to create a “modern and diverse Federal acquisition system” – joining the almost 200 studies and procurement reform commissions that have been conducted over the last 30-plus years to do this very thing.  The President’s Budget Message, which opens the transmittal, ends with this: “The Budget … will demonstrate to the American people … that their Government is able to deliver for them again.”

Overall, the Biden Management Agenda creates the “steadiness in administration” – as I mentioned previously – that is essential to bring about management change and reform in a Fortune One company, our massive Federal government.  It emphasizes the elements that are driving change in the private sector – Technology, Innovation, Diversity, and Evidence (TIDE).

Now the White House needs to get a full team on the field to execute against this set of goals. Going “big” with policy, going “big” with spending, and going “big” with speeches and promises is all good and inspiring. But managing, executing, and delivering against that policy agenda will be key to both political success and how history judges this presidency.

So how did I do?  I am known for my modesty and understated excellence, so I can’t profess to be a 2021 Carnac the Magnificent (NOTE: Those under 55, please Google Johnny Carson), the great seer, soothsayer, and sage. But I would give myself a solid “B.”  And to those who may differ, I say “may the bird of paradise fly up your nose”.

The President’s Management Agenda – Biden Going Big Again?

The Biden administration – less than four months after taking office – has thus far developed a reputation for going “big” with its policy and spending aims.

What unites some of those big efforts? How about the word “trillions” – for the American Rescue Plan (approved at $1.9 trillion), the American Jobs Plan (proposed at $2.3 trillion), the American Families Plan (offered at $1.8 trillion), a preliminary Fiscal Year 2022 budget (dangled at $1.5 trillion), and who knows how much in taxes required to pay for them.

At this point, “small” is not the new administration’s watchword.

But very soon – within weeks I suspect – we will be learning much more about some of the nitty-gritty details underlying the administration’s big-picture visions that will help paint a much clearer picture of what the government hopes to accomplish over the next four years.

Here Comes the PMA

As I noted in my last column, we still await President Biden’s complete and full FY 2022 budget proposal. The so-called “skinny budget” released last month outlines his plans for the discretionary part of next year’s budget. But it doesn’t include the almost two-thirds of the budget dedicated to mandatory programs, nor does it feature revenue forecasts or other accompanying documents, like the Analytical Perspectives volume.

It is in this latter document that one would normally find a crucial chapter on the President’s Management Agenda (PMA). Why is the Biden PMA important?

A number of years ago in a special forum I assembled for “The Public Manager” journal, Prof. Donald Kettl, then of the University of Maryland, spoke to the very heart of that question:

“No self-respecting president can enter office without a management plan,” he said. “Not that ordinary Americans expect it; most know little and care less about who delivers their public services and how. A management plan, however, conveys important signals to key players. The federal executive branch’s 2.6 million employees look for clues about where their new boss will take them. Private consultants tune their radar in search of new opportunities. Most important, those who follow the broad strategies of government management seek to divine how the new president will approach the job of chief executive, where priorities will lie, and what tactics the president will follow in pursuing them. Management matters; with each new administration, the fresh question is how.”

PMA Predictions

I’d like to offer a President’s Management Agenda framework – PMA 46 – for the Biden-Harris Administration. I hope his new management team at the Office of Management and Budget (OMB), the Office of Personnel Management (OPM), the General Services Administration (GSA), and Federal agency chief operating officers will find it useful.

It draws from President Biden’s speeches, policy papers, and platform as well as the testimony of key advisors during recent confirmation hearings. Also, if one views the budget in part as the “pricing out” of priorities, it is possible to “backward map” from even the “skinny budget” to major elements of a PMA.

Here are some of the big tenets:

Reforms

The private sector knows that reforms take years, but for a long time the public sector has had trouble grasping that lesson.

Historically, a new President would sweep away whatever his predecessor had done and develop an entirely new package. That’s what George W. Bush did with Vice President Al Gore’s reinventing government campaign. It was also what President Bill Clinton did with President George H. W. Bush’s total quality management initiative.

But in recent years, incoming leaders seem to have recognized that in Federal management reform, there is truly nothing new under the sun, and many old promises merit a second chance.

So for the Biden PMA, expect a “round up of the usual subjects” – acquisition reform (likely with more agility), performance measurement, financial management, shared services, customer satisfaction, and citizen services. And that is to be commended, as it creates the “steadiness in administration” that Alexander Hamilton described as essential to a government well-executed (see Professor Paul Light’s excellent volume “A Government Ill Executed,” 2008).

Likely Picks

In the April 9, 2021 letter transmitting the President’s request for FY22 discretionary funding, OMB Acting Director Shalanda Young does note several “management” issues in the summary section.

No big surprises there, but all can be expected to reappear in some form in the PMA: “Made in America”; “green” initiatives such as clean energy technologies; opportunities for small and minority businesses; civil rights and diversity; and bolstering Federal cybersecurity.

Two others are noted as well, but the associated funding requests are so significant and widespread across multiple departments and agencies that they deserve special mention.

Innovation

The first is innovation, to include key emerging technologies like quantum computing and artificial intelligence, as well as supporting research and development.

On the innovation front, the budget requests additional funding for existing programs or the establishment of new programs at the National Institutes of Health, the Departments of Energy, Commerce, and Defense, the National Oceanic and Atmospheric Administration, the National Institute of Standards and Technology, the National Telecommunications and Information Administration, NASA, and other departments and agencies.

While these investments focus on competitiveness and economic growth, they also reflect a restoration of faith in the Federal government’s ability to tackle difficult problems.

Technology Modernization

The second is technology modernization. Again, as under President Trump, information technology is not viewed as a standalone management pillar. Rather it is viewed as a force multiplier and enabler for other key priorities such as enhanced citizen services, data analytics, and so on.

The discretionary request supports agencies as they modernize, strengthen and secure antiquated information systems both in additional funding for the Technology Modernization Fund and through $750 million as a reserve for agency IT enhancements. But it also includes specific modernization efforts at Veterans Affairs, the Internal Revenue Service, and the Social Security Administration.

Human Capital

The human capital component – to include hiring reform, the role of Federal unions, pay and benefits, performance appraisal, integrity of the civil service, and so on – clearly will be a major element of the Biden PMA.

In this area, perhaps more than in any other, the emphasis will be on undoing a number of actions taken by the Trump Administration, as well as formulating a Biden-Harris human resources agenda.

21st Century Vision

Finally – and by no means do I mean to minimize their import by employing a single summary bullet – I would expect to see steps taken to advance a vision of a 21st Century government which is focused on improving outcomes using data and evidence, re-establishing trust, re-imagining service delivery, evaluating programs, and recruiting and retaining new talent with technical skills in critical and emerging technology areas.

We should be able to see how accurate I am in predicting the contents of PMA 46 in less than a month or so.

MeriTalk Insight: TMF Dreaming – and How to Fund IT Fixes Right Now

A billion of anything isn’t quite what it used to be, but it’s still a lot. And when that billion is dollars for the Technology Modernization Fund (TMF) – a great vehicle that has been underutilized because of low funding levels and strict repayment rules – it may yet end up being a real difference-maker across many government agencies looking at IT modernization.

But there’s a ways to go before that new $1 billion of modernization funding becomes available – much less more attractive – to agencies. Also worth keeping in mind: there are 20 or so big Federal agencies, and a few dozen smaller ones, that may be competing for that source of money. So while the new money is great, the math implies that even the enlarged TMF might not turn out to be a winning lottery ticket for any one organization.

With that in mind, let the Old Budget Guy talk about a couple of things agencies can do right now to start funding IT modernization that don’t rely on winning the TMF sweepstakes. And how government might consider spending not just on IT, but also on more streamlined structures, to pull itself kicking and screaming into the 21st century.

Birth of TMF

First, a little recent history necessary to understand where we are now.

The TMF was established back in 2017 as part of the Modernizing Government Technology (MGT) Act after senior officials at the Office of Management and Budget (OMB), the Government Accountability Office (GAO), and others asked why we were letting our IT infrastructure fall to pieces.

Then-Federal CIO Tony Scott called the government’s reliance on outdated technology a “crisis” to rival the Y2K computer glitch. GAO issued reports and testified about agencies that were (and still are) running tens of millions of lines of long-deprecated software code such as COBOL and assembly languages, and about the aging infrastructure itself – switches, routers, servers, desktops, mainframes, etc.

Research performed by a major infrastructure company found that a substantial portion of the government’s IT hardware had already reached LDoS (Last Day of Support), which means it was not receiving updates, security alerts, or patches. An ever greater portion of that infrastructure was projected to reach that same stage in ensuing years.

While the TMF was established to help deal with these problems, initial funding was quite small compared to the problems that need addressing – increased security risks and vulnerability to cyber-attacks; the inability of outmoded systems to support growing demands for greater mobility, collaboration, and analytics; and especially the truly catastrophic blow that a breakdown in crucial technology could be to the business of government.

IT infrastructure – such dull words. But an issue that touches almost everything about how government works – and could work better if given the chance.

The New Billion

Then came President Biden’s American Rescue Plan, and its allocation of $1 billion to the TMF, which in the past three years had received annual appropriations averaging $25 million per year. On top of that, in the so-called “skinny budget” outlining the administration’s FY 2022 funding proposal, the President asked for another $500 million for the fund.

OMB and the General Services Administration (GSA) have, over the last three years or so, developed and matured their processes for TMF business case development, acquisition and oversight. But in that span with limited funding, the TMF has only overseen 11 projects totaling about $125 million.

So the new funding – with possibly more on the way – is a wonderful “problem” to face.

And the temptation – already finding its voice in Congress – is to reach for the “EASY button” and relax or eliminate TMF’s self-sustaining model of requiring agencies awarded funding to modernize their IT infrastructure and pay back the fund within five years.

Making Working Capital Work

While we’re waiting for the TMF funding to shake out, the Old Budget Guy will explain another way for agencies to get money to start skinning that modernization cat.

The legislation that initially created the TMF offered agencies another option to begin to change the imbalance in funding dedicated to IT Operations and Maintenance (O&M), so that more could be invested in IT Development, Modernization and Enhancement (DME).

Crucially, the law authorized the establishment of IT Working Capital Funds (WCF) so that agencies could make more intelligent decisions with their money, and not indulge in end-of-year, use-it-or-lose-it spending splurges. Discouragingly, data from budget and acquisition analysts shows that nothing has changed in final-quarter and final-month buying binges.

According to the data, many agencies obligate up to 40 percent of their contract dollars in July, August, and September – the last quarter of the fiscal year. And the data show that in the final WEEK of the fiscal year, between $1 billion and $2 billion is spent on IT purchases each DAY.

It may seem hard to believe, but only one agency (and a smaller one at that), the Small Business Administration, has taken advantage of the working capital fund authority granted by this law. Before any agency steps in line now to take advantage of the new TMF buffet, I would have it commit to (1) establishing its own IT WCF and (2) outlining the steps it would take in its own annual budget formulation process to steer investments from O&M to DME.

Reimbursement Outlook

The text of the American Rescue Plan, signed into law on March 11, doesn’t include a reimbursement requirement for the new TMF funding.

Industry representatives, former government officials, and even some current OMB executives have spoken – mostly off the record – about how the existing reimbursement model discourages agencies from participating in the fund, arguing that “not all projects have easily quantifiable results.” In this case, “easily quantifiable results” means claimed savings that are real and can be used to pay back the TMF loan.

Having served in government for years, I know some things are hard to do. But if a lot of things – like improved efficiency and effectiveness, cost avoidances, funds put to better use, increased citizen satisfaction, and so on – were all real, then we likely wouldn’t be facing a massive deficit and a new low in trust in government.

Measuring success, finding performance metrics for proposals that don’t have easily quantifiable outcomes, demonstrating results from major investments – all those things are hard. They are likely as hard or even harder than setting up a new WCF, redefining an existing one, or curbing late-September buying sprees.

But we should do the right things, in spite of the fact they may be hard. Moreover, those investments that can’t clearly demonstrate results and benefits will only draw political fire that could undermine the very worthy TMF, and be subject to congressional rescission after the 2022 midterm election.

Spending Ideas

Plenty of ideas have bubbled up on where new TMF money could be invested. Examples include citizen services, remote work, cybersecurity, cloud adoption, COVID-19, racial inequity, climate change, and other areas that are already the focus of the American Rescue Plan, the FY22 budget request, and the administration’s recent infrastructure/jobs proposal.

Rather than double down on existing Biden-Harris priorities, I would propose two alternatives.

The first is streamlining government. Both Presidents Obama and Trump proposed reorganizations that would lead to a 21st century government by using 1950s or 1960s administrative methods (e.g., merging the Departments of Education and Labor, relocating offices and personnel from one area to another, creating a new statistical agency by uniting bureaus drawn from multiple existing departments, etc.). Why not invest in virtual restructurings where integration, collaboration, and citizen services could occur in a real 21st century manner?

The second is shared services, an initiative that initially surfaced as a part of President Reagan’s Reform ’88 program and has been endorsed in almost every President’s Management Agenda since then. It is the poster child for the government adage that “when all is said and done, more is said than is done.” Departments have cited the lack of funding as the reason they couldn’t take action. Let’s take away that excuse.

I was an early supporter of the TMF and remain supportive now. Robust funding provides the opportunity to rethink how we fix the government’s aging IT infrastructure. That is at the heart of the problem now. IT infrastructure policy has become a matter of lurching from crisis to crisis, solving problems after the fact rather than preventing them from happening. It’s time to stop being short-term fix addicts, and start really taking the longer view.

MeriTalk Insight: Biden Budget Offers Early Blueprint … With CR in the Forecast

President Biden on April 9 released a massive $1.52 trillion fiscal year 2022 spending plan that reflects his vision of an expanded – and expansive – Federal government that boosts spending for domestic programs and addresses issues such as education, affordable housing, public health, racial inequality, and climate change, among many others.

In the big picture, non-defense spending would rise next year to $769.4 billion, an increase of nearly 16 percent, while spending on defense would increase 1.7 percent to $753 billion. The latter is considerably less than Republicans have called for, but is more than would have been desired by progressive Democrats who pushed for a flat Pentagon budget, or even spending cuts.

Skinny on Details

The preliminary plan has been dubbed a “skinny budget” for a couple of reasons. While it indicates this administration’s priorities, it doesn’t do so in detail, and it covers only the discretionary portion of total Federal spending.

It will be followed later this Spring (timing details unclear at this time) by a full budget proposal that includes mandatory spending programs such as Social Security and Medicare, as well as interest on the national debt. These latter categories comprise roughly two-thirds of total government expenditures. Missing too are all the appendices that would detail tax increases and the impact over the next decade on deficits, debt, and the nation’s economy.

Also missing is the Analytical Perspectives volume, which is where one would normally find a Biden-Harris President’s Management Agenda (PMA) chapter. The elements of a PMA are hinted at – more about that in a future column. But for those in the “good government” community, as well as government contractors who want to see what may stay or go in areas like human resources, information technology (IT), financial management, acquisition, and the like, the suspense only grows.

MeriTalk and others have produced detailed coverage of the IT, innovation, and cybersecurity elements of the President’s skinny budget request.

Also worth noting are proposed increases for science, technology, and research at the National Institute of Standards and Technology and the National Institutes of Health, supply chain security for IT and 5G, the National Science Foundation, rural broadband at USDA, the National Telecommunications and Information Administration at Commerce, NASA, and SBA’s Small Business Innovation Research and Technology Transfer programs. Thus, there is joy and celebration in the Federal IT and professional services community! But let’s keep the champagne on ice for a little bit longer.

Curb Your Enthusiasm

I will leave it to the national media political pundits to debate “big picture” concerns that have already surfaced with the budget proposal, but some are plain to see.

Count among those the immediate, starkly negative reaction from GOP legislators, bipartisan unease about flat Defense spending amid growing concern about Russia and China, Republicans’ reincarnated worry about the Federal deficit, progressives’ push for even more investments in civilian agencies and deeper cuts at DoD, and the very thin margins Democrats hold in both the House and Senate. And on the process front, exactly how many times will Congress be allowed to go to the “reconciliation well” and avoid needing 60 votes for passage?

Here are some of my concerns as OBG – the Old Budget Guy (aka while in government “The Abominable No Man”).

Time Keeps on Slipping

The major concern I have immediately is that we are way behind in the long path to a Federal budget. While budget laws contain formal deadlines, the process follows a loose calendar where slippage and overlap frequently occur. This year, the slippage is greater than usual.

A President normally submits his/her budget proposal to Congress in February. In March and April, Congress hashes out a budget resolution that provides specific guidance to both authorization and appropriations committees so they can spend the summer months holding hearings, reviewing the details of the administration’s requests, and crafting separate bills to fund the government for the next fiscal year that begins October 1.

Now, of course, the last time anything worked like this was in the late 1990s, when Bill Clinton was President. That was the last time Congress passed and the President signed appropriations bills that funded the entire government, and that we didn’t enter the new fiscal year under a full or partial continuing resolution (CR).

Any new President is late in submitting a first budget. But President Biden is even later, owing to the dispute over the election results, the delay by the GSA Administrator in issuing a letter of ascertainment giving the presidential transition team access to resources and information, and the refusal by the former OMB Director to assist the Biden-Harris team in getting a start on developing an FY 2022 budget. Where is that Trump-Pence FY 2022 budget document anyway? That should be a real collectors’ item!

Bottom Line – Hello CR

So what are the chances that we start the new fiscal year in October with an approved budget? To quote my budget predecessor at Commerce: “Slim and none. And Slim just left town.” My advice: start planning today for a CR later this year.

And looking forward to President Biden’s FY 2023 budget request, which will come as we approach the 2022 congressional elections, what might be the budget atmosphere? I’ll say now, quoting again, “Worse than last year, but not as bad as next year.”

How a Key DoD Agency is Protecting Digital Identities

Digital identities are becoming increasingly important elements of today’s connected infrastructure across the public sector. With their importance amplified by the growth in remote working over the past year, protecting their integrity is key to securing critical IT systems and confidential government information.

But as the recent SolarWinds breach demonstrated, compromised identities and the manipulation of privileged access offer a pathway for cybercriminals to gain access to infrastructure and data, with wide-ranging and serious consequences.

With the SolarWinds incident widely described as a “watershed” for cybersecurity threats to the United States, it’s clear that many existing approaches to digital identity security are severely lacking. Indeed, Microsoft described the events of last December as a “moment of reckoning” requiring a “strong and global cybersecurity response.”

As a result, attention is now firmly focused on how government organizations can more effectively deliver secure and reliable Identity and Access Management (IAM). However, as the public sector accelerates efforts to digitally transform both internal and external infrastructure, services and access, digital identities are exposed to even further risk.

But what is IAM and why is it important? IAM is the discipline that enables the right individuals or non-human entities (machine identities) to access the right resources at the right times for the right reasons. In doing so, it addresses the mission-critical need to ensure appropriate access to resources across increasingly heterogeneous technology environments and meet increasingly rigorous compliance requirements.

With these key issues in play, momentum is already gathering in Washington for major legal and regulatory change to better protect government organizations and constituents alike. If passed, for example, the Improving Digital Identity Act of 2020 would direct the National Institute of Standards and Technology (NIST) to create new standards for digital identity verification services across government agencies.

While a proactive approach from government and from those responsible for designing and policing standards is key to a more secure future for digital identities, what’s also required are more rigorous, multi-layered cybersecurity strategies that don’t rely on a single solution for protection.

Specifically, as traditional network perimeters dissolve across government departments and beyond, the old model of “trust but verify” – which relies on well-defined boundaries – must be discarded. Instead, the default approach must focus on zero trust, or in other words, the “never trust, always verify, enforce least privilege” view of privileged access, from inside or outside the network.

Here, Privileged Access Management (PAM), a key component of IAM, can secure networks and prevent the kinds of identity-based cyberattacks we read about so often in the headlines. Forrester Research estimates that 80 percent of data breaches involve privileged credential abuse. If 2021 is to be remembered as the watershed moment for public sector cybersecurity in general, and for the protection of digital identities in particular, organizations must grant least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment.
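
In practice, that verification can be expressed as a simple policy check. The Python sketch below is illustrative only (the factor names are assumptions, not a product API), but it captures the idea of granting only the privilege warranted by the requester, the context, and the environment's risk.

    # A minimal sketch of "never trust, always verify, enforce least privilege".
    # Factor names and return values are illustrative assumptions.
    def grant_privileged_access(identity_verified: bool,
                                mfa_passed: bool,
                                request_context: str,   # e.g. "change-window", "ad-hoc"
                                environment_risk: str   # "low", "medium", "high"
                                ) -> str:
        if not (identity_verified and mfa_passed):
            return "deny"
        if environment_risk == "high":
            return "deny"
        if request_context == "change-window" and environment_risk == "low":
            return "grant-scoped"          # least privilege, time-boxed to the task
        return "grant-with-session-recording"

    print(grant_privileged_access(True, True, "ad-hoc", "medium"))
    # -> grant-with-session-recording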

But how does the application of PAM in government work in practice? The experience of an agency within the Department of Defense (DoD) offers some interesting insight.

Using Privileged Access Management in DoD

In the late 1990s, DoD adopted the Common Access Card (CAC), outfitted with a computer chip supporting public key infrastructure (PKI) credentials, as its standard identification credential. In 2005, DoD mandated use of the CAC for initial user workstation authentication across the entire network, as well as for web-based applications. While the use of a token dramatically increased the security of initial logins, privilege elevation by administrators was still accomplished with plain text usernames and passwords.

When an agency within DoD audited its process and found that privileged user authentication and privilege elevation were still being done with usernames and passwords – creating privilege sprawl across the department – alarm bells went off. U.S. Cyber Command issued a communications tasking order that identified the issue, described the actions required to address it, gave a deadline for completion, and began the process of implementing a reporting structure to ensure compliance.

One of the most critical requirements was to centralize all the account information associated with authentication. The team performed a survey of the market to identify potential vendors and solutions. After an evaluation of the few solutions that could meet their requirements – which included extensive functional and security testing both in the lab and in the agency's infrastructure – Centrify was selected based on functionality, maturity, and existing familiarity with the product.

Prior to its implementation, the agency had dozens of disparate identity repositories as well as local account stores in many systems, and an entirely separate infrastructure designed to support Linux servers. When someone wanted privilege on any one of those systems, a new account, username, and password were created.

Because administrators need access across multiple systems, the result was identity sprawl. Today, the department has made significant upgrades to its entire infrastructure, including an online, automated approach to privilege.

Agency employees are now provisioned into Active Directory once. If they require elevated privileges, they're provisioned and deprovisioned quickly and easily with minimal human intervention. While the main driver was security, automating PAM has resulted in considerable cost savings. It has replaced multiple accounts, usernames, and passwords with a single account and a single authentication methodology. Tasks can now be performed without the complexity, risk, and waiting time of the old process. That has simplified day-to-day operations and made access to the system much more transparent.
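The sketch below shows, in hypothetical form, what automated, time-bound privilege elevation against a single central identity store might look like. The group names, approval flag, and time window are assumptions for illustration only, not the agency's actual workflow.

```python
# Sketch of automated, time-bound privilege elevation against a single
# central directory. Group names, approval step, and window are hypothetical.
from datetime import datetime, timedelta

class Directory:
    """Stands in for a central identity store (e.g., a single directory forest)."""
    def __init__(self):
        self.group_members: dict[str, set[str]] = {}
        self.expirations: dict[tuple[str, str], datetime] = {}

    def add_to_group(self, user: str, group: str, hours: int) -> None:
        self.group_members.setdefault(group, set()).add(user)
        self.expirations[(user, group)] = datetime.utcnow() + timedelta(hours=hours)

    def remove_expired(self) -> None:
        now = datetime.utcnow()
        for (user, group), expires in list(self.expirations.items()):
            if expires <= now:
                self.group_members[group].discard(user)
                del self.expirations[(user, group)]

def elevate(directory: Directory, user: str, group: str, approved: bool) -> bool:
    """Grant elevated privileges only when approved, and only for a limited window."""
    if not approved:
        return False
    directory.add_to_group(user, group, hours=8)  # deprovisioned automatically later
    return True

# A scheduled job would call directory.remove_expired() periodically,
# so deprovisioning requires no human intervention.
```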

To protect the often confidential information housed by government entities and their mission-critical systems, digital identity security must be prioritized. While there have been credential-driven government agency breaches reported in the last year, it is positive to see key agencies within DoD taking action to combat the associated risks through a centralized identity and least privilege approach. Between this example and the NIST standards moving forward, hopefully more and more agencies will follow suit.

Federal IT Can’t Ignore Threats Posed by Disinformation

Disinformation is undoubtedly on more people's radars – Federal IT pros included – heading into 2021 and beyond. But just because we know more about it doesn't mean we are better prepared to face the challenge disinformation poses.

With a conventional cyberattack, the government is often targeted directly. Disinformation is different. Instead of attacking core infrastructure, bad actors or nation states attack the population by attempting to skew its beliefs.

Disinformation, a form of misinformation that is created specifically to manipulate or mislead people, is becoming more prevalent – in part because it’s easy to create and disperse. The tools behind deepfakes and malicious bots have been democratized, creation can now be automated, and disinformation-as-a-service has emerged. The Kremlin-backed Internet Research Agency, often referred to as a “troll farm,” sows disinformation everywhere – as do many nation states and domestic organizations.

From an agency perspective, the threat of disinformation has two key components. Nation states and bad actors are using disinformation to discredit agencies and target government employees. While Federal CIOs cannot tackle this problem alone, they can take some steps to mitigate the risk these threats pose. Let’s take a closer look at each component, and what Federal IT pros can do about it.

Retaining Agency Credibility

Nation states and bad actors can harm an agency without targeting it directly with a cyberattack. They could, for instance, impact the number of coronavirus vaccines administered by the Department of Veterans Affairs by using disinformation to sow distrust about vaccine effectiveness or safety among veterans. While Federal agencies cannot control the media or its message, they can run awareness campaigns to counter the threat of disinformation, while also creating certified FAQs and resource pages for constituents.

DHS’ Cybersecurity and Infrastructure Security Agency (CISA) has already been implementing this strategy, as seen with its disinformation toolkit specific to COVID-19. Along with other approaches, the toolkit highlights the most reliable sources for pandemic-related information. In 2019, CISA also released an evergreen infographic demonstrating how foreign influences stoke division between Americans through information campaigns.

While Federal CIOs alone cannot regain control of information in the internet age, agencies can consistently remind people that they represent a reputable source – and can be diligent in only driving constituents to other reputable sources. Agencies may even look to more traditional efforts, like marketing, in order to disseminate verified information to their constituents. Although this represents just one step of many needed to curb the threat of disinformation, it will help offset bad actors’ attempts to discredit agencies.

Educating and Protecting Employees  

Disinformation can also lead to insider threats. Social media and other sources of inaccurate information can radicalize employees, who may then feel compelled to steal sensitive data or IP. Just as disinformation is now for sale, insider-threat-as-a-service exists as well. While bad actors and nation states formerly attempted to bribe and extort their way to sensitive information, they can now either serve disinformation to existing employees, or ultimately become employees themselves.

To prepare for the former, agencies need to implement more education and disinformation training programs. Just as employees must complete training related to IT and HR, they should be required to take classes on recognizing disinformation. They should understand the techniques and procedures nation states will leverage to skew the public’s common belief system and know how to validate news sources. Employees need to be trained on the use of verifiable information fused with false information to alter narratives and discredit reliable sources. While IT pros may feel like such education is beyond their purview, it relates directly to insider threats. By helping employees validate sources, you’re actually protecting your data in the long run.

Additionally, in order to combat both types of insider threat, Federal agencies must be adept at continuous monitoring of user behavior. By having a baseline of normal user behavior, agencies will be able to determine if a radicalized employee is attempting to hoard data or access restricted information. The same can be said for someone who joined an agency with malicious intent. There is simply no way to completely eliminate the threat of disinformation and malicious insiders. Thus, Federal IT pros must put behavioral analytics in place so they can quickly identify and respond to potentially dangerous user behavior.
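As a rough illustration of the baseline idea, the sketch below flags a user whose data access jumps far above their own historical norm. Real behavioral-analytics platforms use far richer signals; the feature (files accessed per day) and threshold are assumptions.

```python
# Minimal sketch of baselining user behavior and flagging deviations.
# The feature and threshold are illustrative only.
from statistics import mean, stdev

def build_baseline(daily_file_counts: list[int]) -> tuple[float, float]:
    """Baseline = mean and standard deviation of historical daily access counts."""
    return mean(daily_file_counts), stdev(daily_file_counts)

def is_anomalous(today_count: int, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag if today's activity is more than k standard deviations above normal."""
    mu, sigma = baseline
    return today_count > mu + k * max(sigma, 1.0)

history = [12, 9, 15, 11, 14, 10, 13]      # a typical week for this user
baseline = build_baseline(history)
print(is_anomalous(420, baseline))          # True: possible data hoarding, alert the SOC
print(is_anomalous(16, baseline))           # False: within normal range
```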

Sowing Chaos and Confusion

In addition to the two key components previously mentioned, there is another role for the Federal government that may assist in limiting disinformation efforts. While not applicable to most agencies, a concerted effort to discredit foreign influencers may play a role in limiting their effectiveness.

Oftentimes, nation states leverage third parties such as organized crime in their efforts to sow confusion. If the U.S. government were to respond to disinformation campaigns with equal efforts against the parties launching them, it could undermine nation states' confidence in using those resources. In other words, if organized crime is sowing disinformation on behalf of a nation state, then a carefully orchestrated effort to discredit that group in the eyes of the sponsoring nation may limit its effectiveness and ability to operate offensively. This would have the effect of lowering the incentive to create these campaigns.

The Bottom Line

The tough reality is that, in an age of social media, there is no silver bullet to combat this real and growing threat. Everyone must be diligent about questioning what they see online, as opposed to simply taking it at face value and internalizing it as fact. Still, Federal IT pros should be most concerned about disinformation undermining their own credibility – and potentially turning their own employees against them.

Awareness is crucial to combating disinformation on both fronts, but it should be supplemented by behavioral analytics. Federal IT pros should proceed as if disinformation is already impacting both their employees and constituents – because it is. This is an all-hands-on-deck issue, but there are many ways to begin combating the threat of disinformation today.

Cloud-Based Collection of Quality Public Health Data in the Time of COVID-19

The United States was once a leader in the collection and utilization of public health data. As the COVID-19 pandemic wears on, the United States must resume its leadership in this domain.

During the pandemic, numerous lives were lost and still more people became chronically ill with “long-haul COVID” due in part to gaps in U.S. public health data systems. Shortcomings with these data systems directly underlay the need to shut down key sectors of the economy, causing millions of lost jobs and bankruptcies. The widespread shutdowns were necessitated by the lack of data on both community prevalence of the disease and COVID-19 immunity status, which in turn forced the need to treat wide geographic areas of the United States as under threat.

To address these gaps and ensure we are ready to meet future public health challenges, we require a centralized, cloud-based system for use in tracking infectious diseases and chronic conditions. We have named this system the Nationwide Reportable Conditions Data System (NRCDS).

At the outset, the NRCDS would be used for reporting data on the 121 “reportable” diseases and conditions such as COVID-19, influenza, mumps, and cancer. Currently, testing entities (such as Quest) and providers (such as doctors’ offices) are required to report data on these conditions to the nation’s 2,300 state and local public health agencies within 24 hours.

The myriad, disparate reporting locations create a tremendous burden for reporting entities that are simply trying to comply with a Federal reporting mandate. Additionally, these thousands of public health agencies largely host their reportable conditions data on-premises in legacy systems. Starting with the Modernizing Government Technology Act of 2017 (MGT), the Federal government has sought to drive government agencies away from such systems toward more efficient cloud-based operations.

The single, unified, cloud-based NRCDS would align with Federal initiatives like MGT and the Cloud Smart Strategy. Initially, NRCDS data would consist of the 31 data elements mandated by the CARES Act Section 18115 reportable-condition stipulations for COVID-19. However, the Centers for Disease Control (CDC) has expressed interest in expanding such a platform in the future to include the other 120 reportable conditions as well as further data streams including immunizations, Admission-Discharge-Transfer (ADT) events, electronic Case Reporting (eCR), and Electronic Health Record (EHR) data.
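To illustrate how a single reporting endpoint could replace thousands of separate submissions, here is a minimal sketch of posting one standardized record to a hypothetical cloud service. The endpoint URL and field names are invented for illustration and do not reproduce the actual CARES Act Section 18115 data elements.

```python
# Illustration of reporting a reportable-condition result to one cloud endpoint
# instead of thousands of separate agencies. Endpoint and fields are hypothetical.
import json
import urllib.request

report = {
    "condition": "COVID-19",
    "test_type": "PCR",
    "result": "positive",
    "specimen_collected": "2021-01-14",
    "reporting_entity": "Example Lab, Inc.",
    "patient_zip3": "191",        # coarse geography only in this sketch
}

def submit(record: dict, endpoint: str = "https://nrcds.example.gov/v1/reports") -> int:
    """POST one standardized record; returns the HTTP status code."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:   # would require network access
        return resp.status

# submit(report)  # one submission satisfies the 24-hour mandate in this model
```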

Expanding the NRCDS in the ways proposed by the CDC would enable the creation of a rich longitudinal record of infectious disease diagnoses, immunizations, EHR data about chronic health conditions, and healthcare utilization in the United States – COVID-19-related and otherwise. This data system would have countless crucial uses. For example, the monitoring of COVID-19 could take place at all geographical scales – a capability lacking during the current pandemic – closing an important information gap. Furthermore, for public health case management and education purposes, it could be used to monitor and assist individuals who were vulnerable to a given infectious disease.

We propose a second, cloud-based system that is ensconced within the NRCDS – which we have named the COVID Repository – to track functional immunity status to COVID-19. The COVID Repository would contain information on U.S. residents’ COVID-19 and antibody test results as well as vaccinations. Additionally, it would include viral strain (variant) information to the extent available. The length of time that immunity to COVID-19 lasts is still being studied, and it may only last several months. Once this time window is known, because the COVID Repository is a longitudinal record, it would allow the determination of the approximate “expiration date” of immunity after an infection or vaccine. Figure 1 presents the NRCDS and the associated master COVID Repository, in identifiable and de-identified forms, that would be developed from it. A government agency or contractor permitted to handle personally identifiable health information would maintain the database. Information would be shared with the Federal Aviation Administration, the Federal Emergency Management Agency, the Department of Defense, intelligence agencies, and several other government stakeholders.

Figure 1. The centralized, cloud-based Nationwide Reportable Conditions Data System (NRCDS) and COVID Repository
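As one illustration of how the repository's longitudinal record could be used, the sketch below derives an approximate immunity "expiration date" from the most recent infection or vaccination on file. Because the duration of immunity is still under study, the window is a parameter, not a known value.

```python
# Sketch of deriving an approximate immunity "expiration date" from the
# longitudinal record. The immunity window is an assumed parameter.
from datetime import date, timedelta

def immunity_expiration(last_event: date, immunity_window_days: int) -> date:
    """last_event: most recent infection or final vaccine dose on record."""
    return last_event + timedelta(days=immunity_window_days)

# If research eventually established, say, a 180-day window (purely illustrative):
print(immunity_expiration(date(2021, 1, 15), immunity_window_days=180))  # 2021-07-14
```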

Ideally, the COVID Repository data would be combined with the other longitudinal health data in the NRCDS so the long-term effects of COVID-19 and its effect on other chronic conditions could be established. The data set would assist U.S. agencies with managing the long-haul COVID-19 caseload, allowing insights that could reduce the burden for U.S. health care and social systems. The data would additionally be maintained and made available in de-identified form to research agencies, nongovernmental organizations, and health care companies, making it an invaluable resource for research.

The NRCDS and COVID Repository would allow for much more accurate public health monitoring. They would also be much more efficient than the current legacy systems and save money. As recently as 2017, many government agencies were spending over 75 percent of their budgets on maintaining legacy systems that were becoming siloed as they failed to integrate with newer technologies. Finally, in an era when the nation’s COVID-19 and other health data are among the top targets of international cybertheft efforts, the cloud-based systems’ increased security would safeguard U.S. residents’ personal health information.

David Dastvar serves as chief growth officer with Eagle Technologies. In his 29 years with public sector and Fortune 1000 companies (including GDIT/CSC, Infosys, CDI, Maximus/Attain, and Northrop Grumman), he has developed and managed professional services and solutions for enterprise-level projects requiring a high degree of program management and technical expertise.

Linda Hermer, Ph.D., leads the Research Team at Eagle Technologies. Dr. Hermer earned her undergraduate degree at Harvard University in Neurobiology and Linguistics and her doctoral degree from Cornell University in Psychology. She was an accomplished neuroscientist and cognitive psychologist before dedicating the second half of her career to improving public health 10 years ago. Since then, she has worked to modernize public health and social science research at universities, nonprofit organizations, and for-profit firms.

FITARA 11.0 Results Show Need for Real-Time Data to Boost Cyber Scores

While the latest Federal Information Technology Acquisition Reform Act (FITARA) scorecard shows all agencies have passing total scores, not one agency’s Cyber score changed from the FITARA 10.0 scorecard issued earlier in 2020.

The Cyber category consists of criteria from the Federal Information Security Modernization Act (FISMA) – and while FISMA measures compliance and considers data points such as number of incidents, it does not provide insight into how these actions unify to reduce risk.

Basic cyber hygiene is the root of many security compliance requirements, and while adhering to those requirements as well as other best practice frameworks can help reduce risk, compliance isn’t enough. Agency cyber defenders also need reliable, real-time data for a comprehensive view of the entire environment so they can identify, assess, focus on, and remediate risks.

The best decisions are made with good, high-fidelity data. So, how can agencies work to manage potential cyber risks and strengthen their posture?

Scoring the FITARA Cyber Category

There are two components within the Cyber score: the grade the agency's inspector general gives its posture against cyber maturity model criteria, and progress on the Cross-Agency Priority (CAP) goals to modernize IT for better productivity and security – covering asset security, personnel access, network and data protection, and cloud email adoption.

The cyber maturity model has evolved over the past several years to address inconsistencies between how inspectors general evaluate agency security and how agencies are evaluated under FISMA – aligning more closely with the five key pillars of the NIST framework. Agencies need to know where they stand on maturity levels for each, and establish a timeframe and a plan to get to the next maturity level.

More updates to FISMA may happen soon. A recently proposed bill, titled the “Federal System Incident Response Act” would update FISMA criteria, “increasing transparency by clarifying how and when agencies must notify impacted individuals and Congress when data breaches occur.”

Strengthening Agency Cyber Posture

Agency IT teams can strengthen their cyber posture and improve FITARA cyber scores by characterizing risks by the severity of a vulnerability, its age, and the value of the data/system exposed to the threat. This approach is the essential methodology used by CISA’s Agency-Wide Adaptive Risk Enumeration (AWARE) risk scoring algorithm and illustrates the clear difference between measuring risk instead of compliance.
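The sketch below shows the spirit of that approach: weighting a finding by severity, age, and asset value so remediation is prioritized by risk rather than by raw vulnerability score. The formula and weights are hypothetical and are not CISA's actual AWARE algorithm.

```python
# Illustrative risk scoring in the spirit of severity x age x asset value.
# The formula and weights are hypothetical, not the actual AWARE algorithm.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float          # vulnerability severity, 0-10
    days_open: int       # how long it has gone unremediated
    asset_value: int     # 1 (low) to 5 (mission-critical)

def risk_score(f: Finding) -> float:
    age_factor = 1 + f.days_open / 30          # risk grows the longer it lingers
    return f.cvss * age_factor * f.asset_value

findings = [
    Finding(cvss=9.8, days_open=45, asset_value=5),   # old critical on a key system
    Finding(cvss=9.8, days_open=2, asset_value=1),    # same CVSS, low-value asset
]
# Remediate the highest risk first, not just the highest CVSS.
for f in sorted(findings, key=risk_score, reverse=True):
    print(round(risk_score(f), 1))   # 122.5, then 10.5
```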

In addition, IT teams should focus on achieving comprehensive visibility into all systems across the enterprise (end-user, cloud, and data center).

To get the real-time data risk managers need to act on these threats, IT teams should assess the current toolset and refresh with a platform that simplifies operations, removing inefficient legacy tools that are costly and no longer do the job. For a distributed workforce, optimizing the deployed toolset will help teams operate in newer cloud and hybrid environments. By doing so, agency leaders will understand the full environment and reduce the accountability gaps created by disconnected point solutions.

Agency CIOs should also consider sharing IT plans. While agencies are not required to share plans or progress as they work to improve their cyber maturity levels in conjunction with FISMA, CIOs could submit a plan for review within the CIO Council, enabling agencies to learn from one another.

Agency IT teams should test data center efficiency while considering new security applications. Reducing the number of servers in use decreases hardware and software costs, saving dollars that can be re-prioritized. It also allows the opportunity for agencies to leverage a single, ubiquitous, endpoint management platform approach that helps gain end-to-end visibility across end users, servers, and cloud environments – as well as identify assets, protect systems, detect and respond to attacks, and recover at scale. This breaks down the data silos and creates the ability for IT teams to receive good, high-fidelity data in near real time to manage risks.

As agencies work to improve overall cyber posture, the focus must be on improving cyber hygiene and reducing risk. To achieve this, the whole of government must accurately evaluate risk, gain comprehensive visibility into systems, share knowledge across agencies, and improve data center efficiency. At the root, this requires agencies to have reliable, real-time data.

My Cup of IT: Please Join Us at MeriTalk’s Inaugural Ball

Folks:

We’re excited to welcome Joe Biden, the 46th President, to office. It won’t be the largest inauguration crowd in history – but, we hope you’ll join us. We’re hosting a Zoom Biden Inauguration ball from 6:00-7:00 p.m. EST, next Wednesday, January 20th. Crack a beer, raise a glass – and celebrate American democracy.

Register here.

Cheers,

Steve

PS: Dress code, black tie – hoodie optional

My Cup of IT: Cooking Cyber Simpler?

With the SolarWinds breach and the CDM budget shortfall, it's never been more important to convey the value of cybersecurity to the Hill and appropriators. Time to change the menu to increase the appetite for cybersecurity investment.

 

How SOC Automation Supports Analysts in Securing the Country

The security operations center (SOC) has become the critical hub of Federal agencies’ cyber readiness. SOC analysts keep agencies safely up and running – determining the size and impact of incidents, utilizing threat intelligence, implementing response procedures and collaborating with other staff to address issues.

It’s a big job that can mix both complicated analysis and tedious tasks. That’s why it can be a good fit for security orchestration, automation and response (SOAR) platforms, which can optimize a SOC’s output by automating the mundane tasks analysts regularly perform.

Obstacles to SOC Effectiveness

In a SOC, the process of triaging alarms can stretch into more than a week, especially if the tools used to gather related artifacts and data aren’t integrated. Analysts spend hours on highly repetitive tasks, reviewing and comparing alerts across multiple screens and windows. With terabytes of alerts received per day, analysts can’t keep up.

Most SOC teams aggregate data to create actionable, high-fidelity logs, but those logs still provide only a limited view of an incident's true impact. Compounding the problem, agencies' siloed, need-to-know policies on information-sharing can significantly limit SOC analysts' visibility into the tools generating the vast amounts of threat data. That makes an accurate situational assessment challenging.

Meanwhile, SOC metrics like incidents handled per hour can incentivize the wrong behavior by motivating analysts to focus on false positives or cherry-pick incidents they can close fast. Analysts should be solving actual problems, not processing tickets.

The New Human-Machine Symbiosis

Security orchestration, automation and response (SOAR) platforms can change that dynamic. A SOAR acts as a central hub that connects the many disparate security tools feeding alarms into the SOC. It optimizes the SOC's output by automating the mundane, tedious processes analysts normally perform – reviewing and assessing threat intelligence data, determining what is actionable, and assigning the information to the right analyst for resolution.

When done manually, those tasks can take more than a week, depending on the complexity of the problem. Meanwhile, the agency remains vulnerable or could even already be under attack. Tightly integrating a SOAR with a threat intelligence platform can reduce the process to hours or even minutes.

While automation can rapidly assess indicators of compromise (IOCs), analysts’ subject matter expertise is vital for reviewing and interpreting the data. SOC analysts can ensure that alarms coming from similar sources are identified so they can avoid wasting effort on what is really the same problem. They must also determine the “blast radius” of an issue, since a single incident can quickly spread once inside the network.
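A simple way to picture that deduplication step is grouping alarms that share indicators of compromise (IOCs) into one incident, as in the sketch below. The alarm format and grouping rule are illustrative assumptions, not a specific SOAR product's logic.

```python
# Sketch of collapsing alarms that share indicators of compromise (IOCs)
# into a single incident so analysts don't triage the same problem twice.
def group_by_shared_iocs(alarms: list[dict]) -> list[list[dict]]:
    """Greedy grouping: alarms that share at least one IOC join the same incident."""
    incidents: list[dict] = []   # each incident: {"iocs": set, "alarms": [...]}
    for alarm in alarms:
        for incident in incidents:
            if incident["iocs"] & alarm["iocs"]:
                incident["iocs"] |= alarm["iocs"]
                incident["alarms"].append(alarm)
                break
        else:
            incidents.append({"iocs": set(alarm["iocs"]), "alarms": [alarm]})
    return [inc["alarms"] for inc in incidents]

alarms = [
    {"id": 1, "source": "EDR",  "iocs": {"45.13.2.7", "evil.example.com"}},
    {"id": 2, "source": "IDS",  "iocs": {"45.13.2.7"}},
    {"id": 3, "source": "mail", "iocs": {"bad-attachment.docx"}},
]
# Alarms 1 and 2 merge into one incident; alarm 3 stands alone.
print([[a["id"] for a in group] for group in group_by_shared_iocs(alarms)])  # [[1, 2], [3]]
```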

SOAR can perform the analytics instantly, arming analysts with the data they need for preventive and corrective work. That includes the vitally important task of incident root cause analysis, where analysts’ subject matter expertise and skills are perhaps most valuable. Determining how and why an incident occurred is the single best way to ensure it doesn’t happen again.

A Virtuous Talent Circle

Automating processes can also help ensure that junior analysts have the correct insight to make the best determination as quickly as possible and flag issues for more experienced analysts.

Since automation relieves SOC analysts of hours of wearisome and mundane tasks, it gives them time to develop and document processes for the complex work they perform. Automated processes can then guide junior analysts in skills development and growth.

With lower-level tasks being reliably managed with automation, senior analysts will have more capacity to improve the SOC, devise more repeatable complex workflows, improve the root cause analysis process and standardize responses to ensure repeatable outcomes. They’ll also have more bandwidth to share knowledge and coach the juniors – it’s a win for everyone, allowing more time and people for higher level analysis and fewer requirements for the basic level analysis that can now be addressed through automation.

Embracing the Opportunity

Automation can drive these many benefits and more. It begins with automating well-defined processes as they exist. There is no need to re-engineer established practices when automation is introduced. SOC leaders can adopt a SOAR platform if none is already in place, and use it to align metrics to desired mission outcomes.

Over time, revising and enhancing processes and knowledge management systems by leveraging the benefits of automation will help develop junior engineers while easing the demands on senior team members. That will improve results and retention across the team and lead to a much more successful SOC. Ultimately, that means greater safety and security for the nation.

My Cup of IT: MeriTalk and MeriTocracy?

Is it a noun or a verb? What the heck is a MeriTalk and why were you so dumb to call your publication that? Why not put Fed or Gov in the name – like every title?

Good questions.

So here goes. We gave our publication a different name, because we wanted to stand aside from the other titles – to challenge the status quo in relationships, content, and format. To talk about the outcomes of tech, not just the tech itself.

MeriTalk is named for the notion of MerITocracy – it’s a noun – meaning “government or the holding of power by people selected on the basis of their ability”. MeriTalk is about spotlighting how IT can deliver a society where all citizens have equal access – and those citizens rise based on their abilities. You see, tech can be the great emancipator and give everybody a fair shake. It can provide new transparency and accountability – and take aim at corruption. It should not be used to suppress or perpetuate fake news. Let’s put our tech on the right path for every American.

With the new Biden-Harris administration that’s America’s goal. So, MeriTalk on, dude.

The Clock is TICing: Accelerating Innovation With Cloud Security Modernization

As remote work shifts to hybrid work for the long term, Federal agencies need continued (and even stronger) cloud security.

I recently moderated a panel of leading Federal cyber experts from the Department of Veterans Affairs (VA), General Services Administration (GSA), and Department of State to discuss how Trusted Internet Connection 3.0 is helping agencies accelerate cloud modernization. The updated policy is allowing agencies to move from traditional remote virtual private network solutions to a scalable network infrastructure that supports modern technology and enables digital transformation.

TIC 3.0 Driving Modern Security and Innovation

“TIC 3.0 removes barriers for the adoption of new and emerging technologies, and it is a key enabler for IT modernization and digital transformation,” said Royce Allen, Director of Enterprise Security Architecture at VA.

Traditional networks often do not support the technologies needed for today’s modern cloud and hybrid IT environment. Agencies have had to make drastic shifts in technology to connect their data center and cloud providers, increase bandwidth, improve security, and more to drive innovation.

For example, by following the TIC 3.0 guidance, the VA has been able to expand the number of users it can support on the network at one time to enable more productivity, and open the door to innovative data sharing solutions.

Hospital systems that previously supported 150 to 200 simultaneous users are now supporting up to 500,000 with a zero trust architecture and cloud-based desktop application. The zero trust architecture helped the VA transition from a network-centric environment to an application-centric environment. In this use case, microsegmentation allowed VA to utilize any network, anywhere, including the internet, to meet the TIC 3.0 guidelines and provide massive on-demand scalability to meet pandemic demands.

The Department of State piloted TIC 3.0 use cases to improve application performance and user experience, especially as employees share data and connect with embassies overseas.

State was managing employees in more locations, using a greater variety of devices than ever before – and thus increasing cyber risks. Protections included backhauling all data internationally through domestic MTIPS/TICs. This slowed down application performance and negatively impacted the user experience, especially on SaaS applications. For example, O365 became virtually unusable due to this hairpinning. TIC 3.0 enabled the agency to pilot a solution that allowed for local internet breakouts across the country, increasing network mobility, while still meeting the rigor of FedRAMP authorization and TIC 3.0 guidelines.

The agency now has full visibility of their servers, can securely direct traffic straight to the cloud, and can allow for more data mobility across embassies around the world, while still storing all sensitive data – i.e. public key infrastructure and telemetry data – in a U.S.-based FedRAMP cloud.

Gerald Caron, Director of Enterprise Network Management, Department of State, noted that TIC 3.0 enabled the agency to focus on risk tolerance. “TIC 3.0 is definitely an enabler to modernization…while still leveraging or maintaining secure data protection,” said Caron.

Pushing for Continued Modernization and Aligning Solutions to TIC 3.0 Guidance

We need to continue to work together to modernize the evolving remote work environment and threat landscape. The next step for TIC 3.0 is to provide additional baseline implementation guidance to agencies, including more information on hybrid cloud guidance, examples of risk profiles and risk tolerance, and the latest use cases.

An important aspect of TIC 3.0 is alignment with other contracts and guidance, including GSA's Enterprise Infrastructure Solutions (EIS). The EIS contract is a comprehensive solution-based vehicle to address all aspects of federal agency IT telecommunications and infrastructure requirements. As the government's primary vehicle for services including high-speed internet, government hosting services, and security encryption protocols, it's critically important that the TIC 3.0 guidance is used to provide the foundation for secure connections across solutions.

GSA recently released draft modifications to add the TIC 3.0 service as a sub security service to EIS. Allen Hill, Acting Deputy Assistant Commissioner for Category Management, Office of Information Technology Category (ITC), Federal Acquisition Service, GSA, said he hopes this collaboration will help agencies mature their zero trust architectures.

“Having the TIC 3.0 guidance allowed us to aggressively push the envelope,” said the VA’s Allen.

The Cybersecurity and Infrastructure Security Agency’s efforts over this past year, as well as TIC’s alignment with EIS, are great examples of what we can accomplish through innovation and strong collaboration. The team demonstrated real leadership, quickly putting the TIC 3.0 Interim Telework Guidance in place to support agencies as they scaled up the remote workforce. This progress is a permanent, positive shift for the Federal government – supporting the move to modernize remote access and enable secure cloud services. We’re still learning – but we’ve taken a giant leap forward.

My Cup of IT – GovTech4Biden

Like many of you, I have read the news every day for the last four years. Every day was like a visit to the proctologist – anger, fear, frustration. And, yes, the A word – anxiety.

So, I decided to put up or shut up – and I founded www.govtech4biden.com in June. I discovered that many of you felt the same way – 150-plus in fact. We embarked on a curious, scary, and fulfilling odyssey. We raised more than $100,000 for the Biden-Harris campaign.

On this journey, we hosted the leading Democratic members of Congress focused on tech. Fittingly, Congressman Gerry Connolly kicked us off, and leading lights on tech and our economy gave us the momentum to raise over $100,000 for the Biden campaign: Congressman Ro Khanna, Congresswoman Mikie Sherrill, Senator Jacky Rosen, and the New Democrat Coalition – closing out with Senators Maggie Hassan, Sheldon Whitehouse, and Chris Van Hollen.

If you’d like to hear more about GovTech4Biden – our political and tech odyssey – and thoughts on the tech agenda for the future, please join us for a webinar on Tuesday, November 24th from 1:00-2:00 p.m. ET./10:00-11:00 a.m. PT.

I’d like to salute the brave folks that banded together to support the Biden-Harris campaign – and provide a voice for the government technology community in the new administration. That took courage – here’s the tribute movie. We look forward to working with the new administration to champion innovation in government and across America.

To those that sent in unkind emails – I’m trying to understand you. Also happy if you’d like to resubscribe to MeriTalk – just shoot me an email.

We look forward to the opportunity to build back better together – and new tech for government is critical to that success.

Open Source Gives Agencies Long-Term Cloud Flexibility That Powers Cloud-based Telework

After a decade-long initiative to expand telework, the COVID-19 pandemic has shifted the federal government’s workforce to cloud-based telework, practically overnight. While improving workforce flexibility seems like the obvious benefit, federal agencies can also take this opportunity to leverage the extensive ecosystem of open source partners and applications to boost their multicloud modernization efforts.

Agencies that work with the global open source development community are able to accelerate service delivery and overcome many of the common barriers to cloud modernization.

“Within the open source community, there remains a strong focus in helping enterprises adapt to cloud computing and improve mission delivery, productivity and security,” says Christine Cox, Regional Vice President Federal Sales for SUSE. Developing applications with open source tools can also help federal agencies future-proof digital services by avoiding vendor lock-in, enhancing their enterprise security and supporting their high-performance computing requirements.

Why open source is important to federal agencies as they continue to telework

Agencies are working to solve unique and complex orchestration challenges to run applications and sensitive data across multiple cloud environments. They need to be able to respond quickly, with agility, and at scale. Open source solutions allow governments to design customized and secure environments as the interoperability of their agencies’ IT systems and the need to share information in real time across multicloud environments becomes more critical.

“Open source technologies like Kubernetes and cloud native technologies enable a broad array of applications because they serve as a reliable connecting mechanism for a myriad of open source innovations — from supporting various types of infrastructures and adding AI/ML capabilities, to making developers’ lives simpler and business applications more streamlined,” said Cox.

Ultimately, open source projects will help lower costs and improve efficiencies by replacing legacy solutions that are increasingly costly to maintain. Up-to-date open source solutions also create a more positive outcome for end users at all agencies – be they warfighters or taxpayers.

How open source helps cloud migration in a remote environment

Archaic procurement practices based on vendor lock-in don't allow for effective modernization projects, which is why implementing open source code can help agencies adapt tools to their current needs.

“One of the great benefits about SUSE, and open source, is that we offer expanded support, so that regardless of what you’re currently running in your environment, we can be vendor-agnostic,” Cox says.

In order to take greater advantage of open source enterprise solutions, agency leaders should practice a phased approach to projects, with the help of trusted partners who can guide them in their cloud computing efforts. This allows leaders to migrate to hybrid-cloud or multicloud environments in manageable chunks and in a way that eliminates wasteful spending.

Learn more at SUSEgov.com

Congress Should Evolve – Not Eliminate – the FITARA MEGABYTE Category

Following the release of the FITARA Scorecard 10.0 in August, discussion about sunsetting the MEGABYTE category of the scorecard has picked up. But, is that really a good idea?

The MEGABYTE category measures agencies’ success in maintaining a regularly updated inventory of software licenses and analyzing software usage. With most agencies scoring an “A” in that category, the sense seems to be that MEGABYTE’s mission has been accomplished, and it can now rest easy in retirement.

However, just because a goal has been achieved does not mean the method used to achieve the goal should be discarded. A student who passes Algebra I doesn't declare victory over math for the rest of her academic career; she moves on to Algebra II.

The same principle should apply to the MEGABYTE category. Instead of getting rid of it, Congress should consider building on it to fit the current market dynamics – which are a lot different than they were in 2016, when the MEGABYTE Act became law.

A Changing MEGABYTE for Changing Times

Back then, cloud computing wasn't quite as ubiquitous as it is today. Agencies were still buying specific licenses for specific needs, owning software, and receiving occasional updates.

As software procurement evolves and changes in the cloud environment, so too will the methods required to accurately track applications and usage – a challenge which could actually make MEGABYTE’s call for accountability more important than ever.

In some cases, agencies may not even know what they’re paying for. As such, they could end up paying more than necessary. Reading a monthly cloud services bill can be the equivalent of scanning a 30-page phone bill, with line after line of details that can be overwhelming. Many time-starved managers might be inclined to simply look at the amount due and hit pay without considering that they may be paying for services their colleagues no longer need or use.
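A small illustration of the kind of accountability this implies: scanning a bill export for services that are still being paid for but show little or no usage. The CSV columns and service names below are hypothetical; real provider exports differ.

```python
# Sketch of scanning a cloud bill export for paid-but-idle line items.
# The CSV columns and services are hypothetical examples.
import csv
import io

BILL_CSV = """service,monthly_cost_usd,active_users_last_30d
virtual-desktops-team-a,1200.00,34
test-environment-2019,840.00,0
object-storage-archive,310.00,2
"""

def flag_idle_services(csv_text: str, min_users: int = 1) -> list[tuple[str, float]]:
    """Return (service, cost) for line items with fewer active users than min_users."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        (row["service"], float(row["monthly_cost_usd"]))
        for row in reader
        if int(row["active_users_last_30d"]) < min_users
    ]

# Surfaces the $840/month test environment nobody has touched.
print(flag_idle_services(BILL_CSV))  # [('test-environment-2019', 840.0)]
```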

There’s also the prospect of shadow IT, which appears to have been exacerbated by the sudden growth of the remote workforce. Employees could simply be pulling out their credit cards and ordering their own cloud services – not for malicious purposes, but just to make their jobs easier and improve productivity. In the process, agency employees might sign up for non-FedRAMP certified cloud services or blindly agree to terms and conditions that their agency procurement colleagues would not agree to. These actions can open agencies to risk, and must be governed.

A new MEGABYTE for a new era could be a way to measure accountability and success in dealing with these challenges. Agencies, for instance, could be graded on their effective use of cloud services. The insights gained could lead to more efficient use of those services including the potential to cancel services that are no longer needed. Finally, they could be evaluated based on how well they’re able to illuminate the shadow IT that exists within their organizations for a more accurate overview and governance of applications.

Not Yet Time for MEGABYTE to Say Bye

Just because the MEGABYTE category has turned into an “easy A” for most agencies does not mean that it’s time to eliminate it from the FITARA scorecard. Yes, let’s revisit it, but let’s not let it go just yet. Instead, let’s take it to a new level, commensurate with where agencies stand today with their software procurement.

Reimagining Cybersecurity in Government Through Zero Trust

As the seriousness of the coronavirus pandemic became apparent early this year, the first order of business for the Federal government was simply getting employees online and ensuring they could carry on with their critical work and missions. This is a unique challenge in the government space due to the sheer size of the Federal workforce and the amount of sensitive data those workers require – everything from personally identifiable information to sensitive national security information. And yet, the Department of Defense, for one, was able to spin up secure collaboration capabilities quite quickly thanks to the cloud, while the National Security Agency recently expanded telework for unclassified work.

Connectivity is the starting line for the Federal government, though – not the finish line. Agencies must continue to evolve from a cybersecurity perspective in order to meet new demands created by the pandemic. Even before the pandemic, the Cyberspace Solarium Commission noted the need to “reshape the cyber ecosystem” with a greater emphasis on security. That need has been further cemented by telework. A worker’s laptop may be secure, but it’s likely linked to a personal printer that’s not. Agencies should assume there is zero security on any home network.

Building a New Cyber World

In the midst of the pandemic, MeriTalk surveyed 150 federal IT managers to understand what cyber progress means and how to achieve it. The need for change was clear; only 11 percent of respondents described their current cybersecurity system as ideal. What do Federal IT pros wish was different? The majority of respondents said they would start with a zero trust model, which requires every user to be authenticated before gaining access to applications and data. Indeed, zero trust has, to a large degree, enabled the shift we are currently seeing. But not all zero trust is created equal.

Federal IT pros need to be asking questions like: How do you do microsegmentation in sensitive environments? How do you authenticate access in on-premises and cloud environments in a seamless way? In the government space especially, there is a lot of controlled information that’s unclassified. As such, it’s not sufficient to just verify users at the door before you let them in. Instead, agencies must reauthenticate on an ongoing basis – without causing enormous friction. A zero trust model is only as good as its credentialing capabilities, and ongoing credentialing that doesn’t significantly disrupt workflow requires behavioral analytics.

Agencies must be adept at identifying risk in order for zero trust to be both robust and frictionless. In this new era, they should be evaluating users based on access and actions. This means understanding precisely what normal, safe behavior looks like so they can act in real-time when users deviate from those regular patterns of behavior. Having such granular visibility and control will allow agencies to dynamically adjust and enforce policy based on individual users as opposed to taking a one-size-fits-all approach that hurts workers’ ability to do their jobs.
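The sketch below shows one way such a continuous, per-request decision could combine behavioral deviation with context: normal behavior proceeds without friction, moderate risk triggers step-up authentication, and high risk is denied. The signals, weights, and thresholds are illustrative assumptions.

```python
# Sketch of continuous, per-request policy evaluation combining behavioral
# deviation and context. Signals and thresholds are illustrative only.
def evaluate_access(deviation_score: float, device_trusted: bool,
                    data_sensitivity: int) -> str:
    """
    deviation_score: 0.0 (matches baseline) .. 1.0 (highly abnormal behavior)
    data_sensitivity: 1 (public) .. 5 (controlled unclassified or higher)
    Returns "allow", "step_up" (reauthenticate), or "deny".
    """
    risk = deviation_score * data_sensitivity
    if not device_trusted:
        risk += 2
    if risk < 2:
        return "allow"          # seamless: no added friction for normal behavior
    if risk < 4:
        return "step_up"        # ask for fresh credentials before proceeding
    return "deny"

print(evaluate_access(0.1, device_trusted=True, data_sensitivity=3))   # allow
print(evaluate_access(0.8, device_trusted=True, data_sensitivity=4))   # step_up
print(evaluate_access(0.9, device_trusted=False, data_sensitivity=5))  # deny
```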

The Role of the Private Sector

The current shift in the Federal workforce may seem daunting to some, but it represents a huge opportunity for the government and private sector alike. The Cyberspace Solarium Commission highlighted the importance of public-private partnerships – partnerships that can help make modernized, dynamic zero trust solutions the new normal if they can overcome the unique scaling challenge that Federal IT presents. The government must not just embrace commercial providers, but work closely with them to enable such scale, as it could help the government continue to reimagine its workplace.

Shifting to a zero trust model means improved flexibility and continuity, which can help expand the talent pool that agencies attract. Government jobs were previously limited to one location, with no option for remote work. Thus, agencies lost out on great talent that was simply in the wrong part of the country. Now, some agencies are claiming they don’t need DC headquarters at all.

Additionally, more flexible work schedules may also boost employees' productivity. A two-year Stanford study, for one, showed a productivity boost for work-from-home employees that was equal to a full day's work. In recent months, the government has seen firsthand that flexible and secure remote work can happen through the novel application of existing technologies – including zero trust architecture.

The Bottom Line

Agencies must evolve cybersecurity in a way that allows them to embrace remote work without being vulnerable to attack. It’s not enough to get Federal employees online; users and data must be secure as well. The mass shift to telework represents a huge opportunity for the public sector – which is growing both its remote work capabilities and its potential pool for recruitment – and for those in the private sector who can be responsive to this need.

The majority of Federal IT leaders would implement a zero trust model if they could start from scratch. But once again, zero trust is only as good as your credentialing technology and your ability to understand how users interact with data across your systems. The key to seamless and secure connectivity is behavioral analytics, which allows for ongoing authentication that doesn’t hinder users’ ability to do their jobs or leave sensitive information vulnerable.

Driving IT Innovation and Connection With 5G and Edge Computing

The COVID-19 pandemic has influenced the way agencies function, and forced many to redefine what it means to be connected and modernize for mission success.

Agencies have reprioritized automation, artificial intelligence (AI), and virtualization to continue delivering critical services and meeting mission requirements through recent disruptions, and to predict and navigate future disruptions more efficiently. These transformative technologies open the door to accelerated innovation and have the potential to help solve some of today’s most complex problems.

Still, there is work to be done. While nearly half of Federal agencies have experimented with AI, only 12 percent of AI in use is highly sophisticated.[1] Agencies must rely on a solid digital transformation strategy to leverage next-gen technology, including the fifth generation of wireless technology (5G) and edge computing, to drive these innovations in Federal IT – regardless of location or crisis.

Faster Connections, Better Outcomes

Building IT resiliency and a culture of innovation across the public sector requires greater connectivity and data accessibility to power emerging technologies that enable faster service and better-informed decisions. In a traditional 4G environment, users connect to the internet through a device at a given time. In contrast, 5G integrates devices into the environment, allowing them to connect and stay connected at all times.

This constant connectivity enables agencies to generate data in real-time – not just when they sync with the cloud. Imagine some of the real-life applications of this capability. Healthcare providers would have instant, continuous health data to use in patient care. Soldiers on the battlefield would have constant connectivity for more accurate intel and defense strategies. These insights not only drive efficiency and security, but they save on time and resources.

Dell Technologies’ John Roese recently shared the importance of the U.S. driving these innovations – and the positive implications for the Federal space. “By doing so, we can increase market competitiveness, prevent vendor lock-in, and lower costs at a time when governments globally need to prioritize spending. More importantly, we can set the stage for the next wave of wireless,” he explained.

As an open technology, 5G infrastructure is a high-speed, interconnected mesh provided by multiple vendors at the same time. This prevents challenges presented to agencies by vendor lock-in, and reduces costs associated with creating and maintaining individual access points.

However, with perpetual connectivity, devices require a connection point with low latency. As 5G technology progresses, edge computing becomes a powerful necessity. Gartner reported that by 2022, more than 50 percent of enterprise-generated data will be created and processed outside the data center or cloud.

Dell Technologies’ edge capabilities enable agencies to get the data they need and avoid data siloes by applying logic in the edge – immediately. Dell Technologies has also started to specialize in providing 5G-compatible devices built with edge computing in mind.

These capabilities allow data to be processed in real-time, closer to the source. Devices can intelligently communicate anomalies and changes back to the core data center, enabling a better, more capable edge.
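A minimal sketch of that pattern: process readings where they are generated and forward only anomalies and a small summary to the core. The thresholds and sample values are illustrative assumptions, not a specific product's behavior.

```python
# Sketch of edge-side filtering: process readings locally and forward only
# anomalies (plus a summary) to the core data center. Values are illustrative.
def process_at_edge(readings: list[float], low: float, high: float) -> dict:
    anomalies = [r for r in readings if r < low or r > high]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "anomalies": anomalies,          # only these need to travel to the core
    }

# e.g., continuous heart-rate samples at a clinic edge node
samples = [72, 75, 71, 190, 74, 73]       # one clearly abnormal value
payload = process_at_edge(samples, low=40, high=160)
print(payload["anomalies"])               # [190] -- a fraction of the raw stream
```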

As time progresses, the edge will become smarter in making decisions and reducing the amount of data that needs to be transferred back to the core, while also ensuring the core is updated more frequently to support AI and machine learning.

New Challenges Require New Strategies

As the technology landscape changes yet again, agencies face the challenge of investing in new technology – one that has to be built from the ground up. However, as next-gen technologies continue to develop, government has no choice but to keep up.

Whether providing critical services to the public or creating strategies for the battlefield, agencies need access to the best tools and most accurate insights to drive mission success.

Agencies should leverage support from industry partners like Dell Technologies to get the support they need to accelerate their efforts, drive efficiencies, and innovate. As Roese noted, “when the technology industry of the United States is fully present in a technical ecosystem, amazing innovation happens, and true progress occurs.”

At the end of the day, these efforts lead to better, stronger outcomes for all.

Learn more about how Awdata and Dell Technologies are driving Federal innovation and connection with next-gen tech.

[1] Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, 2020

My Cup of IT – Vote for America, First

I’m a foreigner who’s proud of my heritage – and I’m an American patriot ready to stand up for this great country’s principles.

Feeling shipwrecked by politics and pandemic? This has been a year where too many of us have felt separated – now’s the time to come together and celebrate our American democracy. Whether you’re a Republican or a Democrat your voice must be heard – and your ballot counted. Vote is not a four-letter word. Do it early in person, early by mail, or day of (with protection) – but please do it.

I came to this country for opportunity – and I stayed based on the welcome and the sunshine. Please vote in the election, and respect the votes of others when they're tallied.

Sometimes it takes a community to get to the truth – a word from somebody you know and trust. That’s why I’m speaking up now.

I know you’re a patriot – America’s depending on you. Let’s put America first – and vote.

Evolving the Remote Work Environment With Cloud-Ready Infrastructure

When agencies began full-scale telework earlier this year, many were not anticipating it would evolve into a long-term workplace strategy. However, in the wake of COVID-19, recent calculations estimate $11 billion in Federal cost-savings per year due to telework, as well as a 15 percent increase in productivity. Agencies are determining how they can continue to modernize – and therefore optimize – to support greater efficiency into the future.

Though many agencies began implementing cloud and upgrading infrastructure well before the pandemic, legacy technology presents a unique challenge in the remote landscape. IT teams and employees who were directly connected to a data center now need remote access to infrastructure, while keeping security a top priority.

How can agencies ensure they have secure and specific connections that serve their needs and also optimize performance? They must adapt, shifting access to where it is needed and augmenting existing technology with solutions that allow flexibility, agility, and the additional security needed within a distributed environment.

A New Approach for the New Normal

To address issues with remote access, many agencies have turned to software-defined wide area networking (SD-WAN), as it provides a secure connection between remote users and the data center or cloud. However, long-term success with telework will require more than access. It will require teams to change the way they use the technology they have.

Recently, I spoke on a MeriTalk tech briefing where I discussed how agencies can leverage cloud-ready infrastructures to accelerate modernization with operational cost-savings and increased efficiency. Dell Technologies VxRail with VMware Cloud Foundation is perfectly suited for the distributed workforce, allowing teams of any size, and in any stage of their modernization journey, to build what they need when they need it.

Remote employees don’t have the same access to the on-prem data center’s compute resources as they had when working on-site. VxRail acts as a modern data center, enabling virtual desktops, compute, and storage in one appliance while providing users with secure access and network flexibility.

Teams can design a VxRail component for as many users as needed and then scale by units. With this flexibility, agencies don’t require as much local infrastructure to function optimally and can scale their services faster and more affordably with one-click upgrades and maintenance.

Teams can also bring the local data center and cloud into one management portfolio – whether a multi-cloud or hybrid environment – integrating all of these capabilities into a single platform that is easy to consume.

These technologies offer cybersecurity advantages as well. The VMware Cloud Foundation can utilize VMware’s NSX, a virtualization and security platform. NSX enables teams to create granular micro-segmentation policies between applications, services, and workloads across multi-cloud environments. Agencies can control not only how many users are in their environment and what resources they are allowed to access, but also where and how users connect to those resources.
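To show what micro-segmentation expresses conceptually, here is a vendor-neutral sketch: traffic is denied by default and allowed only between explicitly named workload groups on specific ports. This is an illustration of the concept, not NSX's actual syntax or API.

```python
# Vendor-neutral sketch of micro-segmentation rules: default deny, with
# explicit allows between named workload groups on specific ports.
RULES = [
    # (source group,   destination group, port,  action)
    ("vdi-desktops",   "app-frontend",    443,   "allow"),
    ("app-frontend",   "app-database",    5432,  "allow"),
    ("vdi-desktops",   "app-database",    5432,  "deny"),   # no direct DB access
]

def is_allowed(src_group: str, dst_group: str, port: int) -> bool:
    for rule_src, rule_dst, rule_port, action in RULES:
        if (rule_src, rule_dst, rule_port) == (src_group, dst_group, port):
            return action == "allow"
    return False   # default deny: anything not explicitly allowed is blocked

print(is_allowed("vdi-desktops", "app-frontend", 443))   # True
print(is_allowed("vdi-desktops", "app-database", 5432))  # False
```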

Create a Culture of Collaboration

The switch to Federal telework has caused agencies to take a closer look at how they can continue to modernize and optimize IT for mission success – no matter where their employees are located.

Beth Cappello, Deputy Chief Information Officer for the Department of Homeland Security, recently noted, “as we go forward … we’ll look back at the fundamentals: people, processes, technologies, and examine what our workforce needs to be successful in this posture.”

Whether using new technologies or augmenting existing technologies, success will come down to collaboration. Agencies should look to collaborate early and often, and bring in developers and key team members to leverage their knowledge and drive efficiency and agility from the start.

This cultural change will allow agencies to become more flexible and agile in their approach to modernization – exactly what they need to take Federal IT to the next level.

Learn more about how Awdata and Dell Technologies are helping improve Federal telework with cloud-ready solutions.

Shift to Telework: Enabling Secure Cloud Adoption for Long-Term Resiliency

Over the past few months, agencies have strengthened remote work tools, increased capacity, improved performance, and upgraded security to enable continuity of operations as employees work from home and in various new locations.

However, as networks become more distributed across data centers, cloud, and remote connections, the attack surface increases, opening up the network to potential cybersecurity threats. Agencies have been forced to balance operations and security as they shift how users connect to government networks while remote.

The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has played a key role in providing telework guidance through updates to the Trusted Internet Connections 3.0 guidance (TIC 3.0). This was an important step to provide more immediate telework guidance, open the door for modern, hybrid cloud environments, and provide agencies with greater flexibility.

In a recent webinar, I had the opportunity to speak with Beth Cappello, Deputy CIO, DHS, about IT lessons learned from the pandemic and the future of modern security with TIC 3.0 and zero trust.

TIC 3.0 and the Cloud Push

“When you think about TIC 3.0 and you think about the flexibility that it introduces into your environment, that’s the mindset that we have to take going forward,” said Cappello. “No longer can it be a traditional point-to-point brick and mortar fixed infrastructure approach.”

TIC 3.0 has enabled agencies to take advantage of much-needed solutions, such as cloud-based, secure web gateways and zero trust architecture to support secure remote work.

Prior to the pandemic, DHS had begun adopting cloud – moving email to the cloud and allowing for more collaboration tools and data sharing – enabling the agency to transition from about 10,000 to 70,000 remote workers almost overnight. Many other agencies have similar stories – moving away from legacy remote access solutions to cloud and multi-cloud environments that offer more scalability, agility, and security.

IT administrators must be able to recognize where threats are coming from, and diagnose and fix them through “zero-day/zero-minute security.” To do this, they must turn to the cloud. Cloud service providers that operate multi-tenant clouds can offer agencies an important benefit – the cloud effect – which allows providers to globally push hundreds or thousands of patches a day, with security updates and protections reaching every cloud customer and user. Each day, the Zscaler cloud detects 100 million threats and delivers more than 120,000 unique security updates across its cloud.

Secure Connections From Anywhere 

When the pandemic hit, agencies needed to find a way to connect users to applications, security-as-a-service providers, O365, and the internet without backhauling traffic into agency data centers and legacy TICs – an approach that often results in latency and a poor user experience. Agencies also required better visibility to identify who is connecting to what, and from where, and to send that telemetry data back to DHS.

Rather than focusing on a physical network perimeter (that no longer exists), the now finalized TIC 3.0 guidance recommends considering each zone within an agency environment to ensure baseline security across dispersed networks.

As telework continues, many agencies are evolving security by adopting zero trust models to connect users without ever placing them on the network. We know bad actors cannot attack what they cannot see – so if there is no IP address or ID exposed on the network, these devices are safe. Instead, agencies must verify users before granting access to authorized applications, connecting users through encrypted micro-tunnels that lead only to the right application. This allows users to securely connect from any device in any location while preventing east-west (lateral) traffic on the network.
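As a simplified sketch of that brokered, “verify first, connect second” model, the example below assumes a hypothetical identity check and application registry; a real deployment would rely on the agency’s identity provider and a commercial zero trust broker rather than this toy logic.

```python
# Minimal sketch of a zero trust access broker: the user never lands "on the
# network." Identity is verified first, and only then is a connection brokered
# to the single authorized application. The identity check and application
# registry here are hypothetical placeholders.

AUTHORIZED_APPS = {
    "alice@example.gov": {"hr-portal", "timecard"},
    "bob@example.gov": {"case-mgmt"},
}

def verify_identity(user: str, token: str) -> bool:
    # Placeholder: a real broker would validate the token with the agency's
    # identity provider (e.g., SAML or OIDC) and check device posture.
    return bool(token)

def broker_connection(user: str, token: str, app: str) -> str:
    """Return a per-application tunnel decision, or deny."""
    if not verify_identity(user, token):
        return "DENY: identity not verified"
    if app not in AUTHORIZED_APPS.get(user, set()):
        return "DENY: user not authorized for this application"
    # The application is never exposed with a routable address; the broker
    # stitches an outbound-only, encrypted micro-tunnel to it.
    return f"ALLOW: encrypted micro-tunnel established to '{app}' for {user}"

if __name__ == "__main__":
    print(broker_connection("alice@example.gov", "valid-token", "hr-portal"))
    print(broker_connection("alice@example.gov", "valid-token", "case-mgmt"))
```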

The Move to the Edge

For long-term telework and beyond, the next big shift in security architectures will need to address how agencies can continue optimizing working on devices in any location in the world. As agencies move to 5G and computing moves to the edge, security should too. Secure Access Service Edge (SASE) changes the focus of security from network-based to data-based, protecting users and data in any location and improving the overall user experience.

A SASE cloud architecture can provide a holistic approach to address the “seams” in security by serving as a TIC 3.0 use case and building security functions of zero trust into the model for complete visibility and control across modern, hybrid cloud environments.

For agencies like DHS, which comprises a variety of sub-agencies and departments of different sizes and missions, cloud is ideal for facilitating secure data sharing and collaboration tools.

“So, when we’re securing our environment, we’re provisioning, monitoring, and managing. We have to be mindful of those seams and mindful of the gaps and ensure that as we’re operating the whole of the enterprise that we are keeping track of how resilient the entire environment is,” said Cappello.

Managing and Securing Federal Data From the Rugged Edge, to the Core, to the Cloud

The Federal government collects and manages more data outside of traditional data centers than ever before from sources including mobile units, sensors, drones, and Artificial Intelligence (AI) applications. Teams need to manage data efficiently and securely across the full continuum – edge to core to cloud.

In some cases, operating at the edge means space-constrained, remote, and harsh environments – with limited technical support. Our new Dell EMC VxRail D Series delivers a fully automated, ruggedized Hyperconverged Infrastructure (HCI) – ideal for demanding Federal and military use cases.

VxRail is the only HCI appliance developed with, and fully optimized for, VMware environments. We built the solution working side by side with the VMware team. Both administrators and end users get a consistent environment, including fully automated lifecycle management to ensure continuously validated states. How? More than 100 team members dedicated to testing and quality assurance, and 25,000 test run hours for each major release.

Users can manage traditional and cloud-native applications across a consistent infrastructure – in winds up to 70 mph, temperatures hot enough to fry an egg and cold enough to freeze water, and 40 miles-per-hour sandstorms. Whether you are managing Virtual Desktop Infrastructure (VDI), or mission-critical applications in the field, your team can take advantage of HCI benefits and ease of use.

As Federal teams collect and manage more data, they also have to be able to put that data (structured and unstructured) to work, creating new insights that help leaders deploy the right resources to the right place and anticipate problems more effectively.

Dell Technologies recently announced a new PowerScale family, combining the industry’s number one network-attached storage (NAS) file system, OneFS, with Dell EMC PowerEdge servers, starting at 11.5 terabytes of raw capacity and scaling to multiple petabytes. New PowerScale nodes include the F200 (all-flash) and F600 (all-NVMe). End users can manage PowerScale and Isilon nodes in the same cluster, with a consistent user experience – simplicity at scale.

Federal teams – from FEMA managing disaster relief, to the Department of Justice working on law enforcement programs, to the Department of Defense managing military operations – can start small and grow easily on demand.

PowerScale is OEM-Ready – meaning debranding and custom branding are supported – while the VxRail D Series is MIL-STD-810G certified and available with a STIG hardening package. Both PowerScale and the VxRail D Series benefit from the Dell Technologies secure supply chain, dedicated engineering, and project management support.

As the Federal government continues to deploy emerging technology, and collect and manage more and more data outside of the data center, government and industry need to collaborate to continue to drive innovation at the edge, so we can take secure computing capabilities where the mission is – whether that’s a submarine, a field in Kansas, a tent in the desert, or a dining room table.

Cyber Resiliency Means Securing the User

The recent, rapid shift to remote work has been a lifeline for the economy in the wake of the COVID-19 pandemic. But that shift also took an already-growing attack surface and expanded it. Government agencies were being called to rethink their cybersecurity posture and become more resilient even before the pandemic. Now, the novel coronavirus has added an indisputable level of urgency to that demand.

The Cyberspace Solarium Commission (CSC) was created as part of the National Defense Authorization Act (NDAA) for the 2019 fiscal year. On March 11, its final report was released, articulating a strategy of layered cyber deterrence through more than 80 recommendations. One of its policy pillars was the need to “reshape the cyber ecosystem,” improving the security baseline for people, tech, data, and processes.

Shortly after the report’s release, the virus upended the work environment of most public sector employees, prompting the CSC to publish a follow-on whitepaper evaluating and highlighting key points and adding four new recommendations, focused heavily on the Internet of Things (IoT). This focus, coupled with the evolving cyber threat, means that “reshaping the cyber ecosystem” requires the government to move beyond investments in legacy technologies and focus on the one constant that has driven cybersecurity since the beginning – people and their behaviors.

People Are the New Perimeter

The cyber ecosystem has, to some degree, already been dramatically reshaped. The security baseline needs to catch up. Currently, a large percentage of the Federal workforce is working from home – often relying on shared family networks to do so – and that may continue even as the pandemic subsides. In turn, agencies must look beyond the traditional, office-based perimeter as they secure employees and data. Data and users were already beginning to spread beyond walled-off data centers and offices; mass telework has simply pushed them over the edge.

We’ve already seen bad actors take advantage of this new perimeter by targeting unclassified workers via phishing and other attacks. Recent research found that, as of March, more than half a million unwanted emails containing keywords related to coronavirus were being received each day. Attackers are gaining compromised access, with many simply learning the network for now and lying in wait. Even traditionally trustworthy employees are under tremendous stress and may feel less loyal given the current physical disconnect.

To achieve the CSC’s vision of more proactive and comprehensive security, organizations must begin to think of people as the new perimeter. This is not a temporary blip, but the new normal. Agencies must invest in cybersecurity beyond the realm of old-school perimeter defenses. Methods like firewalls or data loss prevention strategies are important, but they are not enough. With people as the new perimeter, there is simply no keeping bad actors out. Instead, agencies need to keep them from leaving the network with critical data and IP – which can only be done with a deep understanding of how people and data behave at the edge.

Behavioral Analytics Should Be the Baseline

Putting the commission’s guidance into action must mean putting users at the center of the equation. Once again, it’s insufficient to simply rely on blocking access from bad actors. A more proactive and adaptive approach is required. Agencies must first understand which users pose the greatest risk, based on factors such as what types of data they have access to, and then develop dynamic policies that are tailored to that specific risk and are flexible enough to change with evolving circumstances.

Additionally, organizations must have an understanding of what normal behavior looks like for all users – based on information from traditional security systems and other telemetry inputs. By detecting anomalies in these patterns, analysts can identify potential threats – from malicious insiders to external bad actors – and take rapid, automated action in real time. Behavioral analytics lets organizations separate truly malicious behavior from simple mistakes or lapses, and tailor the security response accordingly. The aim is to replace broad, rigid rules with individualized, adaptive cybersecurity – creating a far better baseline of security, as the CSC called for.
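As a simplified illustration of that baselining idea, the sketch below computes a per-user baseline from recent daily data-transfer volumes and flags days that deviate sharply from it; the numbers and the single signal used here are hypothetical, and real platforms weigh many more telemetry inputs.

```python
# Simplified behavioral-analytics sketch: build a per-user baseline from recent
# activity and flag anomalous days. Real platforms combine many signals (logins,
# devices, destinations); here we use only daily megabytes transferred.

import statistics

def baseline(history_mb):
    """Return (mean, standard deviation) of a user's recent daily transfer volume."""
    return statistics.mean(history_mb), statistics.pstdev(history_mb)

def is_anomalous(today_mb, history_mb, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above the user's norm."""
    mean, stdev = baseline(history_mb)
    if stdev == 0:
        return today_mb > mean  # no variation in history: any increase is notable
    return (today_mb - mean) / stdev > threshold

if __name__ == "__main__":
    normal_days = [120, 95, 140, 110, 130, 105, 125]  # hypothetical MB/day
    print(is_anomalous(135, normal_days))    # False: within the user's normal range
    print(is_anomalous(2400, normal_days))   # True: a sudden bulk transfer stands out
```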

The Bottom Line

Understanding how people interact with data is key to our nation’s security and should be a part of the push to put the CSC’s recommendations into action. The commission also emphasized collaboration with the private sector, mostly suggesting its resources and capabilities could help private sector actors stay safe. The collaboration should flow in the other direction as well. Capabilities coming from the private sector need to be incorporated into the public sector, especially in the wake of the pandemic.

The Federal government cannot simply keep investing in legacy tech. Instead, it needs to throw its weight behind innovative approaches – like behavior-centric security – that will move agencies closer to the CSC’s vision. With people as the new perimeter, a more targeted and adaptive cyber defense must be the new baseline.

Understanding COVID-19 Through High-Performance Computing

COVID-19 has changed daily life as we know it. States are beginning to reopen, despite many case counts continuing to trend upward, and even the most informed seem to have more questions than answers. Many of the answers we do have, though, are the result of models and simulations run on high-performance computing systems. While we can process and analyze all of this data on today’s supercomputers, it will take an exascale machine to process it quickly enough to enable true artificial intelligence (AI).

Modeling complex scenarios, from drug docking to genetic sequencing, requires scaling compute capabilities out instead of up – a method that’s more efficient and cost effective. That method, known as high-performance computing, is the workhorse driving our understanding of COVID-19 today.

High-performance computing is helping universities and government work together to crunch a vast amount of data in a short amount of time – and that data is crucial to both understanding and curbing the current crisis. Let’s take a closer look.

Genomics: While researchers have traced the origins of the novel coronavirus to a seafood market in Wuhan, China, the outbreak in New York specifically appears to have European roots. It also fueled outbreaks across the country, including those in Louisiana, Arizona, and even California. These links have been determined by sequencing the genome of SARS-CoV-2 in order to track mutations, as seen on the Nextstrain website and reported in The New York Times. Thus far, the virus has accumulated an average of two new mutations per month.

Understanding how the virus has mutated is a prerequisite for developing a successful vaccine. However, such research demands tremendous compute power. The average genomics file is hundreds of gigabytes in size, meaning computations require access to a high-performance parallel file system such as Lustre or BeeGFS. Running multiple genomes on each node maximizes throughput.
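As an illustrative sketch of the “multiple genomes per node” point, the example below fans a batch of genome files out across a node’s cores with Python’s multiprocessing; the file paths and the per-genome analysis step are hypothetical stand-ins for a real pipeline reading from a parallel file system.

```python
# Sketch of maximizing per-node throughput: process several genome files in
# parallel on one node. The analysis step is a placeholder for a real pipeline
# (alignment, variant calling, etc.) reading from a parallel file system such
# as Lustre or BeeGFS.

from multiprocessing import Pool
import hashlib
import os

def analyze_genome(path):
    """Placeholder per-genome analysis: stream the file and return a checksum."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(64 * 1024 * 1024), b""):
            digest.update(chunk)
    return path, digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical genome files staged on the shared file system.
    genomes = [f"/lustre/project/samples/genome_{i:04d}.fastq" for i in range(32)]
    workers = os.cpu_count() or 4
    with Pool(processes=workers) as pool:
        for path, checksum in pool.imap_unordered(analyze_genome, genomes):
            print(f"{path}: {checksum[:12]}")
```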

Molecular dynamics: Thus far, researchers have found 69 promising sites on the proteins around the coronavirus that could be drug targets. The Frontera supercomputer is also working to complete an all-atom model of the virus’s exterior component—encompassing approximately 200 million atoms—which will allow for simulations around effective treatment.

Additionally, some scientists are constructing 3D models of coronavirus proteins in an attempt to identify places on the surface that might be affected by drugs. So far, the spike protein seems to be the main target for antibodies that could provide immunity. Researchers use molecular docking, which is underpinned by high-performance computing, to predict interactions between proteins and other molecules.

To model a protein, a cryo-electron microscope must take hundreds of thousands of molecular images. Without high-performance computing, turning those images into a model and simulating drug interactions would take years. By spreading the problem out across nodes, though, it can be done quickly. The Summit supercomputer, which can complete 200,000 trillion calculations per second, has already screened 8,000 chemical compounds to see how they might attach to the spike protein, identifying 77 that might effectively fight the virus.
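Screening a compound library this way is “embarrassingly parallel”: each candidate can be scored independently, so the work spreads cleanly across cores and nodes. Below is a minimal sketch with a made-up scoring function standing in for a real docking code.

```python
# Sketch of embarrassingly parallel compound screening: each candidate compound
# is scored independently, so the work scales out across cores (and, on a real
# cluster, across nodes via a scheduler or MPI). The scoring function is a
# made-up stand-in for a real docking code.

from multiprocessing import Pool
import random

def dock_score(compound_id):
    """Placeholder docking score; lower pretends to mean stronger binding."""
    rng = random.Random(compound_id)        # deterministic per compound
    return compound_id, rng.uniform(-12.0, 0.0)

if __name__ == "__main__":
    compounds = range(8000)                 # e.g., an 8,000-compound library
    with Pool() as pool:
        scores = pool.map(dock_score, compounds)
    # Keep the most promising candidates for follow-up simulation.
    top_hits = sorted(scores, key=lambda item: item[1])[:77]
    print(f"Best score: {top_hits[0][1]:.2f} (compound {top_hits[0][0]})")
```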

Other applications: The potential for high-performance computing and AI to simulate the effects of COVID-19 extends far beyond the genetic or molecular level. Already, neural networks are being trained to identify signs of the virus in chest X-rays, for instance. When large-scale AI and high-performance computing are done on the same system, you can feed those massive amounts of data back into the AI algorithm to make it smarter.

The possibilities are nearly endless. We could model the fluid dynamics of a forcefully exhaled group of particles, looking at their size, volume, speed, and spread. We could model how the virus may spread through ventilation systems and air ducts, particularly in assisted living facilities and nursing homes with extremely vulnerable populations. We could simulate the supply chain of a particular product, and its impact when a particular supplier is removed from the equation, or the spread of the virus based on different levels of social distancing.

The bottom line: The current crisis is wildly complex and rapidly evolving. Getting a grasp on the situation requires the ability to not just collect a tremendous amount of data on the novel coronavirus, but to run a variety of models and simulations around it. That can only happen with sophisticated, distributed compute capabilities. Research problems must be broken into grids and spread out across hundreds of nodes that can talk to one another in order to be solved as rapidly as is currently required.
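As a hedged illustration of breaking a problem into grids across nodes that talk to one another, here is a minimal mpi4py sketch of a one-dimensional domain decomposition with a halo exchange between neighboring ranks; it assumes mpi4py and an MPI runtime are installed and would be launched with mpirun.

```python
# Minimal domain-decomposition sketch with mpi4py: each rank owns a slice of a
# 1-D grid and exchanges "halo" boundary values with its neighbors each step,
# the basic communication pattern behind many epidemic, fluid, and molecular
# simulations. Run with, e.g.: mpirun -n 4 python halo_demo.py

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local_n = 1000                                # grid points owned by this rank
grid = np.full(local_n + 2, float(rank))      # +2 ghost cells at the ends

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange boundary values with neighbors: send the right edge while receiving
# into the left ghost cell, and vice versa.
comm.Sendrecv(sendbuf=grid[-2:-1], dest=right, recvbuf=grid[0:1], source=left)
comm.Sendrecv(sendbuf=grid[1:2], dest=left, recvbuf=grid[-1:], source=right)

# A toy update that needs the neighbors' values: simple three-point averaging.
grid[1:-1] = (grid[:-2] + grid[1:-1] + grid[2:]) / 3.0
print(f"rank {rank}: interior mean after one step = {grid[1:-1].mean():.3f}")
```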

High-performance computing is what’s under the hood of current coronavirus research, from complex maps of its mutations and travel to the identification of possible drug therapies and vaccines. As it powers even faster calculations and feeds data to even more AI, our understanding of the novel coronavirus should continue to evolve—in turn improving our ability to fight it.