The President’s Management Agenda – Biden Going Big Again?
The Biden administration – less than four months after taking office – has thus far developed a reputation for going “big” with its policy and spending aims.
What unites some of those big efforts? How about the word “trillions” – for the American Rescue Plan (approved at $1.9 trillion), the American Jobs Plan (proposed at $2.3 trillion), the American Families Plan (offered at $1.8 trillion), a preliminary Fiscal Year 2022 budget (dangled at $1.5 trillion), and who knows how much in taxes required to pay for them.
At this point, “small” is not the new administration’s watchword.
But very soon – within weeks, I suspect – we will be learning much more about the nitty-gritty details underlying the administration’s big-picture visions, details that will paint a much clearer picture of what the government hopes to accomplish over the next four years.
Here Comes the PMA
As I noted in my last column, we still await President Biden’s full FY 2022 budget proposal. The so-called “skinny budget” released last month outlines his plans for the discretionary part of next year’s budget. But it doesn’t include the almost two-thirds of the budget dedicated to mandatory programs, nor does it feature revenue forecasts or other accompanying documents, like the Analytical Perspectives volume.
It is in this latter document that one would normally find a crucial chapter on the President’s Management Agenda (PMA). Why is the Biden PMA important?
A number of years ago in a special forum I assembled for “The Public Manager” journal, Prof. Donald Kettl, then of the University of Maryland, spoke to the very heart of that question:
“No self-respecting president can enter office without a management plan,” he said. “Not that ordinary Americans expect it; most know little and care less about who delivers their public services and how. A management plan, however, conveys important signals to key players. The federal executive branch’s 2.6 million employees look for clues about where their new boss will take them. Private consultants tune their radar in search of new opportunities. Most important, those who follow the broad strategies of government management seek to divine how the new president will approach the job of chief executive, where priorities will lie, and what tactics the president will follow in pursuing them. Management matters; with each new administration, the fresh question is how.”
PMA Predictions
I’d like to offer a President’s Management Agenda framework – PMA 46 – for the Biden-Harris Administration. I hope his new management team at the Office of Management and Budget (OMB), the Office of Personnel Management (OPM), the General Services Administration (GSA), and Federal agency chief operating officers will find it useful.
It draws from President Biden’s speeches, policy papers, and platform as well as the testimony of key advisors during recent confirmation hearings. Also, if one views the budget in part as the “pricing out” of priorities, it is possible to “backward map” from even the “skinny budget” to major elements of a PMA.
Here are some of the big tenets:
Reforms
The private sector knows that reforms take years, but for a long time the public sector has had trouble grasping that lesson.
Historically, a new President would sweep away whatever his predecessor had done and develop an entirely new package. That’s what George W. Bush did with Vice President Al Gore’s reinventing government campaign. It was also what President Bill Clinton did with President George H. W. Bush’s total quality management initiative.
But in recent years, incoming leaders seem to have recognized that in Federal management reform, there is truly nothing new under the sun, and many old promises merit a second chance.
So for the Biden PMA, expect a “round up of the usual subjects” – acquisition reform (likely with more agility), performance measurement, financial management, shared services, customer satisfaction, and citizen services. And that is to be commended, as it creates the “steadiness in administration” that Alexander Hamilton described as essential to a government well-executed (see Professor Paul Light’s excellent volume “A Government Ill Executed,” 2008).
Likely Picks
In the April 9, 2021 letter transmitting the President’s request for FY22 discretionary funding, OMB Acting Director Shalanda Young does note several “management” issues in the summary section.
No big surprises there, but all can be expected to reappear in some form in the PMA: “Made in America”; “green” initiatives such as clean energy technologies; opportunities for small and minority businesses; civil rights and diversity; and bolstering Federal cybersecurity.
Two others are noted as well, but the associated funding requests are so significant and widespread across multiple departments and agencies that they deserve special mention.
Innovation
The first is innovation, to include key emerging technologies like quantum computing and artificial intelligence, as well as supporting research and development.
On the innovation front, the budget requests additional funding for existing programs or the establishment of new programs at the National Institutes of Health, the Departments of Energy, Commerce, and Defense, the National Oceanic and Atmospheric Administration, the National Institute of Standards and Technology, the National Telecommunications and Information Administration, NASA, and other departments and agencies.
While these investments focus on competitiveness and economic growth, they also reflect a restoration of faith in the Federal government’s ability to tackle difficult problems.
Technology Modernization
The second is technology modernization. Again, as under President Trump, information technology is not viewed as a standalone management pillar. Rather it is viewed as a force multiplier and enabler for other key priorities such as enhanced citizen services, data analytics, and so on.
The discretionary request supports agencies as they modernize, strengthen, and secure antiquated information systems, both through additional funding for the Technology Modernization Fund and through a $750 million reserve for agency IT enhancements. But it also includes specific modernization efforts at Veterans Affairs, the Internal Revenue Service, and the Social Security Administration.
Human Capital
The human capital component – to include hiring reform, the role of Federal unions, pay and benefits, performance appraisal, integrity of the civil service, and so on – clearly will be a major element of the Biden PMA.
In this area, perhaps more than in any other, the emphasis will be on undoing a number of actions taken by the Trump Administration, as well as formulating a Biden-Harris human resources agenda.
21st Century Vision
Finally – and by no means do I mean to minimize their import by employing a single summary bullet – I would expect to see steps taken to advance a vision of a 21st century government that is focused on improving outcomes using data and evidence, re-establishing trust, re-imagining service delivery, evaluating programs, and recruiting and retaining new talent with technical skills in critical and emerging technology areas.
We should be able to see how accurate I am in predicting the contents of PMA 46 in less than a month or so.
MeriTalk Insight: TMF Dreaming – and How to Fund IT Fixes Right Now
A billion of anything isn’t quite what it used to be, but it’s still a lot. And when that billion is dollars for the Technology Modernization Fund (TMF) – a great vehicle that has been underutilized because of low funding levels and strict repayment rules – it may yet end up being a real difference-maker across many government agencies looking at IT modernization.
But there’s a ways to go before that new $1 billion of modernization funding becomes available – let alone attractive – to agencies. Also worth keeping in mind: there are 20 or so big Federal agencies, and a few dozen smaller ones, that may be competing for that source of money. So while the new money is great, the math implies that even the enlarged TMF might not turn out to be a winning lottery ticket for any one organization.
With that in mind, let the Old Budget Guy talk about a couple of things agencies can do right now to start funding IT modernization that don’t rely on winning the TMF sweepstakes – and how government might consider spending not just on IT, but also on more streamlined structures, to pull itself kicking and screaming into the 21st century.
Birth of TMF
First, a little recent history necessary to understand where we are now.
The TMF was established back in 2017 as part of the Modernizing Government Technology (MGT) Act after senior officials at the Office of Management and Budget (OMB), the Government Accountability Office (GAO), and others asked why we were letting our IT infrastructure fall to pieces.
Then-Federal CIO Tony Scott called the government’s reliance on outdated technology a “crisis” to rival the Y2K computer glitch. GAO issued reports and testified about agencies that were (and still are) running tens of millions of lines of long-deprecated software code such as COBOL and assembly languages, and about the aging infrastructure itself – switches, routers, servers, desktops, mainframes, etc.
Research performed by a major infrastructure company found that a substantial portion of the government’s IT hardware had already reached Last Day of Support (LDoS), meaning it was no longer receiving updates, security alerts, or patches. An even greater portion of that infrastructure was projected to reach the same stage in ensuing years.
While the TMF was established to help deal with these problems, initial funding was quite small compared to the problems that need addressing – increased security risks and vulnerability to cyber-attacks; the inability of outmoded systems to support growing demands for greater mobility, collaboration, and analytics; and especially the truly catastrophic blow that a breakdown in crucial technology could be to the business of government.
IT infrastructure – such dull words. But an issue that touches almost everything about how government works – and could work better if given the chance.
The New Billion
Then came President Biden’s American Rescue Plan, and its allocation of $1 billion to the TMF, which in the past three years had received annual appropriations averaging $25 million per year. On top of that, in the so-called “skinny budget” outlining the administration’s FY 2022 funding proposal, the President asked for another $500 million for the fund.
OMB and the General Services Administration (GSA) have, over the last three years or so, developed and matured their processes for TMF business case development, acquisition and oversight. But in that span with limited funding, the TMF has only overseen 11 projects totaling about $125 million.
So the new funding – with possibly more on the way – is a wonderful “problem” to face.
And the temptation – already finding its voice in Congress – is to reach for the “EASY button” and relax or eliminate TMF’s self-sustaining model of requiring agencies awarded funding to modernize their IT infrastructure and pay back the fund within five years.
Making Working Capital Work
While we’re waiting for the TMF funding to shake out, the Old Budget Guy will explain another way for agencies to get money to start skinning that modernization cat.
The legislation that initially created the TMF offered agencies another option to begin to change the imbalance in funding dedicated to IT Operations and Maintenance (O&M), so that more could be invested in IT Development, Modernization and Enhancement (DME).
Crucially, the law authorized the establishment of IT Working Capital Funds (WCF) so that agencies could make more intelligent decisions with their money, and not indulge in end-of-year, use-it-or-lose-it spending splurges. Discouragingly, data from budget and acquisition analysts shows that nothing has changed in final-quarter and final-month buying binges.
According to the data, many agencies obligate up to 40 percent of their contract dollars in July, August, and September – the last quarter of the fiscal year. And the data show that in the final WEEK of the fiscal year, between $1 billion and $2 billion is spent on IT purchases each DAY.
It may seem hard to believe, but only one agency (and a smaller one at that), the Small Business Administration, has taken advantage of the working capital funds authority granted by this law. Before any agency steps in line now to take advantage of the new TMF buffet, I would have them commit to (1) establish their own IT WCF and (2) outline the steps they would take in their own annual budget formulation process to steer investments from O&M to DME.
Reimbursement Outlook
The text of the American Rescue Plan, signed into law on March 11, doesn’t include a reimbursement requirement for the new TMF funding.
Industry representatives, former government officials, and even some current OMB executives have spoken – mostly off the record – about how the existing reimbursement model discourages agencies from participating in the fund, arguing that “not all projects have easily quantifiable results.” In this case, “easily quantifiable results” means claimed savings that are real and can be used to pay back the TMF loan.
Having served in government for years, I know some things are hard to do. But if all those claimed benefits – improved efficiency and effectiveness, cost avoidances, funds put to better use, increased citizen satisfaction, and so on – were real, we likely wouldn’t be facing a massive deficit and a new low in trust in government.
Measuring success, finding performance metrics for proposals that don’t have easily quantifiable outcomes, demonstrating results from major investments – all those things are hard. They are likely as hard or even harder than setting up a new WCF, redefining an existing one, or curbing late-September buying sprees.
But we should do the right things, in spite of the fact they may be hard. Moreover, those investments that can’t clearly demonstrate results and benefits will only draw political fire that could undermine the very worthy TMF, and be subject to congressional rescission after the 2022 midterm election.
Spending Ideas
Plenty of ideas have bubbled up on where new TMF money could be invested. Examples include citizen services, remote work, cybersecurity, cloud adoption, COVID-19, racial inequity, climate change, and other areas that are already the focus of the American Rescue Plan, the FY22 budget request, and the administration’s recent infrastructure/jobs proposal.
Rather than double down on existing Biden-Harris priorities, I would propose two alternatives.
The first is streamlining government. Both Presidents Obama and Trump proposed reorganizations that would lead to a 21st century government by using 1950s or 1960s administrative methods (e.g., merging the Departments of Education and Labor, relocating offices and personnel from one area to another, creating a new statistical agency by uniting bureaus drawn from multiple existing departments, etc.). Why not invest in virtual restructurings where integration, collaboration, and citizen services could occur in a real 21st century manner?
The second is shared services, an initiative that initially surfaced as a part of President Reagan’s Reform ’88 program and has been endorsed in almost every President’s Management Agenda since then. It is the poster child for the government adage that “when all is said and done, more is said than is done.” Departments have cited the lack of funding as the reason they couldn’t take action. Let’s take away that excuse.
I was an early supporter of the TMF and remain supportive now. Robust funding provides the opportunity to rethink how we fix the government’s aging IT infrastructure. That is at the heart of the problem now. IT infrastructure policy has become a matter of lurching from crisis to crisis, solving problems after the fact rather than preventing them from happening. It’s time to stop being short-term fix addicts, and start really taking the longer view.
MeriTalk Insight: Biden Budget Offers Early Blueprint … With CR in the Forecast
President Biden on April 9 released a massive $1.52 trillion fiscal year 2022 spending plan that reflects his vision of an expanded – and expansive – Federal government that boosts spending for domestic programs and addresses issues such as education, affordable housing, public health, racial inequality, and climate change, among many others.
In the big picture, non-defense spending would rise next year to $769.4 billion, an increase of nearly 16 percent, while spending on defense would increase 1.7 percent to $753 billion. The latter is considerably less than Republicans have called for, but is more than would have been desired by progressive Democrats who pushed for a flat Pentagon budget, or even spending cuts.
Skinny on Details
The preliminary plan has been dubbed a “skinny budget” for a couple of reasons. While it indicates this administration’s priorities, it doesn’t do so in detail, and it covers only the discretionary portion of total Federal spending.
It will be followed later this spring (timing unclear at this point) by a full budget proposal that includes mandatory spending programs such as Social Security and Medicare, as well as interest on the national debt. These programs comprise roughly two-thirds of total government expenditures. Missing too are all the appendices that would detail tax increases and their impact over the next decade on deficits, debt, and the nation’s economy.
Also missing is the Analytical Perspectives volume which is where one would normally find a Biden-Harris Presidential Management Agenda (PMA) chapter. The elements of a PMA are hinted at – more about that in a future column. But for those in the “good government” community, as well as government contractors who seek to see what may stay or go in areas like human resources, information technology (IT), financial management, acquisition, and the like, the suspense only grows.
MeriTalk and others have produced detailed coverage of the IT, innovation, and cybersecurity elements of the President’s skinny budget request.
Also worth noting are proposed increases for science, technology, and research at the National Institute of Standards and Technology, the National Institutes of Health, supply chain security for IT and 5G, the National Science Foundation, rural broadband at USDA, the National Telecommunications and Information Administration at Commerce, NASA, and SBA’s Small Business Innovation Research and Technology Transfer programs. Thus, there is joy and celebration in the Federal IT and professional services community! But let’s keep the champagne on ice for a little bit longer.
Curb Your Enthusiasm
I will leave it to the national media political pundits to debate “big picture” concerns that have already surfaced with the budget proposal, but some are plain to see.
Count among those the immediate, starkly negative reaction from GOP legislators; bipartisan concern about flat Defense spending amid growing worry about Russia and China; Republicans’ reincarnated alarm over the Federal deficit; progressives’ push for even more investment in civilian agencies and deeper cuts at DoD; and the very thin margins Democrats hold in both the House and Senate. And on the process front, exactly how many times will Congress be allowed to go to the “reconciliation well” and avoid needing 60 votes for passage?
Here are some of my concerns as OBG – the Old Budget Guy (aka while in government “The Abominable No Man”).
Time Keeps on Slipping
The major concern I have immediately is that we are way behind in the long path to a Federal budget. While budget laws contain formal deadlines, the process follows a loose calendar where slippage and overlap frequently occur. This year, the slippage is greater than usual.
A President normally submits his/her budget proposal to Congress in February. In March and April, Congress hashes out a budget resolution that provides specific guidance to both authorization and appropriations committees so they can spend the summer months holding hearings, reviewing the details of the administration’s requests, and crafting separate bills to fund the government for the next fiscal year that begins October 1.
Of course, the last time anything worked like this was in the late 1990s, when Bill Clinton was President. That was the last time Congress passed, and the President signed, appropriations bills funding the entire government before the new fiscal year began – without a full or partial continuing resolution (CR).
Any new President is late in submitting a first budget. But President Biden is even later, owing to the dispute over the election results, the GSA Administrator’s delay in issuing the letter of ascertainment giving the presidential transition team access to resources and information, and the former OMB Director’s refusal to help the Biden-Harris team get a start on developing an FY 2022 budget. Where is that Trump-Pence FY 2022 budget document anyway? That should be a real collectors’ item!
Bottom Line – Hello CR
So what are the chances that we start the new fiscal year in October with an approved budget? To quote my budget predecessor at Commerce: “Slim and none. And Slim just left town.” My advice: start planning today for a CR later this year.
And looking forward to President Biden’s FY 2023 budget request, which will come as we approach the 2022 congressional elections, what might be the budget atmosphere? I’ll say now, quoting again, “Worse than last year, but not as bad as next year.”
How a Key DoD Agency is Protecting Digital Identities
Digital identities are becoming increasingly important elements of today’s connected infrastructure across the public sector. With the growth in remote work over the past year, protecting their integrity is key to securing critical IT systems and confidential government information.
But as the recent SolarWinds breach demonstrated, compromised identities and the manipulation of privileged access offer a pathway for cybercriminals to gain access to infrastructure and data, with wide-ranging and serious consequences.
With the SolarWinds incident widely described as a “watershed” for cybersecurity threats to the United States, it’s clear that many existing approaches to digital identity security are severely lacking. Indeed, Microsoft described the events of last December as a “moment of reckoning” requiring a “strong and global cybersecurity response.”
As a result, attention is now firmly focused on how government organizations can more effectively deliver secure and reliable Identity and Access Management (IAM). However, as the public sector accelerates efforts to digitally transform both internal and external infrastructure, services and access, digital identities are exposed to even further risk.
But what is IAM and why is it important? IAM is the discipline that enables the right individuals or non-human entities (machine identities) to access the right resources at the right times for the right reasons. In doing so, it addresses the mission-critical need to ensure appropriate access to resources across increasingly heterogeneous technology environments and meet increasingly rigorous compliance requirements.
With these key issues in play, momentum is already gathering in Washington for major legal and regulatory change to better protect government organizations and constituents alike. If passed, for example, the Improving Digital Identity Act of 2020 would direct the National Institute of Standards and Technology (NIST) to create new standards for digital identity verification services across government agencies.
While a proactive approach from government – and from those responsible for designing and policing standards – is key to a more secure future for digital identities, what’s also required are more rigorous, multi-layered cybersecurity strategies that don’t rely on a single solution for protection.
Specifically, as traditional network perimeters dissolve across government departments and beyond, the old model of “trust but verify” – which relies on well-defined boundaries – must be discarded. Instead, the default approach must focus on zero trust, or in other words, the “never trust, always verify, enforce least privilege” view of privileged access, from inside or outside the network.
In doing so, Privileged Access Management (PAM), a key component of IAM, can secure networks and prevent the kinds of identity-based cyber-attacks we read about so much in the headlines. Forrester Research estimates that 80 percent of data breaches involve privileged credential abuse. If 2021 is eventually seen as the watershed moment for public sector cybersecurity in general and the protection of digital identities in particular, organizations should grant least privilege access based on verifying who is requesting access, the context of the request, and the risk of the access environment.
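As a rough illustration of that decision logic, here is a minimal sketch – with invented names and thresholds, not any vendor’s actual API – of an access check that weighs who is requesting access, the context of the request, and the risk of the environment:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # e.g., MFA or PKI credential checked
    requested_role: str       # privilege being requested
    granted_roles: set        # roles this identity is entitled to
    network_zone: str         # "internal", "vpn", "unknown"
    risk_score: float         # 0.0 (low) to 1.0 (high), from analytics

def decide(req: AccessRequest) -> str:
    """Zero trust decision: never trust, always verify, enforce least privilege."""
    # Verify the identity on every request -- no implicit trust.
    if not req.identity_verified:
        return "deny"
    # Enforce least privilege: only roles the identity actually holds.
    if req.requested_role not in req.granted_roles:
        return "deny"
    # Weigh environmental risk; step up verification instead of trusting location.
    if req.network_zone == "unknown" or req.risk_score > 0.7:
        return "challenge"   # require additional verification
    return "allow"
```

The point of the pattern is that location alone never confers trust: every request is verified, entitlements are checked against least privilege, and elevated risk triggers additional verification rather than a presumption of safety.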
But how does the application of PAM in government work in practice? The experiences of an agency within the Department of Defense (DoD) offer some interesting insight.
Using Privileged Access Management in DoD
In the late 1990s, DoD adopted the Common Access Card (CAC) – outfitted with a computer chip supporting public key infrastructure (PKI) credentials – as its standard identification credential. In 2005, DoD mandated the use of the CAC for initial user workstation authentication across the entire network, as well as for web-based applications. While the use of a token dramatically increased the security of initial logins, privilege elevation by administrators was still accomplished with plain-text usernames and passwords.
When an agency within DoD audited its process and found that privileged user authentication and privilege elevation were still being done with usernames and passwords – creating privilege sprawl across the department – alarm bells went off. U.S. Cyber Command issued a communications tasking order that identified the issue, described the actions required to address it, gave a deadline for completion, and began the process of implementing a reporting structure to ensure compliance.
One of the most critical requirements was to centralize all the account information associated with authentication. The team performed a survey of the market to identify potential vendors and solutions. After an evaluation of the few solutions that could meet their requirements – which included extensive functional and security testing both in the lab and the infrastructure – Centrify was selected based on functionality, maturity, and existing familiarity with the product.
Prior to its implementation, the agency had dozens of disparate identity repositories as well as local account stores in many systems, and an entirely separate infrastructure designed to support Linux servers. When someone wanted privilege on any one of those systems, a new account, username, and password were created.
Because administrators need access across multiple systems, the result was identity sprawl. Today, the department has made significant upgrades to its entire infrastructure, including an online, automated approach to privilege.
Agency employees are now provisioned into Active Directory once. If they require elevated privileges, they’re provisioned and deprovisioned quickly and easily, with minimal human intervention. While the main driver was security, automating PAM has also produced considerable cost savings: it replaced multiple accounts, usernames, and passwords with a single account and a single authentication methodology. Tasks can now be performed without the old complexity, risk, and waiting time, which has simplified day-to-day operations and made access to the system much more transparent.
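The single-account, time-bound elevation pattern described above can be sketched as follows. This is a hypothetical illustration of the general just-in-time privilege approach – the class and method names are invented, not the actual product’s API:

```python
import time

class PrivilegeBroker:
    """Illustrative just-in-time elevation: one identity, expiring grants."""

    def __init__(self):
        self._grants = {}   # user -> (role, expiry timestamp)

    def elevate(self, user: str, role: str, ttl_seconds: int, now=None) -> None:
        """Grant a role for a limited time; no new account or password is created."""
        now = time.time() if now is None else now
        self._grants[user] = (role, now + ttl_seconds)

    def has_privilege(self, user: str, role: str, now=None) -> bool:
        """Check a grant, automatically deprovisioning it once it expires."""
        now = time.time() if now is None else now
        grant = self._grants.get(user)
        if grant is None:
            return False
        granted_role, expiry = grant
        if now >= expiry:
            # Automatic deprovisioning: expired grants are removed.
            del self._grants[user]
            return False
        return granted_role == role
```

Because every grant carries an expiry, deprovisioning happens automatically rather than depending on someone remembering to revoke access – the property that curbs privilege sprawl.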
To protect the often confidential information housed by government entities and their mission-critical systems, digital identity security must be prioritized. While there have been credential-driven government agency breaches reported in the last year, it is positive to see key agencies within DoD taking action to combat the associated risks through a centralized identity and least privilege approach. Between this example and the NIST standards moving forward, hopefully more and more agencies will follow suit.
Federal IT Can’t Ignore Threats Posed by Disinformation
Disinformation is undoubtedly on more people’s radars – Federal IT pros included – heading into 2021 and beyond. But just because we know more about it doesn’t mean we are better prepared to face the challenge that disinformation is posing.
With a normal cyber attack, the government is often targeted directly. Disinformation is different. Instead of attacking core infrastructure, bad actors or nation states attack the population by attempting to skew their beliefs.
Disinformation, a form of misinformation that is created specifically to manipulate or mislead people, is becoming more prevalent – in part because it’s easy to create and disperse. The tools behind deepfakes and malicious bots have been democratized, creation can now be automated, and disinformation-as-a-service has emerged. The Kremlin-backed Internet Research Agency, often referred to as a “troll farm,” sows disinformation everywhere – as do many nation states and domestic organizations.
From an agency perspective, the threat of disinformation has two key components. Nation states and bad actors are using disinformation to discredit agencies and target government employees. While Federal CIOs cannot tackle this problem alone, they can take some steps to mitigate the risk these threats pose. Let’s take a closer look at each component, and what Federal IT pros can do about it.
Retaining Agency Credibility
Nation states and bad actors can harm an agency without targeting it directly with a cyber attack. They could, for instance, depress the number of coronavirus vaccines administered by the Department of Veterans Affairs by using disinformation to sow distrust about vaccine effectiveness or safety among veterans. While Federal agencies cannot control the media or its message, they can run awareness campaigns to counter the threat of disinformation, while also creating certified FAQs and resource pages for constituents.
DHS’ Cybersecurity and Infrastructure Security Agency (CISA) has already been implementing this strategy, as seen with its disinformation toolkit specific to COVID-19. Along with other approaches, the toolkit highlights the most reliable sources for pandemic-related information. In 2019, CISA also released an evergreen infographic demonstrating how foreign influence campaigns stoke division among Americans.
While Federal CIOs alone cannot regain control of information in the internet age, agencies can consistently remind people that they represent a reputable source – and can be diligent in only driving constituents to other reputable sources. Agencies may even look to more traditional efforts, like marketing, in order to disseminate verified information to their constituents. Although this represents just one step of many needed to curb the threat of disinformation, it will help offset bad actors’ attempts to discredit agencies.
Educating and Protecting Employees
Disinformation can also lead to insider threats. Social media and other sources of inaccurate information can radicalize employees, who may then feel compelled to steal sensitive data or IP. Just as disinformation is now for sale, insider-threat-as-a-service exists as well. While bad actors and nation states formerly attempted to bribe and extort their way to sensitive information, they can now either serve disinformation to existing employees, or ultimately become employees themselves.
To prepare for the former, agencies need to implement more education and disinformation training programs. Just as employees must complete IT and HR training, they should be required to take classes on recognizing disinformation. They should understand the techniques and procedures nation states use to skew public perception, and know how to validate news sources. Employees also need training on how verifiable information is fused with false information to alter narratives and discredit reliable sources. While IT pros may feel such education is beyond their purview, it relates directly to insider threats: by helping employees validate sources, agencies are protecting their data in the long run.
Additionally, in order to combat both types of insider threat, Federal agencies must be adept at continuous monitoring of user behavior. By having a baseline of normal user behavior, agencies will be able to determine if a radicalized employee is attempting to hoard data or access restricted information. The same can be said for someone who joined an agency with malicious intent in mind. There is simply no way to completely eliminate the threat of disinformation and malicious insiders. Thus, Federal IT pros must put behavioral analytics in place so they can quickly identify and respond to potentially dangerous user behavior.
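The baselining approach described above can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming a single activity metric (daily file-access counts); real behavioral analytics platforms model many signals, such as logins, data volumes, and access times:

```python
from statistics import mean, stdev

def build_baseline(history):
    """Baseline of normal behavior: mean and standard deviation
    of a user's daily file-access counts (illustrative metric)."""
    return mean(history), stdev(history)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations
    above the user's own baseline."""
    avg, sd = baseline
    if sd == 0:
        return count > avg
    return (count - avg) / sd > threshold

# Example: a user who normally touches ~40 files a day suddenly touches 400.
history = [38, 42, 40, 37, 45, 41, 39]
baseline = build_baseline(history)
print(is_anomalous(400, baseline))  # far outside the baseline, so flagged
print(is_anomalous(43, baseline))   # within the normal range
```

The key design point is that each user is compared against their own history, so an analyst's heavy but routine data access does not trigger the same alert as a sudden spike from a clerk.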
Sowing Chaos and Confusion
In addition to the two key components previously mentioned, there is another role for the Federal government that may assist in limiting disinformation efforts. While not applicable to most agencies, a concerted effort to discredit foreign influencers may play a role in limiting their effectiveness.
Oftentimes, nation states leverage third parties such as organized crime in their efforts to sow confusion. If the U.S. government were to respond to disinformation campaigns with equal efforts against the parties launching them, it could raise questions about those parties’ reliability and erode nation states’ confidence in using them. In other words, if organized crime is sowing disinformation on behalf of a nation state, then a carefully orchestrated effort to discredit it in the eyes of the sponsoring nation may limit its effectiveness and ability to operate offensively. This would lower the incentive to create these campaigns.
The Bottom Line
The tough reality is that, in an age of social media, there is no silver bullet to combat this real and growing threat. Everyone must be diligent about questioning what they see online, rather than simply taking it at face value and internalizing it as fact. Still, Federal IT pros should be most concerned about disinformation undermining their own credibility – and potentially turning their own employees against them.
Awareness is crucial to combating disinformation on both fronts, but it should be supplemented by behavioral analytics. Federal IT pros should proceed as if disinformation is already impacting both their employees and constituents – because it is. This is an all-hands-on-deck issue, but there are many ways to begin combating the threat of disinformation today.
Cloud-Based Collection of Quality Public Health Data in the Time of COVID-19
The United States was once a leader in the collection and utilization of public health data. As the COVID-19 pandemic wears on, the United States must resume its leadership in this domain.
During the pandemic, numerous lives were lost and still more people became chronically ill with “long-haul COVID” due in part to gaps in U.S. public health data systems. Shortcomings in these data systems directly underlay the need to shut down key sectors of the economy, causing millions of lost jobs and bankruptcies. The widespread shutdowns were necessitated by the lack of data on both community prevalence of the disease and COVID-19 immunity status, which forced officials to treat wide geographic areas of the United States as under threat.
To address these gaps and ensure we are ready to meet future public health challenges, we require a centralized, cloud-based system for use in tracking infectious diseases and chronic conditions. We have named this system the Nationwide Reportable Conditions Data System (NRCDS).
At the outset, the NRCDS would be used for reporting data on the 121 “reportable” diseases and conditions such as COVID-19, influenza, mumps, and cancer. Currently, testing entities (such as Quest) and providers (such as doctors’ offices) are required to report data on these conditions to the nation’s 2,300 state and local public health agencies within 24 hours.
The myriad, disparate reporting locations create a tremendous burden for reporting entities that are simply trying to comply with a Federal reporting mandate. Additionally, these thousands of public health agencies largely host their reportable conditions data on-premises in legacy systems. Starting with the Modernizing Government Technology Act of 2017 (MGT), the Federal government has sought to drive government agencies away from such systems toward more efficient cloud-based operations.
The single, unified, cloud-based NRCDS would align with Federal initiatives like MGT and the Cloud Smart Strategy. Initially, NRCDS data would consist of the 31 data elements mandated by the CARES Act Section 18115 reportable-condition stipulations for COVID-19. However, the Centers for Disease Control and Prevention (CDC) has expressed interest in expanding such a platform in the future to include the other 120 reportable conditions as well as further data streams, including immunizations, Admission-Discharge-Transfer (ADT) events, electronic Case Reporting (eCR), and Electronic Health Record (EHR) data.
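As a rough sketch of what a standardized reportable-condition record might look like, the hypothetical Python structure below captures a handful of illustrative fields. The field names are assumptions for illustration only and do not reproduce the actual 31 CARES Act data elements:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ReportableConditionRecord:
    """Illustrative subset of a standardized reportable-condition report.
    Field names are hypothetical, not the actual CARES Act elements."""
    patient_id: str        # linkage key; removable for de-identified sharing
    condition: str         # e.g., "COVID-19"
    test_result: str       # "positive" / "negative"
    specimen_date: date    # when the specimen was collected
    reporting_entity: str  # lab or provider submitting the report
    jurisdiction: str      # state/local health agency of record

record = ReportableConditionRecord(
    patient_id="anon-0001",
    condition="COVID-19",
    test_result="positive",
    specimen_date=date(2021, 5, 1),
    reporting_entity="Example Lab",
    jurisdiction="VA",
)
print(asdict(record)["condition"])  # prints COVID-19
```

The benefit of a single schema like this is that a lab reports once, in one format, instead of translating the same result for each of 2,300 state and local systems.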
Expanding the NRCDS in the ways proposed by the CDC would enable the creation of a rich longitudinal record of infectious disease diagnoses, immunizations, EHR data about chronic health conditions, and healthcare utilization in the United States – COVID-19-related and otherwise. This data system would have endless crucial uses. For example, the monitoring of COVID-19 could take place at all geographical scales–a capability lacking during the current pandemic – closing an important information gap. Furthermore, for public health case management and education purposes, it could be used to monitor and assist individuals who were vulnerable to a given infectious disease.
We propose a second, cloud-based system that is ensconced within the NRCDS – which we have named the COVID Repository – to track functional immunity status to COVID-19. The COVID Repository would contain information on U.S. residents’ COVID-19 and antibody test results as well as vaccinations. Additionally, it would include viral strain (variant) information to the extent available. The length of time that immunity to COVID-19 lasts is still being studied, and it may only last several months. Once this time window is known, because the COVID Repository is a longitudinal record, it would allow the determination of the approximate “expiration date” of immunity after an infection or vaccine. Figure 1 presents the NRCDS and the associated master COVID Repository, in identifiable and de-identified forms, that would be developed from it. A government agency or contractor permitted to handle personally identifiable health information would maintain the database. Information would be shared with the Federal Aviation Administration, the Federal Emergency Management Agency, the Department of Defense, intelligence agencies, and several other government stakeholders.

Ideally, the COVID Repository data would be combined with the other longitudinal health data in the NRCDS so the long-term effects of COVID-19 and its effect on other chronic conditions could be established. The data set would assist U.S. agencies with managing the long-haul COVID-19 caseload, allowing insights that could reduce the burden for U.S. health care and social systems. The data would additionally be maintained and made available in de-identified form to research agencies, nongovernmental organizations, and health care companies, making it an invaluable resource for research.
The NRCDS and COVID Repository would allow for much more accurate public health monitoring. They would also be much more efficient than the current legacy systems and save money. As recently as 2017, many government agencies were spending over 75 percent of their budgets on maintaining legacy systems that were becoming siloed as they failed to integrate with newer technologies. Finally, in an era when the nation’s COVID-19 and other health data are among the top targets of international cybertheft efforts, the cloud-based systems’ increased security would safeguard U.S. residents’ personal health information.
David Dastvar serves as chief growth officer with Eagle Technologies. In his 29 years with public sector and Fortune 1000 companies (including GDIT/CSC, Infosys, CDI, Maximus/Attain, and Northrop Grumman), he has developed and managed professional services and solutions for enterprise-level projects requiring a high degree of program management and technical expertise.
Linda Hermer, Ph.D., leads the Research Team at Eagle Technologies. Dr. Hermer earned her undergraduate degree at Harvard University in Neurobiology and Linguistics and her doctoral degree from Cornell University in Psychology. She was an accomplished neuroscientist and cognitive psychologist before dedicating the second half of her career to improving public health 10 years ago. Since then, she has worked to modernize public health and social science research at universities, nonprofit organizations, and for-profit firms.
State Unemployment Fraud: Tip of the Iceberg?
FITARA 11.0 Results Show Need for Real-Time Data to Boost Cyber Scores
While the latest Federal Information Technology Acquisition Reform Act (FITARA) scorecard shows all agencies have passing total scores, not one agency’s Cyber score changed from the FITARA 10.0 scorecard issued earlier in 2020.
The Cyber category consists of criteria from the Federal Information Security Modernization Act (FISMA) – and while FISMA measures compliance and considers data points such as number of incidents, it does not provide insight into how these actions unify to reduce risk.
Basic cyber hygiene is the root of many security compliance requirements, and while adhering to those requirements as well as other best practice frameworks can help reduce risk, compliance isn’t enough. Agency cyber defenders also need reliable, real-time data for a comprehensive view of the entire environment so they can identify, assess, focus on, and remediate risks.
The best decisions are made with good, high-fidelity data. So, how can agencies work to manage potential cyber risks and strengthen their posture?
Scoring the FITARA Cyber Category
The Cyber score has two components: the grade the agency’s inspector general gives the agency’s posture against cyber maturity model criteria, and progress on Cross-Agency Priority (CAP) goals to modernize IT for better productivity and security – covering asset security, personnel access, network and data protection, and cloud email adoption.
The cyber maturity model has evolved over the past several years to address inconsistencies between how inspectors general evaluate agency security and how agencies evaluate themselves under FISMA – aligning more closely with the five key pillars of the NIST framework. Agencies need to know where they stand on maturity for each pillar, and establish a timeframe and a plan to get to the next maturity level.
More updates to FISMA may happen soon. A recently proposed bill, titled the “Federal System Incident Response Act,” would update FISMA criteria, “increasing transparency by clarifying how and when agencies must notify impacted individuals and Congress when data breaches occur.”
Strengthening Agency Cyber Posture
Agency IT teams can strengthen their cyber posture and improve FITARA cyber scores by characterizing risks by the severity of a vulnerability, its age, and the value of the data/system exposed to the threat. This approach is the essential methodology used by CISA’s Agency-Wide Adaptive Risk Enumeration (AWARE) risk scoring algorithm and illustrates the clear difference between measuring risk instead of compliance.
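To illustrate the difference between measuring risk and measuring compliance, here is a toy scoring function in the spirit of AWARE. The weights and factors are assumptions for illustration only, not CISA’s actual algorithm:

```python
def risk_score(severity, age_days, asset_value):
    """Toy risk score: risk grows with vulnerability severity
    (CVSS-like, 0-10), how long the finding has gone unremediated,
    and the value of the exposed asset (1-10).
    Illustrative weights only, not CISA's AWARE algorithm."""
    age_factor = 1 + age_days / 30  # older findings weigh more each month
    return severity * age_factor * asset_value

findings = [
    ("internet-facing web server", risk_score(9.8, 60, 10)),
    ("internal test box",          risk_score(9.8, 60, 2)),
    ("new low-severity finding",   risk_score(3.1, 1, 10)),
]
# The same vulnerability yields very different risk depending on the asset:
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{name}: {score:.0f}")
```

A compliance metric would count all three findings equally; a risk metric like this tells the remediation team to patch the internet-facing server first.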
In addition, IT teams should focus on achieving comprehensive visibility into all systems across the enterprise (end-user, cloud, and data center).
To get the real-time data risk managers need to act on these threats, IT teams should assess the current toolset and replace inefficient, costly legacy tools that don’t do the job with a platform that simplifies operations. For a distributed workforce, optimizing the tools deployed will help them operate in newer cloud and hybrid environments. By doing so, agency leaders will understand the full environment and reduce the accountability gaps created by disconnected point solutions.
Agency CIOs should also consider sharing IT plans. While it’s not required to share plans or progress as they work to improve their cyber maturity levels in conjunction with FISMA, CIOs could submit a plan and share for review within the CIO Council, enabling agencies to learn from one another.
Agency IT teams should test data center efficiency while considering new security applications. Reducing the number of servers in use decreases hardware and software costs, saving dollars that can be re-prioritized. It also allows the opportunity for agencies to leverage a single, ubiquitous, endpoint management platform approach that helps gain end-to-end visibility across end users, servers, and cloud environments – as well as identify assets, protect systems, detect and respond to attacks, and recover at scale. This breaks down the data silos and creates the ability for IT teams to receive good, high-fidelity data in near real time to manage risks.
As agencies work to improve overall cyber posture, the focus must be on improving cyber hygiene and reducing risk. To achieve this, the whole of government must accurately evaluate risk, gain comprehensive visibility into systems, share knowledge across agencies, and improve data center efficiency. At the root, this requires agencies to have reliable, real-time data.
My Cup of IT: Please Join Us at MeriTalk’s Inaugural Ball
Folks:
We’re excited to welcome Joe Biden, the 46th President, to office. It won’t be the largest inauguration crowd in history – but, we hope you’ll join us. We’re hosting a Zoom Biden Inauguration ball from 6:00-7:00 p.m. EST, next Wednesday, January 20th. Crack a beer, raise a glass – and celebrate American democracy.
Cheers,
Steve
PS: Dress code, black tie – hoodie optional
My Cup of IT: Cooking Cyber Simpler?
How SOC Automation Supports Analysts in Securing the Country
The security operations center (SOC) has become the critical hub of Federal agencies’ cyber readiness. SOC analysts keep agencies safely up and running – determining the size and impact of incidents, utilizing threat intelligence, implementing response procedures and collaborating with other staff to address issues.
It’s a big job that can mix both complicated analysis and tedious tasks. That’s why it can be a good fit for security orchestration, automation and response (SOAR) platforms, which can optimize a SOC’s output by automating the mundane tasks analysts regularly perform.
Obstacles to SOC Effectiveness
In a SOC, the process of triaging alarms can stretch into more than a week, especially if the tools used to gather related artifacts and data aren’t integrated. Analysts spend hours on highly repetitive tasks, reviewing and comparing alerts across multiple screens and windows. With terabytes of alerts received per day, analysts can’t keep up.
Most SOC teams aggregate data to create actionable, high-fidelity logs – but those logs still provide only a limited view of an incident’s true impact. Agencies’ siloed, need-to-know policies on information-sharing can further limit SOC analysts’ visibility into the tools generating the vast amounts of threat data. That makes an accurate situational assessment challenging.
Meanwhile, SOC metrics like incidents handled per hour can incentivize the wrong behavior by motivating analysts to focus on false positives or cherry-picking incidents they can close fast. Analysts should be solving actual problems, not processing tickets.
The New Human-Machine Symbiosis
Security orchestration, automation and response (SOAR) platforms can change that dynamic. A SOAR acts as a central hub that connects the many disparate security tools feeding typical alarms. It optimizes the SOC’s output by automating the mundane, tedious processes analysts normally perform – reviewing and assessing threat intelligence data, determining what is actionable, and assigning the information to the right analyst for resolution.
When done manually, those tasks can take more than a week, depending on the complexity of the problem. Meanwhile, the agency remains vulnerable or could even already be under attack. Tightly integrating a SOAR with a threat intelligence platform can reduce the process to hours or even minutes.
While automation can rapidly assess indicators of compromise (IOCs), analysts’ subject matter expertise is vital for reviewing and interpreting the data. SOC analysts can ensure that alarms coming from similar sources are identified so they can avoid wasting effort on what is really the same problem. They must also determine the “blast radius” of an issue, since a single incident can quickly spread once inside the network.
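The deduplication step described above – recognizing that several alarms are really one problem – can be sketched simply. The alarm fields and grouping key below are illustrative assumptions, not any particular SOAR product’s schema:

```python
from collections import defaultdict

def group_alarms(alarms):
    """Cluster alarms that share an indicator of compromise (IOC) and
    affected host, so analysts triage one incident instead of several
    duplicates. Alarm fields are illustrative."""
    groups = defaultdict(list)
    for alarm in alarms:
        key = (alarm["ioc"], alarm["host"])
        groups[key].append(alarm)
    return groups

# Two tools (IDS and EDR) fire on the same IOC and host: one incident.
alarms = [
    {"id": 1, "ioc": "198.51.100.7", "host": "web-01", "source": "IDS"},
    {"id": 2, "ioc": "198.51.100.7", "host": "web-01", "source": "EDR"},
    {"id": 3, "ioc": "203.0.113.9",  "host": "db-02",  "source": "IDS"},
]
groups = group_alarms(alarms)
print(len(groups))  # prints 2: three alarms collapse into two incidents
```

Grouping before assignment also helps with the blast-radius question: all hosts sharing an IOC surface together rather than as unrelated tickets.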
SOAR can perform the analytics instantly, arming analysts with the data they need for preventive and corrective work. That includes the vitally important task of incident root cause analysis, where analysts’ subject matter expertise and skills are perhaps most valuable. Determining how and why an incident occurred is the single best way to ensure it doesn’t happen again.
A Virtuous Talent Circle
Automating processes can also help ensure that junior analysts have the correct insight to make the best determination as quickly as possible and flag issues for more experienced analysts.
Since automation relieves SOC analysts of hours of wearisome and mundane tasks, it gives them time to develop and document processes for the complex work they perform. Automated processes can then guide junior analysts in skills development and growth.
With lower-level tasks reliably managed through automation, senior analysts will have more capacity to improve the SOC: devising complex but repeatable workflows, refining the root cause analysis process, and standardizing responses to ensure consistent outcomes. They’ll also have more bandwidth to share knowledge and coach junior analysts – a win for everyone, freeing time and people for higher-level analysis while automation handles the basics.
Embracing the Opportunity
Automation can drive these many benefits and more. It begins with automating well-defined processes as they exist. There is no need to re-engineer established practices when automation is introduced. SOC leaders can adopt a SOAR platform if none is already in place, and use it to align metrics to desired mission outcomes.
Over time, revising and enhancing processes and knowledge management systems to leverage the benefits of automation will help develop junior engineers while easing the demands on senior team members. That will improve results and retention across the team and lead to a much more successful SOC. Ultimately, that means greater safety and security for the nation.
My Cup of IT: MeriTalk and MeriTocracy?
Is it a noun or a verb? What the heck is a MeriTalk and why were you so dumb to call your publication that? Why not put Fed or Gov in the name – like every title?
Good questions.
So here goes. We gave our publication a different name because we wanted to stand apart from the other titles – to challenge the status quo in relationships, content, and format. To talk about the outcomes of tech, not just the tech itself.
MeriTalk is named for the notion of MerITocracy – it’s a noun – meaning “government or the holding of power by people selected on the basis of their ability”. MeriTalk is about spotlighting how IT can deliver a society where all citizens have equal access – and those citizens rise based on their abilities. You see, tech can be the great emancipator and give everybody a fair shake. It can provide new transparency and accountability – and take aim at corruption. It should not be used to suppress or perpetuate fake news. Let’s put our tech on the right path for every American.
With the new Biden-Harris administration that’s America’s goal. So, MeriTalk on, dude.
The Clock is TICing: Accelerating Innovation With Cloud Security Modernization
As remote work shifts to hybrid work for the long term, Federal agencies need continued (and even stronger) cloud security.
I recently moderated a panel of leading Federal cyber experts from the Department of Veterans Affairs (VA), General Services Administration (GSA), and Department of State to discuss how Trusted Internet Connection 3.0 is helping agencies accelerate cloud modernization. The updated policy is allowing agencies to move from traditional remote virtual private network solutions to a scalable network infrastructure that supports modern technology and enables digital transformation.
TIC 3.0 Driving Modern Security and Innovation
“TIC 3.0 removes barriers for the adoption of new and emerging technologies, and it is a key enabler for IT modernization and digital transformation,” said Royce Allen, Director of Enterprise Security Architecture at VA.
Traditional networks often do not support the technologies needed for today’s modern cloud and hybrid IT environment. Agencies have had to make drastic shifts in technology to connect their data center and cloud providers, increase bandwidth, improve security, and more to drive innovation.
For example, by following the TIC 3.0 guidance, the VA has been able to expand the number of users it can support on the network at one time to enable more productivity, and open the door to innovative data sharing solutions.
Hospital systems that previously supported 150 to 200 simultaneous users are now supporting up to 500,000 with a zero trust architecture and cloud-based desktop application. The zero trust architecture helped the VA transition from a network-centric environment to an application-centric environment. In this use case, microsegmentation allowed VA to utilize any network, anywhere, including the internet, to meet the TIC 3.0 guidelines and provide massive on-demand scalability to meet pandemic demands.
The Department of State piloted TIC 3.0 use cases to improve application performance and user experience, especially as employees share data and connect with embassies overseas.
State was managing employees in more locations, using a greater variety of devices than ever before – and thus facing increased cyber risk. Protections included backhauling all international traffic through domestic MTIPS/TICs. This slowed application performance and hurt the user experience, especially on SaaS applications; O365, for example, became virtually unusable due to this hairpinning. TIC 3.0 enabled the agency to pilot a solution that allowed for local internet breakouts across the country, increasing network mobility while still meeting the rigor of FedRAMP authorization and TIC 3.0 guidelines.
The agency now has full visibility of their servers, can securely direct traffic straight to the cloud, and can allow for more data mobility across embassies around the world, while still storing all sensitive data – i.e. public key infrastructure and telemetry data – in a U.S.-based FedRAMP cloud.
Gerald Caron, Director of Enterprise Network Management, Department of State, noted that TIC 3.0 enabled the agency to focus on risk tolerance. “TIC 3.0 is definitely an enabler to modernization…while still leveraging or maintaining secure data protection,” said Caron.
Pushing for Continued Modernization and Aligning Solutions to TIC 3.0 Guidance
We need to continue to work together to modernize the evolving remote work environment and threat landscape. The next step for TIC 3.0 is to provide additional baseline implementation guidance to agencies, including more information on hybrid cloud guidance, examples of risk profiles and risk tolerance, and the latest use cases.
An important aspect of TIC 3.0 is alignment with other contracts and guidance, including GSA’s Enterprise Infrastructure Solutions. The EIS contract is a comprehensive solution-based vehicle to address all aspects of federal agency IT telecommunications and infrastructure requirements. As the government’s primary vehicle for services including high-speed Internet, government hosting services, and security encryption protocols – it’s critically important that the TIC 3.0 guidance is used to provide the foundation for secure connections across solutions.
GSA recently released draft modifications to add the TIC 3.0 service as a sub security service to EIS. Allen Hill, Acting Deputy Assistant Commissioner for Category Management, Office of Information Technology Category (ITC), Federal Acquisition Service, GSA, said he hopes this collaboration will help agencies mature their zero trust architectures.
“Having the TIC 3.0 guidance allowed us to aggressively push the envelope,” said the VA’s Allen.
The Cybersecurity and Infrastructure Security Agency’s efforts over this past year, as well as TIC’s alignment with EIS, are great examples of what we can accomplish through innovation and strong collaboration. The team demonstrated real leadership, quickly putting the TIC 3.0 Interim Telework Guidance in place to support agencies as they scaled up the remote workforce. This progress is a permanent, positive shift for the Federal government – supporting the move to modernize remote access and enable secure cloud services. We’re still learning – but we’ve taken a giant leap forward.
My Cup of IT – GovTech4Biden
Like many of you, I have read the news every day for the last four years. Every day was like a visit to the proctologist – anger, fear, frustration. And, yes, the A word – anxiety.
So, I decided to put up or shut up – and I founded www.govtech4biden.com in June. I discovered that many of you felt the same way – 150-plus in fact. We embarked on a curious, scary, and fulfilling odyssey. We raised more than $100,000 for the Biden-Harris campaign.
On this journey, we hosted the leading Democratic members of Congress focused on tech. Fittingly, Congressman Gerry Connolly kicked us off, and leading lights on tech and our economy kept the momentum going: Congressman Ro Khanna, Congresswoman Mikie Sherrill, Senator Jacky Rosen, the New Democrat Coalition – and, closing out, Senators Maggie Hassan, Sheldon Whitehouse, and Chris Van Hollen.
If you’d like to hear more about GovTech4Biden – our political and tech odyssey – and thoughts on the tech agenda for the future, please join us for a webinar on Tuesday, November 24th from 1:00-2:00 p.m. ET./10:00-11:00 a.m. PT.
I’d like to salute the brave folks that banded together to support the Biden-Harris campaign – and provide a voice for the government technology community in the new administration. That took courage – here’s the tribute movie. We look forward to working with the new administration to champion innovation in government and across America.
To those that sent in unkind emails – I’m trying to understand you. Also happy if you’d like to resubscribe to MeriTalk – just shoot me an email.
We look forward to the opportunity to build back better together – and new tech for government is critical to that success.
Open Source Gives Agencies Long-Term Cloud Flexibility That Powers Cloud-based Telework
After a decade-long initiative to expand telework, the COVID-19 pandemic has shifted the federal government’s workforce to cloud-based telework, practically overnight. While improving workforce flexibility seems like the obvious benefit, federal agencies can also take this opportunity to leverage the extensive ecosystem of open source partners and applications to boost their multicloud modernization efforts.
Agencies that work with the global open source development community are able to accelerate service delivery and overcome many of the common barriers to cloud modernization.
“Within the open source community, there remains a strong focus in helping enterprises adapt to cloud computing and improve mission delivery, productivity and security,” says Christine Cox, Regional Vice President of Federal Sales for SUSE. Developing applications with open source tools can also help federal agencies future-proof digital services by avoiding vendor lock-in, enhancing their enterprise security, and supporting their high-performance computing requirements.
Why open source is important to federal agencies as they continue to telework
Agencies are working to solve unique and complex orchestration challenges to run applications and sensitive data across multiple cloud environments. They need to be able to respond quickly, with agility, and at scale. Open source solutions allow governments to design customized and secure environments as the interoperability of their agencies’ IT systems and the need to share information in real time across multicloud environments becomes more critical.
“Open source technologies like Kubernetes and cloud native technologies enable a broad array of applications because they serve as a reliable connecting mechanism for a myriad of open source innovations — from supporting various types of infrastructures and adding AI/ML capabilities, to making developers’ lives simpler and business applications more streamlined,” said Cox.
Ultimately, open source projects will help lower costs and improve efficiencies by replacing legacy solutions that are increasingly costly to maintain. Up-to-date open source solutions also create more positive outcomes for end users at all agencies — be they warfighters or taxpayers.
How open source helps cloud migration in a remote environment
Archaic procurement practices built around vendor lock-in don’t allow for effective modernization projects, which is why implementing open source code can help agencies adapt tools to their current needs.
“One of the great benefits about SUSE, and open source, is that we offer expanded support, so that regardless of what you’re currently running in your environment, we can be vendor-agnostic,” Cox says.
In order to take greater advantage of open source enterprise solutions, agency leaders should practice a phased approach to projects, with the help of trusted partners who can guide them in their cloud computing efforts. This allows leaders to migrate to hybrid-cloud or multicloud environments in manageable chunks and in a way that eliminates wasteful spending.
Learn more at SUSEgov.com
Congress Should Evolve – Not Eliminate – the FITARA MEGABYTE Category
Following the release of the FITARA Scorecard 10.0 in August, discussion about sunsetting the MEGABYTE category of the scorecard has picked up. But, is that really a good idea?
The MEGABYTE category measures agencies’ success in maintaining a regularly updated inventory of software licenses and analyzing software usage. With most agencies scoring an “A” in that category, the sense seems to be that MEGABYTE’s mission has been accomplished, and it can now rest easy in retirement.
However, just because a goal has been achieved does not mean the method used to achieve it should be discarded. A student who passes Algebra I doesn’t declare victory over math for the rest of her academic career; she moves on to Algebra II.
The same principle should apply to the MEGABYTE category. Instead of getting rid of it, Congress should consider building on it to fit current market dynamics – which look quite different than they did in 2016, when the MEGABYTE Act became law.
A Changing MEGABYTE for Changing Times
Back then, cloud computing wasn’t nearly as ubiquitous as it is today. Agencies were still buying specific licenses for specific needs, owning the software outright, and receiving occasional updates.
As software procurement evolves and changes in the cloud environment, so too will the methods required to accurately track applications and usage – a challenge which could actually make MEGABYTE’s call for accountability more important than ever.
In some cases, agencies may not even know what they’re paying for. As such, they could end up paying more than necessary. Reading a monthly cloud services bill can be the equivalent of scanning a 30-page phone bill, with line after line of details that can be overwhelming. Many time-starved managers might be inclined to simply look at the amount due and hit pay without considering that they may be paying for services their colleagues no longer need or use.
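The bill triage described above can be scripted. A minimal sketch in Python, assuming billing line items have been exported as records with hypothetical `service`, `cost`, and `usage_hours` fields (the field names and figures are illustrative, not any provider's export format):

```python
from collections import defaultdict

def flag_idle_services(line_items, min_usage=1.0):
    """Group billing line items by service and flag services that
    are being billed but show essentially no usage."""
    totals = defaultdict(lambda: {"cost": 0.0, "usage": 0.0})
    for item in line_items:
        totals[item["service"]]["cost"] += item["cost"]
        totals[item["service"]]["usage"] += item["usage_hours"]
    # A service that costs money but shows no usage is a candidate to cancel.
    return sorted(
        svc for svc, t in totals.items()
        if t["cost"] > 0 and t["usage"] < min_usage
    )

bill = [
    {"service": "vm-fleet", "cost": 1200.0, "usage_hours": 640.0},
    {"service": "legacy-db", "cost": 300.0, "usage_hours": 0.0},
    {"service": "object-store", "cost": 45.0, "usage_hours": 12.0},
]
print(flag_idle_services(bill))  # → ['legacy-db']
```

Even a crude report like this turns a 30-page bill into a short list of line items worth a manager's attention.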
There’s also the prospect of shadow IT, which appears to have been exacerbated by the sudden growth of the remote workforce. Employees could simply be pulling out their credit cards and ordering their own cloud services – not for malicious purposes, but just to make their jobs easier and improve productivity. In the process, agency employees might sign up for non-FedRAMP certified cloud services or blindly agree to terms and conditions that their agency procurement colleagues would not agree to. These actions can open agencies to risk, and must be governed.
A new MEGABYTE for a new era could be a way to measure accountability and success in dealing with these challenges. Agencies, for instance, could be graded on their effective use of cloud services. The insights gained could lead to more efficient use of those services including the potential to cancel services that are no longer needed. Finally, they could be evaluated based on how well they’re able to illuminate the shadow IT that exists within their organizations for a more accurate overview and governance of applications.
Not Yet Time for MEGABYTE to Say Bye
Just because the MEGABYTE category has turned into an “easy A” for most agencies does not mean that it’s time to eliminate it from the FITARA scorecard. Yes, let’s revisit it, but let’s not let it go just yet. Instead, let’s take it to a new level, commensurate with where agencies stand today with their software procurement.
Reimagining Cybersecurity in Government Through Zero Trust
As the seriousness of the coronavirus pandemic became apparent early this year, the first order of business for the Federal government was simply getting employees online and ensuring they could carry on with their critical work and missions. This is a unique challenge in the government space due to the sheer size of the Federal workforce and the amount of sensitive data those workers require – everything from personally identifiable information to sensitive national security information. And yet, the Department of Defense, for one, was able to spin up secure collaboration capabilities quite quickly thanks to the cloud, while the National Security Agency recently expanded telework for unclassified work.
Connectivity is the starting line for the Federal government, though – not the finish line. Agencies must continue to evolve from a cybersecurity perspective in order to meet new demands created by the pandemic. Even before the pandemic, the Cyberspace Solarium Commission noted the need to “reshape the cyber ecosystem” with a greater emphasis on security. That need has been further cemented by telework. A worker’s laptop may be secure, but it’s likely linked to a personal printer that’s not. Agencies should assume there is zero security on any home network.
Building a New Cyber World
In the midst of the pandemic, MeriTalk surveyed 150 federal IT managers to understand what cyber progress means and how to achieve it. The need for change was clear; only 11 percent of respondents described their current cybersecurity system as ideal. What do Federal IT pros wish was different? The majority of respondents said they would start with a zero trust model, which requires every user to be authenticated before gaining access to applications and data. Indeed, zero trust has, to a large degree, enabled the shift we are currently seeing. But not all zero trust is created equal.
Federal IT pros need to be asking questions like: How do you do microsegmentation in sensitive environments? How do you authenticate access in on-premises and cloud environments in a seamless way? In the government space especially, there is a lot of controlled information that’s unclassified. As such, it’s not sufficient to just verify users at the door before you let them in. Instead, agencies must reauthenticate on an ongoing basis – without causing enormous friction. A zero trust model is only as good as its credentialing capabilities, and ongoing credentialing that doesn’t significantly disrupt workflow requires behavioral analytics.
Agencies must be adept at identifying risk in order for zero trust to be both robust and frictionless. In this new era, they should be evaluating users based on access and actions. This means understanding precisely what normal, safe behavior looks like so they can act in real-time when users deviate from those regular patterns of behavior. Having such granular visibility and control will allow agencies to dynamically adjust and enforce policy based on individual users as opposed to taking a one-size-fits-all approach that hurts workers’ ability to do their jobs.
The Role of the Private Sector
The current shift in the Federal workforce may seem daunting to some, but it represents a huge opportunity for the government and private sector alike. The Cyberspace Solarium Commission highlighted the importance of public-private partnerships – partnerships that can help make modernized, dynamic zero trust solutions the new normal if they can overcome the unique scaling challenge that Federal IT presents. The government must not just embrace commercial providers, but work closely with them to enable such scale, as it could help the government continue to reimagine its workplace.
Shifting to a zero trust model means improved flexibility and continuity, which can help expand the talent pool that agencies attract. Many government jobs were previously tied to a single location, with no option for remote work, so agencies lost out on great talent that was simply in the wrong part of the country. Now, some agencies are claiming they don’t need DC headquarters at all.
Additionally, more flexible work schedules may also boost employees’ productivity. A two-year Stanford study, for one, showed a productivity boost for work-from-home employees equal to a full day’s work. In recent months, the government has seen firsthand that flexible and secure remote work can happen through the novel application of existing technologies – including zero trust architecture.
The Bottom Line
Agencies must evolve cybersecurity in a way that allows them to embrace remote work without being vulnerable to attack. It’s not enough to get Federal employees online; users and data must be secure as well. The mass shift to telework represents a huge opportunity for the public sector – which is growing both its remote work capabilities and its potential pool for recruitment – and for those in the private sector who can be responsive to this need.
The majority of Federal IT leaders would implement a zero trust model if they could start from scratch. But once again, zero trust is only as good as your credentialing technology and your ability to understand how users interact with data across your systems. The key to seamless and secure connectivity is behavioral analytics, which allows for ongoing authentication that doesn’t hinder users’ ability to do their jobs or leave sensitive information vulnerable.
Driving IT Innovation and Connection With 5G and Edge Computing
The COVID-19 pandemic has influenced the way agencies function, and forced many to redefine what it means to be connected and modernize for mission success.
Agencies have reprioritized automation, artificial intelligence (AI), and virtualization to continue delivering critical services and meeting mission requirements through recent disruptions, and to predict and navigate future disruptions more efficiently. These transformative technologies open the door to accelerated innovation and have the potential to help solve some of today’s most complex problems.
Still, there is work to be done. While nearly half of Federal agencies have experimented with AI, only 12 percent of AI in use is highly sophisticated.[1] Agencies must rely on a solid digital transformation strategy to leverage next-gen technology, including the fifth generation of wireless technology (5G) and edge computing, to drive these innovations in Federal IT – regardless of location or crisis.
Faster Connections, Better Outcomes
Building IT resiliency and a culture of innovation across the public sector requires greater connectivity and data accessibility to power emerging technologies that enable faster service and better-informed decisions. In a traditional 4G environment, users connect to the internet through a device at a given time. In contrast, 5G integrates devices into the environment, allowing them to connect and stay connected at all times.
This constant connectivity enables agencies to generate data in real-time – not just when they sync with the cloud. Imagine some of the real-life applications of this capability. Healthcare providers would have instant, continuous health data to use in patient care. Soldiers on the battlefield would have constant connectivity for more accurate intel and defense strategies. These insights not only drive efficiency and security, but they save on time and resources.
Dell Technologies’ John Roese recently shared the importance of the U.S. driving these innovations – and the positive implications for the Federal space. “By doing so, we can increase market competitiveness, prevent vendor lock-in, and lower costs at a time when governments globally need to prioritize spending. More importantly, we can set the stage for the next wave of wireless,” he explained.
As an open technology, 5G infrastructure is a high-speed, interconnected mesh provided by multiple vendors at the same time. This spares agencies the challenges of vendor lock-in and reduces the costs of creating and maintaining individual access points.
However, with perpetual connectivity, devices require a connection point with low latency. As 5G technology progresses, edge computing becomes a powerful necessity. Gartner reported that by 2022, more than 50 percent of enterprise-generated data will be created and processed outside the data center or cloud.
Dell Technologies’ edge capabilities enable agencies to get the data they need and avoid data silos by applying logic at the edge – immediately. Dell Technologies has also started to specialize in providing 5G-compatible devices built with edge computing in mind.
These capabilities allow data to be processed in real-time, closer to the source. Devices can intelligently communicate anomalies and changes back to the core data center, enabling a better, more capable edge.
As time progresses, the edge will become smarter in making decisions and reducing the amount of data that needs to be transferred back to the core, while also ensuring the core is updated more frequently to support AI and machine learning.
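The pattern described above – process locally, forward only what deviates – can be sketched in a few lines. The threshold logic below is purely illustrative, not any particular edge platform's API:

```python
def edge_filter(readings, baseline, tolerance=0.1):
    """Keep processing local at the edge device; forward only readings
    that deviate from the baseline by more than the tolerance fraction."""
    anomalies = []
    for r in readings:
        if abs(r - baseline) > tolerance * baseline:
            anomalies.append(r)
    return anomalies

# 1,000 sensor readings reduced to the handful worth sending to the core.
readings = [20.0] * 997 + [35.0, 5.0, 26.0]
print(edge_filter(readings, baseline=20.0))  # → [35.0, 5.0, 26.0]
```

The design point is the ratio: the core receives three values instead of a thousand, while the edge retains full fidelity for local decisions.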
New Challenges Require New Strategies
As the technology landscape changes yet again, agencies face the challenge of investing in new technology that has to be built from the ground up. However, as next-gen technologies continue to develop, government has no choice but to keep up.
Whether providing critical services to the public or creating strategies for the battlefield, agencies need access to the best tools and most accurate insights to drive mission success.
Agencies should leverage support from industry partners like Dell Technologies to get the support they need to accelerate their efforts, drive efficiencies, and innovate. As Roese noted, “when the technology industry of the United States is fully present in a technical ecosystem, amazing innovation happens, and true progress occurs.”
At the end of the day, these efforts lead to better, stronger outcomes for all.
Learn more about how Awdata and Dell Technologies are driving Federal innovation and connection with next-gen tech.
[1] Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, 2020
My Cup of IT – Vote for America, First
I’m a foreigner who’s proud of my heritage – and I’m an American patriot ready to stand up for this great country’s principles.
Feeling shipwrecked by politics and pandemic? This has been a year when too many of us have felt separated – now’s the time to come together and celebrate our American democracy. Whether you’re a Republican or a Democrat, your voice must be heard – and your ballot counted. Vote is not a four-letter word. Do it early in person, early by mail, or day of (with protection) – but please do it.
I came to this country for opportunity – and I stayed for the welcome and the sunshine. Please vote in the election, and respect the votes of others when they’re tallied.
Sometimes it takes a community to get to the truth – a word from somebody you know and trust. That’s why I’m speaking up now.
I know you’re a patriot – America’s depending on you. Let’s put America first – and vote.
Evolving the Remote Work Environment With Cloud-Ready Infrastructure
When agencies began full-scale telework earlier this year, many were not anticipating it would evolve into a long-term workplace strategy. However, in the wake of COVID-19, recent calculations estimate $11 billion in Federal cost-savings per year due to telework, as well as a 15 percent increase in productivity. Agencies are determining how they can continue to modernize – and therefore optimize – to support greater efficiency into the future.
Though many agencies began implementing cloud and upgrading infrastructures well before the pandemic, legacy technology presents a unique challenge in the remote landscape. IT teams and employees who were directly connected to a data center now needed remote access to infrastructure, while keeping security a top priority.
How can agencies ensure they have secure and specific connections that serve their needs and also optimize performance? They must adapt, shifting access to where it is needed and augmenting existing technology with solutions that allow flexibility, agility, and the additional security needed within a distributed environment.
A New Approach for the New Normal
To address issues with remote access, many agencies have turned to software-defined wide area networking (SD-WAN), as it provides a secure connection between remote users and the data center or cloud. However, long-term success with telework will require more than access. It will require teams to change the way they use the technology they have.
Recently, I spoke on a MeriTalk tech briefing where I discussed how agencies can leverage cloud-ready infrastructures to accelerate modernization with operational cost-savings and increased efficiency. Dell Technologies VxRail with VMware Cloud Foundation is perfectly suited for the distributed workforce, allowing teams of any size, and in any stage of their modernization journey, to build what they need when they need it.
Remote employees don’t have the same access to the on-prem data center’s compute resources as they had when working on-site. VxRail acts as a modern data center, enabling virtual desktops, compute, and storage in one appliance while providing users with secure access and network flexibility.
Teams can design a VxRail component for as many users as needed and then scale by units. With this flexibility, agencies don’t require as much local infrastructure to function optimally and can scale their services faster and more affordably with one-click upgrades and maintenance.
Teams can also bring the local data center and cloud into one management portfolio – whether a multi-cloud or hybrid environment – integrating all of these capabilities into a single platform that is easy to consume.
These technologies offer cybersecurity advantages as well. VMware Cloud Foundation can utilize VMware NSX, a network virtualization and security platform. NSX enables teams to create granular micro-segmentation policies between applications, services, and workloads across multi-cloud environments. Agencies can control not only how many users are in their environment and what resources they are allowed to access, but also where and how users connect to those resources.
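At its core, micro-segmentation is a set of default-deny rules between named segments. A minimal illustration of the idea follows – a generic allow-list sketch, not the NSX policy API, with segment names and ports invented for the example:

```python
# Illustrative allow-list: traffic between segments is denied unless
# an explicit (source, destination) rule permits the port.
POLICY = {
    ("web-tier", "app-tier"): {"tcp/8443"},
    ("app-tier", "db-tier"): {"tcp/5432"},
}

def is_allowed(src_segment, dst_segment, port):
    """Default-deny: a flow passes only if an explicit rule permits it."""
    return port in POLICY.get((src_segment, dst_segment), set())

print(is_allowed("web-tier", "app-tier", "tcp/8443"))  # True
print(is_allowed("web-tier", "db-tier", "tcp/5432"))   # False: no direct path
```

Note the second check: even a legitimate database port is blocked when the web tier tries to reach the database directly, because no rule authorizes that pairing.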
Create a Culture of Collaboration
The switch to Federal telework has caused agencies to take a closer look at how they can continue to modernize and optimize IT for mission success – no matter where their employees are located.
Beth Cappello, Deputy Chief Information Officer for the Department of Homeland Security, recently noted, “as we go forward … we’ll look back at the fundamentals: people, processes, technologies, and examine what our workforce needs to be successful in this posture.”
Whether using new technologies or augmenting existing technologies, success will come down to collaboration. Agencies should look to collaborate early and often, and bring in developers and key team members to leverage their knowledge and drive efficiency and agility from the start.
This cultural change will allow agencies to become more flexible and agile in their approach to modernization – exactly what they need to take Federal IT to the next level.
Learn more about how Awdata and Dell Technologies are helping improve Federal telework with cloud-ready solutions.
Shift to Telework: Enabling Secure Cloud Adoption for Long-Term Resiliency
Over the past few months, agencies have strengthened remote work tools, increased capacity, improved performance, and upgraded security to enable continuity of operations as employees work from home and in various new locations.
However, as networks become more distributed across data centers, cloud, and remote connections, the attack surface increases, opening up the network to potential cybersecurity threats. Agencies have been forced to balance operations and security as they shift how users connect to government networks while remote.
The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has played a key role in providing telework guidance through updates to the Trusted Internet Connections 3.0 (TIC 3.0) guidance. This was an important step: it delivered more immediate telework guidance, opened the door to modern, hybrid cloud environments, and gave agencies greater flexibility.
In a recent webinar, I had the opportunity to speak with Beth Cappello, Deputy CIO, DHS, about IT lessons learned from the pandemic and the future of modern security with TIC 3.0 and zero trust.
TIC 3.0 and the Cloud Push
“When you think about TIC 3.0 and you think about the flexibility that it introduces into your environment, that’s the mindset that we have to take going forward,” said Cappello. “No longer can it be a traditional point-to-point brick and mortar fixed infrastructure approach.”
TIC 3.0 has enabled agencies to take advantage of much-needed solutions, such as cloud-based, secure web gateways and zero trust architecture to support secure remote work.
Prior to the pandemic, DHS had begun adopting cloud – moving email to the cloud and allowing for more collaboration tools and data sharing – enabling the agency to transition from about 10,000 to 70,000 remote workers almost overnight. Many other agencies have similar stories – moving away from legacy remote access solutions to cloud and multi-cloud environments that offer more scalability, agility, and security.
IT administrators must be able to recognize where threats are coming from, and diagnose and fix them through “zero-day/zero-minute security.” To do this, they must turn to the cloud. Cloud service providers that operate multi-tenant clouds can offer agencies an important benefit – the cloud effect – which allows providers to globally push hundreds or thousands of patches a day with security updates and protections to every cloud customer and user. Each day, the Zscaler cloud detects 100 million threats and delivers more than 120,000 unique security updates to the cloud.
Secure Connections From Anywhere
When the pandemic hit, agencies needed to find a way to connect users to applications, security as-a-service providers, O365, and the internet, without having to backhaul traffic into agency data centers and legacy TICs – which often result in latency and a poor user experience. Agencies required better visibility to identify who is connecting to what, see where they are connecting to, and send that telemetry data back to DHS.
Rather than focusing on a physical network perimeter (that no longer exists), the now finalized TIC 3.0 guidance recommends considering each zone within an agency environment to ensure baseline security across dispersed networks.
As telework continues, many agencies are evolving security by adopting zero trust models to connect users without ever placing them on the network. We know bad actors cannot attack what they cannot see – so if there is no IP address or ID to attack on the network, these devices are safe. Instead, agencies must verify users before granting access to authorized applications, connecting users through encrypted micro-tunnels leading to the right application. This allows users to securely connect from any device in any location while preventing lateral, east-west traffic on the network.
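The verify-then-tunnel flow described above can be sketched as follows. The broker function, user names, and application names are all hypothetical, chosen only to show the ordering of checks:

```python
# Hypothetical zero trust broker: verify identity, then authorization,
# and only then open a per-application tunnel. The user never receives
# a routable address for the network itself.
AUTHORIZED = {"alice": {"hr-portal"}, "bob": {"hr-portal", "finance-app"}}

def broker_connection(user, token_valid, app):
    if not token_valid:
        return None                      # fail closed: unverified user
    if app not in AUTHORIZED.get(user, set()):
        return None                      # verified, but not authorized
    return f"tunnel://{user}->{app}"     # one encrypted micro-tunnel per app

print(broker_connection("alice", True, "hr-portal"))    # tunnel opens
print(broker_connection("alice", True, "finance-app"))  # None
```

Because the broker returns an application-scoped tunnel rather than network access, a compromised account yields no foothold for east-west movement.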
The Move to the Edge
For long-term telework and beyond, the next big shift in security architectures will need to address how agencies can continue optimizing working on devices in any location in the world. As agencies move to 5G and computing moves to the edge, security should too. Secure Access Service Edge (SASE) changes the focus of security from network-based to data-based, protecting users and data in any location and improving the overall user experience.
A SASE cloud architecture can provide a holistic approach to address the “seams” in security by serving as a TIC 3.0 use case and building security functions of zero trust into the model for complete visibility and control across modern, hybrid cloud environments.
For agencies like DHS, which have a variety of sub-agencies and departments of different sizes and missions, cloud is ideal for facilitating secure data sharing and collaboration tools.
“So, when we’re securing our environment, we’re provisioning, monitoring, and managing. We have to be mindful of those seams and mindful of the gaps and ensure that as we’re operating the whole of the enterprise that we are keeping track of how resilient the entire environment is,” said Cappello.
Managing and Securing Federal Data From the Rugged Edge, to the Core, to the Cloud
The Federal government collects and manages more data outside of traditional data centers than ever before from sources including mobile units, sensors, drones, and Artificial Intelligence (AI) applications. Teams need to manage data efficiently and securely across the full continuum – edge to core to cloud.
In some cases, operating at the edge means space constrained, remote, and harsh environments – with limited technical support. Our new Dell EMC VxRail D Series delivers a fully-automated, ruggedized Hyperconverged Infrastructure (HCI) – ideal for demanding federal and military use cases.
VxRail is the only HCI appliance developed with, and fully optimized for, VMware environments. We built the solution working side by side with the VMware team. Both administrators and end users get a consistent environment, including fully automated lifecycle management to ensure continuously validated states. How? More than 100 team members dedicated to testing and quality assurance, and 25,000 test run hours for each major release.
Users can manage traditional and cloud-native applications across a consistent infrastructure – in winds up to 70 mph, temperatures hot enough to fry an egg and cold enough to freeze water, and 40 mph sandstorms. Whether you are managing Virtual Desktop Infrastructure (VDI) or mission-critical applications in the field, your team can take advantage of HCI benefits and ease of use.
As Federal teams collect and manage more data, they also have to be able to put that data (structured and unstructured) to work, creating new insights to help leaders deploy the right resources to the right place, anticipate problems more effectively, and achieve new insights.
Dell Technologies recently announced a new PowerScale family, combining the industry’s number one network-attached storage (NAS) file system, OneFS, with Dell EMC’s PowerEdge servers, at a starting point of 11.5 terabytes raw and the capability to scale to multiple petabytes. PowerScale nodes include the F200 (all-flash), F600 (all-NVMe), and Isilon nodes. End users can manage PowerScale and Isilon nodes in the same cluster, with a consistent user experience – simplicity at scale.
Federal teams – from FEMA managing disaster relief, to the Department of Justice working on law enforcement programs, to the Department of Defense managing military operations, can start small and grow easily on demand.
PowerScale is OEM-Ready – meaning debranding and custom branding is supported, while VxRail D Series is MIL-STD-810G certified and is available in a STIG hardening package. Both PowerScale and VxRail D Series enjoy the Dell Technologies secure supply chain, dedicated engineering, and project management support.
As the Federal government continues to deploy emerging technology, and collect and manage more and more data outside of the data center, government and industry need to collaborate to continue to drive innovation at the edge, so we can take secure computing capabilities where the mission is – whether that’s a submarine, a field in Kansas, a tent in the desert, or a dining room table.
Cyber Resiliency Means Securing the User
The recent, rapid shift to remote work has been a lifeline for the economy in the wake of COVID-19. But that shift also expanded an already-growing attack surface. Government agencies were being called to rethink their cybersecurity posture and become more resilient even before the pandemic. Now, the novel coronavirus has added an indisputable level of urgency to that demand.
The Cyberspace Solarium Commission (CSC) was created as part of the National Defense Authorization Act (NDAA) for the 2019 fiscal year. On March 11, its final report was released, articulating a strategy of layered cyber deterrence through more than 80 recommendations. One of its policy pillars was the need to “reshape the cyber ecosystem,” improving the security baseline for people, tech, data, and processes.
Shortly after the report’s release, the virus upended the work environment of most public sector employees, prompting the CSC to publish a follow-on whitepaper evaluating and highlighting key points and adding four new recommendations, focused heavily on the Internet of Things (IoT). This focus, coupled with the evolving cyber threat, means that “reshaping the cyber ecosystem” requires the government to move beyond investments in legacy technologies and focus on the one constant that has driven cybersecurity since the beginning – people and their behaviors.
People Are the New Perimeter
The cyber ecosystem has, to some degree, already been dramatically reshaped. The security baseline needs to catch up. Currently, a large percentage of the Federal workforce is working from home – often relying on shared family networks to do so – and that may continue even as the pandemic subsides. In turn, agencies must look beyond the traditional, office-based perimeter as they secure employees and data. Data and users were already beginning to spread beyond walled-off data centers and offices; mass telework has simply accelerated that shift.
We’ve already seen bad actors take advantage of this new perimeter by targeting unclassified workers via phishing and other attacks. Recent research found that, as of March, more than half a million unwanted emails containing keywords related to coronavirus were being received each day. Attackers are gaining compromised access, with many simply learning the network for now and lying in wait. Even traditionally trustworthy employees are under tremendous stress and may feel less loyal given the current physical disconnect.
In order to achieve the CSC’s vision of more proactive and comprehensive security, organizations must begin to think of people as the new perimeter. This is not a temporary blip, but the new normal. Agencies must invest in cybersecurity beyond the realm of old-school perimeter defenses. Methods like firewalls or data loss prevention strategies are important, but they are not enough. With people as the new perimeter, there is simply no keeping bad actors out. Instead, agencies need to keep them from leaving their network with critical data and IP – which can only be done with a deep understanding of people and data’s behavior at the edge.
Behavioral Analytics Should Be the Baseline
Putting the commission’s guidance into action must mean putting users at the center of the equation. Once again, it’s insufficient to simply rely on blocking access from bad actors. A more proactive and adaptive approach is required. Agencies must first understand which users pose the greatest risk, based on factors such as what types of data they have access to, and then develop dynamic policies that are tailored to that specific risk and are flexible enough to change with evolving circumstances.
Additionally, organizations must have an understanding of what normal behavior looks like for all users – based on information from traditional security systems and other telemetry inputs. By detecting anomalies in these patterns, analysts can identify potential threats from malicious insiders to external bad actors and take rapid and automated action in real-time. Behavioral analytics lets organizations separate truly malicious behavior from simple mistakes or lapses, and tailor the security response accordingly. The aim is to replace broad, rigid rules with individualized, adaptive cybersecurity – creating a far better baseline of security, as the CSC called for.
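One simple way to score a deviation against a user's own baseline is a z-score-style measure: how many standard deviations the new observation sits from that user's norm. A minimal sketch, with the daily download volumes invented purely for illustration:

```python
import statistics

def anomaly_score(history, observed):
    """Score an observation against a user's own baseline:
    roughly how many standard deviations it sits from their norm."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero variance
    return abs(observed - mean) / stdev

# A user who normally downloads ~40 MB/day suddenly pulls 900 MB.
history = [35, 42, 38, 45, 40, 37, 43]
score = anomaly_score(history, 900)
print(score > 3)  # flags the event for review rather than applying a broad rule
```

The per-user baseline is the point: a 900 MB transfer is routine for one role and alarming for another, and scoring against individual history is what lets the response be tailored instead of one-size-fits-all.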
The Bottom Line
Understanding how people interact with data is key to our nation’s security and should be a part of the push to put the CSC’s recommendations into action. The commission also emphasized collaboration with the private sector, mostly suggesting its resources and capabilities could help private sector actors stay safe. The collaboration should flow in the other direction as well. Capabilities coming from the private sector need to be incorporated into the public sector, especially in the wake of the pandemic.
The Federal government cannot simply keep investing in legacy tech. Instead, it needs to throw its weight behind innovative approaches – like behavior-centric security – that will move agencies closer to the CSC’s vision. With people as the new perimeter, a more targeted and adaptive cyber defense must be the new baseline.
Understanding COVID-19 Through High-Performance Computing
COVID-19 has changed daily life as we know it. States are beginning to reopen, despite case counts continuing to trend upward in many places, and even the most informed seem to have more questions than answers. Many of the answers we do have, though, are the result of models and simulations run on high-performance computing systems. While we can process and analyze all this data on today’s supercomputers, it will take an exascale machine to process it quickly enough to enable true artificial intelligence (AI).
Modeling complex scenarios, from drug docking to genetic sequencing, requires scaling compute capabilities out instead of up – a method that's more efficient and cost-effective. That method, known as high-performance computing, is the workhorse driving our understanding of COVID-19 today.
High-performance computing is helping universities and government work together to crunch a vast amount of data in a short amount of time – and that data is crucial to both understanding and curbing the current crisis. Let’s take a closer look.
Genomics: While researchers have traced the origins of the novel coronavirus to a seafood market in Wuhan, China, the outbreak in New York specifically appears to have European roots. It also fueled outbreaks across the country, including those in Louisiana, Arizona, and even California. These links have been determined by sequencing the genome of SARS-CoV-2 in order to track mutations, as seen on the website Nextstrain and reported in the New York Times. Thus far, an average of two new mutations appear per month.
Understanding how the virus has mutated is a prerequisite for developing a successful vaccine. However, such research demands tremendous compute power. The average genomics file is hundreds of gigabytes in size, meaning computations require access to a high-performance parallel file system such as Lustre or BeeGFS. Running multiple genomes on each node maximizes throughput.
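The "multiple genomes per node" idea can be sketched with Python's standard process pool: several per-genome jobs run side by side so the node stays busy. The GC-content calculation below is a toy stand-in for a real genomics workload such as alignment or variant calling, and the tiny sequences are invented for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def count_gc(sequence):
    """Toy stand-in for a per-genome computation: the fraction of
    G and C bases in a nucleotide sequence."""
    gc = sum(1 for base in sequence if base in "GC")
    return gc / len(sequence)

def process_batch(genomes, workers=4):
    """Run one computation per genome concurrently, mirroring the
    practice of packing several genome jobs onto each node to
    maximize throughput."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(count_gc, genomes))

if __name__ == "__main__":
    # Hypothetical mini-genomes; real inputs run to hundreds of GB each,
    # which is why a parallel file system sits underneath this pattern.
    genomes = ["ATGCGC", "AATT", "GGGGCC"]
    print(process_batch(genomes))
```

On a cluster, the same map-style decomposition is typically expressed with a scheduler such as Slurm or with MPI rather than a single-machine process pool, but the throughput logic is the same.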
Molecular dynamics: Thus far, researchers have found 69 promising sites on the coronavirus's proteins that could be drug targets. Researchers using the Frontera supercomputer are also working to complete an all-atom model of the virus's exterior component—encompassing approximately 200 million atoms—which will allow for simulations of effective treatments.
Additionally, some scientists are constructing 3D models of coronavirus proteins in an attempt to identify places on the surface that might be affected by drugs. So far, the spike protein seems to be the main target for antibodies that could provide immunity. Researchers use molecular docking, which is underpinned by high-performance computing, to predict interactions between proteins and other molecules.
To model a protein, a cryo-electron microscope must take hundreds of thousands of molecular images. Without high-performance computing, turning those images into a model and simulating drug interactions would take years. By spreading the problem out across nodes, though, it can be done quickly. The Summit supercomputer, which can complete 200,000 trillion calculations per second, has already screened 8,000 chemical compounds to see how they might attach to the spike protein, identifying 77 that might effectively fight the virus.
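The screening funnel described above – score thousands of compounds against the spike protein, keep the strongest binders – can be sketched in a few lines. The docking function here is a deterministic random stand-in for a real simulation (scores, ranges, and compound IDs are all invented); the point is the embarrassingly parallel score-and-rank structure that lets a machine like Summit cut 8,000 candidates down to 77.

```python
import random

def docking_score(compound_id):
    """Stand-in for an expensive docking simulation: returns a binding
    energy where lower (more negative) means tighter predicted binding."""
    rng = random.Random(compound_id)  # deterministic per compound
    return rng.uniform(-12.0, 0.0)

def screen(compounds, keep=77):
    """Score every compound and keep the best candidates. Each
    docking_score call is independent, so on a supercomputer the
    scoring loop is spread across thousands of nodes."""
    scored = [(docking_score(c), c) for c in compounds]
    scored.sort()  # most negative (strongest binding) first
    return [c for _, c in scored[:keep]]

hits = screen(range(8000), keep=77)
assert len(hits) == 77
```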
Other applications: The potential for high-performance computing and AI to simulate the effects of COVID-19 extends far beyond the genetic or molecular level. Already, neural networks are being trained to identify signs of the virus in chest X-rays, for instance. When large-scale AI and high-performance computing are done on the same system, those massive amounts of data can be fed back into the AI algorithm to make it smarter.
The possibilities are nearly endless. We could model the fluid dynamics of a forcefully exhaled group of particles, looking at their size, volume, speed, and spread. We could model how the virus may spread through ventilation systems and air ducts, particularly in assisted living facilities and nursing homes with extremely vulnerable populations. We could simulate the supply chain of a particular product, and its impact when a particular supplier is removed from the equation, or the spread of the virus based on different levels of social distancing.
The bottom line: The current crisis is wildly complex and rapidly evolving. Getting a grasp on the situation requires the ability to not just collect a tremendous amount of data on the novel coronavirus, but to run a variety of models and simulations around it. That can only happen with sophisticated, distributed compute capabilities. Research problems must be broken into grids and spread out across hundreds of nodes that can talk to one another in order to be solved as rapidly as is currently required.
High-performance computing is what’s under the hood of current coronavirus research, from complex maps of its mutations and travel to the identification of possible drug therapies and vaccines. As it powers even faster calculations and feeds data to even more AI, our understanding of the novel coronavirus should continue to evolve—in turn improving our ability to fight it.
Empowering remote teams to collaborate in a WFH world
Many more people are working at home these days, and although much of this started with COVID-19, remote work from home (WFH) could become standard procedure for businesses around the world.
Mission Success Demands an Outstanding User Experience
By Sarah Sanchez, Vice President, Managed Solutions, SAIC
Our government is facing a challenging moment. Confronted with an unprecedented pandemic, agencies are having to dramatically ramp up their services for our country while large portions of the workforce cannot be in the office. Now more than ever, it is essential that government employees have access to IT systems and tools that are optimized to help them achieve their critical missions.
Fortunately, technology is accelerating in ways that can reduce the friction of obtaining IT support and empower government workers to keep their focus on the challenges at hand. By carefully designing systems and leveraging artificial intelligence, machine learning, and other automation tools, we can bring government users the most user-friendly experience possible.
I think SAIC’s Chief Technology Officer Charles Onstott said it well recently. “Agencies should be looking to deliver the outstanding user experience their workforce and citizens expect as an outcome of their digital transformation efforts,” he noted. “Effective integration of technologies like ServiceNow and Splunk will deliver not only improved operations, but will elevate the overall experience of the agency’s services.”
Delivering an outstanding user experience means more than simple web design or ticket resolution. It demands accountability for delivering quality services to end users by utilizing innovative methods and mechanisms that support effective collaboration across organizations to resolve complex issues. How do we give government employees easy access to the tools they need to do their jobs? How do we make processes more intuitive, information more accessible, and empower work across all the platforms – including mobile devices – that employees have become accustomed to using in their everyday lives? How do we create the environment to facilitate and incentivize innovation that moves our country forward?
At SAIC, we’ve developed an approach that we call U-Centric, because we’re focused entirely on improving that user experience. By unlocking the full functionality of technology platforms like ServiceNow and Splunk, we streamline IT services and transform the end-to-end customer service experience, increasing value and improving efficiency. We’re also making agencies more secure through process automation and advanced analytics, and we’re driving costs down and improving the customer experience with self-help and artificial intelligence.
Successful delivery starts with understanding how a user wants to interact with the systems they use, and recognizing that one size does not fit all. U-Centric is built on the premise of providing Omni-Channel access to services, giving users the ability to choose how they engage with their IT services. This is fundamental to the understanding that IT systems exist to support users in accomplishing their mission-critical jobs.
U-Centric includes capabilities such as persona-based portals, which contain easy-to-access dashboards to collect and convey useful information quickly. It also utilizes self-help features such as knowledge articles and step-by-step how-to videos, robotic process automation, automated workflow processes, and self-healing systems. It is transparent, so that agencies have visibility into how data is being collected and utilized in support of these efforts, facilitating informed decisions. And it is built upon the critical business systems and rules that underpin agency work, so that new systems can be as seamless as possible for the user.
The results of our U-Centric approach in these efforts have been fantastic. By streamlining workflows and building user-friendly systems, we’ve reduced the human labor hours dedicated to redundant processes and IT support. In one recent example, we reduced the work hours necessary to complete account administration from more than 16 hours to under 30 minutes. That not only means that IT support staff are more efficient and can serve more users, but it also reduces the time users spend dealing with IT support issues so they can stay focused on the vital tasks at hand.
Now, with our recent acquisition of Unisys Federal, we can do even more. We’ve incorporated their top-flight end user services programs into our existing capabilities, and I believe that combined strength makes us the best in the industry in delivering a comprehensive, seamless user experience for government customers.
By focusing relentlessly on serving user needs and making full use of available workflow tools, we can ensure that technology serves the government employees who serve our country, and facilitate the innovation we need. I’m excited about what’s possible, and I’m proud that SAIC is playing a leading role in this effort.
The Right Policy to Protect Remote Workers
In March, the White House released guidance that encouraged government agencies to maximize telework opportunities for those at high risk of contracting the coronavirus, as well as all employees located in the D.C. area. Though there are still many government employees not yet authorized to telework, this guidance marks a turning point.
Telework modes of operation are not new – and neither are the threats that accompany them. But the attack surface has grown significantly in the past month. A large number of workers are operating in insecure environments ripe for phishing and malware attacks, while new tools like video conferencing solutions can be targeted for malicious use or expose data to attacks.
Old, binary policies are insufficient to meet the new security challenge. Previously, policy could be split between enterprise and remote workers. But when everyone, from senior leaders to entry-level employees, is working from home, more granular policy controls are required. Those controls still rely on the same bread-and-butter IT best practices, though, from hardware-based security to patching and data protection. Here are some security controls government IT pros should implement today to ensure their newly remote workforce isn’t a tremendous liability.
Managing Unsecured Environments
BYOD users, naturally, manage and own their own devices, and these devices live in unsecured environments and are exposed to attacks on the network. Consider a user who has four kids simultaneously logging into distinct telelearning systems on the same network he is now using for government work. How secure are the laptops, links, and teachers those kids are accessing? The reality is that network security is only as good as the link your kid clicked on last. As such, IT needs to push the latest patches as a requirement, enable multi-factor authentication (MFA) and enterprise rights management, and enforce good access control.
These best practices apply to workers who took a managed enterprise device home as well. Those devices also need protection against everything happening on the local wi-fi, in addition to enterprise access control (EAC). Before EAC, users connected to a network—and were only authenticated once they were already in. EAC, on the other hand, stops you at the front gate, verifying not just the user, but also that they have the proper local security software agents and updates. EAC was popular when the BYOD trend first gained steam, but many people saw it as too intrusive to be sustainable. Now, EAC is a key tool for helping to better manage laptops living in unsecured environments.
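The "stop you at the front gate" posture check that EAC performs can be sketched as a simple admission function: a device is admitted only if the user is authenticated and the endpoint meets posture requirements. The patch-level number, agent names, and device records below are hypothetical; real EAC/NAC products express these checks as vendor-specific policies.

```python
MIN_PATCH_LEVEL = 42             # hypothetical required OS patch level
REQUIRED_AGENTS = {"av", "edr"}  # security agents that must be running

def admit(device):
    """EAC-style gate: verify the user AND the endpoint's posture
    (patch level, required security agents) before the device ever
    touches the network."""
    checks = [
        device.get("user_authenticated", False),
        device.get("patch_level", 0) >= MIN_PATCH_LEVEL,
        REQUIRED_AGENTS <= set(device.get("agents", [])),
    ]
    return all(checks)

laptop = {"user_authenticated": True, "patch_level": 43, "agents": ["av", "edr"]}
stale  = {"user_authenticated": True, "patch_level": 17, "agents": ["av"]}

assert admit(laptop)
assert not admit(stale)  # unpatched and missing EDR: stopped at the gate
```

Note the contrast with the pre-EAC model the paragraph describes: there, authentication happened only after the device was already inside the network; here, the posture checks run before admission.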
Cloud Services and SaaS
Implementing security for virtual desktop infrastructure (VDI) systems and cloud services includes some security basics as well: data protections, virtualization security for both the enterprise data center and at the access points, application security, secure boot, and so on. With software-as-a-service (SaaS), client access to cloud services should be protected through MFA and complemented with network transport encryption to offer protection on both sides. Appropriate data protection in enterprise rights management (ERM) can control access to the data through the cloud services and back to the data center. Understanding how clients are using the services and what data they are accessing is where the ERM decisions come into play.
Monitoring Threat Intelligence
IT pros also need to take a renewed focus on managing the threat of mistakes, misuse, and malicious insiders. There is always the risk of a user doing something careless or malicious, but that risk is exacerbated now; people are stressed and more apt to use shortcuts and make bad decisions. Normally, protecting against such risks means monitoring for anomalous use, like an employee working at midnight. But in the new world order, everyone’s hours are off. Many employees are working unpredictable “shifts” in an attempt to balance childcare and other responsibilities. Agencies need to be able to sift through these anomalous behaviors quickly and extend their threat intelligence and monitoring capabilities to the new edge where the users are now.
Policy-based access control and enforcement for applications and data at both the enterprise and the cloud level are also important to thwart misuse and abuse by users who are already authenticated. Enforcing ERM along with encryption, for instance, can further protect data so it can’t leave a laptop, or prevent it from being copied onto a USB drive.
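A minimal sketch of that kind of rights enforcement, assuming a simple three-tier sensitivity model (the tier names, actions, and policy table are invented): the decision is attached to the data itself, so even an already-authenticated user cannot email a secret file or copy it to removable media.

```python
# Hypothetical ERM policy: which actions each sensitivity tier permits.
POLICY = {
    "public":    {"read", "email", "copy_usb"},
    "sensitive": {"read", "email"},
    "secret":    {"read"},
}

def allowed(sensitivity, action):
    """Enforce rights management at the data level: the check follows
    the data regardless of where the (already authenticated) user is
    working. Unknown tiers default to denying everything."""
    return action in POLICY.get(sensitivity, set())

assert allowed("public", "copy_usb")
assert allowed("secret", "read")
assert not allowed("secret", "copy_usb")     # blocked even for valid users
assert not allowed("sensitive", "copy_usb")
```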
The bottom line is that agencies now have to think differently about security issues related to teleworking. IT pros must monitor threats and secure everything from services to endpoints. While the modes of operation for telework are the same, the threat surface has grown. Policy controls must be far more granular in order to be effective.
The Best Things We Build Are Built to Last
We’ve spent the last several months in a bit of a surreal version of normal, but there is light at the end of the proverbial tunnel. When we emerge from the current environment, the reality is that we will be better off from a security perspective than we were when we went in. The need to increase access capacity for cloud-based apps, VPNs, and other services has required us to think a lot harder about the security that comes along with this extra access, to the point where “building it in” makes a lot more sense than “bolting it on.”
Basic security hygiene items like DNS security and multi-factor authentication (MFA) can be the first and best line of defense for any access environment, which certainly includes an extreme telework scenario. The good news is that the protections don’t stop when our access environments return to “normal.” Since these security capabilities are part of a Zero Trust lifestyle, we get to carry these protections forward, as they have now become our best practices.
We were gonna get there eventually, but we were forced to step on the gas
One of the biggest challenges many federal agencies have faced, beyond the capacity issue, is figuring out how to marry the legacy technologies we have kept running by sheer will with the more cloud- and mobile-focused innovative technologies that make the most sense for a more remote deployment. Agencies have been moving in this direction for years, but the “extreme telework scenario” has accelerated the shift, to the point of making it uncomfortable and sometimes painful. One example is legacy government authentication and user authorization. We’ve spent the past decade building out the “I” in PKI (public key infrastructure), and while this works fairly well in our old world (users sitting in offices, accessing applications from a desktop with a smartcard reader), it doesn’t work so well in this new normal. The good news is that there is a compromise to be made: a way to leverage the existing investments and make them work in a more innovative world.
Duo has been focused on being a security enabler for agencies as they make their journey to a cloud and mobile world, but we also realize that lots of work and resources have been invested in the smartcard infrastructure that has powered our identity, credentialing, and access management (ICAM) systems. We have partnered with experts in this arena, folks like CyberArmed, to leverage that investment and the strong identity proofing that solutions like this provide.
CyberArmed’s Duo Derived Solution
NIST has shown us the way
When NIST, smartly, separated the LOA structure of 800-63 into identity proofing (IAL) and authentication (AAL), it gave agencies the flexibility to deploy the right tools for the right job and to apply a risk-based, Zero Trust approach to secure access. The Office of Management and Budget (OMB) followed suit and aligned its updated ICAM guidance (M-19-17) to provide agencies with the flexibility to make risk-based deployment decisions. This flexibility helps agencies be more agile in support of whatever might be thrown at them, while still providing strong, consistent identity security. This identity focus is exactly what we need as we make our cloud journeys.
Now that we’re getting back to a small measure of normal, we need to take stock of the things we’ve been able to accomplish and the investments we’ve made to shore up our security and prepare us for the accelerated cloud and mobile journey. The things we’ve done will not be in vain.
How Organizations can Respond to Risk in Real Time
The NIST Cybersecurity Framework, initially issued in early 2014, outlines five functions with regard to cybersecurity risk: identify, protect, detect, respond, and recover. Of these functions, those on the far left encapsulate measures that could be considered pre-breach; those on the right, post-breach. Far too often, however, government agencies tip the scales too far to the left.
While the NIST Cybersecurity Framework offers a solid foundation, security teams remain mired in reactive strategies – a tremendous problem, considering that a reactive posture limits an organization’s ability to identify and operationalize protective actions before a concern becomes significant.
Traditional approaches to data protection usually entail buying and implementing tools that are binary and reactive. A particular event is seen as good or bad – with walls, blocks, and policies put in place to deal with the latter. This leaves government systems drowning in alarms and alerts, while limiting employees’ ability to do their jobs. If your policy is to block all outbound email attachments that include anything proprietary or sensitive, for instance, HR can’t do something as simple as send out a job offer.
By continuously identifying potential indicators of risk at an individual level, organizations can instead take a proactive security posture – one in which responding to and recovering from threats is an ongoing effort, not a piecemeal one. Here are three key components of a truly proactive approach.
Continuous Risk Evaluation
Users are continuously interacting with data, which means organizations must be continuously monitoring those interactions for threats, as opposed to scrambling once a breach has been flagged. Risk is fluid and omnipresent; removing risk wholesale is impossible. Instead, the goal should be to detect and respond to excessive risk, and that can only be done through continuous evaluation. This is especially important as agencies rely on a growing amount of data, which is stored everywhere and accessed anywhere.
Continuous risk evaluation means cybersecurity doesn’t end after a user’s behavior is labeled as “good” and access or sharing is granted (or vice versa) – as would be the case with a traditional, static approach. Instead, risk profiling continues beyond that initial decision, monitoring what a user does when granted access and whether their behavior is trustworthy. Gartner, for one, defines this approach as Continuous Adaptive Risk and Trust Assessment (CARTA).
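The CARTA-style idea of profiling beyond the initial allow/deny decision can be sketched as a running risk score that accumulates as a session unfolds, with access revoked the moment the score crosses a threshold. The event names, weights, and threshold below are invented for illustration; a real system would derive them from behavioral baselines rather than a static table.

```python
# Hypothetical per-event risk weights; a real system would learn these.
EVENT_RISK = {
    "login": 1,
    "bulk_download": 25,
    "off_hours_access": 10,
    "failed_auth": 15,
}
REVOKE_THRESHOLD = 50

def assess(events):
    """Continuous evaluation: risk accumulates across the session, and
    access is revoked the moment the running score crosses the
    threshold -- not only at the initial access decision."""
    score = 0
    for event in events:
        score += EVENT_RISK.get(event, 5)  # unknown events carry default risk
        if score >= REVOKE_THRESHOLD:
            return "revoke", score
    return "allow", score

assert assess(["login", "login"]) == ("allow", 2)
assert assess(["login", "bulk_download", "bulk_download", "failed_auth"])[0] == "revoke"
```

The contrast with the static approach described above is that the second session was "good" at login time; it only became untrustworthy through its later behavior, which is exactly what the continuous model catches.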
Leverage Data Analytics
In order for risk levels to be assessed, organizations must have full-stack visibility – into all devices and interactions taking place on their systems – and the ability to make sense of a tremendous amount of behavioral data. How does a series of behaviors by Employee A stack up against a different series of behaviors by Employee B? Where’s the risk and how do we mitigate it? Analytics are required to not just answer such questions, but answer them quickly.
Multiple data analytics techniques can help organizations flag excessive risk: baselining and anomaly detection, similarity analysis, pattern matching, signatures, machine learning, and deep learning, to name a few. The key is to focus analysis on how users interact with data. Remember, risk is fluid. The risk of a behavior – even an unusual one – will depend on the sensitivity of the data being accessed or shared.
Automate the Response to Risk
Data analytics can reduce the time to identify a threat, but it’s also important to automate threat response. Once again, too many organizations simply respond to a growing number of alerts by throwing headcount at them. Instead, data loss prevention should be programmatic, with policy automated at the individual level.
Resources should be thrown only at the highest immediate risks, while routine security decisions should be handled automatically. With automation, organizations can actually reduce their headcount without compromising security – saving money while achieving precise, real-time risk mitigation.
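A sketch of that triage policy, assuming alerts already carry a numeric risk score (the thresholds and queue names are invented): routine alerts are closed automatically, mid-range ones receive an automated remediation, and only the highest risks reach a human analyst.

```python
def triage(alerts, auto_threshold=30, escalate_threshold=70):
    """Programmatic response: route each (alert_id, risk_score) pair to
    the cheapest adequate handler, so analyst time is spent only on the
    highest immediate risks."""
    queues = {"auto_close": [], "auto_remediate": [], "analyst": []}
    for alert_id, score in alerts:
        if score < auto_threshold:
            queues["auto_close"].append(alert_id)
        elif score < escalate_threshold:
            queues["auto_remediate"].append(alert_id)
        else:
            queues["analyst"].append(alert_id)
    return queues

q = triage([("a1", 5), ("a2", 45), ("a3", 90)])
assert q["analyst"] == ["a3"]       # humans see only the top risk
assert q["auto_close"] == ["a1"]    # routine alerts never reach a person
```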
The Bottom Line
Ideally, the far right of the NIST Cybersecurity Framework focuses on proactive detection, response, and remediation – steps that happen concurrently and continuously. Identifying valuable risk insights and turning them into actionable protective measures remains challenging in government environments, especially with more data and devices on networks than ever. But with continuous evaluation, analytics, and automation, it can be done. Too many organizations are drowning in alarms and alerts, while struggling to review and triage security content, adjust system policies, and remediate risk. By taking a holistic, proactive approach, organizations can identify and respond to risks in real time, adapting their security as needed.