
The Impact of NIST’s PQC Standardization on the Federal Cybersecurity Ecosystem

By Kaniah Konkoly-Thege, Chief Legal Counsel, SVP Government Relations at Quantinuum

A great deal has taken place in the quantum industry since the National Institute of Standards and Technology (NIST) announced its selection of PQC algorithms for standardization in 2022. From technology to global policy, advancements are leading experts to predict a faster timeline to fault-tolerant quantum computers. Those same advances may also accelerate the point at which a future quantum computer could overwhelm the encryption tools we currently rely upon to protect everything from national security information to banking and healthcare data. The newly released NIST PQC standards are a critical step toward protecting data in the quantum age and warrant the attention of the entire federal cybersecurity ecosystem.

Since NIST Algorithm Selection (2022): Industry Progress

Over the last decade, the quantum information science ecosystem has moved from early-stage scientific exploration and investigation into applied commercial research and development. Governments worldwide now view quantum as a strategic technology critical for both economic and national security.

In an industry composed mostly of start-ups and tech giants, several important advancements have taken place over the past two years. Quantum hardware has, for certain problems, moved beyond the limit of what supercomputers can simulate; software integration has advanced quantum computing beyond the current noisy intermediate-scale quantum (NISQ) level to Level 2 resilient quantum computing; and, from a cybersecurity perspective, NIST, the world’s leading standards organization, has now released standards for a new cryptographic system.

Advancements in Global Government Investment

According to U.S. National Security Advisor Jake Sullivan, “… advancements in science and technology are poised to define the geopolitical landscape of the 21st century … Preserving our edge in science and technology is not a ‘domestic issue’ or ‘national security’ issue. It’s both.” This sentiment has been reflected by government officials all over the world. As quantum technology has advanced from the lab to the marketplace, the need to fund quantum research and commercialization, while also fortifying critical systems and data to withstand future cyberattacks from a quantum computer, has become even more stark.

The World Economic Forum estimates that governments have invested over $40 billion USD in quantum technologies as of January 2024, with over $15 billion invested by China alone. 2023 was the first year where government funding outpaced private funding in a sign that governments increasingly view quantum as an integral piece of international competitiveness from both an industrial and a military policy perspective.

The global race to lead in quantum technologies is very much ongoing. According to Sullivan, the U.S. must “… ensure that emerging technologies work for, not against, our democracies and security.” The NIST announcement is further proof that the time for governments and companies to invest in quantum solutions is now.

Advancements in U.S. Government Policy

As NIST’s initial PQC algorithm competition advanced, a series of U.S. government actions was taken with the goal of protecting government data and cybersecurity systems against fault-tolerant quantum computers:

  • In May of 2022, President Biden issued the National Security Memorandum on Promoting United States Leadership in Quantum Computing While Mitigating Risks to Vulnerable Cryptographic Systems (NSM-10).
  • The Quantum Computing Cybersecurity Preparedness Act was signed into law in December of 2022. The Act acknowledges the threat to encryption posed by fault-tolerant quantum computers and seeks to mitigate that threat by strengthening U.S. government agency systems and instructing the Office of Management and Budget to issue further guidance one year after NIST issues its PQC standards (which implies a deadline of August 13, 2025).
  • A key piece of legislation, the National Quantum Initiative Act (NQIA) enacted by Congress in December 2018, authorized over $1.2 billion to support quantum research and development. The NQIA expired on September 30, 2023, and while the NQIA Reauthorization was unanimously reported out of the House Committee on Science, Space, and Technology earlier this year, it is currently awaiting a vote in the House of Representatives and would then need to be taken up by the Senate.

While government investment does not directly equate to the regulation of quantum, it is clear that the NQIA and other government funding sources have influenced, and will continue to influence, the behavior of companies in the quantum ecosystem. Government strategies and funding schemes often function as soft-law regulations that signal government priorities and guide private investment, research initiatives, workforce development, and diplomatic decisions across the globe.

Quantum, like many emerging technologies, sits at the crossroads of technology and international relations, and the funding and scaling of quantum businesses will likely be heavily affected by geopolitics and government strategies over the coming decade.

PQC Milestone: Post-Standardization Begins

NIST’s standardization announcement on Aug. 13 marks the start of a new era, one of planning and implementation. Specifically, this milestone is critical to federal agencies and agency partners who are mandated under NSM-10 to transition to quantum-resistant cryptography by 2035. According to the mandate, some key post-standardization requirements take effect:

  • Federal civilian agencies must begin regularly reporting their timelines and plans for making the transition. NSA and CISA also advise federal partners to be prepared to support PQC as soon as possible after standardization.
  • The Secretary of Commerce will be proposing (within 90 days) a timeline for the deprecation of quantum-vulnerable cryptography standards. The goal will be to move “the maximum number of systems off quantum-vulnerable cryptography” over the next decade.
  • Heads of agencies operating or maintaining National Security Systems (NSS) must submit (within one year) an initial plan to transition to quantum-resistant cryptography in all NSS.

This is more than a box-checking exercise, as these standards will take on the force of law for federal agencies and their partners under the NSM-10 mandate. Additionally, the PQC algorithms are likely to become the “market standard” in the private sector and to be encompassed in the definition of “adequate cybersecurity measures” in commercial contracts, audits, and due diligence exercises.

Cryptographic Agility and Resilience

The U.S. government’s quantum computing cybersecurity preparedness must remain flexible, reflecting the evolving nature of the technological breakthroughs across the industry as well as the ever-increasing capacities of threat actors who may seek to capitalize upon quantum.

PQC migration is a necessary and critical step toward protecting vulnerable digital systems from powerful future quantum computers. The migration will take years, and a hybrid approach that uses today’s algorithms, such as RSA, alongside PQC algorithms is prudent to maintain adequate security should any issues arise during the transition. Maintaining this cryptographic agility will be key to ensuring cybersecurity against threats, classical and quantum alike.
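To make the hybrid approach concrete, here is a minimal sketch (in Python, for illustration only) of hybrid key establishment: a classical key exchange stands in for today’s algorithms, a post-quantum KEM supplies a second secret, and a key-derivation function combines the two so the session key remains safe as long as either component holds. The X25519 and HKDF calls use the widely available cryptography library; the ML-KEM encapsulation is a hypothetical placeholder for whichever standardized PQC implementation an agency adopts.

```python
# Illustrative sketch only -- not a vetted implementation.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def mlkem_encapsulate(peer_pqc_public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical stand-in for ML-KEM (FIPS 203) encapsulation.

    Returns (ciphertext_to_send_to_peer, shared_secret)."""
    raise NotImplementedError("plug in a real PQC library here")

def hybrid_session_key(peer_classical_public_key, peer_pqc_public_key: bytes) -> bytes:
    # 1. Classical exchange (stand-in for the RSA/ECDH in use today).
    ours = X25519PrivateKey.generate()          # ours.public_key() goes to the peer
    classical_secret = ours.exchange(peer_classical_public_key)

    # 2. Post-quantum encapsulation (placeholder above).
    pqc_ciphertext, pqc_secret = mlkem_encapsulate(peer_pqc_public_key)

    # 3. Derive one session key from BOTH secrets, so breaking either
    #    algorithm alone does not expose the traffic.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-pqc-demo",
    ).derive(classical_secret + pqc_secret)
```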

To achieve true resilience against quantum attacks, government agencies and private organizations should consider a layered-defense strategy that includes PQC and cybersecurity solutions that leverage quantum mechanics, such as provable quantum entropy for encryption key generation. When combined with PQC algorithms, these quantum-derived technologies can help protect against a much broader range of threats posed by quantum computers.

The Unfunded Mandate Challenge

According to the OMB report delivered to Congress last month, the total government-wide cost of migrating prioritized information systems to PQC between 2025 and 2035 will be approximately $7.1 billion in 2024 dollars. This total does not include funding for National Security Systems, which is to be estimated separately.

Prior to the standardization, NSM-10 discouraged the procurement of any commercial quantum-resistant cryptographic solutions. Now that the initial standardization is complete, federal agencies will be authorized to procure such solutions. The question then becomes: how will these procurements be funded? It is vital for Congress to reauthorize the NQIA and fund programs that further commercialize quantum computing, while also pulling policy levers such as tax incentives, loan guarantees, and strategic investments so that U.S. government agencies can procure solutions to identify vulnerabilities and transition to PQC algorithms over the coming decade.

What Next and When?

Beyond the standardization of these initial PQC algorithms, NIST has made further calls for digital signature algorithm candidates, seeking to diversify its algorithms to increase the probability that these solutions will remain secure as the technology continues to develop. It will be several years before these additional signature algorithms are standardized.

While there is no set formula for assessing the risk and timing of the quantum threat, federal agencies and partners can rely on progress in three areas as indicators: hardware progression, error correction, and algorithm development. Given where we stand today, completing agency migration to PQC to protect sensitive defense and critical infrastructure systems and information must be prioritized, as technological developments could necessitate such quantum-secure solutions sooner than 2035.

While exact timelines remain unknown, federal agencies should focus on enhancing cryptographic agility so the U.S. remains resilient against potential quantum computing threats. The hundreds of cybersecurity partners supporting U.S. government systems should also take note: those who have not yet integrated PQC algorithms into their offerings should expect to see those offerings listed as vulnerable systems on annual inventory reports until they are compliant, or risk losing their government contracts.

Generative AI is Revolutionizing Federal Government Operations

By Darren Guccione, CEO and Co-Founder of Keeper Security

Generative Artificial Intelligence (GenAI) offers innovative applications that could help the federal government adapt to changing times. Some use cases are experimental, exploring new ideas, while others are already delivering positive results by enhancing human effort for difficult projects. GenAI also has the potential to streamline tedious, yet essential, tasks through automation.

Within federal agencies, pilot programs are testing how artificial intelligence that can generate content could modernize outdated processes. Some forward-thinking trials are hatching ambitious concepts by tapping into machine-learning capabilities. In some agencies, generative models have already been deployed to boost mission-critical initiatives that had become stagnant. However, one of GenAI’s most transformative uses may be automating repetitive tasks that can bog down workflows.

GenAI Enhancing Federal Operations

Within the federal government, GenAI is being utilized in various scenarios to enhance operations. It assists in deciphering complex regulatory frameworks that can be difficult to interpret. GenAI is also helping to reduce costs by optimizing processes and minimizing errors that can lead to inefficiencies. Another valuable application is retaining the knowledge of skilled civil servants approaching retirement by capturing their expertise in digital form.

For federal IT professionals, GenAI presents an opportunity to modernize outdated systems and bring emerging technologies into government operations. It represents an avenue for positive change – leveraging advanced AI to enhance efficiency and knowledge management across agencies.

Navigating Evolving Defense Cybersecurity Regulations

Regulatory frameworks in the federal sector continue to expand and evolve, making it crucial for agencies and contractors to remain up to date with the latest changes. This includes maintaining high levels of information security to stay compliant with programs such as the Federal Risk and Authorization Management Program (FedRAMP), System and Organization Controls (SOC), the International Organization for Standardization (ISO), the Cybersecurity Maturity Model Certification (CMMC), and the Defense Federal Acquisition Regulation Supplement (DFARS).

Recently, FedRAMP updated its guidance with new controls through NIST 800-53 Revision 5 (Rev.5), which requires cloud service providers selling to the federal government to enhance their cybersecurity practices even further. These rules present a complex landscape to navigate.

AI-Driven Compliance Assistants

A GenAI chatbot trained on regulatory frameworks can distill compliance requirements into easily digestible information. This can help users grasp the sometimes complex regulatory compliance language without needing to be experts, thus reducing the risk of non-compliance.

AI-driven compliance assistants make strategic recommendations regarding regulatory requirements, processes and practices, which help drive efficiencies. Cloud service providers in the federal marketplace would benefit from AI-driven compliance assistants as they must comply with regulatory frameworks to do business with the government.
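As a rough illustration of the kind of assistant described above, the sketch below uses the OpenAI Python client to answer a compliance question grounded in an excerpt of framework text. The model name, system prompt, and the idea of passing in a retrieved excerpt are illustrative assumptions, not a production design or a description of any vendor’s product.

```python
# Illustrative sketch of an AI-driven compliance assistant (assumptions noted above).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a compliance assistant. Answer questions about FedRAMP, SOC, ISO, "
    "CMMC, and DFARS in plain language, name the relevant control or clause, "
    "and recommend consulting a compliance officer when a question is out of scope."
)

def ask_compliance_question(question: str, framework_excerpt: str) -> str:
    """Distill a regulatory requirement into plain-language guidance."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            # Grounding the answer in actual framework text reduces hallucination;
            # in practice the excerpt would come from a retrieval step over the
            # agency's authoritative documents.
            {"role": "user",
             "content": f"Framework text:\n{framework_excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Example call (the excerpt variable is hypothetical):
# print(ask_compliance_question("What does this control require for MFA?", rev5_excerpt))
```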

Example: CMS Leveraging Large Language Models

The Centers for Medicare & Medicaid Services (CMS) illustrates how GenAI helps train new CMS staff in aligning and translating technical documentation from federal marketplaces with CMS accounting systems. This challenge proves daunting for new civil servants navigating the complex federal healthcare systems.

The Financial Management Systems Group / Division of Program and Data Management used a Large Language Model (LLM) to address several challenges: capturing legacy knowledge from a retiring workforce, maintaining operations during hiring freezes, training new recruits, and processing millions of claims to ensure Medicare recipients receive their benefits. Using Llama 2, Meta’s open-source LLM, the finance team obtains fast, precise information with context for the CMS consumer-facing front end.

GenAI Usage and Trust in the Government Sector

Microsoft’s Azure AI services, powered by OpenAI’s language models, now offer benefits for the U.S. government sector, indicating that AI systems will continue to streamline tasks and processes, like those faced by the CMS. In 2022, Microsoft announced plans to make Azure OpenAI Service available for U.S. government customers, bringing large language model capabilities to Azure Government cloud.

Additionally, OpenAI’s latest GPT-4 model has advanced capabilities that support its use in highly regulated environments. While many government agencies have already experimented with AI for smaller workloads, GPT-4’s enhanced performance on sensitive tasks with limited input/output data will be advantageous. GenAI capabilities can streamline numerous processes and repetitive tasks, allowing organizations to focus more on mission-critical operations.

Government technologists are exploring the use of large language models like GPT-4 to assist in drafting control language for various cybersecurity frameworks that ensure contractor compliance with standards. As a Governance, Risk and Compliance (GRC) tool, GenAI can help draft policies and interpret evolving compliance requirements. However, technology leaders and GRC staff still play crucial roles in testing, verifying and documenting the AI-assisted outputs.

Best Practices for AI Safety and Security

To ensure the safe and appropriate use of GenAI tools, agencies should implement stringent best practices for AI safety and security. Top references include the Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence, and the NIST AI Risk Management Framework: AI RMF 100-1.

Conclusion

Government agencies are leveraging numerous GenAI use cases, but they have only scratched the surface of what’s possible. In this sector, organizations must exercise an extremely high level of caution and diligence when applying AI tools, following ethical and technically rigorous best practices for the safe and effective use of GenAI.

Hundreds of resources help organizations understand regulatory compliance frameworks. Zoya Schaller, CISSP, CGRC, Director of Cybersecurity Compliance at Keeper Security, used OpenAI to create a Cybersecurity Regulatory Compliance Advisor to answer questions about compliance with FedRAMP, SOC, ISO, CMMC and DFARS.

Implementing GenAI capabilities in a secure and strategic manner will be critical for long-term success and continued innovation. This will help U.S. government entities stay competitive, maintain a cutting-edge position with emerging technologies, and meet the stringent standards and practices required for the ethical and safe adoption of artificial intelligence.

While the potential benefits are substantial, the public sector’s adoption of GenAI must proceed with prudence and adherence to robust governance frameworks. Upholding the public trust through responsible AI oversight, as well as human oversight over critical processes, is paramount as these technologies are judiciously integrated into government operations.

NIST’s New PQC Algorithms and What They Mean for Federal Agencies

By: Dr. Matthew McFadden, Vice President of Cyber, GDIT

The cybersecurity landscape is evolving rapidly with last week’s release of new post-quantum cryptography (PQC) algorithms by the National Institute of Standards and Technology (NIST). These algorithms mark a critical step forward in preparing for the post-quantum era, providing a roadmap for agencies to begin their transition to quantum-resistant encryption. NIST is encouraging agencies to begin transitioning to the new standards as soon as possible.

One of the most fundamental aspects of cybersecurity is encryption. Without it, it is nearly impossible to safeguard data – even concepts such as zero trust cannot fully protect data without encryption. Encryption has become second nature and a mandatory requirement within almost all cybersecurity standards today. The challenge now is that PQC is becoming a necessity as “harvest now, decrypt later” attacks emerge as a real risk.

Almost every part of an information system depends on some form of public-key cryptography, and current public-key algorithms are vulnerable to being broken by a sufficiently capable quantum computer. This means that adversaries who have recorded, extracted, or stolen data may be able to decrypt that information either now or as quantum computers become more advanced. The uncertainty about our adversaries’ true capabilities only magnifies the threat. At risk are sensitive emails, websites used to transmit or store data, and any data traversing the internet – all of which rely on the encryption provided by public-key cryptography.

Public-key cryptography is deeply integrated into agency information systems, so keeping an accurate inventory of it will be a continuous task. Agencies will need to regularly update their discovery and assessment methods and migrate systems, hardware, and software to ensure they are patched, updated, and replaced. This ongoing process will require continuous investment, which will be essential during and after the migration to meet PQC standards.
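As a small, hedged example of what one slice of that discovery work can look like, the sketch below scans a directory of PEM certificates and flags quantum-vulnerable public keys (RSA and elliptic-curve). It uses the Python cryptography package; the paths and report fields are illustrative, and a real inventory would also cover protocols, libraries, hardware, and embedded keys.

```python
# Illustrative sketch: flag quantum-vulnerable public keys in PEM certificates.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

QUANTUM_VULNERABLE = (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)

def inventory_certificates(root: str) -> list[dict]:
    findings = []
    for pem_path in Path(root).rglob("*.pem"):
        cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
        key = cert.public_key()
        findings.append({
            "path": str(pem_path),
            "subject": cert.subject.rfc4514_string(),
            "key_type": type(key).__name__,
            "quantum_vulnerable": isinstance(key, QUANTUM_VULNERABLE),
        })
    return findings

# Example: for finding in inventory_certificates("/etc/ssl"): print(finding)
```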

The transition of federal agency systems based on Office of Management and Budget and Office of National Cyber Director inventories is projected to cost approximately $7.1 billion between 2025 and 2035, as outlined in the OMB’s Report on Post-Quantum Cryptography. This report highlights the significant funding that may be required for agencies to move away from quantum-vulnerable cryptography. While much of the focus has been on high-value assets, non-critical functions, operational technology, and IoT devices must also be considered. Understanding and quantifying the true scope of migration is an ongoing challenge.

The OMB report outlines four key strategies for PQC to be successful:

  1. Comprehensive and ongoing cryptographic inventory is a key baseline for successful migration to PQC.
  2. The threat of “harvest now, decrypt later” attacks means that the migration to PQC must start before a cryptographically relevant quantum computer (CRQC) is known to be operational.
  3. Agencies must prioritize systems and data for PQC migration (see the illustrative sketch after this list).
  4. Systems that will not be able to support PQC algorithms must be identified as early as possible.
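To illustrate the prioritization called for in item 3, here is a hypothetical scoring sketch. The factors and weights are invented for illustration; an agency’s actual prioritization would follow OMB and agency-specific guidance.

```python
# Hypothetical prioritization score for PQC migration (illustrative only).
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    data_sensitivity: int        # 1 (low) .. 5 (high)
    internet_exposed: bool       # traffic that could be harvested today
    confidentiality_years: int   # how long the data must stay secret
    supports_pqc: bool           # can the platform accept new algorithms?

def migration_priority(s: SystemProfile) -> float:
    score = float(s.data_sensitivity * s.confidentiality_years)
    if s.internet_exposed:
        score *= 2.0             # "harvest now, decrypt later" exposure
    if not s.supports_pqc:
        score *= 1.5             # flag early: needs replacement, not just an update
    return score

systems = [
    SystemProfile("benefits-portal", 5, True, 25, True),
    SystemProfile("hr-intranet", 3, False, 7, True),
    SystemProfile("legacy-field-gateway", 4, False, 30, False),
]
for s in sorted(systems, key=migration_priority, reverse=True):
    print(f"{s.name}: priority {migration_priority(s):.1f}")
```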

To ensure the long-term defense of critical information systems and the data they store and process, it is crucial to implement and prioritize migration to Post-Quantum Cryptography now that the NIST-approved algorithms are available. By engaging with industry experts and leveraging the latest tools and technologies, agencies can streamline the PQC migration process. Migrating public-key cryptography to PQC will require deliberate planning, and agencies need a trusted partner to ensure their cryptography strategy is innovative and ready for the post-quantum future.

Addressing the U.S. Quantum Labor Shortage Before It’s Too Late

By Dr. Celia Merzbacher, Executive Director of Quantum Economic Development Consortium (QED-C)

There is a global race underway to lead in the critical and emerging technology (CET) area of quantum technologies. Nations are launching national initiatives and making significant investments of public funds in this area knowing that their economic and national security is at stake. Yet, without adequate talent, these initiatives will falter or fail. The competition for talent is a global competition that the U.S. can’t afford to lose.

The Quantum Labor Landscape

Occupations related to emerging technology sectors such as quantum are inherently new and are often multidisciplinary. They are characterized by a rapid growth in demand that outstrips supply, thereby creating a shortage of qualified workers.

A survey of QED-C members in early 2024 aimed to better understand current hiring trends and challenges as they relate to the shortage. Many respondents had planned to hire quantum workers in 2023, with some planning to hire up to 12. However, of the 22 companies that reported a hiring goal, almost 60 percent said they did not meet that goal by the end of the year. And when they did manage to hire, almost 80 percent said it took between two and six months to fill an open position, and 16 percent said it took up to a year.

The challenges to hiring talented workers remained similar to those identified in QED-C’s 2020 survey. In 2024, the largest barriers to hiring quantum talent include the following:

  • Lack of candidates with required/desired qualifications (61 percent of survey respondents);
  • Lack of candidates with the right to work in the U.S. (39 percent); and
  • The amount of time required to get a visa (22 percent).

Sixty-seven percent of respondents agreed that it has become more difficult to recruit qualified quantum workers in the past year, and 92 percent agreed that there is a shortage of U.S. citizens and permanent residents with quantum qualifications.

Some countries have implemented programs aimed to expedite and lower barriers for skilled workers to immigrate. Canada’s Express Entry, for example, streamlines the skilled worker application management process. The target for Express Entry is 110,700 permanent resident admissions in 2024, rising to 117,500 in 2025 and 2026.

Through collective industry data and awareness efforts in the U.S., the lack of qualified talent is being recognized, and efforts to attract and retain top quantum researchers and workers are part of programs such as the National Quantum Initiative. Yet, without expediting and lowering barriers for skilled workers to immigrate, the U.S. will not be able to compete for top talent.

What is a Quantum Worker?

A “quantum technologist” can be described as a technical professional with skills related to the design, research, development, production, manufacture, or sale of quantum-based hardware or software systems or critical enabling components. The skills, expertise, education, or training required depend on the specific role.

Advancement in quantum technologies spans from basic research at universities and national laboratories to applied research, product development, and manufacturing at businesses small and large. Occupations in all of these sectors are essential to U.S. competitiveness. Most positions at universities and national laboratories require a PhD, and possibly post-doctoral training, in fields ranging from physics and chemistry to engineering and computer science. Jobs in industry, however, have more diverse requirements in terms of skills, knowledge, and preferred degree.

A study based on a 2020 survey of QED-C member companies found that:

  • Companies in the quantum ecosystem include hardware and software developers, engineering research firms, suppliers of enabling technologies, and service providers (e.g. patent firms);
  • There are many different jobs or work roles;
  • Different jobs have different skills and knowledge requirements; and
  • The preferred degree for various quantum jobs ranges from associate to bachelor to master to PhD.

The study showed that most jobs do not have quantum in the title, that the most frequently sought skills are not quantum-specific (e.g., experience in programming or working with lasers), and that most jobs do not require a PhD; instead, a bachelor’s or master’s degree is preferred.

Addressing the Shortfall in Quantum Workers Using Schedule A

In a recent RFI, the Department of Labor (DOL) Employment and Training Administration (ETA) asked for input on how to modernize the Schedule A occupation shortage list. Schedule A is an expedited pathway to obtaining permanent residency, also known as a green card, but currently no STEM occupation is listed on it. Challenges facing the U.S. quantum ecosystem and other emerging technology sectors would be greatly mitigated if the Schedule A occupation list were updated to include occupations essential to the CET areas identified by the National Science and Technology Council.

With a growing proportion of U.S. STEM programs populated by international students, companies will continue to be reliant on workers who require work authorization to remain in the United States. This is a primary pipeline of top-notch talent who hopefully will become permanent residents and eventually citizens. Using Schedule A to address the shortage of qualified STEM workers, particularly for positions related to quantum, is one way that the Department of Labor could help businesses to attract and retain qualified workers.

QED-C, which partners with many government agencies, can provide information about the skills and knowledge that employers in CET industries require and regarding the supply and demand of qualified talent. At the core of a successful quantum business or a top-notch university quantum research program are talented people. There is fierce competition for qualified workers in all parts of the quantum ecosystem. Together, we must address the U.S. quantum labor shortage before it’s too late.

QED-C is an industry-driven consortium of stakeholders, managed by SRI, that aims to enable and grow the quantum industry. In accordance with the 2018 National Quantum Initiative Act, the consortium was established by the National Institute of Standards and Technology (NIST) in the U.S. Department of Commerce.  Today, QED-C has more than 240 members from across the quantum ecosystem, including corporations, universities, and national laboratories.  More than 40 government agencies are engaged in QED-C as a means to achieve their respective missions.

How a Community Vigil Approach and Secure by Design are Critical to Software Cybersecurity

By Travis Galloway, head of government affairs, SolarWinds

The threat landscape in cybersecurity continues to evolve at breakneck speed, with new challenges emerging daily. Among the most pervasive are sophisticated cyberattacks sponsored by nation-states. These attacks are a growing menace to private businesses and public agencies alike, posing severe consequences for our collective security.

Private sector businesses recognize that they can be targets of advanced nation-state hackers seeking to achieve their geopolitical goals. For instance, China has been known to engage in extensive cyber espionage campaigns aimed at stealing technology and intellectual property to overtake the U.S. economy and its businesses. It was recently reported that the Chinese state-sponsored Volt Typhoon group also persistently targets vulnerabilities in critical systems like electric grids, water systems, and ports.

These evolving cybersecurity threats to our nation were a central theme at the recent SolarWinds Day: A Trusted Vision for Government IT panel event, where SolarWinds President and CEO Sudhakar Ramakrishna was joined by Congressman Raja Krishnamoorthi, D-Ill., and Christopher D. Roberti from the U.S. Chamber of Commerce.

“It’s a very scary thing if you think about it,” said Rep. Krishnamoorthi, reflecting on the growing nation-state threats facing the United States. “Operation Typhoon is meant to preposition malware in our utilities in water systems, electric grids, you name it.”

During the event, the panelists highlighted several important takeaways for shoring up our shared cyber defenses, supply chain, and other critical infrastructure. The discussion emphasized the importance of public-private partnerships, adopting a “Secure by Design” framework that ensures security is an integral part of the entire software development process, and promoting ongoing cybersecurity education to ensure all levels of an organization share the responsibility of bolstering our shared defenses.

The Enduring Importance of Public-Private Partnerships

Today, no single entity can combat sophisticated cybercriminals and nation-state adversaries alone. As the Cybersecurity and Infrastructure Security Agency (CISA) has highlighted, a collaborative effort among governments, private entities, and individuals is necessary if we are to address ongoing cyberwarfare successfully.

“No company, no matter how big or sophisticated, has a chance against a nation-state adversary,” said Roberti. “Therefore, the U.S. government needs to use its authority and capabilities together with the knowledge and resources of the private sector to tackle the threat.”

Part of this is the increasingly popular concept of a “community vigil,” where government agencies, private sector businesses, and other stakeholders work together to create a secure digital environment. This ongoing collaboration between governments and businesses underscores the significance of community-focused strategies to enhance national cybersecurity. By fostering transparent communication and resource sharing, organizations can reinforce our collective defenses – and harness a collective intelligence much greater than what any could achieve alone.

Setting New Standards Through a Secure by Design Framework

Cyber resilience involves more than just preventing cyberattacks. It also means being able to recover quickly and having strategies in place to detect and manage any breaches. This can be achieved by embracing Secure by Design guiding principles for software security and cyber resiliency. Informed by years of experience from industry-leading cybersecurity experts, the SolarWinds Secure by Design initiative is a gold-plated cybersecurity approach to software build systems and processes that provides an effective and novel defense for thwarting advanced supply chain cyber threats.

The proactive Secure by Design approach embeds security into software systems right from the start. By addressing security early in development, organizations can mitigate risks before they progress into serious threats. There are several ways to put Secure by Design into practical use. Engaging with governmental and regulatory bodies like CISA can enhance the flow of cyber threat information between public and private sectors.

Recently, CISA introduced the Secure Software Development self-attestation form to help organizations declare their cybersecurity commitments in a standardized, formal, and consistent manner. The form serves multiple purposes: it encourages organizations to evaluate their security postures, provides valuable data for benchmarking industry standards, and fosters a transparent environment where enterprises can learn from each other’s best practices.

Integrating Cyber Resilience as a Universal Responsibility

Driving a cultural shift to a Secure by Design posture requires clear communication from the top down that cybersecurity is a shared responsibility, where everyone plays a part in safeguarding the organization’s digital assets. This means that every individual in an organization, regardless of their level of technical expertise, must be aware of their role in maintaining a secure digital environment.

“Security information should be like a utility,” said Ramakrishna. “(Something) everyone should have access to (in order) to protect themselves. It shouldn’t be the job of just a few people.”

However, for this approach to work, ongoing education on the importance of cybersecurity and the steps that can be taken to ensure its success is essential for both technical and non-technical employees. This will help promote awareness and ensure that everyone understands their role and is prepared to play it well.

For executives at the highest level, this education is essential for making informed decisions, understanding how cybersecurity affects the business as a whole, and leading by example in promoting a culture of security awareness. For technical and IT staff, continuous learning helps them stay ahead of emerging threats, understand complex security tools and technologies, and implement best practices for threat mitigation. These initiatives make cybersecurity easier for everyone, even employees in non-technical roles, to understand and stress the importance of good digital practices to reduce human error – one of the primary causes of breaches.

Moving forward, embracing all of these strategies will not only enhance our collective cyber defenses but also set new industry standards – ones that prioritize security and resiliency of the systems we all rely on. As we continue to navigate a landscape marked by the ever-growing sophistication of cyber threats, our success in safeguarding the digital world will depend on our ability to adapt, innovate, and unite under this critical common goal.

Addressing the Talent Shortage: How Digital Government Improves Satisfaction, Retention

By Kelly Davis-Felner, Chief Marketing Officer, PayIt

Public sector leaders face ongoing challenges to do more with less, and the talent crunch is a key factor in that. So while the need to access public services continues to increase, many agencies are simultaneously struggling to meet staffing needs.

In fact, it was reported in 2023 that 650,000 public sector jobs remained vacant, impacted by the “Great Resignation” during the pandemic and the “silver tsunami” of public sector retirements that outpace hiring. And in 2022, a MissionSquare Research Institute survey revealed that 52% of state and local government employees were considering leaving their jobs.

Bridging the Talent Gap with Technology

Staff vacancies can make it harder to drive modernization projects forward, as IT leadership is faced with competing priorities. Yet, government leaders have cited digital service delivery as a top priority for 2024. With increased demands, understaffed agencies find their workers feeling overworked and overwhelmed – add in the frustration of fielding support calls when technology solutions aren’t working effectively and there can be a real morale problem.

But, when technology is working well for staff, it’s a game-changer.

In fact, in a recent survey conducted by PayIt and The Harris Poll, we learned that 83 percent of respondents expect the transition to digital government to enhance or positively impact the overall job satisfaction and engagement of employees in their organization.

Reducing redundant manual tasks and providing better tools to support customers improves employee satisfaction. And, ongoing technical education and experience from adopting and using modern software makes frontline staff more valuable assets to their agencies.

Fort Smith, Ark. serves as a great example of this principle at work. “Since launching our digital customer experience – PayIt Fort Smith – we’ve been able to increase resident adoption and reduce the number of manual transactions and support calls our staff processes,” notes Maria Miller, Fort Smith Citizen Services Program Manager. “Customers are happy with the options that are offered by PayIt Fort Smith as well as the customer-friendly aspect of the experience.”

With more agency workers fulfilled in their roles, retention can be improved and residents and staff alike ultimately have a better overall experience.

How to Keep Modernization on Track

Even departments facing a worker shortage can still move to an agile, digital-first approach – and each step taken toward modernizing will make future upgrades that much easier. You can ignite a domino effect in your agency when you:

  • Modernize the resident interface first. Providing a way for people to self-serve frees up staff, improving the productivity of the whole team (regardless of size).
  • Buy, don’t build. Building in-house will take a lot of time and money, and if the department is already short-staffed, it’s probably impractical. Instead, work with a vendor and with a platform that’s ready to go and can integrate into antiquated systems – and do so quickly.
  • Look for a partner that can function as an extension of your team. If the agency staff is already at capacity, prioritize finding a vendor that comes armed with dedicated and specialized assistance (e.g., customer support, engineers, or other technical support).
  • Start small and grow iteratively. Find success in increments: digitize one service, and then add on. This approach allows you to do more with less – and each new service or feature builds on the previous launch, allowing the team to apply learnings and streamline the process.

Attracting and Retaining Talent

Investing in effective modernization efforts helps to reduce the workload, but state and local government staff are still a critical bridge between residents and the governments that serve them. So, what can governments do to ensure the “human touch” is still present and available for residents?

There isn’t a single solution that will help relieve staffing shortages, but there are a few shifts that agencies can make to attract fresh applicants (and keep experienced talent):

  • Highlight the pros of working in the public sector. Although the private sector often pays better, it can be mundane and unpredictable. Many people want impactful, meaningful work so they can make a difference in their communities. The public sector also tends to offer more stable careers. Lean into those themes when hiring.
  • Offer training and career development opportunities. A McKinsey survey found that a lack of career development opportunities is a top reason employees leave their jobs. Show people that public sector jobs offer training such as conferences or technical certifications to upskill staff.
  • Promote flexibility and other benefits. People are looking for things like remote or hybrid roles, child-care reimbursement, great health insurance, and generous vacation time – offering such benefits can help close the wage gap.
  • Take equity and inclusion seriously. Multiple studies have reported that people are more productive in diverse environments – and they’re more likely to stay with a company that shares inclusive values.

How Agencies Can Use Digital Information to Their Advantage

The key here is consistent training and education for staff so they enhance their work with the tech solutions that governments are implementing. Upskilling or re-training staff with 21st-century skills gives the agency an edge – both in retaining talent and serving the community with modern digital services. Plus, by empowering staff with automation tools and allowing residents to self-serve, employees can devote more time to meaningful projects and programs.

According to Miller, “One of the primary benefits of taking some of the repetitive, manual tasks off our staff’s plates is that they can do what they do best: use their knowledge of the City to provide the best customer service, educate customers, and be proactive about addressing our residents’ needs — and at the end of the day, that’s why many of them came to work in the public sector.”

Here’s What We Can Learn (and Do) About Cybercrime from FBI’s Latest Internet Crime Report

By James Turgal, Vice President of Cyber Risk, Strategy and Board Relations, Optiv

The FBI recently released its annual Internet Crime Report for 2023, based on complaints received by the Internet Crime Complaint Center (IC3). The report paints a concerning picture of the cybersecurity landscape in the United States. With a record-breaking 880,418 cybercrime complaints filed in 2023, resulting in potential total losses that exceeded $12.5 billion, the need for a collective effort to strengthen national cybersecurity defenses is more critical than it’s ever been.

The 880,418 complaints represent a nearly 10 percent increase in complaints received, and the $12.5 billion represents a 22 percent increase in losses suffered, compared to 2022. As alarming as these figures appear, the true numbers are likely much higher, as many victims of cybercrime never report to authorities. For instance, the FBI’s report cites the Hive ransomware group and the fact that only about 20 percent of Hive’s victims reported their incidents to law enforcement. If that 20 percent figure held across the board, there would have been more than 4.4 million cyber incidents in the past year. That number is simply too high.

Diving deeper into the report, there are several key areas that demand more attention if we want to bring cybercrime numbers down in 2024.

Investment Fraud Leads the Pack

Investment fraud emerged as the report’s most damaging cybercrime in 2023. Losses surged 38 percent to a whopping $4.57 billion, highlighting a troubling rise in sophisticated financial scams. This dwarfs all other cybercrimes tracked by the IC3. Business Email Compromise (BEC) scams resulted in $2.9 billion in losses, while tech support scams disproportionately impacted the elderly, resulting in $924 million in losses. Though overall losses were lower for tech support scams, the impact on individual victims, especially those with limited technical knowledge, can be devastating.

To combat these issues and improve numbers moving forward, we need a targeted approach. Investment awareness campaigns for younger adults, cybersecurity training for businesses, and tech-support literacy initiatives for seniors are crucial. Collaboration between law enforcement, financial institutions, and cybersecurity experts is also essential to disrupt fraudulent operations and hold attackers accountable. Most importantly, we all must remember that if anything seems too good to be true, it probably is. No matter the situation, it’s always better to take your time and ask people you trust before giving away your personal information.

Ransomware Back on the Rise

Following a brief period of decline in 2022, ransomware attacks came roaring back in 2023, with a 74 percent increase in reported losses ($59.6 million) and an 18 percent increase in complaints reported (2,825). These significant increases underscore the growing sophistication of cybercriminals who are exploiting a growing number of vulnerabilities for substantial financial gain. Specifically, the FBI has observed emerging trends, such as the deployment of multiple ransomware variants against the same victim and the use of data-destruction tactics to increase pressure on victims to negotiate.

Ransomware’s resurgence demands a multi-faceted defense. Organizations must prioritize layered security, implementing robust controls across email, network, data, and endpoint protection. Breaking down security silos and integrating tools with an XDR platform is essential. This holistic view allows for a deeper understanding of attacker tactics, including emerging trends like double extortion and multiplatform threats. Frameworks like MITRE ATT&CK can further pinpoint vulnerabilities, while monitoring for activity associated with common attacker tools helps detect suspicious behavior.  Furthermore, regularly analyzing lessons learned and adapting your security controls is crucial for staying ahead of evolving threats.
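As one small, hedged example of “monitoring for activity associated with common attacker tools,” the sketch below checks process launches exported from an EDR/XDR platform against a short watchlist mapped to MITRE ATT&CK technique IDs. The watchlist, CSV format, and column names are illustrative assumptions, not a complete or recommended detection ruleset.

```python
# Illustrative sketch: flag watchlisted attacker tools in a process-launch export.
import csv

# Hypothetical watchlist of commonly abused tools mapped to ATT&CK techniques.
WATCHLIST = {
    "psexec.exe": "T1569.002 (Service Execution)",
    "mimikatz.exe": "T1003 (OS Credential Dumping)",
    "rclone.exe": "T1567.002 (Exfiltration to Cloud Storage)",
    "vssadmin.exe": "T1490 (Inhibit System Recovery)",
}

def flag_suspicious_processes(csv_path: str):
    """Yield (host, process, technique) for watchlisted process launches."""
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):   # assumes 'host' and 'process' columns
            proc = row["process"].lower()
            if proc in WATCHLIST:
                yield row["host"], proc, WATCHLIST[proc]

# Example:
# for host, proc, ttp in flag_suspicious_processes("process_launches.csv"):
#     print(host, proc, ttp)
```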

Phishing Remains Relentless

While phishing came in 21st on the list of most lucrative crime types, with losses totaling $18.7 million, it was once again the most prevalent cybercrime overall with nearly 300,000 complaints to the IC3 in 2023. This was almost five and a half times the second most popular complaint – personal data breaches.

This highlights the constant threat that phishing poses due to a general lack of cybersecurity awareness. We can all be vigilant by clicking with care and staying wary of suspicious emails, texts, and messages on social media, no matter where we are. To fortify our defenses, public awareness campaigns and education are key. Regular training programs can equip individuals and organizations alike to identify phishing attempts. Organizations should also leverage eLearning courses and run simulated phishing exercises to further train their employees and keep phishing top of mind. Additionally, implementing multi-factor authentication adds an extra security layer, which can make a big difference. Working together – individuals, organizations, and cybersecurity experts – we can significantly reduce the effectiveness of phishing attacks.

Looking Ahead

The FBI’s report highlights the growing threat of cybercrime, but it doesn’t have to define our future. By paying closer attention to the biggest threats, prioritizing cybersecurity awareness campaigns, fostering collaboration between public and private sectors, and implementing robust security measures, we can begin to turn the tide. Let’s make 2024 the year we collectively outsmart cybercriminals and create a safer digital landscape for everyone.

James Turgal is the former executive assistant director for the FBI Information and Technology Branch (CIO). He now serves as Optiv Security’s vice president of cyber risk, strategy and board relations. James has personally helped many companies respond to and recover from ransomware attacks and is an expert in cybercrime, cyber insurance, cybersecurity, ransomware and more. James draws on his two decades of experience investigating and solving cybercrimes for the FBI. He was instrumental in the creation of the FBI’s Terrorist Watch and No-Fly Lists.

Implementing AI Assurance Safeguards Before OMB’s December Deadline

By Gaurav (GP) Pal, stackArmor Founder and CEO

In March 2024, OMB released groundbreaking new guidance, in accordance with President Biden’s Executive Order on AI, for the government’s safe use of artificial intelligence – the first government-wide policy of its kind on AI.

Under this new policy, government agencies must implement mandatory AI safeguards that provide greater reliability testing and transparency for AI systems. Agencies must implement these safeguards by December 1, 2024.

The new mandates are designed to drive a thoughtful, considered approach to implementing AI assurance safeguards and to focus agencies on the steps needed for long-lasting AI safety and development in their operations.

To meet this deadline and create long-lasting change, agencies should leverage and augment existing practices – such as the Authority To Operate (ATO) process – to add AI Assurance guardrails checking for safety, bias, and explainability in addition to confidentiality, integrity and availability.  With new and emerging AI Risk Management guidance from NIST, ATOs with AI Risk Management Overlays can be applied to IT systems using AI so agencies can continue implementing safe solutions by assessing and managing risk.

New Guidance Will Lead to Safe AI Development

Over the last two years, we have seen a rapid evolution of technology with generative AI, making it imperative that the public sector catch up to this advancement for its successful and safe use.

The Biden administration and federal agencies have been making a significant effort to get ahead of advancing innovation by focusing on AI safety, development, and research. We have seen this through NIST’s AI Safety Institute Consortium (AISIC), announced in February, which brings together over 200 private sector stakeholders to help prepare the U.S. for AI implementation by developing responsible standards and safety evaluations.

NIST recently released helpful guidance designed to help manage the risks of generative AI. This guidance serves as a companion resource to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF).

What Agencies Need to do Ahead of the Deadline

Agencies should use documents like the NIST AI RMF to create a risk classification methodology and a risk baseline for conducting AI risk assessments ahead of OMB’s newly established December 2024 deadline.

To meet the ambitious deadline set forth in the new OMB guidance, agencies must take advantage of the methodologies and frameworks already in place, including NIST’s RMF and SSDF, and look to implement robust test and evaluation techniques on training data and models. Both frameworks are a good starting place for agencies looking for a high-level roadmap for AI security management.

By using the well-known RMF process to discover, classify, POA&M (plan of action and milestones), and monitor risks, leaders can leverage what is already available to them and drive long-lasting, sustainable change.
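To show what that discover, classify, POA&M, and monitor loop might look like in practice, here is a minimal sketch of a record structure for tracking an AI system through it. The field names, risk tiers, and example weakness are illustrative assumptions layered on NIST AI RMF concepts, not an official schema.

```python
# Illustrative sketch: track an AI system through discover -> classify -> POA&M -> monitor.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PoamItem:
    weakness: str        # e.g., "no bias evaluation on training data"
    milestone: str       # planned remediation step
    due: date
    status: str = "open"

@dataclass
class AiSystemRecord:
    name: str
    use_case: str
    risk_tier: str       # e.g., "rights-impacting", "safety-impacting", "low"
    poam: list[PoamItem] = field(default_factory=list)

    def open_items(self) -> list[PoamItem]:
        return [item for item in self.poam if item.status == "open"]

# Discover and classify a system, then record what must be remediated and monitored.
assistant = AiSystemRecord("benefits-eligibility-assistant", "citizen Q&A", "rights-impacting")
assistant.poam.append(PoamItem(
    weakness="no bias evaluation on training data",
    milestone="run disparate-impact tests and document results",
    due=date(2024, 11, 1),
))
print([item.weakness for item in assistant.open_items()])
```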

However, current frameworks lack the specific guidance and actions agency leaders need to implement the safeguards required under the OMB policy. Leaders, including Chief AI Officers and Chief Information Officers, will need to leverage additional tools, frameworks, and guidance to achieve these safeguards for the secure and responsible use of AI – adding to the complexities and challenges agencies already face.

Agencies should look to augment and leverage existing mechanisms to manage AI risk, enable mission success, and reap the benefits of generative AI and AI/ML technologies.

With OMB’s new guidance and the subsequent deadline looming, agencies have a great opportunity to enable the mission while ensuring that a safe, rights-respecting approach is integrated into their day-to-day operations.

Over the past two years, we have seen many new frameworks that agencies can use; however, the challenge will be integrating different systems and frameworks to meet the demands of the OMB guidance by December.

The December 2024 deadline for implementing AI safeguards presents a significant challenge for government agencies. However, by leveraging existing frameworks such as NIST’s RMF and SSDF, as well as implementing an authority to operate (ATO) system for AI, agencies can work towards meeting the requirements outlined by OMB. The focus on AI safety and development is crucial, and by taking proactive measures, agencies can ensure the responsible and secure use of AI systems in their operations.

The Next AI Wave: Quantum AI

By Dr. James Matney, Vice President, Defense Strategy, GDIT

Amid all of the (well-placed) excitement around artificial intelligence, quantum AI is an emerging field that combines the power of quantum computing with AI to create new and innovative solutions for an array of complex problems.

Here’s why: Quantum computing can solve certain complex problems in ways that classical computing cannot. Likewise, quantum AI has the potential to perform certain types of machine learning tasks far more efficiently than classical AI. By combining the two, we create new and powerful capabilities.

For instance, quantum AI could train neural networks for image and voice recognition on large datasets in a fraction of the time it would take classical AI, leading to more accurate predictions and better performance. More broadly, it promises more efficient processing of the large volumes of data that machine learning applications typically require.
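For readers who want a feel for what a quantum machine-learning building block looks like today, here is a minimal sketch of a variational quantum circuit trained as a tiny model. It uses the PennyLane simulator as an assumed, illustrative framework (the article does not name one), and the circuit, data, and training target are toy choices rather than a real image or voice workload.

```python
# Illustrative sketch: a tiny variational quantum circuit trained by gradient descent.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)   # 2-qubit simulator

@qml.qnode(dev)
def circuit(weights, x):
    qml.RY(x, wires=0)            # encode a classical feature as a rotation angle
    qml.RY(weights[0], wires=0)   # trainable layer: rotations plus entanglement
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))   # model output in [-1, 1]

def cost(weights, x=0.5, target=1.0):
    return (circuit(weights, x) - target) ** 2   # squared error on a toy example

weights = np.array([0.1, 0.2], requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.3)
for _ in range(50):
    weights = opt.step(cost, weights)

print("trained output:", circuit(weights, 0.5))
```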

Early Quantum AI Use Cases Hold Insights for Agencies

Though it is in its early stages, quantum AI has the potential to revolutionize many industries, and the learnings from early industry pilots hold tremendous insights for agencies.

In finance, quantum AI can be used to analyze financial data and identify trends, which leads to more accurate predictions and better investment decisions. One can imagine the impact on, perhaps, fraud detection within agencies such as the Securities and Exchange Commission, the Internal Revenue Service, or Centers for Medicare and Medicaid Services, to name a few.

In cybersecurity, a significant concern is that quantum computers can break many of the encryption algorithms used to secure data. Quantum AI could help create new, more secure encryption methods that are resistant to quantum computers. It can also enhance network security by analyzing large amounts of network traffic and detecting anomalies that may indicate a threat, improving the overall security of networks. Every agency has an interest in improving its cybersecurity posture and adding more dynamic detection capabilities, alongside the increasing adoption and maturation of new paradigms like zero trust.

In healthcare, quantum AI can be used to analyze medical images and identify patterns that may not be visible to the naked eye. This could lead to more accurate diagnoses and better treatment outcomes.

And in transportation, quantum AI can be used to optimize traffic flow and reduce congestion, leading to faster travel times and improved air quality. Alongside the recent multi-billion dollar investment in America’s infrastructure, implementing quantum AI to optimize that investment is a logical goal.

Preparing for Quantum AI Exploration and Adoption

No matter the agency or mission, it’s important to remember that quantum AI is not just about the technology itself. It also requires a skilled workforce capable of developing AI algorithms that can take advantage of the exponential increase in power that quantum computers bring. Already, companies and universities are increasing their focus on quantum computing and have programs directly related to quantum AI.

For example, today GDIT is actively working with universities and quantum technology companies on software skill development for developers and applying quantum AI techniques to solve real customer use cases. The most exciting part: Continued advances in quantum computing and in AI will generate progressively more sophisticated algorithms that will become more powerful and efficient, allowing for even greater performance on complex problems.

CDM’s Evolution to Non-Traditional Technology: Why Now and How Will it Succeed?

By Tim Jones and Alison King, Forescout

The Cybersecurity and Infrastructure Security Agency (CISA) earlier this year announced that the next phase of its Continuous Diagnostics and Mitigation (CDM) program would broaden to include non-traditional technology, such as Operational Technology (OT) and the Internet of Things (IoT), in 2024.

At its 2012 launch, CDM was focused on gaining centralized visibility of traditional IT assets across civilian agencies. Since then, CDM has broadened its scope to include assets in mobile and cloud platforms, laying the groundwork for extending parity in visibility across increasingly complex environments.

While it is not news that the entirety of the enterprise needs to be taken into consideration for risk and exposure programs to be successful in the age of zero trust, it is momentous that a program as large and influential as CDM (which spans 92 federal agencies and boasts 3 million endpoints) is set to embark on securing non-traditional technology assets. So, why now, and what will it take to overcome the challenges inherent in understanding, interacting with, and protecting OT?

Why now?

While the security risks of non-traditional technology are not new, three significant factors have now come to a head, creating the perfect storm to prompt CISA to declare that the next phase of CDM will incorporate OT and IoT. Those factors are A) the knowledge gleaned from the first part of CDM, B) the known increase in threats to critical infrastructure, and C) the evolution of cybersecurity policy.

  • A) The knowledge gleaned from the first phase of CDM: When CISA set the initial parameters of CDM to end at traditional IT, there was recognition that the network and its endpoints extend beyond the security perimeter. Through CDM’s initial work, CISA now has a dashboard with a single holistic view of what is going on across all civilian networks. Simply put, the more they see, the more they know what they don’t see on their networks, and those unseen assets represent a growing risk that is no longer acceptable to ignore. As they build out this visibility, CDM program managers can examine IoT and realize that a subset of assets have no home: they aren’t traditional IT, aren’t mobile platforms, and aren’t part of the organization’s cloud deployment. Surfacing these assets and devices reveals where unaddressed vulnerabilities may be hiding.
  • B) Increased threat to U.S. critical infrastructure from adversarial nation-states: In January, the FBI warned about the growing threat of Chinese Communist Party (CCP) cyberattacks against U.S. critical infrastructure. This is just one of many recent examples of state-sponsored malicious actors targeting civilian critical infrastructure.
  • C) Growing cybersecurity policy: Underlying CDM’s evolution to broaden its scope to include OT and IoT is, of course, policy. The amount of cybersecurity policy and regulation that has come out under the Biden administration is unprecedented. In December 2023, the Office of Management and Budget (OMB) issued a memo outlining the 2024 reporting requirements in accordance with the Federal Information Security Modernization Act of 2014 (FISMA). The memo takes a more holistic perspective on zero trust; all agencies are required to submit their annual Chief Information Officer and Senior Agency Official for Privacy metrics, as well as annual reports from their respective inspectors general, by Oct. 31, 2024.

What will it take to make CDM’s evolution to include non-traditional technology feasible?

One part of the challenge is that OT and IoT are built differently from IT. Non-traditional technology often consists of assets that don’t support agents or come from a managed service, which means a lot of custom operating systems. Another part of the problem is the abundance and longevity of non-traditional assets in federal networks. A third challenge is that broadening the scope of CDM’s purview increases the (already large) amount of asset data to track – ensuring the quality of that data is critical.

In preparation for this evolution, it is important to leverage highly scalable technology to handle the influx of new data elements. A single source of trusted data, and a solution that can address all three categories of IT, IoT, and OT in a single pane of glass, become even more critical for the successful future use of artificial intelligence (AI) and machine learning (ML) capabilities, which will depend on clean, structured data.

Once a single source of trusted data has been validated, the first step is identifying all system components. Next is prioritization of what to conquer first, depending on each asset’s risk factor, leading to the creation of a tailored zero trust strategy.

The strategy will be shaped by exploring questions such as: How large is my platform? What is on it? What’s the risk factor of each component? Once the risk factors have been prioritized, agencies will need to take the more challenging step of first making sure that the riskiest assets are not internet-exposed and then coming up with a plan to replace them.
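
To make this concrete, here is a minimal sketch of how a team might order a newly validated inventory for remediation planning. The asset fields, example devices, and ordering rule (internet-exposed assets first, then by risk factor) are hypothetical illustrations, not CDM requirements.

```python
# Minimal sketch of prioritizing newly discovered assets for a zero trust plan.
# Asset fields, example devices, and the scoring approach are hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    category: str          # "IT", "OT", or "IoT"
    risk_factor: float     # 0..10, e.g. derived from vulnerability and exposure data
    internet_exposed: bool

def prioritize(assets):
    """Internet-exposed assets jump the queue; within each group, riskiest first."""
    return sorted(assets, key=lambda a: (a.internet_exposed, a.risk_factor), reverse=True)

inventory = [
    Asset("conference-room-camera", "IoT", 6.5, True),
    Asset("building-hvac-controller", "OT", 8.2, False),
    Asset("hr-web-server", "IT", 4.0, True),
]
for asset in prioritize(inventory):
    print(asset.name, asset.category, asset.risk_factor, asset.internet_exposed)
```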

There must be some level of accountability and manageability around those assets in the move toward zero trust, including asking: What services are on them? What functionality do they provide? What kind of data will be moving around inside the enterprise? Those are all components of the evaluation around IoT and OT that CISA is calling out.

Ultimately, for the evolution of CDM to succeed, we need to foster a culture change – transitioning from blanket acceptance of the presence of, for example, the telepresence-connected video camera in the conference room to the exploration of: What’s running on that system? Is that part of my network? Have we segmented that device into a safe environment?

As we start to report back the types of assets coming into the enterprise, we need to dig beneath the surface to determine if they are going to satisfy zero trust requirements.

Tim Jones is Vice President of Public Sector Systems Engineering, and Alison King is Vice President of Government Affairs at Forescout.

Customer Expectations Require Agencies to Raise the Bar on Customer Experience, Report Shows

By Steve Caimi, Senior Principal, Product Marketing, U.S. Public Sector, Okta

 As the digital revolution proceeds, public sector organizations know that delivering great digital experience is vital to building loyalty and convincing citizens that they can safely and seamlessly access government services online. Yet a new report shows that rising customer expectations are posing challenges that require new solutions from digital leaders.

To help public sector organizations gain a greater understanding of how to deliver safe and frictionless digital services, Okta surveyed more than 20,000 people worldwide in various industries about their digital experiences in the areas of convenience, privacy, and security. Okta’s Customer Identity Trends Report explores the survey results.

Survey Results

The survey’s fundamental finding: In the area of user experience, the digital era has raised the bar. People now expect seamless and frictionless digital experiences from the public sector – Federal, state, local, and tribal agencies, along with healthcare and education organizations – similar to those offered in the private sector. The report revealed that 48-60 percent of respondents would be more likely to engage public sector services online if the experience was simple and secure with frictionless login.

Across the board, the public sector faces a growing imperative to deliver safe and easy experiences. The survey also found:

  • Data privacy is top of mind: The report revealed that 81 percent of public sector respondents consider it important to have control over their data, especially when dealing with sensitive or private personal information.
  • Control is even more important than convenience: When asked to choose between the two, respondents favored maintaining control over convenience. This finding is particularly relevant in the public sector because services, such as unemployment benefits or student loans, can be attractive targets for bad actors.
  • Security is a top priority: The report revealed that 80 percent of data breaches can be attributed to stolen credentials. Perhaps not surprisingly, public sector users prefer dedicated login solutions that offer robust security measures.

Okta Can Help

Okta’s Identity solutions enable the public sector to strike a delicate balance between control and convenience, helping to ensure that customer privacy rights are protected while delivering smooth digital experiences. The public sector can maintain regulatory compliance, protect sensitive data, and safeguard user privacy by leveraging Okta’s Identity solutions and:

  • Effortlessly scale across multiple applications. The survey showed that 65 percent of people are overwhelmed by the number of usernames and passwords they need. By leveraging Okta’s robust features and scalability, organizations can expand their use of Login.gov while ensuring security, performance, and reliability.
  • Deliver a secure and seamless user experience that upholds the highest standards of compliance. With Okta, your organization can maintain compliance with regulations and industry standards, ensuring the protection of sensitive data and user privacy. By prioritizing security and control, government agencies can gain the trust of those they serve while delivering services only to those who need them.
  • Have the same robust Identity and Access Management (IAM) solution that works with Common Access Cards (CACs) and Personal Identity Verification (PIV) Cards for federal employees. The survey revealed that 63 percent of people are unable to log into an account at least once a month because they forgot their username or password. Okta’s IAM solution is FedRAMP High and DoD IL4-authorized, providing the necessary security and compliance measures required by government agencies.

Overall, the survey revealed that people value control of their data and expect different levels of security for different online interactions. With Okta’s Identity solutions, your organization can strike a balance between data control and convenience, safeguarding user privacy while delivering seamless digital experiences. Let Okta power your organization’s digital front door with Login.gov and its 7,000+ other integration capabilities. With Okta’s help, governments can navigate the digital landscape, providing citizens with the exceptional experiences they need, enhanced by the security they deserve.

Read the Customer Identity Trends Report for a deeper dive into the survey results, and visit Okta Public Sector to find out how to get started with Okta today.

Applying for Government Benefits Shouldn’t Be Difficult When It Comes to Identity Verification

By Jeffrey Huth, Senior Vice President of TransUnion’s Public Sector business

Three senators last month introduced bipartisan legislation intended to create a better customer experience for people trying to access government services. Specifically, the Improving Government Services Act will require agencies to develop plans within a year of enactment to reduce wait times and improve digital services.

Coincidentally, TransUnion recently published a report on consumers’ experiences, preferences, and beliefs regarding enrollment in government benefit programs. The Reduce Benefit Enrollment Burdens report shows how challenging these processes can be for those accessing the customer service portals—and identifies means to address those problems.

Proving Identity Across Channels

One of the most significant areas for improvement concerns how government agencies verify identity remotely. Nearly 4 in 10 respondents (39 percent) spent 15 minutes or more trying to verify their identity when signing up for programs online. Spending 15 minutes online filling out any form is enough to tax a person’s patience; you can imagine the frustration when needed government benefits are at stake.

Additionally, constituents indicated their applications were delayed or declined due to not being able to prove their identity or eligibility online at more than double the rate that people experienced when applying through other channels, such as in-person. This helps explain why more than a quarter of people who do not apply online take that route because the process appears too difficult.

Other highlights from the survey – which polled 1,006 adults on whether they applied for government benefits in the past or might do so in the coming years – include:

  • While people appreciated the convenience of an online process, 23 percent of those polled could not complete their applications quickly.
  • There was a disconnect between expectations and reality for time spent verifying identity. Nearly 50 percent hoped to finish in under 10 minutes, but only 37 percent did.

While improving and streamlining identity verification online is critical, there is still a need for direct human interaction. The report found 60 percent of constituents will call an agency to get information about a program, while just slightly more will visit the official website (61 percent). In addition, when constituents run into problems while completing an online application, they are equally likely to call the agency for help as they are to use a digital channel, like the website chat function.

Here again, constituents expect a seamless and secure process to validate their identity in order to quickly get the help they need. These findings highlight the importance of a consistent omnichannel experience and should encourage agencies to invest in their call centers as part of their digital transformation.

What Modernization Will Require

The good news is that technology exists to securely verify identities for core entitlements and everyday benefits without being overly intrusive. Now, government agencies just need to continue their commitment to delivering the experience that benefits program participants expect and deserve.

Among several recommendations, the TransUnion report notes a multipronged approach is most effective. Such an approach includes device reputation tracking, fingerprinting, device-to-identity linkages, and user behavior analysis. Solutions incorporating identity and device-proofing technologies are more likely to catch fraudsters early while reducing friction for legitimate benefits applicants, claimants, and recipients.
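
As a rough illustration of how such a multipronged approach might combine signals, the sketch below computes a composite risk score and routes high-risk applicants to a higher-friction verification step. The signal names, weights, and threshold are hypothetical and are not drawn from the TransUnion report.

```python
# Minimal sketch of a multipronged identity-verification risk score combining the
# signal types named above. Weights, field names, and the threshold are hypothetical.
SIGNAL_WEIGHTS = {
    "device_reputation": 0.35,       # 0 = trusted device, 1 = known-bad device
    "identity_link_mismatch": 0.40,  # 1 = device never previously linked to this identity
    "behavior_anomaly": 0.25,        # 0..1 score from user behavior analysis
}
STEP_UP_THRESHOLD = 0.6  # above this, require document validation or a liveness check

def risk_score(signals: dict) -> float:
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0) for name in SIGNAL_WEIGHTS)

def needs_step_up(signals: dict) -> bool:
    return risk_score(signals) >= STEP_UP_THRESHOLD

print(needs_step_up({"device_reputation": 0.1, "identity_link_mismatch": 1.0,
                     "behavior_anomaly": 0.8}))  # True -> route to higher-friction check
```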

Inbound authentication solutions can help call centers reduce reliance on costly and time-consuming knowledge-based authentication while focusing fraud resources on only the minority of risky calls. These solutions reduce average call handling times, increase interactive voice response (IVR) containment, and improve the caller experience.

It’s clear that with so many more Americans connected to the web and accessing benefits via online portals, government agencies must implement more advanced identity verification methods using document validation, liveness detection, and facial recognition matching. Investing in more modern techniques for identity verification online and in the call center will pay dividends through more successful enrollment, improved citizen satisfaction, and increased program integrity.

Four Federal Software Supply Chain Security Trends to Watch

By Jeff Stewart, Vice President, Product, SolarWinds

The exponential growth of digital government has led to unprecedented security breaches across the supply chain. To address these threats, in 2021 the Biden administration enacted Executive Order 14028, intensifying scrutiny of vendors’ software supply chains. Subsequently, in 2023 the National Cybersecurity Strategy was introduced, urging software vendors to adopt more secure software practices based on the NIST Secure Software Development Framework.

Despite these developments, a recent survey found most public sector respondents remain concerned about software supply chain security and are unsure what measures to implement to safeguard their systems.

While software supply chain security is a relatively new concept, here are four enduring trends to help agencies close the supply chain security gap and have a lasting impact on the public sector’s overall cybersecurity posture.

  1. Making software releases faster and more secure with DevSecOps and AIOps

Today’s applications are built compositely, with natively developed code integrated with third-party and open-source components, potentially creating numerous entry points for a threat actor. Unfortunately, while developers can control the components they build themselves, they have little control over how those third-party components are validated and secured.

DevSecOps and AIOps can help.

DevSecOps helps agencies break down the silos between development, operations, and security to produce faster, more secure software releases. Meanwhile, AIOps works across IT operations to help ensure software in production is operating efficiently, securely, and reliably.

AIOps works by leveraging artificial intelligence, machine learning, and predictive analytics to collect data from the entire digital ecosystem. It autonomously analyzes this data to yield deep, consolidated insights into the IT infrastructure and development process, including identifying vulnerable code.

AIOps doesn’t just find issues; it also fixes them automatically. For example, it can patch known vulnerabilities in production or deployed software. This streamlines the process for DevSecOps teams, allowing them to easily track and address security and performance problems from the source of the code to its deployment.
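
As a rough illustration of the matching step behind that kind of automated patching, the sketch below compares deployed component versions against a known-vulnerability list and builds a patch queue. The component names, versions, and advisory data are hypothetical; a real AIOps platform would draw on live telemetry and vulnerability feeds.

```python
# Minimal sketch: match deployed component versions against known vulnerabilities
# and queue automated patches. All names, versions, and advisories are hypothetical.
DEPLOYED = {"web-frontend": "2.3.1", "log-agent": "1.0.4", "db-driver": "5.2.0"}

KNOWN_VULNERABLE = {
    # component: (vulnerable_version, fixed_version)
    "log-agent": ("1.0.4", "1.0.5"),
    "db-driver": ("5.1.0", "5.1.1"),
}

def build_patch_queue(deployed, advisories):
    queue = []
    for component, version in deployed.items():
        advisory = advisories.get(component)
        if advisory and advisory[0] == version:
            queue.append((component, version, advisory[1]))
    return queue

for component, current, fixed in build_patch_queue(DEPLOYED, KNOWN_VULNERABLE):
    print(f"Patch {component}: {current} -> {fixed}")
```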

  2. Meeting and exceeding compliance standards

Executive Order (EO) 14028 and other actions underscore the federal government’s commitment to leveraging its purchasing power to elevate security standards across the supply chain.

In response, agencies are increasingly looking to partner with software vendors who develop systems utilizing best practices that consistently meet or exceed NIST standards. Those practices include the following (a minimal build-step sketch illustrating the first two appears after the list):

  • Basing the application build system on ephemeral operations, spinning up resources on-demand and destroying them when discrete tasks have been completed.
  • Building in parallel by utilizing isolated and distinct build environments, where numerous scans and security checks are performed before release.
  • Advancing beyond zero trust by adopting an “assume breach” position.
  • Deploying automated tools to concurrently scan for vulnerabilities throughout the development process.
  • Recording each build step, creating an immutable record of proof, and providing complete traceability.
  • And more.
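
Here is a minimal sketch of what the first two practices above (ephemeral operations and isolated build-and-scan steps) can look like when driven from a simple script. It assumes Docker is available on the build host; the image names, build command, and scanner invocation are placeholders rather than any specific vendor’s pipeline.

```python
# Minimal sketch of an ephemeral, isolated build-and-scan step (illustrative only).
# Assumes Docker is installed; "builder-image" and "scanner-image" are placeholders.
import subprocess

def run(cmd):
    """Run one pipeline step and fail loudly, as a CI job would."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def ephemeral_build_and_scan(source_dir: str):
    # --rm spins the container up on demand and destroys it when the task completes.
    run(["docker", "run", "--rm", "-v", f"{source_dir}:/src", "-w", "/src",
         "builder-image:latest", "make", "build"])
    # A separate, isolated container performs security checks before release.
    run(["docker", "run", "--rm", "-v", f"{source_dir}:/src", "-w", "/src",
         "scanner-image:latest", "scan", "/src"])

if __name__ == "__main__":
    ephemeral_build_and_scan(".")
```
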
  3. Increased transparency through SBOMs

SBOMs (or software bills of materials) are a critical step forward in mitigating software supply chain risk. These documents provide a thorough overview of all components, libraries, tools, and processes used by software vendors in the build process. This transparency makes it easier for agencies to identify security risks that require patching or addressing through mitigating controls.

To date, three agencies – DoD, NASA, and GSA – have proposed new rules for federal contractors to develop and maintain SBOMs for any software used on a government contract.

Compliant vendors generate SBOM files at build time and may use them in the build process to validate that third-party dependencies haven’t changed underlying code, to provide a comprehensive picture of the dependency tree on both a current-build and historical basis, and to perform build-time checks that enforce policies based on CVSS scoring.
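
As a rough illustration of such a build-time policy check, the sketch below reads the component list from a CycloneDX-style SBOM in JSON and fails the build if any component appears in a vulnerability feed with a CVSS score at or above a policy threshold. The feed format and threshold are hypothetical simplifications.

```python
# Minimal sketch of a build-time SBOM policy gate. Field names assume a
# CycloneDX-style JSON document; the vulnerability feed format is hypothetical.
import json
import sys

CVSS_THRESHOLD = 7.0  # example policy: block release on any High/Critical finding

def load_components(sbom_path):
    with open(sbom_path) as f:
        sbom = json.load(f)
    return {(c["name"], c.get("version", "")) for c in sbom.get("components", [])}

def check_policy(components, known_vulns):
    """known_vulns: {(name, version): max_cvss_score} -- hypothetical feed."""
    return [(c, known_vulns[c]) for c in components
            if c in known_vulns and known_vulns[c] >= CVSS_THRESHOLD]

if __name__ == "__main__":
    components = load_components(sys.argv[1])
    with open(sys.argv[2]) as f:                     # e.g. {"name@version": score}
        raw_feed = json.load(f)
    feed = {tuple(k.rsplit("@", 1)): v for k, v in raw_feed.items()}
    violations = check_policy(components, feed)
    if violations:
        print("Policy violations:", violations)
        sys.exit(1)
```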

  4. Increased observability across the supply chain

To counter security threats and increase visibility across the complex and layered software supply chain, agencies are increasingly adopting observability tools, techniques, and processes.

Observability offers comprehensive visibility into service delivery and component dependencies across the software creation and deployment ecosystem. This enables various departments within an agency, ranging from IT to development teams, to gain clearer, holistic insights into vulnerabilities for more rapid remediation. In addition, teams can leverage the integrated automation of AIOps to further boost security by reducing the opportunity for human error.

Observability solutions offer a holistic approach to identifying and addressing risks, making the future of software supply chain security significantly more promising.

Partnership is key

The four trends outlined above give federal agencies an opportunity to set a new standard for mitigating software supply chain risks. Crucially, agencies needn’t tackle this challenge alone. Software companies are actively addressing supply chain risk and can support agencies as they work to mitigate their own risks. These collaborations between the public and private sectors will be instrumental as agencies aim to meet or surpass NIST guidance for secure software development.

FedRAMP Baseline Transition Points to OSCAL-Native Tools

By Travis Howerton, Co-Founder and Chief Technology Officer at RegScale

Until recently, FedRAMP (Federal Risk and Authorization Management Program) certification was an Executive Branch mandate, but now that it has become law, it legally stands between cloud service providers (CSPs) and government revenue.

Further impacting the landscape is FedRAMP’s approval earlier this year of Rev. 5 baselines that were updated to correspond with the latest guidance from the National Institute of Standards and Technology (NIST).

According to the FedRAMP marketplace, cloud service providers including Microsoft, Amazon Web Services, and Salesforce hold many existing FedRAMP authorizations at moderate and high impact levels. These authorizations, however, date back years, and these already-certified CSPs are required to move from Rev. 4 to Rev. 5 baselines.

Update to FedRAMP Rev. 5 Baselines

The FedRAMP update to the baselines is based on NIST Special Publication SP 800-53, Security and Privacy Controls for Information Systems and Organizations, Revision 5.

While increasing security and privacy controls is important to address the changing threat landscape, many stakeholders are concerned about the cost of FedRAMP compliance. It can be a significant expense, particularly for smaller CSPs, making it difficult for federal agencies to find affordable cloud solutions. Compounding the challenge, the authorization process can be slow, further delaying the adoption of cloud services to meet the government’s mission-critical needs.

As organizations look at updating to FedRAMP Rev. 5 Baselines, many CSPs must make adjustments that will take time to implement, particularly for those with unique security requirements or those seeking to use cloud services in new ways to meet the changing demands of users.

A Quicker and More Cost-Efficient Transition Process

NIST developed the Open Security Controls Assessment Language (OSCAL) to provide machine-readable representations of control catalogs and baselines, system security plans, and assessment plans and results – essentially covering all aspects of the Risk Management Framework (RMF). The goal is to simplify these transitions by shifting to an Authority to Operate (ATO) as-code approach for compliance tools.

Instead of writing compliance documents in Microsoft Word and Excel spreadsheets – a slow, manual process that does not reflect real-time changes – OSCAL-based tools enable automation, helping to address cost and accuracy concerns by accelerating the process and minimizing manual work and errors.
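
Because OSCAL artifacts are machine-readable, even a small script can begin to work with them. The sketch below, assuming the standard OSCAL System Security Plan JSON layout, lists the control IDs a system claims to implement; a real ATO-as-code pipeline would layer validation, diffing, and evidence collection on top of this.

```python
# Minimal sketch of reading an OSCAL System Security Plan (JSON) to list implemented
# controls. Assumes the standard OSCAL SSP layout; illustrative only.
import json

def implemented_controls(ssp_path):
    with open(ssp_path) as f:
        ssp = json.load(f)["system-security-plan"]
    impl = ssp.get("control-implementation", {})
    return [req.get("control-id") for req in impl.get("implemented-requirements", [])]

if __name__ == "__main__":
    for control_id in implemented_controls("ssp.json"):  # placeholder file name
        print(control_id)
```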

Authorization to Operate (ATO) Tools

An ATO is a formal decision made by a senior government official to authorize the operation of an information system on behalf of a federal agency. The agency requires an ATO to connect the CSP to the government network, while the CSP needs FedRAMP to approve the security of the cloud environment. This creates a double-edged sword for cloud adoption: it ensures that both agencies and CSPs have worked to secure their systems and data, but it also adds complexity and bureaucracy to the process.

ATO tools are software applications that help automate the ATO process, saving time and resources while ensuring a consistent and repeatable certification process. These tools help streamline the assessment, accreditation, and authorization steps of the ATO process by automating many of the tasks involved in FedRAMP certification, including gathering evidence, preparing documentation, and conducting assessments.

Additionally, ATO tools provide insights into a cloud environment’s security posture, helping agencies identify and mitigate security risks. These tools also improve communication between CSPs and federal agencies, enabling them to resolve issues that emerge during certification more quickly, reducing the burden and cost of FedRAMP certifications now and in the future.

Simplify FedRAMP Certification

To address many of the concerns stakeholders have related to FedRAMP certification, CSPs should consider solutions built on OSCAL. Using OSCAL-native tools, CSPs can get to ATO faster using code and submitting packages for authorization in a machine-readable format. This enables CSPs to resolve issues early on rather than going back and forth during the authorization process and allows for automated package reviews to accelerate ATO approvals.

Considering the significant costs involved in becoming FedRAMP certified and in transitioning from Rev. 4 to Rev. 5 baselines, this latest revision should be an impetus for organizations to seriously consider investing in OSCAL-native tools to improve the ATO process. The threat and technology landscape continues to change, and organizations can expect future revisions and additional overlays on the FedRAMP Rev. 5 baselines based on each agency’s unique requirements, particularly those of the Department of Defense. Adopting OSCAL-native tools can help transform FedRAMP certification from a massive undertaking in cost, time, and effort into an automated and streamlined process.

What Zero Trust Means for Modern Government: Best Practices for Key Tenets

By Patrick Tiquet, VP of Security and Architecture, Keeper Security 

Over the past few years, an important cybersecurity initiative has quickly swept across U.S. federal government agencies. Like few other tech initiatives, zero trust has taken hold at warp speed, thanks to a cooperative push from various cybersecurity authorities and frameworks.

The White House Executive Order 14028, CISA’s Zero Trust Maturity Model, OMB memorandum M-22-09, and the DoD zero trust strategy and roadmap have coalesced to make zero trust a current reality across numerous government agencies within the span of less than two years. That’s extremely fast, relatively speaking. And for good reason: the federal push toward zero trust is critical for the development and deployment of secure and resilient next-generation technologies and infrastructure.

Zero trust is a modern security framework that eliminates implicit trust. It requires all human users and devices to be continuously and explicitly validated, and strictly limits access to network systems and data. Instead of focusing on where users are logging in from, zero trust concentrates on who they are.

The continued unification of disparate cybersecurity efforts governmentwide indicates further progress toward a cohesive approach to cybersecurity as a true economic and national security priority. As with any new initiative, however, there are challenges to adopting new solutions and processes.

To effectively meet the requirements of Executive Order (EO) 14028, Office of Management and Budget (OMB) memorandum M-22-09, the Cybersecurity and Infrastructure Security Agency (CISA) Zero Trust Maturity Model, and the Department of Defense (DoD) Zero Trust Strategy and roadmap, all Federal civilian agencies should implement a few key best practices, including:

Select FedRAMP Authorized solutions. The Federal Risk and Authorization Management Program (FedRAMP) makes zero trust possible with its secure, authorized solutions. The U.S. government created FedRAMP to achieve a standardized approach to security assessment, authorization and continuous monitoring for cloud products and services. The FedRAMP marketplace is a critical resource for agencies to find and compare credible and secure authorized vendors through a trusted public-private partnership. It also ensures that the government is in lockstep with the most advanced cloud-based software and services that are driving the high-stakes capital markets. By working with FedRAMP Authorized solution providers that offer the highest levels of security and privacy, agencies can comply with federal government zero trust cybersecurity directives.

Adopt Multi-Factor Authentication (MFA). Using phishing-resistant MFA wherever available is a key directive. Agencies should support Two-Factor Authentication (2FA) methods such as SMS, TOTP-based authenticator apps like Google Authenticator or Microsoft Authenticator, RSA SecurID, and Duo Security, as well as FIDO2 WebAuthn devices like YubiKey.
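
For context on the TOTP method those authenticator apps use, here is a minimal sketch of time-based one-time password generation per RFC 6238. It is illustrative only; production systems should rely on vetted, FIPS-validated libraries rather than hand-rolled code.

```python
# Minimal sketch of TOTP generation (RFC 6238 / RFC 4226), the mechanism behind
# authenticator apps. Illustrative only; use a vetted library in production.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period              # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # example base32 shared secret
```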

Deploy capabilities for secure file sharing. In support of the OMB’s requirement for enterprise-wide information sharing, secure file sharing is important to enable efficient, secure, vault-to-vault sharing of stored files.

Use FIPS 140 validated encryption. FedRAMP and other federal directives mandate the use of FIPS 140 validated encryption. Ensuring its use in your information systems will help you achieve the security, interoperability, and compliance required by government agencies.

 The Future of Cybersecurity in Government 

OMB Memorandum M-22-09 provides authoritative guidance on “Moving the U.S. Government Toward Zero Trust Cybersecurity Principles,” complementing CISA directives such as Binding Operational Directive (BOD) 23-01. Achieving zero trust is a responsibility for every organization doing business with the U.S. government; all high-level activities conducted by federal agencies and all networked assets require it, and all benefit from the advanced protections that a zero trust security model ensures. FedRAMP makes available the tools and technologies to take the critical step of achieving compliance with EO 14028 and the CISA, DoD, and OMB zero trust directives. Organizations are empowered to participate in these shared governance models that foster a collective approach to zero trust cybersecurity.

Four Ways to Handle the IT Funding Crunch

By Chip Daniels, Vice President, Government Affairs, SolarWinds

When the Biden administration asked Congress to approve $300 million in additional funding for the Technology Modernization Fund (TMF) in fiscal year 2023 (FY2023), hopes were high that agencies would finally have the financial backing necessary to truly accelerate digital transformation. Yet when Congress passed its $1.7 trillion government funding bill and allocated just $50 million for the TMF, agencies were forced to pivot.

The budget shortfall means agencies must continue to balance doing more with less while advancing mission-critical IT objectives. As they put these plans into practice, here are four strategies to help organizations move forward while keeping financial acumen top of mind.

  1. Ensure technology investments are delivering value by linking priorities to mission outcomes

The first step in any modernization strategy is to align technology priorities and goals with desired mission outcomes. This serves two purposes:

  • It ensures agencies focus investments on solutions designed to yield results most important to their organizations and constituents, whether they are citizens, government employees, or mission partners.
  • It helps weed out solutions failing to deliver value in relation to the spend allocated.

Agencies should assess their current technology ecosystem to see if the technologies they currently use meet their needs. The assessment should analyze who uses the solutions, what capabilities and limitations the technology has, and whether the technology still aligns with the mission. For example, legacy solutions implemented years or even decades ago may no longer garner the appropriate results or value.

  2. Automate wherever possible to deliver greater efficiency

Most agencies know how automation can help save on IT costs, but there’s still a question of where to begin. After all, agencies are burdened by a large volume of tasks, many of which have traditionally been managed through manual processes.

Chances are, however, that a substantial number of those processes can be automated – it’s just a matter of discovering them. To do this, IT managers must carefully review existing processes and look for inefficiencies and automation opportunities.

IT monitoring is a good example. Traditional monitoring usually involves multiple disparate tools, each looking for abnormalities in a different area of an agency’s digital environment. Managers must keep track of all these tools and continuously switch context between them, which is labor-intensive and can lead to errors. This only grows more cumbersome as the ecosystem gets bigger.

Automating the monitoring process significantly bolsters an agency’s security posture and ability to control the IT environment. Automation takes the onus of security and application management away from the IT team by automatically discovering and troubleshooting anomalies and initiating incident response protocols. When IT managers are not burdened with a cacophony of alerts, their teams can focus on top-priority initiatives and delivering value-added services to their agencies in a more efficient manner.

Automated monitoring is also beneficial in identifying rogue or shadow IT devices and applications and redundancies across the IT landscape.

  3. Avoid toolset creep

Toolset creep is one of the biggest threats to an agency’s bottom line and efficiency. Toolset creep results from the collection of dozens – if not hundreds – of point-monitoring products over the years. This can include multiple tools for monitoring the performance and security of the distributed network, cloud instances, on-premises infrastructure, applications, databases, and more. Over time, the ecosystem becomes unwieldy and costly to maintain.

One of the best ways to keep toolset creep in check is through full-network observability. Observability is different from network monitoring. Observability allows IT teams to move beyond siloed views and monitoring and gain a single pane of glass view into their entire hybrid, multi-cloud, and on-premises environments. With this multi-stack visibility, IT gets a single source of truth and consolidated insights into IT operations. Observability applies cross-domain correlation, machine learning, and AIOps to yield intelligence traditional monitoring tools can’t, such as anticipating network issues or security threats for rapid remediation – ahead of any performance impacts.
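
As a simplified illustration of the baselining behind that kind of anticipation, the sketch below flags points in a metric stream that deviate sharply from a rolling baseline. Real observability and AIOps engines use far richer models and cross-domain correlation; the window, threshold, and sample data here are hypothetical.

```python
# Minimal sketch of statistical baselining over a metric stream (illustrative only;
# production AIOps engines use far richer models). Window and threshold are arbitrary.
from statistics import mean, stdev

def anomalies(samples, window=30, threshold=3.0):
    """Flag points that deviate more than `threshold` standard deviations from the
    rolling baseline established by the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(samples[i] - mu) > threshold * sigma:
            flagged.append((i, samples[i]))
    return flagged

# Example: a steady latency series with a sudden spike at the end.
series = [20 + (i % 5) for i in range(60)] + [95]
print(anomalies(series))  # flags the spike at index 60
```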

With observability, agencies can also weed out unnecessary or outdated monitoring tools, lower administration overhead, and improve staff productivity.

  4. Develop a strategically phased approach to modernization

While Congress’ decision to approve only a portion of the TMF funding proposed by the president creates budget challenges for agencies, a little planning and patience can help agencies stay within budget while forging ahead with their modernization efforts.

Rather than trying to do too much, too soon, agencies should adopt a strategically phased approach to modernization and break up their initiative into manageable chunks over the next few years. This will ensure costs stay in line while allowing agencies to stay on their digital transformation paths.

Agencies Need to Get Creative to Fill the Cyber Workforce Gap

By Jim Richberg, Fortinet Public Sector Field CISO

With an estimated 3.4 million people needed to fill the global cybersecurity workforce gap, it’s time for organizations to turn to new ways to recruit and retain talented cyber professionals. The federal situation mirrors what’s happening globally, but the stakes are even higher, with civilian, defense, and intelligence community agencies all aiming to protect the networks that keep the country up and running.

According to Fortinet’s 2023 Global Skills Gap report, the rise in breaches can be attributed in part to a lack of cyber skills. In fact, about 68 percent of organizations indicated they face additional risk because of cybersecurity skills shortages.

While that stat is daunting, there are ways to bolster the ranks and give more people opportunities to help defend federal networks.

Look Beyond Typical Candidates

Federal agencies are struggling to keep up with a flood of new attacks. For example, seven major new wiper malware strains targeting government, military, and private organizations were identified in the first six months of 2022. That’s almost as many as the total identified since wiper malware came into existence a decade ago. Beyond that, the motives behind wiper attacks have broadened, running the gamut from vandalism to extortion to sabotage – and even potentially deniable weapons of cyberwar.

This presents a unique challenge for agencies as the federal government often serves as a training ground where new employees enter the cybersecurity workforce. Many of them spend a few years supporting their agency, then migrate to the private sector.

To that end, government agencies need to think differently about who they hire and how they recruit cyber workers. Prioritizing diversity in race, gender, age, and, crucially, life experience allows for the diversity of perspectives that is essential in this field. For example, hiring analysts from diverse educational backgrounds brings immense value to security teams by providing differing perspectives on a problem.

With their skills working as part of a team and demonstrated ability to work under pressure, veterans are well-suited for many cybersecurity jobs. But when it comes to hiring veterans, the Fortinet report showed that there was an overall decrease compared to the previous year, with 47 percent of organizations stating they hired veterans in cyber roles compared to 53 percent in 2021.

This is a missed opportunity. Teamwork is essential in cybersecurity positions such as incident response and security operations. Former service members bring experience in teamwork, attention to detail, and work in fast-paced, high-stress environments – all skills needed in the next generation of cyber defenders.

Provide Access to Continuing Education and Upskilling Opportunities

Beyond that, by providing better and more widely available training, agencies can quickly upskill their current cyber workforce and enable employees who are interested in cybersecurity to more easily transition into cyber careers. Both the government and its private sector partners need to lower the barriers to entry for cybersecurity training, and formal education and continued learning programs should always be available options for closing the gaps in both numbers and skills in the federal cyber workforce.

High-quality cybersecurity training should be available to anyone who wants to take on this challenge. Such opportunities open cyber careers to people who never thought they’d become cyber professionals, and that, in turn, means a more diverse and creative workforce is available to solve federal cyber problems. This should include giving employees at all levels – even those with no interest in a cybersecurity career – the opportunity to build a solid foundation of cyber awareness.

Remember that cybersecurity isn’t solely the responsibility of cybersecurity professionals – it’s about ensuring that everyone in the organization understands cyber hygiene and their role in ensuring the security of the data and services their agency provides.

Closing the cyber skills gap will be challenging. It will take diversifying the types of talent we draw from and positioning cybersecurity as a career for more than just computer science majors and others with college degrees. The federal government has an opportunity – and the need – to create and employ a large and diverse cyber workforce. This will not only help to protect the data and operation of our agencies but also fuel our nation’s innovation and growth and bolster our critical infrastructure.

Customer Identity Trends Report Shows Control Trumps Convenience

By Steve Caimi, Director of Product Marketing, Public Sector, Okta

In an ever-evolving digital landscape, the public sector is faced with a difficult challenge: ensuring that public services are safe, secure, and easy to use. Meeting all of these demands is critical to providing users with the digital experiences they require.

To help your public sector organization better understand how to deliver safe and frictionless digital services, Okta surveyed more than 20,000 people worldwide in various industries about their digital experiences involving convenience, privacy, and security. The survey results are included in the Customer Identity Trends Report.

Survey results

The survey found that customer expectations have soared, resulting in a greater need for seamless and safe experiences from the public sector. From federal agencies to educational institutions, the public sector must deliver safe and easy-to-use experiences. Some highlights from the survey include:

  • Consumer expectations are higher: The digital era has raised the bar for user experiences. From federal, state, and local government agencies to healthcare and education, people now expect seamless and frictionless digital experiences akin to those offered in the private sector. The report revealed that 48-60% of respondents would be more likely to engage public sector services online if the experience was simple and secure with frictionless login.
  • Data privacy in the spotlight: The report revealed that 81% of public sector respondents consider it important to have control over their data, especially when dealing with sensitive or private personal information.
  • Balancing control and convenience: Not only do people want control over their data, but survey respondents also prioritized control of their personal information over convenience. When asked to choose between the two, respondents favored maintaining control. This is particularly relevant in the public sector because services, such as unemployment benefits or student loans, can be attractive targets for bad actors.
[Figure: Respondents’ preferences for convenience vs. control]

  • Security as a top priority: Public sector users tend to prefer dedicated login solutions that offer robust security measures. The report revealed that 80% of data breaches can be attributed to stolen credentials.
[Figure: Preferred security measures by industry]

How Okta can help

Okta’s Identity solutions enable the public sector to strike a delicate balance between control and convenience, helping ensure that individuals’ privacy rights are protected while delivering smooth digital experiences. The public sector can maintain regulatory compliance, protect sensitive data, and safeguard user privacy by leveraging Okta’s solutions and:

  • Effortlessly scale across multiple applications. The survey showed that 65% of people are overwhelmed by the number of usernames and passwords they need. By leveraging the robust features and scalability offered by Okta, organizations can expand their use of login.gov while ensuring security, performance, and reliability. Okta’s seamless integration of the Identity as a Service (IDaaS) platform and login.gov can transform agencies’ authentication processes.
  • Deliver a user experience that is both secure and seamless, upholding the highest standards of compliance. With Okta, your organization can maintain compliance with regulations and industry standards, ensuring the protection of sensitive data and the privacy of users. Plus, by prioritizing security and control, government agencies can gain the trust of those they serve while delivering services only to those that need and qualify for them.
  • Have the same robust Identity and Access Management (IAM) solution that works with Common Access Cards (CACs) and Personal Identity Verification (PIV) Cards for federal employees. The survey revealed that 63% of people are unable to log into an account at least once a month because they forgot their username or password. Okta’s IAM solution is FedRAMP High and DoD IL-4-authorized, providing the necessary security and compliance measures required by government agencies and proof for all public sector organizations that security is critical.

Ultimately, the survey revealed that people value control of their data and expect different levels of security for different online interactions. With Okta’s Identity solutions, your organization can strike a balance between data control and convenience, safeguarding user privacy while delivering seamless digital experiences. Let Okta power your organization’s digital front door with our integration capabilities, including login.gov. Together, we can navigate the digital landscape, providing the public with the exceptional experiences they need and the security they deserve.

Learn more about how Identity powers modern digital services for the public sector.

Federal Agencies Making Strides Toward Sustainability and Climate Action

By Gary Hix, Chief Technology Officer for Hitachi Vantara Federal

In a determined effort to prioritize sustainability and combat climate change, Federal agencies have made significant progress since President Biden issued an executive order on Federal sustainability over a year ago. Leveraging the government’s influential scale and procurement power, the order sets ambitious goals for emissions reduction and a sustainable future.

Federal agencies have been diligently working to align themselves with the order and the accompanying Federal Sustainability Plan, aiming to establish a robust and sustainable foundation within the government. While progress has been steady, Federal leaders are exploring synergies between sustainability initiatives and the modernization of aging legacy IT systems, striving to leverage the inherent energy efficiency advantages of emerging technologies.

Federal agencies are increasingly recognizing the inherent value of sustainability. The Environmental Protection Agency proudly reports significant reductions in energy and water consumption from established baselines, showcasing its commitment to sustainable practices. The Air Force Installation Energy Program has made similar progress, demonstrating a notable decrease in energy use intensity and water intensity at its installations.

A new program, “Assisting Federal Facilities with Energy Conservation Technologies,” has been established to provide grants for energy and water efficiency upgrades in Federal buildings. This program, funded through the Infrastructure Investment and Jobs Act, signifies a dedicated investment in sustainable practices.

Innovative deployment approaches, such as sensor-driven edge data centers, offer opportunities for significant energy savings while providing real-time field insights. As agencies embrace new technologies and approaches, incorporating performance requirements aligned with energy efficiency goals is vital.

Additionally, adopting virtual desktop infrastructure presents benefits such as streamlined management, reduced hardware costs, and energy savings, connecting technological advancements with sustainable practices in the era of remote work.

Sustainability can also be integrated into procurement practices. Incorporating energy-efficiency requirements into the procurement process and applying environmentally conscious guidelines to companies and contractors encourages the development and adoption of innovative solutions. It also incentivizes the market to provide sustainable and technologically advanced offerings, fostering a competitive landscape that drives digital transformation and spurs business growth.

Federal leaders should also focus on cultivating sustainable supply chains for their products, ensuring a holistic approach to sustainability in all stages of their operations. Guiding principles such as environment, social, and governance (ESG) provide valuable guidance in this pursuit, while also meeting mission goals and requirements.

With Federal agencies actively prioritizing sustainability and driving digital modernization to combat climate change, they are aligning with ambitious goals, leveraging emerging technologies, and integrating energy-efficiency requirements into procurement practices to establish a strong and sustainable foundation.

Regardless of the chosen path, Federal leaders must keep their sights fixed on the ultimate goal: combating the climate crisis and fostering a more sustainable government and society. In this endeavor, alongside data modernization efforts, sustainability can become ingrained as a fundamental business practice, delivering mission-driven, economic, and environmental benefits for both present and future generations. By leveraging data-driven insights and innovative technologies, Federal agencies can drive impactful change and build a sustainable future for all.

Executive Order 14028 | Improving the Nation’s Cybersecurity Depends on Data | All Data is Security Data

In recent years, Federal Agencies have been challenged by a growing list of adversaries operating in an increasingly complex cyber threat landscape. At the same time, Agencies have been diligently modernizing their information technology (IT) environments to accommodate evolutionary cloud technology trends and a more mobile and remote workforce. These dynamics have combined to create a complex and seemingly insurmountable challenge in securing and protecting the systems, infrastructure, data, information, and personnel representing our nation’s most critical and sensitive assets.

In the cybersecurity domain, Agencies are facing a myriad of limitations and constraints across both business and technical challenges. Effectively securing mission-critical assets is paramount against a landscape of constrained funding, talent shortages, cyber skills gaps, and an overwhelming regulatory and compliance framework. The fast pace of technology evolution and the urgent demand for decisive advantage in the cybersecurity domain to combat determined adversaries have resulted in complex operational IT environments with disjointed and siloed security architectures. In many Agencies, security teams are flooded with data, alerts, and indicators without the ability to properly analyze, make sense of, and act upon the insights and information available to them. Unprotected attack surfaces, security blind spots, and exposed vulnerabilities remain throughout the environment and are prime targets for malicious actors. Deployments of common, baseline solutions with manual analysis and process workflows have not proven effective against today’s adversaries.

While the cyber threat landscape continues to outpace defenses, federal IT and OT environments are also going through evolutionary change. Recognizing the performance, innovation, speed, and scale advantages of cloud platforms and solutions, Agencies are actively moving more sensitive and critical applications, data, and information to cloud environments. With a more mobile and remote workforce, Agencies have had to rethink and re-architect their IT capabilities to deliver more distributed capabilities, resulting in a broadening of attack surfaces and an increase in risk across the IT estate. The architectural changes occurring across the federal IT and OT landscape demand an equally evolutionary change to the security and protections deployed to secure our federal assets and information.

The current geopolitical environment, with multiple overseas conflicts that threaten U.S. interests on the world stage, pits the United States against technologically advanced nations and demonstrates as never before the importance of cybersecurity and technology capabilities and dominance.

To address these challenges, the President’s Executive Order on Improving the Nation’s Cybersecurity (EO 14028) seeks to drive urgent and immediate change throughout the federal government to modernize and improve the security posture of the nation’s IT infrastructure, foster robust threat and information sharing, enhance visibility of vulnerabilities and exposures, and improve investigative capabilities to automatically protect, detect, and respond to malicious activities, incidents, and events. To support this order, OMB Memorandum M-21-31 directs agencies through a maturity model aimed not just at collecting and storing log data for visibility, but also at evolving and applying AI/ML and behavior analytics capabilities to enable orchestrated, automated, and comprehensive protective response actions. The model’s advanced tier, Event Logging 3 (EL3), provides for the rapid and effective detection, investigation, and remediation of cyber threats as decisive actions against modern adversary tactics, techniques, and procedures (TTPs).

 

At SentinelOne, we are uniquely positioned to help Agencies tackle these problems and combat our most aggressive and malicious adversaries. The SentinelOne Singularity Platform is the only FedRAMP Authorized solution empowering centralized security operations in a world of big-data and decentralized IT. Built for the high performance, high scale demands of our Federal Agencies, our platforms and services offerings are leading the way for Agencies to fully leverage the power of AI/ML, deep analytics, and autonomous enforcement technologies. Combined with an extensive XDR Ecosystem of seamless partner integrations for estate-wide context and enforcement, SentinelOne is a powerful toolset that Agencies are leveraging for real results on the cyber battlefield.

SentinelOne | Singularity Platform

Delivers industry-leading, autonomous protection, detection, and response across attack surfaces.

The SentinelOne Singularity Platform delivers a single, unified console to manage the full breadth of AI-powered, autonomous cybersecurity protection, detection, and response technologies for all-surface protection. Our cloud-native technologies deliver performance at scale with unique and patented endpoint activity monitoring and remediation capabilities. Our Storyline Active Response (STAR), powered by AI/ML technologies, is foundational to improving investigative visibility and delivering automated response actions against modern-day attacks. With full rollback as a remediation action, SentinelOne is leading the way in comprehensive, rapid, and effective recovery of assets compromised by ransomware and other malicious actions. The broad endpoint coverage, robust feature set, multi-tenant architecture, and scalable infrastructure offered by the SentinelOne Singularity Platform are unmatched in helping Agencies meet the requirements of Executive Order 14028.

SentinelOne | Security DataLake

Provides unmatched cross-platform security analytics and intelligence with scalable, cost-effective, long-term data retention.

The SentinelOne Singularity Platform, and its underlying Security Data Lake, is the industry’s first and only unified, M-21-31-aligned data repository that fuses SentinelOne and third-party security data for active threat hunting, deep-dive analytics, and autonomous response and enforcement, all from a single unified console. The unified data repository is built to deliver performance at scale, leveraging unique technologies that combine to dramatically reduce the time required for complex, large-scale investigative queries. With built-in, AI-driven analytics, the Security Data Lake empowers security analysts with powerful capabilities, reducing the mean time to detect, identify, and respond to threats discovered and exposed in security log and event data. With scalable, cost-effective, long-term data retention and robust AI-powered analytics, the SentinelOne Security DataLake is unmatched in delivering the capabilities Agencies need to reach EL3 maturity and M-21-31 compliance.

SentinelOne | Security XDR Ecosystem

Enhances threat intelligence and information sharing with open, bidirectional integrations across your security stack for extended threat enrichment and enforcement.

The SentinelOne Singularity XDR Ecosystem is the leading cloud-first security platform that enables organizations to ingest and centralize all security data while delivering autonomous prevention, detection, and response at machine speed across endpoints, cloud workloads, identity, and the extended third-party security ecosystem. With “one-click,” multi-function integrations, SentinelOne delivers better detection and response, greater actionability, and improved workflows while protecting existing security investments and preserving cybersecurity talent and skills with a unified, consistent, and intuitive interface spanning your security environment. As an open platform for sharing threat information, enabling deep visibility, applying robust security analytics across large-scale datasets, and driving autonomous response, the SentinelOne Singularity XDR Ecosystem stands alone in providing the end-to-end, seamless architecture and services for Agencies to meet the objectives of both Executive Order 14028 and OMB Memorandum M-21-31.

SentinelOne | Purple AI

Empowers cybersecurity analysts with AI-driven threat hunting, analysis, and response through conversational prompts, interactive dialog, and easy-to-understand analysis and recommendations.

SentinelOne continues to lead the industry in the innovative use of AI/ML technologies to empower more streamlined and effective SOC performance and response. Our latest innovations demonstrate the powerful use cases for generative AI technologies to better equip SOC analysts to easily search, understand, and gain actionable insights from extensive data stores without the need to learn complex query and programming languages. Our recently announced Purple AI will deliver seamless integration of generative AI technologies, allowing SOC analysts to use conversational, plain-language queries and prompts to perform complex and deep analysis of both known and unknown threats. Cybersecurity is a “big-data” problem, and our Purple AI capabilities are being designed to simplify threat hunting and analysis, expose meaningful and contextualized data, and deliver insightful and actionable recommendations. SentinelOne Purple AI will equip less-experienced and resource-constrained security teams with the tools necessary to respond to attacks faster and more easily, in alignment with Executive Order 14028 and OMB Memorandum M-21-31.

SentinelOne is proud to partner with our federal government agencies in the fight against advanced adversaries seeking to compromise our national security. We are committed to delivering innovative and impactful cyber technology solutions that allow federal agencies to keep up with a growing list of cyber adversaries operating in an increasingly complex cyber threat landscape. The SentinelOne Singularity Platform, underlying Security DataLake, integrated XDR Ecosystem, and emerging Purple AI enhancements are uniquely designed to arm federal organizations with the tools they need to automatically identify and respond to threats in real-time and reduce the burden on understaffed and overloaded SOC teams.

For more information on the SentinelOne portfolio, emerging innovative technologies, and how SentinelOne can arm your organization to win the cyber wars, please contact us at s1-fedsquad@sentinelone.com.

Applying Geospatial Intelligence, AI/ML to Climate Change Challenge

By Eric Adams, Geospatial Functional Lead, GDIT

As climate change has been accepted as a scientifically undeniable and universally recognized challenge, geospatial intelligence teams are applying the tools of their trade to mitigate its effects and address its impacts on our environment. Today, as in nearly every other industry around the world, artificial intelligence and machine learning (AI/ML) are powerful tools that geospatial professionals have at their disposal.

In years past, teams relied on relatively simplistic workflows for measuring the impacts of climate change. Remote sensing via multi-spectral satellites could provide imagery for examining things like vegetation health, soil composition, and water saturation. By analyzing these images using the Normalized Difference Vegetation Index (NDVI), as one example, teams could assess the health and stress levels of vegetation.
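To make the NDVI step concrete, here is a minimal sketch in Python that computes the index from hypothetical near-infrared (NIR) and red reflectance values and flags low-index pixels; the band values and the 0.2 "stressed vegetation" threshold are illustrative assumptions, not values from any particular sensor or workflow.

# Minimal NDVI sketch: NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
# Higher values generally indicate healthier, denser vegetation.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Compute NDVI per pixel, guarding against division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / denom)

# Hypothetical 3x3 reflectance patches (illustrative values only).
nir_band = np.array([[0.55, 0.60, 0.20],
                     [0.50, 0.58, 0.18],
                     [0.52, 0.57, 0.15]])
red_band = np.array([[0.10, 0.08, 0.18],
                     [0.12, 0.09, 0.17],
                     [0.11, 0.10, 0.16]])

index = ndvi(nir_band, red_band)
stressed = index < 0.2  # assumed threshold for stressed vegetation
print(np.round(index, 2))
print("Stressed pixels:", int(stressed.sum()))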

The challenge with using satellite images at the time was low temporal resolution, or how often a sensor could image the same spot on the earth’s surface. Some sensors were only providing new images on average every 17 days. Anything catastrophic happening between satellite passes would go unobserved in near real-time.

Fast forward to today and there are more satellites in low-earth orbit (LEO) than ever. Researchers can obtain satellite images from almost anywhere on the planet at any time. Satellite sensors have improved as well, producing higher-resolution images more frequently and with more data-dense outputs.

This availability of data is precisely why the proliferation of AI/ML technology is so important. These satellites are creating more data than humans could ever process or analyze. AI can help discover patterns and insights, which we can capture through deep learning and computer vision models that identify outliers or changes in the images.
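As a deliberately simple illustration of flagging change between acquisitions – thresholded differencing of two NDVI rasters rather than a trained deep learning model – the sketch below marks pixels whose vegetation index dropped sharply between two dates; the arrays and the 0.15 threshold are assumptions for illustration only.

# Simple change-detection baseline (not a deep learning model): difference the
# NDVI of two dates for the same area and flag pixels with a large drop.
import numpy as np

def vegetation_loss(ndvi_before: np.ndarray, ndvi_after: np.ndarray,
                    threshold: float = 0.15) -> np.ndarray:
    """Return a boolean mask of pixels whose NDVI fell by more than `threshold`."""
    return (ndvi_before - ndvi_after) > threshold

# Hypothetical NDVI rasters for the same scene on two dates (illustrative values).
before = np.array([[0.70, 0.68],
                   [0.65, 0.66]])
after = np.array([[0.69, 0.40],
                  [0.64, 0.30]])

mask = vegetation_loss(before, after)
print("Pixels with significant vegetation loss:", int(mask.sum()))  # -> 2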

AI/ML technologies have already been put to work in a variety of climate change-related applications. For instance, they have been utilized to monitor deforestation and illegal logging activities, detect changes in sea levels and ice sheets, and identify areas at risk of flooding or drought. By combining satellite imagery with AI/ML algorithms, we can track and predict the impacts of climate change on ecosystems and communities, allowing for more informed decision-making and proactive intervention.

When the models know what “right” looks like, they can give analysts indications and warnings about areas to monitor more closely, as one example. AI-assisted predictive models can help assess the impacts of climate change and identify ways to mitigate its effects.

These tools can also be used to understand and assess the impact of human-caused disasters, like the Ohio train derailment or the Indiana plastics factory explosion – to give two examples from recent weeks alone. They can help gauge the travel patterns of air pollutants and predict where they’ll go next, as well as what the impact will be on the environment and, ultimately, on humans.

Overall, the consumption and analysis of massive amounts of data – enabled by AI/ML – has begun to change how we solve complex, climate-related problems. AI/ML can inform policy making at the federal, state, and local levels. They can also contribute to the growing body of academic research and understanding of climate change. This information, in turn, impacts how businesses operate and perform resource planning.

AI and ML enable us to leverage and employ more advanced, reliable, and automated analytics. And with a challenge as pressing as climate change, this becomes more important than ever.

Already, AI and ML are enabling more capabilities in the geospatial space than ever before. Leveraging that potential for the future will depend on ensuring data compatibility across application development and integration. It will depend on compliance with universally recognized data quality standards, like those from the Open Geospatial Consortium (OGC). We’ll also need to make existing and new AI/ML models more accessible to more users.

GDIT is working with customers and agency partners to address these challenges and to further enhance our AI capabilities in the geospatial intelligence community. We are developing and implementing strategies to make AI/ML a community effort. Fundamentally, we believe that, together, we can mitigate the impact of climate change and sustain our environment in a more positive manner.

And this Earth month, there doesn’t seem to be a more important mission than that.

My Cup of IT: Angry at Arthritis, Hunting for Cures

I’ve always been a very private person, but it’s time for me to stop being such a coward – I have OsteoArthritis. I guess it can happen to anybody – even Steve O’Keeffe. In fact, one in seven American adults has OA – that’s more than 32 million folks stateside.

So, I’ve spent the last two years getting smarter about the disease. We just launched a new nonprofit – www.angryarthritis.org – and we’re hunting for cures and, importantly, providing patients with the guide to arthritis that I wanted when I got my diagnosis. Frustratingly, all I had at the time was Google to help.

For me, this isn’t about living with OA – exercise, losing weight, and changing my diet. I already do all that – I want my old life back. Curing OsteoArthritis is not as nutty as it may sound.

I became a champion googler, read mountains of research, then traveled all over the world to get smarter about the disease and meet with the leading researchers and clinical minds. I’m serving that up to the arthritis community in my A@A podcast — https://www.angryarthritis.org/podcasts/.

Apologies, I know my voice is like nails on a chalkboard – but I’m interviewing the leading minds in OA, so you don’t have to google, read all those wonky papers, and travel around the world to get your own help, or to find like-minded people willing to help pursue cures.

This week, A@A is hosting an OA Innovation Shark Tank on Capitol Hill, with my good friend Congressman Gerry Connolly (D-VA) — https://www.angryarthritis.org/oa-innovation-shark-tank/ — champion for Federal employees, IT modernization, and regrettably, an arthritis sufferer.

We’re partnering with the Arthritis Foundation, and bringing together the leading OA cure experts from all over the world. You’ll hear from doctors who have actually cured OA in humans – not mice or goats. And, you’ll get the skinny on new cures in human clinical trials.

Just think about that – don’t replace your joints, renew them. Oh, and what about those joints that aren’t knees and hips?  Want to fix those elbows, fingers, toes – ease that neck and back? What about better alternatives than knee replacements that wear out in 10 years – they’re not a great fit for younger, active folks.

America needs to know about new OA treatment options on the horizon. Join us for the battle of new OA science – it’s free to attend. If you’d like to join us, we’ll buy lunch for the first 100. But plan to arrive early – regrettably, the room’s a tight squeeze.

How bad is OA for America? Take a looksee — https://www.angryarthritis.org/wp-content/uploads/2023/05/osteoarthritis-by-the-numbers-v3.pdf

We the patients are the most powerful force in our quest for a cure for OA. If you’re hungry for a cure for OA, come join us – register here – https://www.angryarthritis.org/oa-innovation-shark-tank/. I apologize for this interruption – normal government IT programming will resume shortly. And, I am sorry for hiding from all my friends in our community – health challenges can happen to anyone. Stay well.

How the Federal Government Can Help Combat a Fragmented Internet

By Jim Richberg, Fortinet Public Sector Field CISO

The era of the global internet is over. The new reality is a fragmented digital landscape where nation states have largely been left to their own devices to create patchworks of policies to defend against threat actors who have become faster, stronger and more ruthless.

To combat this, the Federal government needs – among other things – to begin building newer and stronger cyber coalitions that promote and expand everything from digital trade agreements to cyber efforts in emerging economies. This will not only help foster an open flow of information but will also help keep those pipelines secure from cyberattacks.

Setting the Stage

The global internet was founded on the notion that information should be allowed to flow freely and securely across the world. Much of the foundational technology that makes the internet run was developed as projects for the Federal government. From the late 1960s, when the DOD’s Advanced Research Projects Agency created ARPANET, through the commercialization of the 1990s, the Internet was envisioned as a place where free speech and easy access to information thrived.

But over the last two decades, that utopian vision has been cast by the wayside. The internet has become less free, less global and, ultimately, less secure. To reverse this course, the U.S. government needs to first develop a strategy that responds to this new, dangerous internet and then look to build partnerships that can restore its original vision.

The recently released National Cybersecurity Strategy begins to address some of these challenges. The strategy adds the important goal of building systemic resilience, which includes everything from ensuring that critical infrastructure is secure to helping shape international cyber standards and countering cybercrime.

In a recent report, the Council on Foreign Relations took on this task and made several foundational recommendations that will likely prove crucial to both the security of our own networks and the future of the open internet moving forward.

Build Coalitions

  • First, the CFR suggests that the Federal government create a coalition of friends and allies around the vision of the internet as a trusted, protected international communications platform. We cannot take on this task alone, and cyber information sharing at the highest levels will be key to combating this new, dangerous internet.

Pressure Adversaries

  • Second, the report recommends the U.S. move toward putting diplomatic and economic pressure on adversaries and be more ready to execute disruptive cyber operations. The decisions about which countries to act against would be made by the coalition and become a dynamic conversation as nation states continue to evolve their offensive and defensive cyber programs.

Look Inward

  • The CFR’s third recommendation is for the U.S. to look at its own cyber posture and make it an example for other countries to follow. This means doing a better job of integrating cyber with the other tools of national security power.

While the first two pillars should be left to the policy and diplomacy experts, the last recommendation – putting our own house in order – is one that Federal agencies can get started on now. Resources like President Biden’s cybersecurity executive order make for a good start, especially as agencies continue their zero trust journeys – a strategy that is foundational to a strong and nimble cybersecurity posture.

Since most agencies have moved beyond the initial planning stages of their ZT implementation strategy, they can now focus on the actionable side of things. That could start with something as fundamental as getting an accurate inventory of the users and devices that have access to Federal networks.

Doing this kind of discovery process early on will make it easier in the long run when IT and security teams are working to identify minimum thresholds for letting users have access to parts of the network. This also weeds out former employees or abandoned devices that could be used as a vehicle for a cyberattack.

Beyond that, part of the focus of the new national strategy is on transferring much of the responsibility for mitigating cyber risk away from end-users such as individuals, small businesses and small critical infrastructure operators like local utilities. These organizations are typically under-resourced and short on cyber expertise compared to organizations like technology providers and large corporations or government agencies, who are better able to deal with cyber risks systemically.

We haven’t lost the battle for an open and free internet, but the window is closing if we don’t act now. And while much of the task will rely on trusted global political partnerships, there is much work to be done domestically as well. This is a whole-of-nation challenge – cybersecurity is national security – and while the burden cannot be shouldered by individuals, companies, or government alone, each has a role.

By partnering with the right organizations, the Federal government can lead us into a bright future where the dream of a global internet can become a reality. Together, we can help make sure America’s vision of a free and secure internet prevails.

Accelerating Cybersecurity for US Critical Infrastructure

By Gaurav Pal, Principal and Founder, stackArmor, Inc.

Disruptions in gasoline supplies due to the cyberattack on the Colonial Pipeline in May 2021 transformed cybersecurity attacks from an “online problem” to a national security concern. This seminal event resulted in the release of the National Cybersecurity Strategy (NCS) on March 2, 2023. The NCS brought into focus the potential for serious economic damage and disruptions to our daily lives from cyberattacks on critical infrastructure. Congress and government agencies are acting with urgency to advise organizations in critical infrastructure sectors such as aviation, water and sewage utilities, education, and healthcare to rapidly address cybersecurity concerns. For this strategy to succeed, however, we need not reinvent the wheel. Our path forward should be informed by lessons learned from successful cybersecurity and risk management programs with proven track records.

Moving from Voluntary Approaches to Mandatory Requirements

There has been much progress over the last ten years in bringing cybersecurity issues to the forefront for organizations, lawmakers, and government executives. However, the speed of change and the investments necessary to deliver “cyber resilience” have not kept pace with the velocity, volume, and variety of continued cyberattacks. We continue to read daily about ransomware attacks and cybersecurity incidents in schools, local governments, and healthcare facilities causing disruptions and hardship. The NCS recognizes that voluntary approaches are not working fast enough and seeks to change the status quo by driving policy changes. One of these changes includes transferring liability for cybersecurity from the user to the technology manufacturers of digital products and services. It also seeks to move minimum cybersecurity requirements from voluntary adoption to mandated implementation under the supervision of government agencies. The goal is to accelerate cybersecurity investments to drive cyber resilience. These policy changes are especially important for the critical infrastructure sectors, which provide essential services that we all depend on.

Securing Critical Infrastructure is a Big Deal

Protecting our critical infrastructure will require the best and brightest minds to come together because the problem is large and complex. A quick back-of-the-envelope calculation reveals a $20+ billion market opportunity in the United States alone. According to the Cybersecurity and Infrastructure Security Agency (CISA), there are 16 critical infrastructure sectors whose assets, systems, and networks are considered vital to the United States. Just to get a sense of the numbers, CISA provides the following data:

  • The defense industrial base consists of 100,000 firms that provide products and services to the Department of Defense. Assuming an average spend of $100,000 per year on cybersecurity solutions, that is a $10 billion market for Cybersecurity Maturity Model Certification (CMMC) 2.0.
  • On March 7, 2023, the Transportation Security Administration (TSA) issued guidance to the aviation sector on the need to improve its cybersecurity posture. There are around 19,700 organizations in the aviation sector alone – which includes aircraft, air traffic control systems, airports, heliports, and landing strips as well as ancillary service providers like aircraft repair stations, fueling facilities, navigation aids, and flight schools. Assuming these organizations on average spent $100,000 per year, then that sector equals a $2 billion opportunity.
  • On March 3, 2023, the Environmental Protection Agency (EPA) advised water and wastewater utilities on the need to shore up their cybersecurity defenses. There are approximately 153,000 public drinking water systems and more than 16,000 publicly owned wastewater treatment systems in the United States. More than 75 percent of the U.S. population depends on these systems for their potable water and sanitary sewerage needs. Assuming these organizations on average spent $100,000 per year on cybersecurity, then that amounts to a roughly $17 billion market (see the back-of-the-envelope sketch below).
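Those figures can be tallied in a few lines; the organization counts are the ones cited above, and the $100,000 average annual spend is the same illustrative assumption, not a market forecast.

# Back-of-the-envelope sizing using the organization counts cited above and an
# assumed average cybersecurity spend of $100,000 per organization per year.
ASSUMED_ANNUAL_SPEND = 100_000  # USD per organization, illustrative assumption

sectors = {
    "Defense industrial base (CMMC 2.0)": 100_000,             # firms
    "Aviation": 19_700,                                         # organizations
    "Drinking water and wastewater systems": 153_000 + 16_000,  # systems
}

for name, orgs in sectors.items():
    print(f"{name}: ~${orgs * ASSUMED_ANNUAL_SPEND / 1e9:.1f}B per year")

total = sum(sectors.values()) * ASSUMED_ANNUAL_SPEND
print(f"Total across these three sectors: ~${total / 1e9:.1f}B per year")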

Clearly, the 16 critical infrastructure sectors have a wide variety of cybersecurity needs and will require tailored solutions. Developing the right cybersecurity risk management model and mandating adoption are urgent priorities. To make rapid progress, we should consider leveraging cloud computing, the Federal Risk and Authorization Management Program (FedRAMP), and secure commercially developed innovations.

Steps to Accelerate Cybersecurity for Critical Infrastructure

  1. Mandate Adoption of Secure and Accredited Cloud Solutions

Cloud solutions that have been accredited for government and public sector use provide a secure foundational set of capabilities. These capabilities can rapidly improve the cybersecurity posture of critical infrastructure sectors. Given the wide variability in cybersecurity quality in non-regulated commercial solutions, FedRAMP or StateRAMP accredited solutions provide a ready-made marketplace of vetted capabilities to help protect critical infrastructure. A recent McKinsey survey on cybersecurity showed that highly regulated verticals are migrating to the cloud four times more quickly than non-regulated sectors. It is important to ensure that these regulated entities are using well-secured systems and platforms. Federal cybersecurity grants provided to state and local governments should emphasize the deployment and use of FedRAMP and StateRAMP accredited solutions where possible.

  2. Adapt and Clone Successful Cybersecurity Risk Management Programs like FedRAMP

FedRAMP was established in 2011 to help drive adoption of secure commercial cloud services for government and public sector use. Since then, the program has successfully accredited nearly 300 commercial cloud services, with 4,600 instances of reuse by various agencies, helping to save millions of dollars in compliance costs. The FedRAMP marketplace is heavily relied upon by state agencies, financial institutions, and even international agencies as a source for new and innovative solutions that meet the gold standard for cybersecurity. The success of the FedRAMP program has prompted the creation of other risk management programs such as StateRAMP and TX-RAMP. It is important for policy makers to learn from the lessons of FedRAMP by developing a data-driven case study. The case study should document cost avoidance from duplicative cybersecurity spending as well as cost avoidance from potential data breaches that might have otherwise happened.

  3. Develop a Scalable and Effective Cyber Risk and Authorization Management Program

In accelerating the cyber resilience of the critical infrastructure sectors, it is important to adopt approaches that have worked successfully in the past. The FedRAMP Program has enabled the streamlined adoption of commercial digital solutions by public sector organizations. Each Sector Risk Management Agency (SRMA) with oversight of critical infrastructure sectors should consider adapting and tailoring the FedRAMP program architecture based on its unique mission requirements. The foundational pillars of FedRAMP’s success are formal authorization requirements and rigorous continuous monitoring processes. Incorporating these two pillars in a cybersecurity risk management program is essential if we want to move from a voluntary approach to a mandated posture. The infographic below provides an overview of how such a model might work.

Critical Infrastructure graphic reproduced from the CISA.gov website.
  4. Incentivize Investment in New and Innovative Cybersecurity Solutions

Currently, much of the focus is on large firms solving the cybersecurity issues in critical infrastructure. This approach might not yield optimal outcomes. McKinsey’s recent cybersecurity survey indicates a pricing mismatch for cybersecurity solutions in the small and medium-sized business (SMB) segment, resulting in a lack of adoption. Encouraging the development of new and innovative solutions by small businesses and start-ups can help address this market gap. For example, small businesses and start-ups could be offered Small Business Innovation Research (SBIR) grants to implement strong security measures and seek FedRAMP authorizations. The SRMAs can be encouraged to sponsor innovative solutions for FedRAMP ATOs, thereby unblocking a critical chokepoint in the FedRAMP accreditation marketplace today.

  5. Harmonize Cybersecurity Requirements and Standards

Currently, there is a wide variety of cybersecurity frameworks and standards, especially in the critical infrastructure sectors. Over the past few years, the NIST Cybersecurity Framework (NIST CSF) has emerged as a “consensus” standard, with increasing adoption. SRMAs should continue to provide easy-to-use implementation guides that encourage the implementation of security best practices detailed in NIST SP 800-53, which underpins the NIST CSF. Additionally, SRMAs should be encouraged to use and recommend the adoption of the Open Security Controls Assessment Language (OSCAL) to streamline continuous monitoring reporting. NIST and CISA should consider expanding the use of OSCAL for cybersecurity incident reporting. Using machine-readable cyber-incident documents will make it easier to process, analyze, and respond to incidents through automation and enable rapid post-incident analytics.
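To illustrate why machine-readable incident documents lend themselves to automation, the sketch below builds and routes a small structured incident record in Python. The field names are invented for illustration and are not the OSCAL schema; only the two NIST SP 800-53 control identifiers (AC-2, SI-4) are real.

# Hypothetical machine-readable incident record (field names are illustrative and
# NOT the OSCAL schema), showing how structured documents enable automated handling.
import json

incident = {
    "incident_id": "INC-2023-0042",
    "reported_at": "2023-04-03T14:30:00Z",
    "sector": "water-and-wastewater",
    "severity": "high",
    "affected_controls": ["AC-2", "SI-4"],  # NIST SP 800-53 control identifiers
    "summary": "Unauthorized login attempts against a SCADA jump host",
}

document = json.dumps(incident, indent=2)

# Because the record is structured, downstream tooling can filter and route it
# without a human parsing free-form prose.
parsed = json.loads(document)
if parsed["severity"] in {"high", "critical"}:
    print(f"Escalating {parsed['incident_id']} to the sector response team")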

What the Experts are Saying

“With the recent enactment of the FedRAMP authorization Act, the federal government has an opportunity to better leverage cybersecurity risk management frameworks that highlight the importance of utilizing commercial cloud technologies. Furthermore, as the FedRAMP program expands to serve more of the federal cloud marketplace, it and associated programs like StateRAMP provide decisionmakers with a faster path to authorization, and it is paramount we learn to effectively leverage what’s provided by these pre-approved secure solutions for critical infrastructure protection.”

— Mike Hettinger, Founding Principal, Hettinger Strategy Group

“The release of the National Cybersecurity Strategy roughly three weeks ago has been widely hailed as an important step towards incorporating stronger defenses into U.S. digital networks.  But our Federal landscape is replete with strategies, legislation, White House directives, and so on that have had little impact because there has not been the proper follow-up.  As we await the promised implementation plan for the strategy, this short paper provides some interesting thoughts about how to move forward with a strong and proven metrics-based approach.”

— Alan P. Balutis, Managing Partner, The CIO Collective

Getting in on the Ground Floor of the ‘New Observability’

By Ozan Unlu, CEO, Edge Delta

In fall 2022, Splunk issued its annual State of Observability Survey and Report, highlighting the increasingly critical role observability plays in enabling multi-cloud visibility and dramatically improving end-user experiences. Growth in observability – or the ability to measure a system’s current state based on the data it generates – has occurred in lockstep with cloud adoption and is viewed by many as an essential counterpart, helping IT teams overcome more complex and extensive monitoring challenges.

According to Splunk’s report, the benefits of observability are very real: organizations that are observability leaders (two years or more with an observability practice) report a 69 percent better mean time to resolution for unplanned downtime or performance degradation; and leaders’ average annual cost of downtime associated with business-critical applications is $2.5 million, versus $23.8 million for the beginner group (one year or less).

So where exactly do government IT organizations fall? Solidly in the beginner category, with 78 percent of government IT teams registering as beginners (versus an average of 59 percent across other industries). In fact, no government IT team surveyed registered as a leader.

While it may seem that government IT is trailing in observability, there’s actually tremendous upside to be had here. This is because observability, as a discipline, is undergoing a dramatic shift – spawned largely by rapidly growing data volumes –  that nascent observability practices are ideally poised to leverage. This shift entails:

  • Moving from ‘Big Data’ to ‘Small Data’ – Traditional observability approaches have entailed pooling all data in a central repository for analysis, with the understanding that collecting all datasets together and correlating them can be the key to delivering the insights needed to quickly determine root causes. The problem with this approach is that traditionally all data is relegated to hot storage tiers, which are exceedingly expensive, especially with data volumes exploding due to the cloud and microservices. An IT team unknowingly exceeding a data limit and getting hit with a huge unexpected bill as a result is far more common than one would expect. To avoid this – and knowing that the vast majority of data is never used – an organization may begin indiscriminately discarding certain data sets, but the risk is that problems can lurk anywhere and random discarding may introduce significant blind spots. The solution is not to discard randomly, but rather to inspect all data in parallel, in smaller, bite-sized chunks.
  • Analyzing Data in Real-Time – Besides cost, another drawback of the “centralize and analyze” approach described above is the fact that it takes time to ingest and index all this data – time that an organization may not have if a mission-critical system is down. In spite of great technology advances, recent outage analyses have found that the overall costs and consequences of downtime are worsening, likely due in part to an inability to harness, access and manipulate all this data in milliseconds. Many data pipelines just can’t keep up with the pace and volume of data – and the more data there is to ingest, the slower these pipelines become. Furthermore, these pipelines do very little to help IT teams understand their datasets and determine what is worth indexing. This leads to overstuffed central repositories that then take much longer to return data queries, further elongating mean-time-to-repair (MTTR). With the average cost of downtime estimated at $9,000 per minute (translating to over $500,000 per hour), a much better approach entails analyzing data for anomalies in real-time.
  • Pushing Data Analysis Upstream – Analyzing data in real-time helps IT teams not just detect anomalies faster, but also immediately identify the root cause of issues, based on what systems and applications are throwing the errors. Furthermore, when data is analyzed at the source, the concept of data limits in a central repository becomes a non-issue. For organizations that want to keep a central repository, high-volume, noisy datasets can be converted into lightweight KPIs that are baselined over time (see the sketch after this list), making it much easier to tell when something is abnormal or anomalous – a good sign that you want to index that data. Some organizations find they don’t actually need a central repository at all.
  • Keeping All Your Data and Making It Accessible – As noted above, by pushing analytics upstream in a distributed model, an organization can have an eye on all data, even if all this data is not ultimately relegated to a high-cost storage tier. However, there are going to be times when access to all of this data is needed, and it should be accessible – either in a streamlined central repository, or in cold storage. In line with the DevOps principle of encouraging self-service, developer team members should have access to all their datasets regardless of the storage tier they’re in, and they should be able to get hold of them easily, not having to ask operations team members, who often serve as the gatekeepers in the central repository model.
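As a minimal sketch of the lightweight-KPI idea above, the snippet below turns a noisy stream into an errors-per-minute series, baselines it with a trailing window, and flags intervals that deviate sharply; the counts, window size, and z-score cutoff are illustrative assumptions.

# Baseline a lightweight KPI (errors per minute) and flag anomalous intervals.
# The data, window size, and z-score cutoff are illustrative assumptions.
from statistics import mean, stdev

errors_per_minute = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 41, 4]  # hypothetical stream

def anomalous_minutes(series, window=8, z_cutoff=3.0):
    """Flag points sitting more than `z_cutoff` standard deviations above the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (series[i] - mu) > z_cutoff * sigma:
            flagged.append(i)
    return flagged

print("Anomalous minute indexes:", anomalous_minutes(errors_per_minute))  # -> [10]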

There’s little question that government IT teams are not as advanced as other industries when it comes to observability. This is reflected in the industry’s comparatively slow cloud adoption (on average, government IT teams report 24 percent of internally developed applications are cloud-based, compared to an average of 32 percent across other industries); and perhaps more importantly, lack of confidence. Only 22 percent of government IT teams reported feeling completely confident in their ability to meet application availability and performance SLAs, compared to 48 percent of respondents across other industries.  The good news is, with relatively few investments already made, government IT teams are in a better position to capitalize on more modern, cost-efficient and agile observability strategies – helping them increase cloud adoption while building confidence.

Comply-to-Connect is Key to Zero Trust for DoD

By Melissa Trace, Vice President, Global Government Solutions at Forescout Technologies

Research from Forescout’s Vedere Labs reveals that government organizations have the highest percentage of at-risk devices. Between the explosion of remote work, the ongoing ransomware epidemic, and the fact that non-traditional assets – such as IoT, OT, and IoMT devices – now outnumber traditional IT assets, agencies are well aware of the need to evolve the cybersecurity of government networks. In fact, relative to the private sector, government is reportedly leading the charge in the adoption of zero trust.

The allure of the zero trust methodology is the ability to restrict access to data resources by assessing user-resource connection requests in the most granular manner possible. Agencies turn to the primary zero trust authority, NIST Special Publication (SP) 800-207, Zero Trust Architecture, which provides the following steps for introducing zero trust to a perimeter-based network (a simplified policy-check sketch follows the list):

  1. Identify actors on the enterprise;
  2. Identify assets owned by the enterprise;
  3. Identify key processes and evaluate risk associated with executing them;
  4. Formulate policies for the zero trust architecture candidate policy enforcement point (PEP);
  5. Identify candidate PEP solutions;
  6. Begin deployment monitoring; and
  7. Expand the zero trust architecture.
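To make the policy-formulation and enforcement steps concrete, here is a deliberately simplified sketch of the kind of decision a policy enforcement point might make: access is granted only when the requester’s role, device posture, and multi-factor authentication all check out for the requested resource. The roles, resources, and rules are hypothetical illustrations, not language from NIST SP 800-207 or any DoD policy.

# Deliberately simplified zero trust access decision a policy enforcement point (PEP)
# might apply; attributes, roles, and rules are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str          # e.g. "analyst", "admin"
    device_compliant: bool  # endpoint passed posture/compliance checks
    mfa_verified: bool      # user completed multi-factor authentication
    resource: str           # e.g. "hr-records", "public-wiki"

# Hypothetical policy: which roles may reach which resources.
POLICY = {
    "public-wiki": {"analyst", "admin"},
    "hr-records": {"admin"},
}

def decide(request: AccessRequest) -> bool:
    """Grant access only if every condition holds ('never trust, always verify')."""
    allowed_roles = POLICY.get(request.resource, set())
    return request.device_compliant and request.mfa_verified and request.user_role in allowed_roles

print(decide(AccessRequest("analyst", True, True, "public-wiki")))  # True
print(decide(AccessRequest("analyst", True, True, "hr-records")))   # False: role not allowed
print(decide(AccessRequest("admin", False, True, "hr-records")))    # False: device not compliant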

The very first steps are a significant undertaking for an organization as large and complex as the Department of Defense (DoD). The DoD Information Network (DoDIN) spans thousands of networks, each with thousands of connected devices and connected systems. Simply knowing all of the IT assets on the DoDIN has always been a challenge.

Comply-to-Connect Leverages Zero Trust Principles for the DoD

The DoD launched Comply-to-Connect (C2C), one of the largest government cybersecurity efforts globally, to effectively boost its cybersecurity posture across the enterprise. C2C leverages zero trust’s least privilege principles to protect access to data resources and assets.

C2C provides the foundation of the DoD’s zero trust journey and is the next step in the evolution of security throughout the DoDIN at both the classified and non-classified levels. A major distinction between C2C and previous security programs is that C2C seeks visibility of all assets (both traditional and non-traditional). Whereas other enterprise security solutions focus on a subset of DoDIN-connected devices, C2C applies to all categories of DoDIN-connected devices: workstations/servers, mobile devices, user peripherals, platform IT devices, IoT devices, and network infrastructure devices.

Further, DoD’s C2C policy allows teams to authenticate the security posture of each endpoint prior to granting access to the network. Before access is given, all devices are examined to ascertain compliance with organizational policy. In accordance with zero trust, systems and devices are then granted access only to appropriate network areas. All connected devices are continually monitored, with the ability to address any cyber-related discrepancies through automated action within the C2C framework.

The two main objectives of C2C are:

  1. C2C fills existing capability gaps in currently fielded enterprise security solutions through complete device identification, device and user authentication, and security compliance assessment.
  2. C2C automates routine security administrative functions, remediation of noncompliant devices and incident response through the integration of multiple management and security products and continuous monitoring.

The Importance of Visibility and Monitoring

Visibility and monitoring are prerequisites to DoD’s zero trust “never trust, always verify” authentication and compliance policies. They are also at the core of every government journey to zero trust, whether they are just beginning or have attained some level of maturity.

How Will Upcoming Cryptocurrency Regulations Affect Industry?

By Miles Fuller, Head of Government Solutions, TaxBit

The collapse late last year of FTX, one of the largest centralized cryptocurrency exchanges in the world, and the resulting contagion have garnered the interest of U.S. lawmakers and regulators. After years of talking about crypto, they now seem to be interested in more than just hinting at regulations and standards for digital assets.

That’s a good thing. Far from being limiting factors, regulations safeguard consumers’ assets, eliminate the threat of noncompliance, mitigate illegal activities, reduce transaction costs and market uncertainty, and stimulate robust business operations. They also protect investors from unwitting managers and consumers from nefarious business practices while providing clarity to entrepreneurs so products and business systems can be developed with less risk or uncertainty.

The Federal government agrees with the need for regulations for blockchain. The IRS’s recent pivot to using Congress’ term of art “digital assets” to describe virtual currency is a great example. The IRS defines digital assets as all crypto assets, including NFTs and stablecoins. Congress and the Treasury Department are merely imposing on the digital-asset ecosystem the same tax reporting rules that have long existed for traditional finance.

The agency’s position complements the requirements of the Infrastructure Investment and Jobs Act (IIJA), which details clear tax reporting rules for digital asset brokers, transfer statement reporting, and more, that were scheduled to go into effect on Jan. 1, 2023. But what will those regulations be, and how will they impact organizations engaged in the exchange of digital assets?

Lessons From Traditional Finance

In the wake of the FTX collapse, SEC Commissioner Hester Peirce stated that the crypto market needs to “take some basic lessons from traditional finance.” Fittingly, the rules imposed through the IIJA are already taking that approach in the tax world.

The rules are designed to reduce the tax-reporting burden on individuals by requiring brokers to provide easy-to-understand information about the tax impact of transactions conducted on their platforms. This information is shared with the IRS to streamline the review of tax returns filed by individuals and promote transparency and compliance.

For example, under the IIJA digital asset brokers are required to file Forms 1099 with the IRS and provide copies to customers. The forms will include specific tax information about digital asset transactions that the customer engaged in during the year.

This type of information sharing has never really existed in the digital asset space. Platform users generally do not receive any periodic statements and are only able to download data files for use in year-end tax calculations.

The new IIJA provisions completely change this and bring information reporting for digital assets in line with longstanding rules for traditional finance. Traditional financial investors receive Forms 1099 from their brokers summarizing what assets they bought and sold and the tax impact of those transactions. Digital asset investors will receive the same information.

As with securities, IIJA also requires digital asset brokers to share acquisition cost information with other brokers when assets are transferred. This allows the broker receiving the transferred asset to properly complete a Form 1099 when the asset is sold by including accurate purchase price information.

This will be a massive step toward reducing the tax-reporting burden on individuals in the digital asset space. This type of transfer reporting has been a stock market staple for more than a decade and has significantly reduced the burden of tax preparation in that space because all of the information is provided to the taxpayer from the broker. In the digital asset ecosystem, where transfers among brokers and platforms are more frequent, this type of reporting will have a similar impact.
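A toy example helps show what is at stake; the dollar amounts are hypothetical, and the calculation ignores fees, lot-selection methods, and other real-world adjustments.

# Simplified illustration of why transfer statements matter: without the original
# purchase price (cost basis), the receiving broker cannot compute taxable gain.
# Figures are hypothetical and ignore fees and lot-selection methods.
from typing import Optional, Union

def taxable_gain(sale_proceeds: float, cost_basis: Optional[float]) -> Union[float, str]:
    """Return the gain if the basis is known, otherwise flag the missing data."""
    if cost_basis is None:
        return "unknown - acquisition cost never transferred to the selling broker"
    return sale_proceeds - cost_basis

# Asset bought on platform A for $2,500, transferred to platform B, sold for $4,000.
print(taxable_gain(4_000, 2_500))  # 1500 -> gain reportable on a Form 1099
print(taxable_gain(4_000, None))   # basis missing -> taxpayer must reconstruct it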

However, certain hurdles remain.

Defining who is a Broker

One of the biggest issues is how to appropriately define who or what constitutes a digital asset broker that is subject to these reporting rules. In traditional finance, brokers are relatively easy to spot – they facilitate the transfer or trade of securities on behalf of their customers. But the digital asset ecosystem functions differently.

The nature of blockchain technology means that transactions can occur or marketplaces exist where assets are traded without the need for an actual business or individual to facilitate those transactions. In these situations, transactions are facilitated by a computer protocol that operates autonomously without any human oversight. Developers of these protocols argue that IIJA compliance is impracticable. The argument is that without any human involvement or oversight, there is no “broker” who can be held responsible for meeting the requirements.

It is unclear how the Treasury Department will address these situations or to what extent they will impose reporting requirements on platforms that operate autonomously in the decentralized finance space. This is not an easy problem to solve and regulatory overreach may negatively impact growth and development in the decentralized financial market. Of course, limiting the application of IIJA rules will be equally problematic because the goal of the regulations will not be met and taxpayers will continue to be left with limited or unhelpful information for tax purposes.

Digital Asset Functionality

Further, if an individual taxpayer personally holds digital assets and moves those holdings into and out of digital asset broker platforms, the effectiveness of IIJA reporting will be reduced, since individuals are not required to report personal transfers. However, requiring brokers to report transfers that are not sent to other brokers should help the IRS and taxpayers understand how to account for these units and properly file their taxes.

There are other IIJA requirements pertaining to the ability of digital assets to function like currency in some settings. For example, the transfer of digital assets exceeding $10,000 has its own reporting requirements that mirror the rule governing the receipt of physical cash by businesses. By imposing rules requiring the disclosure of high-value transfers of digital assets, Congress is seeking to put up informational guardrails around digital assets.

A Good First Step

While questions remain, the requirements contained in the IIJA are a first step at imposing needed regulations around digital assets. The reporting requirements will undoubtedly help the ecosystem continue to mature.

My Cup of IT: Cup Cake for Kushner?

As we tilt into this new year in the Federal IT events community, I wanted to point to the absurdity of Federal gift giving regulations.

We all know that industry’s not allowed to give a Fed food and drink worth more than $20 at one time – and no more than $50 in a year.  The General Services Administration says so right here.  Crossing that threshold constitutes bribery.

Try buying any meal for less than $25 in D.C. right now – even without the cup cakes, a simple meal has become expensive, don’t you know…?

I had to chuckle last year when we learned that Jared Kushner, the former president’s son-in-law and a key advisor during his administration, racked up a $2 billion investment from the Saudi Sovereign Wealth Fund.

That’s one helluva cup cake.

Is it not absurd that regular Federal civil servants are held to one standard, while appointed officials, when they step out of office, can accept whatever payments from whomsoever deems it in their interest to shower largesse?

Maybe it’s time to reform that $20 food and beverage limit to get in line with inflation – and maybe it’s also time to put appointees and their family members on a stricter diet?

Launching a New Era of Government Cloud Security

By Dave Levy, Vice President, Amazon Web Services

The FedRAMP Authorization Act was recently signed into law as part of the defense authorization bill, a signal that cloud technologies continue to have a permanent place in helping U.S. government agencies deploy secure and innovative solutions to accomplish their missions.

Through this legislation, policy leaders on Capitol Hill and in the Biden administration further recognize the important role that industry partners play in improving the security and resilience of government services.

Government cloud security begins with the Federal Risk and Authorization Management Program, or FedRAMP. FedRAMP is a program that standardizes security assessment, authorization, and monitoring for the use of cloud services throughout the U.S. federal government. The program was authorized in 2011 through a memorandum from the Office of Management and Budget (OMB), and the General Services Administration (GSA) established the program office for it in 2012.

Though in existence for ten years, FedRAMP had not been formally codified in legislation. In that time, we’ve seen meaningful improvements in the ways government agencies leverage cloud technology to improve how they deliver services and achieve their missions. From adoption by the Intelligence Community to the use of cloud technologies in space missions, government agencies have demonstrated that the cloud allows them to rapidly deploy systems that are secure, resilient, and agile. Cloud technologies also allow them to do more, for less, and at a faster pace than imagined possible ten years ago.

Amazon Web Services (AWS) applauds Congress and the White House for bolstering cloud adoption and security package reuse through the FedRAMP Authorization Act, a piece of legislation led by U.S. Congressman Gerry Connolly, D-Va., to codify the FedRAMP program. With this bill signed into law as part of the National Defense Authorization Act, there is recognition for the important role that the cloud plays in securing federal systems – and the role FedRAMP plays in ensuring that security.

Safeguarding the security of our federal systems is more important now than ever. With the volume and sophistication of cybersecurity attacks increasing, coupled with evolving geopolitical security threats around the world, the U.S. Government must ensure that it is leveraging best-in-class security services to deliver its critical missions. Further, the “do once, reuse many times” ethos of FedRAMP will save money for mission teams across government as teams optimize security by leveraging existing system security packages.

Industry has a key role to play in this equation. For example, the FedRAMP Authorization Act creates the Federal Secure Cloud Advisory Committee, which is tasked with ensuring coordination of agency acquisition, authorization, adoption, and use of cloud computing technologies. The committee will serve as a new method of formally engaging with industry partners to improve the way cloud accreditations are managed in government and align the use of those services with agency missions and priorities. A joint group of government and industry partners such as this committee will help the FedRAMP program evolve to solve the toughest security challenges facing the U.S. government today.

Security is our top priority, and AWS has been architected to be the most flexible and secure cloud computing environment available today. Both the AWS GovCloud region, which is a region specifically designed to meet the U.S. Government’s security and compliance needs, and AWS US East-West regions have been granted FedRAMP authorizations.

AWS supports FedRAMP, as we have from the very beginning. U.S. government agencies are embracing cloud in existing programs and missions, and they are building new services with cloud technologies. Formally codifying the FedRAMP program through legislation ensures the U.S. government can leverage industry-leading cloud services, safeguard federal systems, and better support the delivery of critical citizen services in an evolving security landscape.

Dave Levy is Vice President at Amazon Web Services, where he leads its U.S. government, nonprofit and public sector healthcare businesses.

Managing IT Complexity in Federal Agencies

Despite recent progress, IT-related problems continue to hinder work at government agencies. These include data in silos across locations that complicate information-gathering and decision-making, rising cyber threats targeting employee logins, and legacy systems that don’t adapt easily to mission changes or remote work environments.

Accordingly, a recent study by Gartner found that while 72 percent of programs aimed at government IT modernization saw gains in response to the pandemic, fewer than half (45 percent) have scaled across the organizations they serve.

Fortunately, best practices and managed services can alleviate the problem. Trusted Artificial Intelligence (AI) for operations, zero-trust cybersecurity frameworks, and managed systems integration can all help, according to Aruna Mathuranayagam, chief technology officer at Leidos.

Managing IT Complexity

Mathuranayagam identifies three critical areas for federal agencies to address to streamline operations and reduce costs while increasing employee productivity and safeguarding sensitive data. These are systems integration, zero-trust cybersecurity practices, and digital user experiences.

  • Systems integration helps bridge divides across data silos without compromising security, according to Mathuranayagam. “Some of our customers are building common DevSecOps platforms so they can adopt algorithms or cloud practices quickly,” Mathuranayagam says. “The platforms are common across the different classified environments. The activities may vary within each secret classification, but they have a unified practice.” An example of where such integration can help is the National Nuclear Security Administration (NNSA), with headquarters in Washington, D.C., and eight sites across the country. Those sites have tended to manage their IT and implement cybersecurity measures in their own ways, creating silos, according to Mathuranayagam.
  • Zero trust cybersecurity represents more of an ongoing journey than a problem to be solved with off-the-shelf solutions, according to Mathuranayagam. But it is essential for safeguarding systems and data from today’s sophisticated and relentless attacks. “You have to look at how your networks are configured, how your employee authorizations are configured, how your architecture is laid out,” Mathuranayagam says. “It’s a complete transformation of your philosophy and how you have been providing security to your users, to your customers, to your stakeholders.”
  • Digital user experiences are often given less attention in IT transformations. However, they are vital for streamlined operations and productivity, according to Mathuranayagam. That’s because well-designed interfaces and workflows reduce the burden on users so they can work with minimal friction.

Bringing these focus areas together in a managed enterprise cybersecurity model will result in safer, more efficient, and less costly IT, according to Mathuranayagam. She cites Leidos as a vendor providing a unique toolset and deep experience for meeting the challenge.

Managed IT and Security from Leidos

AI plays a starring role in IT managed by Leidos. “What Leidos has built in the last ten years of research and development from supporting a wide range of customers across the Department of Defense, Department of Energy, federal civilian agencies, and the intelligence community, is something called trusted AI,” Mathuranayagam explains.

Trusted AI developed by Leidos depends on a framework known as the Framework for AI Resilience and Security, or FAIRS. The system lets organizations gather data across systems for deep analysis while enhancing security.

The benefits of using FAIRS include reduced cognitive workload through automation and the surfacing of insights from data across systems. “It can identify patterns that we as humans cannot identify in terabytes or petabytes of data across different sources,” Mathuranayagam says.

For example, AI-driven data analysis can spot trends in help tickets. Analysis of ticket types, frequency of tickets, and peak times for tickets can lighten the load for tech support teams. “You can start observing the tickets and extract patterns,” Mathuranayagam says. “And then you can write rules-based algorithms to autonomously respond to certain tickets or implement practices to eliminate a percentage of them.”
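As a toy sketch of that rules-based idea – keyword rules matched against ticket text, with some tickets answered automatically and the rest routed to an analyst – the snippet below uses invented categories and responses; it is not Leidos tooling.

# Toy rules-based triage: match ticket text against keyword rules and either
# auto-respond or route to a human. Rules and responses are invented examples.
RULES = [
    ("password reset", "auto", "Sent self-service password reset link."),
    ("vpn", "auto", "Sent VPN reconnection checklist."),
    ("data loss", "human", "Escalated to Tier 2 support."),
]

def triage(ticket_text: str):
    """Return (handler, action) for the first matching rule, else queue for an analyst."""
    text = ticket_text.lower()
    for keyword, handler, action in RULES:
        if keyword in text:
            return handler, action
    return "human", "No rule matched; queued for an analyst."

print(triage("User requests a password reset for webmail"))
print(triage("Possible data loss after laptop reimage"))
print(triage("Monitor flickering in conference room"))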

In the realm of security, although no single solution can check the “zero trust” box, trusted AI and managed services from Leidos can give agency IT leaders confidence on the journey.

Mathuranayagam explains that Leidos helps organizations understand their IT environments through complete visibility of all assets, identifying any security gaps. From there, Leidos experts help teams build multi-year roadmaps and acquire the expertise and technologies they need, all of which can aid agencies in reducing digital complexity and risk while advancing their missions.

To learn more, visit leidos.com/enabling-technologies/artificial-intelligence-machine-learning.
