March 17, 2026
CNAS Insights | America’s AI Cyber Defense Gap Needs Congress to Act
Twice in the past five months, the U.S. Congress has allowed the authorization for U.S. cyber threat intelligence sharing to lapse. In each case, it managed only short-term extensions for this pillar of America’s collective cyber defense. This cycle of expiration and stopgap extensions is undermining the certainty that both industry and government need to strengthen and modernize information sharing to meet threats in the AI era.
The law in question, the Cybersecurity Information Sharing Act of 2015 (CISA 2015), provides the framework for voluntary cyber threat information sharing between industry and government, removing legal impediments that had disincentivized companies from sharing threat intelligence. The law first expired in September 2025, and Congress has since relied on stopgap measures to revive it, most recently extending it through September 2026.
The problem runs deeper than short-term lapses. CISA 2015 was written over a decade ago and has not kept pace with AI-driven cyber threats and other emerging risks. As AI reshapes the threat landscape, Congress should use the reauthorization debate to modernize the framework for the AI era.
The original law addressed a fundamental challenge in American cybersecurity: While most cyber intrusions occur on private-sector networks, defending against them is a national security imperative—particularly when private companies own and operate the majority of the nation’s critical infrastructure.
Before CISA 2015 passed, companies faced significant legal disincentives to sharing threat information with the government and each other. A company that reported a vulnerability in its systems risked exposing proprietary information to competitors, inviting antitrust scrutiny, or facing shareholder lawsuits, even when sharing that information would help protect the broader ecosystem. The costs of this chilling effect were real. In May 2000, a financial services information-sharing organization became aware of the ILOVEYOU virus hours before the government did, but did not directly alert relevant agencies. The worm eventually spread to an estimated 10 percent of all internet-connected devices globally and caused billions of dollars in damages. At a congressional hearing in the aftermath, a defense specialist at the Government Accountability Office testified that the government “was not effective at detecting this virus early on and warning agencies about the imminent threat.”
The ILOVEYOU incident was an early signal, but the problem only deepened. Cyberattacks grew more sophisticated and damaging throughout the 2000s and early 2010s, with a series of high-profile intrusions exposing flaws in the existing approach to defense. In 2009, Operation Aurora saw state-sponsored Chinese attackers exfiltrate intellectual property from Google and dozens of other technology companies. A few years later, cybersecurity firm Mandiant exposed APT1, a Chinese state-sponsored group that had been systematically stealing intellectual property from American companies across multiple sectors for years. The 2014 Sony Pictures hack demonstrated the willingness of nation-states to target private companies in pursuit of policy objectives.
Facing an increasingly hostile cybersecurity environment, Congress passed CISA 2015 to create legal protections for entities that voluntarily share cyber threat information with federal agencies. Information shared under the law cannot be made public through Freedom of Information Act requests or used against companies in regulatory proceedings. The law offers even stronger protections for information shared through designated Department of Homeland Security channels, shielding companies from lawsuits that might otherwise arise from the act of sharing itself.
In the decade since its passage, CISA 2015 has demonstrably strengthened U.S. cyber defenses. The law’s liability and antitrust protections gave companies the legal confidence to share threat intelligence they would previously have kept to themselves, and in doing so, fueled the growth of Information Sharing and Analysis Centers and Organizations, industry-specific bodies where companies and government agencies exchange cybersecurity intelligence in near real-time. A broad coalition of industry groups, including the American Bankers Association, the Edison Electric Institute, and the IT Industry Council, wrote to Congress in March 2025 that the law “has meaningfully improved the capacity and speed with which we can respond to large-scale cyber incidents.” These information-sharing channels have become essential infrastructure for detecting and responding to threats that cross sectoral and organizational boundaries.
Last fall, Congress allowed CISA 2015 to lapse, leaving companies without its legal protections for over six weeks before temporarily restoring it through January 30, 2026. The latest extension, signed on February 3, only runs through September of this year. One legislative staffer estimated that allowing the law to expire would result in “maybe an 80 to 90 percent reduction in cyber threat information flows.” A broad coalition of industry groups warned it would leave “us all more vulnerable to nation-state attacks and cybercriminals moving forward.” Although some sharing would persist through existing contractual relationships and sector-specific arrangements, without CISA 2015’s legal safe harbors, the volume, speed, and breadth of threat intelligence exchange would sharply decline. Uncertainty about the law’s future comes at the worst possible moment. AI is rapidly reshaping the cyber threat landscape in ways that make timely information sharing more critical than ever.
The underlying problem that CISA 2015 sought to address has only grown more acute. A new generation of AI companies now possesses capabilities central to national security, from models that can identify software vulnerabilities at scale to AI-driven tools embedded in critical infrastructure defense. Bringing these actors into robust information-sharing arrangements is essential to keeping pace with the threat.
In November 2025, Anthropic disclosed what it believes is the first documented case of a large-scale cyberattack executed with minimal human intervention. A Chinese state-sponsored group manipulated Anthropic’s own AI model into functioning as an autonomous cyber attack agent, using it to perform 80 to 90 percent of a sophisticated espionage campaign with humans serving only in a strategic supervisory role. Notably, the attack was detected through Anthropic’s own internal monitoring, underscoring why information sharing between AI companies and government is essential. The incident illustrates a broader pattern: According to the World Economic Forum's Global Cybersecurity Outlook 2026, 87 percent of surveyed executives identified AI-related vulnerabilities as the fastest-growing category of cyber risk over the past year.
The Trump administration’s approach to managing these risks relies heavily on the voluntary public-private collaboration that CISA 2015 enables. The administration’s AI Action Plan calls for forming a government-led AI information sharing and analysis center, issuing AI-specific vulnerability guidance to the private sector, and sharing known AI vulnerabilities through “existing cyber vulnerability sharing mechanisms”—mechanisms built on CISA 2015’s legal framework. The Center for AI Standards and Innovation, the administration’s primary interface with frontier AI developers, similarly depends on voluntary agreements with leading AI companies to evaluate models for national security risks. Without these collaborative channels, the government has limited visibility into the capabilities and threats emerging from the private sector. Yet, these AI-specific initiatives require multiyear planning and sustained coordination that is undermined by the recurring (and needless) uncertainty about CISA 2015’s future.
At the same time, simply extending CISA 2015 is insufficient. Congress wrote the law before AI-driven cyber threats existed. The Department of Homeland Security’s most recent guidance on the law, updated in November 2025, does not mention AI at all. While the statute’s existing protections may cover some AI-related threat sharing, companies lack certainty about which AI-specific issues fall within its scope, and that ambiguity discourages the very cooperation the law aims to promote. Congress has an opportunity to close this gap.
One area ripe for modernization is model distillation—the process by which actors, including those in Chinese AI labs, systematically query frontier models and use the outputs to train competing systems, effectively stealing American intellectual property. An updated CISA 2015 could explicitly authorize and protect the sharing of indicators related to distillation attempts, such as unusual query patterns, application programming interface (API) abuse, and coordinated extraction behavior, giving both companies and government the intelligence to collectively defend against this emerging threat. A refreshed law could also explicitly scope in defensive techniques and procedures, in addition to threat intelligence itself.
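To make the kind of indicators at stake concrete, the heuristic below is a minimal, hypothetical sketch (not any company’s actual detection system) of how an API provider might flag query behavior consistent with systematic model extraction: clients that combine very high query volume with an unusually high proportion of distinct prompts, a crude proxy for broad probing rather than ordinary reuse. The function name and thresholds are illustrative assumptions.

```python
from collections import defaultdict

def flag_distillation_suspects(query_log, volume_threshold=1000,
                               diversity_threshold=0.9):
    """Flag API clients whose usage pattern resembles systematic extraction.

    query_log: iterable of (client_id, prompt) pairs.
    A client is flagged when it exceeds the volume threshold AND shows a
    high ratio of distinct prompts to total queries -- broad, systematic
    probing rather than the repeated prompts typical of normal use.
    """
    counts = defaultdict(int)        # total queries per client
    distinct = defaultdict(set)      # unique prompts per client
    for client_id, prompt in query_log:
        counts[client_id] += 1
        distinct[client_id].add(prompt)
    return {
        client for client in counts
        if counts[client] >= volume_threshold
        and len(distinct[client]) / counts[client] >= diversity_threshold
    }
```

In practice, indicators like these would be far richer (timing, prompt semantics, coordinated accounts), and the point of an updated statute would be to let providers share them with the government and each other without legal exposure.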
CISA 2015 has not escaped criticism. Senator Rand Paul has blocked long-term reauthorization and sought to remove the liability protections that have been central to CISA 2015’s success. Flaws in the design of a specific information-sharing initiative supported by CISA 2015, the Automated Indicator Sharing program, have also fueled criticism of the broader law itself. At the time of enactment, critics raised privacy concerns about the breadth of information companies may share with the government, but a decade of practice has done much to put those concerns to rest. That Congress keeps extending rather than reforming the law suggests lawmakers recognize its core value but have yet to chart a coherent path forward.
The solution is deliberate reform, not an indefinite cycle of short-term extensions. Several bipartisan efforts have tried to break the impasse. Some proposals have sought to extend CISA 2015 without amendment, while others have presented bills with modifications to modernize the legislation. But neither approach has advanced. Nevertheless, the consensus on the law’s importance is clear. As Representative Eric Swalwell put it at a hearing last May: “It’s rare that these days we see such a wide consensus on any topic, but on the issue of reauthorizing CISA 2015, I’ve received a very clear message from everyone I’ve talked to: Do. Not. Let. It. Lapse.”
Letting the law lapse while pursuing improvements would fracture established threat intelligence channels at the very moment AI is reshaping the cyber landscape. But simply extending the statute unchanged would be a missed opportunity. Congress has just over six months to do this right: Update CISA 2015 to explicitly address AI-driven threats and strengthen the legal foundations for the public-private cooperation outlined in the administration’s AI Action Plan. Bipartisan support exists and the need for AI preparedness is urgent. Congress should stop extending and start modernizing.
Spencer Michaels is an independent researcher who previously worked with the Technology and National Security Program at the Center for a New American Security (CNAS).
Janet Egan is a senior fellow and deputy director of the Technology and National Security Program at CNAS.
Michael Daniel is the president and CEO of the Cyber Threat Alliance, a threat intelligence sharing organization focused on the cybersecurity industry.