March 20, 2026

CNAS Insights | Bridging Washington and Silicon Valley

The recent friction between Anthropic and the Pentagon has made me reflect on the painful chasm that opened between Washington and Silicon Valley following the leaks by Edward Snowden, a private contractor at the National Security Agency (NSA). I was appointed as the NSA’s first chief risk officer shortly after Snowden released thousands of classified documents revealing surveillance practices at the NSA and its British counterpart. The director of the NSA asked me to review and build an agency-wide approach to mission, risk, and transparency. At the time, it was clear that we faced more than just a policy disagreement; we were grappling with the American public’s loss of trust in how our nation pursued intelligence operations.

As chief risk officer, I saw firsthand the divide between the internal views of the agency and the external perspectives of the tech sector, from CEOs to engineers, a divide that took time to develop and time to heal. I believe we learned valuable lessons during that period that apply today to the principles that should govern the relationship between Washington and Silicon Valley in the age of artificial intelligence (AI).

Principle 1: Transparency as a Foundation for Trust

In a democracy, the national security community must maintain adequate transparency about its use of technology to retain public trust. Because lawmaking inevitably trails technology, it is never enough to simply ask, “Is it lawful?” The more vital question is, “Is it wise?”

The military and intelligence communities must hold themselves to a standard under which an informed citizen—one who invests the time to read what is public about our military and intelligence operations—would conclude that those operations are in line with American constitutional protections. While the national security community must operate in the shadows for a reason, it must share enough of its methods, practices, and procedures to prove it operates in line with our laws and beliefs. When agencies fail to prioritize transparency as technology outpaces policy, we end up with incidents that erode the very trust that is nonnegotiable in a democracy.

Principle 2: Only the Government Bears Responsibility for Keeping Us Secure—But Frontier Labs’ Technical Insights Must Inform the Government’s Rules for AI Use

There is a fundamental difference in mission that defines the relationship between the worlds of Silicon Valley and Washington. The government’s mission is to keep people safe, while the tech sector’s mission is to build innovative tools people want to use and buy. Anthropic overreached in pressing the Department of Defense to outline restrictions beyond what is lawful. As citizens, we don’t want a company—with zero accountability for protecting the nation—putting in place restrictions above and beyond what Congress and the courts have determined is appropriate.

With that said, the gap between the missions of the national security community and the tech community creates a lack of shared context that must be addressed if the government is to make the right calls about how AI is used. Frontier AI labs lack an understanding of classified threat intelligence and of military principles like “proportionality” in the laws of war. Conversely, the government lacks the labs’ technical expertise to understand the unique failure modes of AI, such as sycophancy—where a model’s answers shift to please the user.

This plays out in the current debate and is particularly relevant to the two use cases at the heart of the fight: broad surveillance, and military autonomy and targeting.

Broadly, the military and intelligence community have two types of targets: the ones they know (e.g., the leader of Iran’s Islamic Revolutionary Guard Corps) and the ones they must discover (e.g., the lone-wolf terrorist who drove improvised explosive devices into a synagogue last week). The latter are some of the hardest challenges intelligence agencies and militaries face. How do you find someone hiding among civilians? And how do you ensure that protecting the broader population’s civil liberties doesn’t come at the price of stopping a terror attack? Finding the needle in the haystack requires searching through the haystack, and the rules around doing so are some of the most complicated and frequently reviewed in the intelligence community.

This complexity played out in the debate with Anthropic over one of the two use cases: broad surveillance of American citizens. Broad collection of data about Americans is unlawful today, although using commercial data (e.g., data purchased from commercial data brokers) is legal. It’s unclear why the Pentagon wanted the flexibility to use AI to analyze this data—domestic counterterrorism “discovery” would typically be done by the FBI. If Anthropic thought the law was inadequate, the appropriate path would have been to pursue accountability through democratic processes—such as educating Congress about what AI can do and advocating for legislative updates. Congress and the courts should decide which uses of AI are acceptable and which are not, as those branches of government bear ultimate accountability for American safety.

In contrast, the second AI use case, military autonomy, is more complicated. While military doctrine does not require a “human in the loop,” current military culture places a strong emphasis on having one. If the Pentagon is building a weapon without a human in the loop, there are additional review processes to test, evaluate, and assure that the weapon is reliable and its failure modes are well understood. AI presents new risks in targeting, but it also presents new opportunities: by recognizing patterns, AI can make military and intelligence targeting more precise and reduce collateral damage in strikes.

Which brings me to my central point: We need a deep technical and operational partnership in which AI labs inform the military’s understanding of a tool’s capabilities and limitations. This partnership must not and should not shift decision-making to the private sector, but it is critical to ensuring that the Pentagon’s mission, law, and policy are informed by technical reality. We need a permanent version of the “listening tour” that followed Google’s withdrawal from Project Maven, which successfully built trust and produced the Pentagon’s first responsible AI principles.

Principle 3: Winning the Global Competition of Values

Finally, we cannot forget that we are in a relentless competition with authoritarian countries, notably China, Iran, and Russia. These regimes use AI as a weapon against their own people—training facial recognition models on feeds from millions of cameras in China, or spying on citizens in streets and hospitals to target protesters in Iran. Their efforts to integrate AI into warfare are unlikely to account for personal freedoms.

AI is shaping the wars of today and tomorrow, and we must ensure that we are leaning forward on this technology to keep the nation safe. We must stay ahead of our adversaries. But we must do so in a way that remains anchored in our civil liberties and privacy. If we surrender our democratic values in the process, then we’ve lost the very thing that we are fighting for.

My father was raised in Soviet-occupied Budapest and fled to the United States with his parents after the Hungarian uprising in 1956. In the late 1980s, he took our family back to Hungary. At one point, we were walking down a street and he suddenly became agitated, insisting we make an inconvenient detour to avoid passing a specific building. Later, my mother told us it was the former location of the Soviet police station. As a child, my father had been taught by his parents never to walk by it; people disappeared into that police station. That he still couldn’t walk by 50 years later gave us a picture of what life must have been like—and what it’s like today in Iran, China, and Russia. It’s why my family moved to the United States and why I feel so privileged to have spent almost 20 years in the national security community.

Today, as a country, we face two different challenges: keeping pace with authoritarian competitors and protecting our nation from terrorism. AI plays a role in keeping the nation safe from both. We must bridge the technological and values divide between Silicon Valley and Washington to do so, but we must never forget who bears ultimate responsibility for our security.

We are all on the same side. If we fail to integrate mission, law, and AI technology into a single, transparent effort, we risk either another attack like 9/11 or another Snowden-level loss of national trust. Our goal is to ensure the American people are both secure and free.

Anne Neuberger is a member of the CNAS Board of Directors. She served as deputy assistant to the president and deputy national security advisor in the Biden administration, where she was responsible for national policy on cyber and emerging technologies.
