The Defense That Wasn't
Seven years. That's how long the patches were available.
Seven years in which the vulnerabilities sat documented, catalogued, publicly known, with working fixes that required nothing more exotic than applying them. Seven years in which network engineers at the largest telecommunications companies in the United States could have closed the doors that Chinese intelligence services eventually walked through. They didn't. And so when Salt Typhoon arrived, operating under a mission given to it by the Ministry of State Security, it didn't need to break anything. It needed to find what had never been repaired.
This is the story the Salt Typhoon hack actually tells, and it isn't primarily about China. The Chinese operation is sophisticated, patient, and dangerous; all of that is true. But sophistication doesn't explain what happened here. An adversary with patience and resources will probe your perimeter indefinitely. What determines whether they get through is the condition of the perimeter they find. Salt Typhoon found Cisco routers running firmware with known vulnerabilities, legacy equipment that hadn't been updated in years, and credentials obtainable through weak passwords. The adversary met negligence, and negligence opened the door.
What followed was one of the most consequential intelligence penetrations in American history.
Salt Typhoon has been active since at least 2019. It operates under China's Ministry of State Security, which means it works for the same organization responsible for China's foreign intelligence collection and counterintelligence operations. The MSS doesn't improvise. Its campaigns reflect deliberate strategic prioritization: targets chosen for the highest intelligence return, pursued over years, with discovery treated as a failure to be prevented rather than an acceptable risk. The group has gone by multiple names across the security research community (Earth Estries, FamousSparrow, GhostEmperor, UNC2286), which is itself a tell about how long it has been operating and how thoroughly its methods have been analyzed: different researchers encountered different facets of the same operation and named what they found.
The telecom campaign, as it became publicly known in October 2024, was already well underway by the time anyone disclosed it. Officials estimated the intrusion had been running for one to two years before discovery. Some forensic evidence suggests activity going back to 2022. The campaign ultimately reached at least nine U.S. telecommunications companies: AT&T, Verizon, T-Mobile, Spectrum, Lumen, Consolidated Communications, Windstream, and others. Former NSA analyst Terry Dunlap called it "a component of China's 100-year strategy," and I keep returning to that framing because it's clarifying in a way the breach narratives usually aren't: the intrusion wasn't designed to accomplish a single objective and exit. It was designed to stay. Not a smash-and-grab. Occupation.
The question worth asking about any long-running intrusion isn't just how it got in. The more revealing question is how it stayed.
To understand what Salt Typhoon accessed, you need to understand CALEA: the Communications Assistance for Law Enforcement Act, passed in 1994, which requires telecommunications carriers to build intercept capability into their systems. When law enforcement or intelligence agencies obtain court authorization for wiretapping, they access communications through infrastructure that the carriers are legally required to maintain. CALEA systems are, by design, a single point of access to enormous volumes of sensitive communication. Build them into every major carrier, make them technically accessible to government agencies, and you've created exactly the kind of concentrated target that a foreign intelligence service would spend years working toward.
Salt Typhoon got there. The intrusion accessed CALEA systems at multiple carriers, meaning the Chinese operation had access to the same infrastructure that U.S. law enforcement uses to conduct authorized surveillance. I've spent time trying to think through the full implications of that, and I keep running into the limits of what can be said publicly, which is itself a kind of answer: the operational consequences are significant enough that I won't speculate about them in detail here. What can be said is that the intelligence value of knowing who is under investigation, through which carrier, by which agency, and for how long is not difficult to estimate. In addition to the CALEA access, the operation harvested metadata from over a million users concentrated in the Washington, D.C. area: call records, message timestamps, source and destination numbers, IP addresses. The geographic concentration matters. Metropolitan Washington is where the people who make, implement, and oversee U.S. national security policy work and live.
The operation also reportedly tracked targets' locations in real time and accessed the communications of high-ranking officials, including individuals associated with presidential campaigns.
None of this happened because Salt Typhoon broke something that was working. It happened because the carriers had left critical infrastructure in a state that any security professional would recognize as indefensible.
I've spent thirty years in this field. I've watched organizations cycle through the same failure pattern enough times that I can describe it without looking at the specifics of any particular breach. The pattern is fundamentally about formation, about what gets internalized as non-negotiable versus what gets treated as adjustable depending on circumstances, and it goes roughly like this: the compliance team documents the requirement, the IT team acknowledges it, the patching gets scheduled, the patching gets deprioritized, something breaks and demands immediate attention, the scheduled maintenance gets bumped, and three years later there's a router running firmware that was already obsolete the quarter it was deployed.
What this kind of failure actually reflects, more than any technology gap, is what the organization values when the quarterly report and the security patch compete for the same attention. Management failures, prioritization failures: these are names for the same thing, which is an organization that has decided, through accumulated small choices rather than any single bad decision, that the patch can wait. Patches don't generate revenue. They prevent future losses, which are speculative and discounted, against a cost that is immediate and certain. Every CFO who has ever reviewed a capital expenditure request knows which side of that ledger tends to win.
The carriers will tell you, and have told regulators, that their networks are complex and that applying security updates to live telecommunications infrastructure carries risk of service disruption. That's true. It's also a reason to patch carefully and with proper change management, not a reason to leave known vulnerabilities unaddressed for seven years. The complexity argument is the default response of every organization that has underinvested in security when the consequences arrive. I've heard it from hospital systems, financial institutions, manufacturing operations, and now I'm hearing it from the companies whose networks carry the communications of 265 million Americans.
The version of events in which Salt Typhoon is primarily a story about Chinese capability is the more comfortable one for the organizations that failed to defend their networks. It positions them as victims of sophisticated adversaries rather than as parties whose negligence made the adversary's work straightforward. Both things can be true simultaneously: the operation was sophisticated in its patience, its targeting, and its operational security, and it succeeded in part because the defenders had not maintained basic discipline. The sophistication of the attacker doesn't explain the seven-year-old unpatched routers. That part requires a different explanation.
There's a regulatory dimension to this story that compounds the failure, and I want to be careful not to turn a security argument into a partisan one, because the underlying problem predates any particular administration. But some facts are directly relevant to the security analysis, and they don't improve on close inspection.
After the Salt Typhoon breach became public, the FCC under the previous administration issued a Declaratory Ruling establishing legal obligations for carriers to secure their networks under CALEA. The ruling required carriers to create, update, and certify cybersecurity risk management plans annually. In November 2025, the FCC under Chairman Brendan Carr voted 2-1 to reverse that ruling, claiming it had "misconstrued" CALEA and calling it "flawed" and "unlawful." The reversal came, Senator Maria Cantwell documented, after "heavy lobbying" from the same carriers that had failed to detect the intrusions and that have subsequently refused to provide documentation proving they've removed the intruders from their networks.
The sequence is worth sitting with. The carriers failed to implement basic security controls, their networks were penetrated by Chinese intelligence for up to two years, and the regulatory response, when it finally came, was a Declaratory Ruling requiring carriers to certify minimum cybersecurity standards annually. The carriers lobbied to have the ruling reversed. The ruling was reversed. The FCC is now relying on "voluntary collaboration" with the same companies whose voluntary approach to security produced the breach in the first place.
Anyone who has spent time watching regulated industries handle security requirements will recognize this sequence without needing a name for it. The companies most affected by a regulation have the most concentrated incentive to fight it; the public most harmed by its absence has no organized lobbying presence; the regulator has limited resources and strong institutional incentives toward accommodation. It plays out the same way in telecommunications, finance, aviation, healthcare. The FCC's own concession, in the proceedings around the reversal, is that vulnerabilities are "still being exploited." That line appeared in the same document that revoked the requirement to address them.
As of this writing, in February 2026, Senator Cantwell has demanded a hearing with the CEOs of AT&T and Verizon, citing their refusal to cooperate with oversight and what she calls "serious questions about the extent to which Americans who use these networks remain exposed to unacceptable risk." Both companies have declined, through months of requests, to provide the documentation that would demonstrate remediation. The carriers whose voluntary approach to security produced the breach are now being asked, voluntarily, to prove they've fixed it.
The remediation picture is, if anything, worse than the initial breach suggests.
Between December 2024 and January 2025, while remediation was supposedly underway, Salt Typhoon launched a new campaign targeting over 1,000 unpatched Cisco edge devices globally. The campaign compromised devices at five additional organizations, including U.S. telecommunications providers. Security researchers identified over 12,000 Cisco devices with web user interfaces exposed to the internet. The same class of vulnerability, the same type of target. The campaign that hadn't been remediated continued finding the same kind of opening it had always found.
In December 2025, intrusions were detected in systems of multiple U.S. House of Representatives committees and attributed to Salt Typhoon. The operation has now compromised over 200 targets in more than 80 countries. Viasat, a satellite communications company serving both military and commercial customers, disclosed in June 2025 that it had been breached during the 2024 presidential campaign period. An unnamed Canadian telecom was compromised in February 2025.
The group's techniques center on exactly the kind of vulnerability that carrier negligence created: exploiting known, patchable weaknesses in routers and edge devices to establish persistent access, then using that foothold to move laterally into the network infrastructure that matters. The CVEs (Common Vulnerabilities and Exposures) that Salt Typhoon exploited in the Cisco devices had available patches. CVE-2023-20198 and CVE-2023-20273 were disclosed as zero-day vulnerabilities in October 2023. Salt Typhoon was still finding them unpatched at scale more than a year later.
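The triage implied by that last point is not exotic. A minimal sketch of it, in Python, might look like the following. The inventory entries, device names, and the first-fixed release numbers in `FIRST_FIXED` are illustrative assumptions for the sketch, not Cisco's authoritative advisory data; any real check should be driven by the vendor advisory for CVE-2023-20198 and CVE-2023-20273.

```python
# Hypothetical triage sketch: flag devices whose IOS XE release predates
# the first fixed release for its train AND whose web UI is exposed.
# FIRST_FIXED values are assumptions for illustration; consult the Cisco
# advisory for the real fixed releases.

FIRST_FIXED = {            # train -> assumed first fixed release
    "17.9": (17, 9, 4),
    "17.6": (17, 6, 6),
    "17.3": (17, 3, 8),
}

def parse(version: str) -> tuple:
    """'17.9.2' -> (17, 9, 2); letter suffixes like '4a' are ignored."""
    parts = []
    for p in version.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def is_vulnerable(version: str, web_ui_exposed: bool) -> bool:
    """Exploitable only if the web UI is reachable and the running
    release predates the assumed first fix for its train."""
    if not web_ui_exposed:
        return False
    v = parse(version)
    fixed = FIRST_FIXED.get(f"{v[0]}.{v[1]}")
    if fixed is None:
        return True          # unknown train: treat as suspect, not safe
    return v < fixed

inventory = [                              # hypothetical fleet
    ("core-rtr-01", "17.9.2", True),       # old release, UI exposed
    ("core-rtr-02", "17.9.4", True),       # at assumed fixed release
    ("edge-rtr-07", "17.3.5", False),      # old, but UI not exposed
]

for name, ver, exposed in inventory:
    if is_vulnerable(ver, exposed):
        print(f"FLAG {name}: IOS XE {ver} with web UI exposed")
```

The point of the sketch is how little it asks for: an accurate inventory, the vendor's fixed-release list, and someone who runs the comparison. Those were the missing inputs at scale.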
Separate from Salt Typhoon but running in parallel is Volt Typhoon, a different Chinese state-sponsored group whose objective is pre-positioning in U.S. critical infrastructure rather than intelligence collection. Where Salt Typhoon harvests information, Volt Typhoon works to establish persistent access to aviation systems, water utilities, energy infrastructure, and transportation networks, the systems that would become targets in a geopolitical conflict that escalated to the point of disruption operations against the American homeland. The DOJ announced a disruption of Volt Typhoon's botnet infrastructure in January 2024; the group reestablished its botnet and continued operations. The disruption was real, the continuity was also real, and the combination suggests what we should expect: not clean resolution, but a contest in which temporary setbacks to the adversary alternate with the persistent availability of negligently maintained targets.
Pre-positioning in critical infrastructure requires the same precondition that Salt Typhoon found in telecoms: defenders who haven't maintained the discipline to close the doors. These aren't independent failures. They're the same failure finding different targets.
What would the defense have actually required? This is the question I keep returning to, because the answer is genuinely not complicated, which is what makes the failure so hard to look at squarely.
Applying patches to routers requires that someone know the patches exist, that the organization have a process for testing and deploying them, and that the process actually run on schedule. That's the core of it: not exotic technology, not unlimited budget, not a different class of security professional than the ones the carriers presumably employ. What it requires is that the people responsible for defense have internalized that maintenance is the defense: keeping the thing running and current is as much their job as responding to incidents. Operators who actually understand what they're protecting, management that treats patch cycles the way it treats financial reporting cycles, a regulatory framework that enforces minimum standards instead of accepting assurances. Seven-year-old unpatched vulnerabilities don't indicate a technology failure. They indicate that at some point, none of those conditions held.
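The "process actually runs on schedule" condition is mechanically checkable. As a sketch, assuming a hypothetical 30-day patch SLA and made-up device names and dates, a fleet audit reduces to comparing each device's last-patched date against the patch's release date:

```python
# Minimal patch-cadence audit sketch. The 30-day SLA, device names, and
# dates are illustrative assumptions, not any carrier's actual policy.

from datetime import date

PATCH_SLA_DAYS = 30  # assumed policy: apply fixes within 30 days of release

def overdue_devices(devices, patch_released: date, today: date):
    """Return (name, days_past_sla) for devices still unpatched past SLA."""
    out = []
    for name, last_patched in devices:
        if last_patched >= patch_released:
            continue  # patched after the fix shipped: carries the fix
        age = (today - patch_released).days
        if age > PATCH_SLA_DAYS:
            out.append((name, age - PATCH_SLA_DAYS))
    return out

fleet = [
    ("edge-rtr-01", date(2016, 3, 1)),   # untouched since deployment
    ("edge-rtr-02", date(2023, 11, 2)),  # updated after the fix shipped
]

report = overdue_devices(fleet,
                         patch_released=date(2023, 10, 16),
                         today=date(2024, 10, 16))
for name, days in report:
    print(f"{name}: {days} days past SLA")
```

Nothing here is sophisticated, which is the argument of the paragraph above: the audit is a loop and a subtraction. What it depends on is that the inventory is accurate and that someone is accountable for an empty report.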
What does the organization that fails this way actually look like from the inside? The compliance documentation exists. The security team exists. The audit passes. The budget line reads "cybersecurity" and the org chart shows a CISO and there are probably policies, maybe even good ones, somewhere in a shared drive. And yet when the adversary arrives, the routers are seven years unpatched and the passwords are weak and the CALEA systems that were supposed to be secured are accessible. I've been calling this "performed security" for years, though I'm not sure the term is precise enough: it implies more conscious theater than usually exists. The more accurate picture is drift: an organization that once had the discipline and then, through accumulated deprioritizations, lost it without ever noticing the loss. A facility with locked front doors and broken cameras, a guard who checks credentials at the entrance but hasn't walked the perimeter in months. Everything visible is in order; the actual exposure is invisible until someone finds it.
The telecommunications sector, across its core network infrastructure, was performing security for years. Salt Typhoon walked through what turned out to be a performance. Whether this even constitutes a "defeat" is the wrong question; you don't defeat a defense that was never really there.
The distinction matters because the lessons you draw from "sophisticated adversary defeated sophisticated defender" are completely different from the lessons you draw from "adversary found negligence and exploited it." The first lesson leads toward an arms race in capability, which favors well-resourced nation-state adversaries by definition. The second lesson leads toward a much more tractable problem: the organizational discipline to maintain basic hygiene, consistently, across the systems you've been trusted to protect.
That's not a technology problem. It's a formation problem, meaning it's a problem about whether the people and organizations responsible for defense have internalized what defense actually requires, well enough that they maintain it when no one is looking, when the quarterly pressure is up, and when the patching feels less urgent than everything else competing for attention.
The formation is what was absent. The adversary found the absence, and occupied it.
Senator Cantwell's letter demanding that AT&T and Verizon CEOs appear before the Commerce Committee and account for their non-cooperation is, at minimum, the right instinct applied to a situation that has already gone wrong in ways that can't be walked back. Communications infrastructure that Chinese intelligence may still occupy is not a theoretical future risk. It is the current condition of networks used by American citizens, government officials, and the law enforcement and intelligence agencies that depend on those networks to conduct authorized investigations.
What the hearing would need to produce, if it happens, is not just accountability for what went wrong but clarity about what genuine remediation requires: not assertions that the networks are now secure, but documented evidence, independently verified, that the intrusions have actually been removed. The carriers' refusal to provide that documentation is itself information. Companies confident in their remediation provide evidence. Companies that have not fully remediated argue about the scope and complexity of the task.
There are things I can't know from outside: whether the carriers have made progress that can't be disclosed for intelligence reasons, whether the silence reflects ongoing exposure or operational security, whether the publicly available picture is distorted in ways I'm not positioned to see. Maybe. But a year and a half after public disclosure, with no verifiable evidence of remediation and a regulator that has chosen not to require any, the available information points one direction.
The harder question, which a hearing can gesture at but not resolve, is what mandatory minimum security standards would look like for the communications sector, who would enforce them, and how to design enforcement mechanisms that don't get lobbied into irrelevance by the same industry they're meant to govern. That question remains open. The FCC's reversal of its own post-breach requirements suggests that the political economy of telecommunications regulation is, at the moment, poorly aligned with the security requirements of the networks those regulations are meant to address.
Thirty years in this field teaches you some things that are hard to unlearn. One of them is that the organizations most resistant to security requirements are reliably the organizations that most need them, because the resistance and the need share the same root: a management culture that treats security as cost without visible return, right up until the moment it becomes an undeniable liability. At that point, the argument shifts from "we don't need to do this" to "we are already doing this" without much acknowledgment of the gap between the two.
The gap, in the Salt Typhoon case, was seven years wide. Chinese intelligence is still inside it.
The Defense That Wasn't is the first essay in The Compound Vulnerability series.