What the Defense Actually Requires
Performed security and genuine security produce the same documentation. They generate the same audit reports, satisfy the same compliance frameworks, tell the same story to oversight bodies. The difference only becomes visible when the adversary shows up. The adversary has shown up.
The three preceding essays in this series have been, in structure if not in intent, diagnostic. The first asked how Chinese state-sponsored hackers managed to occupy the most critical civilian communications infrastructure in the United States for years without being stopped. The second asked how the government managed to create, through its own actions, exactly the attack surface that foreign intelligence services had been trying to manufacture for decades through external penetration. The third tried to name the compound condition those two failures produce together, and what it means that both are simultaneously true at a moment when the adversary's strategic ambition has shifted from collection toward pre-positioned disruption.
What I haven't done is say what the defense that would actually work looks like. I've avoided it, partly because prescription is easier than diagnosis when you're wrong, and partly because the constructive argument I want to make is harder to state without sounding like a slogan. But the three essays have been building toward it, and avoiding it any longer would be a kind of intellectual dishonesty.
The argument I want to make is about formation. Not technology, not regulation, not budget (though all three matter and none is sufficient without the fourth thing). The failures this series has traced are failures of judgment, discipline, and institutional character: the qualities I have been calling, when I'm being precise, formation. The defense that actually holds is staffed and led by people who have internalized why the discipline matters. The compliance checklist is a downstream artifact of that internalization. Without it, the checklist is theater.
This is not a comfortable argument to make right now, in February 2026, given what's happened to the institutions that would produce that formation. But it's the argument the evidence points toward, and I want to try to make it honestly.
Let me start with a distinction I've been circling in the first three essays without making explicit.
The distinction is this: security that is performed and security that is genuine are indistinguishable on paper. Both generate audit reports, both satisfy compliance frameworks, both tell a coherent story to the same oversight bodies. From the outside, they look identical. The difference only becomes visible when the adversary shows up.
The telecom carriers whose routers Salt Typhoon occupied for years weren't failing their compliance audits. The security teams existed on org charts. The policies were written. The frameworks were followed. What was absent was the internalized judgment that would have made someone, somewhere in those organizations, treat a seven-year-old unpatched router as an unacceptable condition regardless of whether an audit was coming. The discipline wasn't maintained because no one had formed the habit of maintaining it when it didn't immediately matter. And in security, the discipline that's only maintained when it visibly matters is not discipline at all. It's performance.
The DOGE-related failures in federal systems have a different surface structure but the same root. The personnel who bypassed standard security protocols, disabled logging systems, and accessed sensitive databases without oversight weren't ignoring discipline they understood to be necessary. They were operating from a different formation: the Silicon Valley formation that speed and disruption are intrinsically good, that process is bureaucratic friction rather than earned wisdom, that the person who moves fastest is by definition the most competent. That formation is coherent and internally consistent. It has produced real things of value in contexts where the rules of engagement don't include state-sponsored adversaries actively watching for the moment when someone turns off the audit log.
The mismatch between that formation and the environment it encountered is what makes the resulting security failures so severe, and also so difficult to address through the mechanisms that would normally respond to them. Convictions are formed over years, through experience and mentorship and consequence, not promulgated through executive order. What policy can produce is compliance documentation. What it cannot produce is the belief, held at 3 a.m. on a quiet Friday, that the process being shortcut exists for a reason that still applies when no one is watching.
When I was planning this series, I expected the connection between the cultural argument in the first four essays and the cybersecurity argument in these to feel somewhat forced. The two domains share vocabulary (formation, discernment, the difference between performed and genuine), but vocabulary can be borrowed without the underlying structure actually aligning. What I found, in the writing, is that the parallel is tighter than I anticipated. Uncomfortably tight, in ways I want to be precise about.
In that series, I argued that a generation raised on synthetic media (algorithmically curated, AI-generated, optimized for engagement rather than truth) had developed a hunger for the real that the synthetic environment couldn't satisfy. The hunger is the beginning of formation, or maybe it's just the precondition, the thing that has to be present before formation becomes possible. Either way, it precedes the discipline. It's the recognition, often inarticulate, that the representation isn't the thing, that what's being offered is constructed for effect rather than captured from reality.
The cultural argument was about discernment: the capacity to evaluate the difference between a treatise and a pamphlet, between genuine engagement with a hard idea and a simulation of it designed to produce a feeling of engagement without the cost. I called the person who exploits the absence of that discernment the pamphleteer. The pamphleteer doesn't need to be dishonest. The pamphleteer simply needs the audience to lack the formation to tell the difference.
The cybersecurity argument is structurally identical. The carrier that performs security rather than practicing it is the cybersecurity pamphleteer, producing documentation that simulates genuine defense for an audience (regulators, oversight bodies, shareholders) that lacks the formation to evaluate whether the real thing is present. DOGE's "move fast and break things" approach to federal systems is the cybersecurity pamphleteer operating from the other direction: producing disruption that simulates efficiency for an audience that lacks the formation to evaluate what the disruption is actually costing.
In both cases, the structural advantage belongs to whoever benefits from the absence of discernment. The adversary's job is easier when the defenders are performing rather than defending. The pamphleteer thrives when the audience can't tell the difference. The compound vulnerability compounds precisely because the mechanisms of discernment (the audit logs, the monitoring systems, the incident review boards, the whistleblower protections, the institutional knowledge that takes years to build and days to destroy) are the first things eliminated when the ideology of velocity encounters the friction of accountability.
Somewhere in the middle of writing these essays I found myself sitting with a question I couldn't resolve analytically: what does formation actually look like, in practice, in the people who do this work well?
I've been doing this for thirty years. I've worked in environments where the security discipline was genuine and environments where it was performed, and the difference is visible within days of arrival even when the documentation is identical. The genuine version has a quality of attention that the performed version lacks. People maintain the logs because they understand what a gap in the logs would mean to an investigation, not because the policy says to maintain them. They patch the router because they've internalized what an unpatched router represents in the adversary's targeting calculus, not because the compliance cycle is coming up. When something anomalous appears (a spike in outbound traffic at 3 a.m., a login attempt from an unexpected geography, a container created on the network that no one ordered), they notice. Not because an alarm fired, but because they're paying the kind of attention that lets you notice when something is wrong before the alarm knows to fire.
That quality of attention is what Rob Joyce was describing when he said that eliminating CISA's probationary employees would destroy the pipeline of trained talent responsible for detecting and eradicating threats. He didn't mean the compliance capacity or the headcount on a chart. He meant the formed judgment that comes from years of working alongside experienced practitioners who teach you, mostly by example, what genuine attention to the adversary actually requires, and how to maintain it when nothing is visibly wrong.
It is also what Daniel Berulis, a security architect at the NLRB, demonstrated: he noticed the anomalous activity, understood what the combination of disabled logging, disabled monitoring, and unusual outbound traffic actually indicated, tried to do what the system was designed to enable him to do, and was told to stop before he could finish. His formation held. The institution's response to his formation is the part of the story I find hardest to set aside.
I want to say something careful about that, because there's a version of the formation argument that becomes, if you follow it far enough, a counsel of despair. If the defense requires formed people, and formed people require institutions to develop, and the institutions are being actively dismantled, then the argument circles back on itself: formation solves the problem that only formation can create the conditions to solve. I don't have a clean answer to that circularity. What I have is the observation that the knowledge doesn't simply disappear when the institutional home does: it goes somewhere, it persists in individuals, it can be transmitted informally in ways that are harder to disrupt than the formal pipelines. Whether that's enough, at this scale, at this speed, I genuinely don't know.
The adversary has formation of its own. Salt Typhoon's seven-year persistence inside U.S. telecom networks required sustained discipline: the patience to stay quiet, the judgment to know what to collect and what to leave untouched, the tradecraft to avoid triggering the monitoring systems that weren't switched off. The DIGOS breach in Italy, which emerged this week, bears the same signature: a "surgical" operation oriented not toward disruption but toward the selective extraction of precisely the information that would make future operations more effective. The objective was to map the people who do the watching, so that the watchers can be watched.
Formation versus formation. The adversary's is intact and supported by a state that takes the long view. The defense's is being actively dismantled, in the United States, at precisely the moment when the adversary's strategic ambition has shifted from passive collection toward active pre-positioning for disruption.
The constructive version of this argument has a limit I should name, because the essay reads more honestly with it visible than without it.
I can describe what genuine formation looks like in individuals. I've watched it develop and watched it erode, in government contexts and private sector ones, over thirty years. I can say with confidence that the discipline is teachable, that the training pipelines that produce it are real and have been demonstrated to work, and that what's been destroyed at CISA and across the federal cybersecurity workforce in the past year is precisely the infrastructure that develops and sustains that discipline.
What I cannot provide is a mechanism for rebuilding it quickly. Formation is slow by design, and the slowness is not a bug but the thing itself. The security professional who patches a router because they understand the adversary's targeting calculus didn't arrive at that understanding through a certification program, however good the program. They arrived at it through years of working alongside practitioners who understood it, in environments where the discipline was maintained under pressure, where the anomalous was noticed and investigated rather than tolerated, where the lesson from the breach was studied and absorbed rather than documented and filed.
Over half a million cybersecurity positions currently sit unfilled in the United States. The workforce development infrastructure that existed to address that gap (the CISA advisory programs, the developmental pipelines Rob Joyce described, the institutional mentorship that turns technically capable new professionals into formed defenders) has been significantly degraded, which is a bureaucratic phrase for something more consequential: the gap between what the threat requires and what the defense can currently provide is not closing. It is widening at precisely the moment the threat is accelerating.
Which is also, I think, what the adversary is counting on. The hundred-year strategy Terry Dunlap described isn't a metaphor. It describes a planning horizon that the defense has never operated on, and is currently operating further from than at any recent point.
Eight essays is a long time to argue one thing. I want to be sure I've said what I think rather than just what the argument required.
The reality hunger argument in the predecessor series ended with a conviction that the hunger for the real is older and stronger than any technology designed to simulate it. The people who develop the discernment to tell the treatise from the pamphlet don't always arrive through institutional channels. Sometimes the formation happens outside the systems that were supposed to produce it, through a stubbornness of attention that survives the synthetic environment without being explained by it.
The cybersecurity equivalent of that conviction is narrower and harder to state without romanticism. The people who do this work well, the ones who maintain the discipline when no one is watching, who notice the anomaly before the alarm fires, who understand what the adversary is doing because they have spent years developing a mental model of the adversary that is accurate rather than convenient, are not produced by policy. They emerge through practice sustained over years, in environments where the discipline was modeled by someone who already had it, and where the consequences of its absence were visible and real and studied rather than filed.
Those people still exist. They are, right now, the former CISA analysts who left the federal government this year carrying institutional knowledge that took years to build. They are the Daniel Berulises who noticed what was wrong and tried to do something about it. They are the Jen Easterlys who left and built matching systems to connect fired CISA alumni with employers, because the knowledge doesn't disappear when the institutional home does. They are, in their various organizations and contexts and countries, the practitioners who understand what's happening and are doing what they can with what they have.
What they cannot do, individually or collectively, is substitute for the institutional structures that sustain formation over time. The discipline that survives the current moment is what the individuals carry. What gets rebuilt, if it gets rebuilt, will require the structures: the training pipelines, the mentorship environments, the oversight mechanisms, the regulatory frameworks with teeth, the incident review processes that convert breaches into lessons rather than filing them as liabilities.
Those things require time, political will, and a public that can distinguish performed security from genuine defense. A public that cannot evaluate whether its government is creating or destroying security, that lacks the formation to ask the right questions of the institutions responsible for its protection, is the cybersecurity equivalent of the audience that cannot tell the treatise from the pamphlet. The pamphleteer thrives. The adversary operates in the resulting dark.
The formation that the defense actually requires begins there, in the public's ability to care about the right thing. I keep coming back to the fact that this is harder to build than any technical system, and also harder to destroy. The surveillance architecture can be dismantled. The audit logs can be switched off. The workforce can be fired. What cannot be eliminated, at least not quickly, is the capacity of people who have been formed to ask whether what they are being shown is real: whether the sophisticated threat detection is actually running, whether the remediation was actually completed, whether the cooperation being offered with one hand is operating alongside the intelligence being collected with the other. That question, applied with genuine curiosity and genuine consequence attached to the answer, is the foundation.
Is this real, or is this performed?
That question, applied to cybersecurity and to culture, is the same question. The predecessor series closed on the conviction that the hunger for the real is older and stronger than any technology designed to simulate it, and I want to believe the same thing is true here. The evidence mostly supports it. The public that eventually produces the political will to rebuild the institutional infrastructure the defense requires will not arrive at that will through policy documents or threat briefings. It will arrive through the accumulated recognition that something essential is absent: that the performance of security is not the same as its presence, that the cooperation offered with one hand and the intelligence collected with the other are not the same thing, that the remediation announced is not the same as the remediation completed.
That recognition, carried and practiced and asked out loud by enough people, is the beginning of the formation the defense requires. It is slow. The adversary's patience is longer. Both of those things are true, and neither of them changes what the work is.
In February 2026, with the compound vulnerability fully visible and the adversary operating in the resulting dark, the formation that survives is what the individuals carry. What gets rebuilt will require the structures. The structures require time and political will and a public that has learned to ask the right question.
The hunger to ask it is not nothing.
What the Defense Actually Requires is the fourth and closing essay in The Compound Vulnerability series. The preceding series (The Reality Hunger, The Presence Test, The New Formation, and What the Pamphleteer Counts On) forms the first half of a connected eight-essay argument about formation, discernment, and resilience.