What the Pamphleteer Counts On, and Why We Need to Be Reborn

The renaissance doesn't just need artisans. It needs a public that can tell the artisan from the pamphleteer. And the pamphleteer is counting on that public never arriving.


In February 2024, a finance worker at the engineering firm Arup joined a video call with the company's chief financial officer and several senior colleagues. The CFO explained an urgent, confidential transaction. The finance worker listened, asked questions, received answers that made sense, and authorized fifteen transfers totaling twenty-five million dollars to five bank accounts in Hong Kong.

Every person on that call was a deepfake. The CFO, the colleagues, the voices, the faces, the mannerisms: all of it generated by AI trained on publicly available video of the real executives. The finance worker had done exactly what security training tells you to do. He'd been suspicious of the initial email. He'd requested a video call to verify. He'd looked at the people on the screen and used his judgment. His judgment told him this was real.

It wasn't. And the money was gone before anyone noticed.

I've spent three decades in cybersecurity, and I want to be precise about what this case represents. Set aside the gullible-employee narrative, and set aside the security-controls conversation, though controls played their part. What happened at Arup is a story about discernment failing at the point where it was most needed: a human being looked at a sophisticated approximation of reality and could not distinguish it from the thing itself.

I've worked on this problem my entire professional life, though we've called it different things at different times: social engineering, adversarial deception, information operations. The technical vocabulary changes. The underlying dynamic stays remarkably stable. An attacker succeeds when the target cannot tell the real from the performed. Every phishing email, every pretexting call, every fabricated identity operates on the same principle: if the approximation is good enough, discernment collapses.

What has changed, and what brings me to write this essay, is scale. The tools to produce convincing approximations of reality are now available to anyone with a laptop and a few dollars. Generating a deepfake voice clone costs less than two dollars. Documented losses from deepfake-enabled fraud exceeded two hundred million dollars in the first quarter of 2025 alone. Three seconds of audio is enough to produce a voice match at eighty-five percent accuracy. The barrier between amateur and professional deception has effectively dissolved.

This is no longer just a cybersecurity problem. That's what I want to say clearly, because it's the connection that makes the rest of this essay necessary.


The collapse into public life

What the cybersecurity world has been dealing with for decades, the broader public encountered in 2024. And the encounter was not what anyone predicted.

The feared deepfake apocalypse in the U.S. elections didn't arrive as a single catastrophic event, the bombshell fake video the night before the vote that was supposed to set the world on fire. What arrived instead was something more diffuse and, I think, more damaging: a steady saturation of the information environment with content whose authenticity could not be determined by inspection. AI-generated images of political candidates in fabricated situations, synthetic voice clones used in robocalls telling voters not to vote, fake celebrity endorsements, manufactured news websites populated with AI-written articles designed to lend credibility to deceptive stories nested among them. All of it circulating simultaneously, none of it individually decisive, the cumulative effect greater than any single piece.

The individual incidents were, by most assessments, less impactful than feared. Researchers found that traditional "cheapfakes" (edited and doctored content made without AI) were used seven times more often than AI-generated material. And the large-scale AI-powered campaigns that intelligence agencies warned about turned out to be extensions of the influence operations they'd been tracking for years, not a revolution in them.

But this framing misses what actually happened. The damage wasn't in any single deepfake. It was cumulative: a general degradation of the public's ability to trust what they see, hear, and read. When any piece of media might be synthetic, the question "is this real?" becomes unanswerable through inspection alone. And when that question becomes unanswerable, the fake gains credibility because it can't be definitively identified as fake, while the real loses credibility because it can't be definitively identified as real. Both directions at once.

Researchers call this the liar's dividend: the point at which the existence of convincing fakes allows anyone (politicians, corporations, anyone with something to hide) to dismiss authentic evidence as fabricated. The liar's dividend doesn't require producing a single deepfake. It only requires that deepfakes exist and that the public knows they exist. The mere possibility of fabrication becomes a tool for evading accountability.

I recognize this dynamic because I've watched it operate inside organizations for thirty years. When the adversary can produce convincing forgeries, the first thing to go isn't truth itself. It's the confidence that truth is findable. And once that confidence erodes, discernment becomes impossible, not because people lack intelligence but because they lack a stable ground from which to judge.

The political situation in the United States has accelerated this erosion in ways that connect directly to what I wrote in The New Formation. The attacks on universities and educational institutions aren't separate from the information crisis; they are part of the same structural problem. The institutions that formed critical thinkers, that taught people to evaluate evidence, weigh sources, and hold complexity without collapsing into false certainty, those institutions are being hollowed out at precisely the moment when the tools of deception are becoming most powerful. Anyone in my profession would recognize the pattern: a compound vulnerability. Weaken the defense while strengthening the attack, and the system fails not at one point but everywhere at once.


What the pamphleteer counts on

In The New Formation, I drew a distinction between two kinds of AI-enabled operators: the artisan who brings genuine formation to powerful tools, and the pamphleteer who uses the same tools to produce work that performs knowledge without possessing it. I want to push that distinction further, because the pamphleteer's advantage depends on something specific, and naming it is the point of this essay.

The pamphleteer counts on an audience that cannot tell the difference.

That's the whole model. Whether we're talking about deepfake fraud, political disinformation, AI-generated content that performs expertise, or any other form of sophisticated approximation, the mechanism rests on a single condition: that the person receiving the output lacks the discernment to evaluate it. The finance worker at Arup was a competent professional who exercised reasonable judgment. The voters who encountered AI-generated political content were ordinary citizens doing their best to stay informed. The readers who consume AI-written analysis that sounds authoritative are, most of them, intelligent people acting in good faith.

None of that protects them. Good faith and reasonable intelligence are not enough when the approximation is sophisticated and the foundation for evaluating it is absent.

In the earlier essay, I argued that the Renaissance required people formed deeply enough to do something worth doing with a new technology. The printing press produced pamphlets and treatises in equal measure, and the difference between them was the formation of the person using the press. But I left something out of that argument, something that nags at me and that I think completes the picture.

The printing press also required readers formed deeply enough to tell the pamphlet from the treatise.

The artisan can produce extraordinary work: genuine thought, authentic voice, the mark of a consciousness shaped by contact with the world rather than just with data. But if the audience for that work has been systematically deprived of the formation needed to recognize it, the artisan and the pamphleteer become indistinguishable. And when they become indistinguishable, the pamphleteer wins, because the pamphleteer is faster, cheaper, and produces at a volume the artisan cannot match.

The pamphleteer counts on the absence of discernment, not on the quality of the forgery. The deepfake on the Arup video call didn't need to be perfect; it needed to be good enough for someone who lacked the specific tools to detect it. The AI-generated political content didn't need to fool everyone; it needed to reach people whose formation hadn't equipped them to evaluate it. And the AI-written analysis that performs expertise doesn't need to withstand scrutiny, only to land in an environment where scrutiny has become rare.


The obligation formation creates

I've written three essays now about what I've called the reality hunger: the growing human need for what is authentic, unmediated, genuinely formed. I've argued that the hunger is real, that meeting it requires formation, and that a new renaissance is possible if enough people develop the discipline to use powerful tools without being consumed by them.

All of that still holds. But it's incomplete without this: formation creates an obligation that extends beyond the person who possesses it.

The formed person who produces authentic work in an environment of synthetic noise is doing something valuable. But if they produce that work only for an audience of other formed people, the renaissance stays confined to a circle of practitioners who recognize each other's quality while the broader culture drowns in pamphlets. That's a monastery, not a renaissance.

The Renaissance itself succeeded because it didn't stay in the workshops and private libraries. The humanists didn't just produce formed work; they fought for the conditions under which formed work could be recognized. They established schools and academies. They argued, publicly and sometimes dangerously, for the value of classical knowledge against institutions that wanted it suppressed. They took the risk of insisting that discernment mattered, not just for themselves but for the culture at large.

The contemporary version of this obligation is uncomfortable, because it means the formed person can't simply retreat into the quality of their own work. The responsibility extends to the audience: cultivating discernment rather than just exercising it, making the distinction between formed work and generated content visible and legible, teaching (in whatever form teaching takes) the skills of evaluation that the institutional infrastructure is no longer reliably providing.

I feel this in my own practice. The humanization framework I described in The New Formation, the discipline I built to maintain authentic voice in machine-assisted production, has value beyond my own writing. Not because the specific rules should be shared (they're mine, and I'm keeping them) but because the principle of the framework matters: that maintaining authenticity in an age of synthetic production requires structured discipline, and that the discipline itself is a form of knowledge worth developing. When I write about this publicly, I'm not just describing my practice. I'm making an argument that discernment is possible, that the difference between formed work and generated content is real and detectable, and that the skills to detect it can be cultivated.

That argument, more than any individual essay, is what I think the moment requires.


The darker thread

I want to be honest about something the optimistic version of this argument elides.

The economic incentives are running in the wrong direction. When an audience can't distinguish formed work from generated content, the generated content wins on cost and speed. When voters can't distinguish genuine policy analysis from persuasive pattern-matching, the pattern-matching wins on volume. When a finance professional can't distinguish a real CFO from a deepfake, the deepfake wins by default. Every market rewards efficiency, and generated content is more efficient than formed work. Without discernment, the market selects for the pamphleteer over the artisan, every time, because the pamphleteer's output is cheaper, faster, and functionally equivalent to an audience that can't tell the difference.

The degradation of discernment, then, is not a side effect of the current technological and political moment. For some actors, it's the goal. The political forces attacking education benefit from a public that can't evaluate claims. The content mills flooding every platform with AI-generated material benefit from an audience that can't tell what's genuine. The adversarial operations I've spent my career defending against benefit from targets whose capacity to judge has been systematically weakened.

I don't say this to induce despair. I say it because the stakes need to be visible. The renaissance I described in The New Formation is possible, but it's not inevitable, and the forces working against it are not passive. They are active, resourced, and structurally advantaged by every decline in the public's capacity to discern.


Why we need to be reborn

And yet.

Every renaissance in history has emerged against resistance. The original Renaissance contended with institutional suppression, political violence, plague, and an entrenched authority structure that had every incentive to maintain ignorance. The humanists persisted not because the odds were favorable but because the alternative was intolerable: a culture permanently unable to distinguish knowledge from noise.

That's where we are. The alternative is intolerable. A world where deepfakes are indistinguishable from reality and nobody has the formation to care, where generated content displaces formed work and the audience shrugs because they never learned to tell the difference, where the liar's dividend compounds year after year until accountability becomes structurally impossible.

Intolerable. And because it's intolerable, people will resist it. They already are.

The hunger I identified in the first essay of this series, the reality hunger, the fatigue with synthetic everything, the craving for what is authentic: that hunger is the seed of the resistance. People feel the absence of the real even when they can't name what's missing. They sense when something they're reading was assembled rather than thought. They know, in a way that precedes analysis, when an encounter is genuine and when it's performed.

That instinct is where discernment begins. Pre-rational, pre-institutional, and impossible to fully suppress, because it belongs to what is most fundamentally human about us: the capacity to recognize another consciousness at work.

But instinct alone isn't enough. It has to be developed into capability. The person who senses that something is off needs the formation to identify what is off and why. The instinct has to become discernment, and discernment has to become practice, and practice has to become culture.

That's the rebirth. Not a return to the institutions we had, though some of them are worth defending, and not a rejection of the technology, which is as neutral as the printing press was. A rebirth of the commitment to human formation as the foundation of everything else: the production of knowledge, its evaluation, its transmission, and the shared capacity to tell the real from the performed.

The pamphleteer counts on that commitment dying. The artisan's obligation is to make sure it doesn't.

I've spent thirty years protecting systems from adversaries who exploit the gap between what's real and what's convincing. I've written four essays now about what happens when that gap moves from networks and servers into culture and consciousness. And what I keep coming back to is this: the defense has always been the same. Formed people, with the judgment to see clearly and the discipline to act on what they see.

The tools are here. The hunger is here. So is the threat. The question is whether enough of us will choose the discipline of formation, not because an institution requires it, but because the alternative is a world where no one can tell the difference.

I think enough of us will. Not because I'm optimistic. Because the hunger for the real is older and stronger than any technology designed to simulate it.

That's why we need to be reborn. And that's why I think we will be.


This is the final essay in a sequence that began with The Reality Hunger and continued through The Presence Test and The New Formation. Together, they trace what becomes scarce in an age of synthetic everything, what it demands of us, and what happens if we refuse the demand.