<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Marco Brondani]]></title><description><![CDATA[Cybersecurity governance, AI risk, platform strategy. For boards, executives, and founders who can't afford to get it wrong. Thirty years inside complex systems. Now helping organizations govern them.]]></description><link>https://www.marcobrondani.com</link><image><url>https://substackcdn.com/image/fetch/$s_!0sku!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1331497-7d57-4f42-81b8-d3c1358c1ba3_1024x1024.png</url><title>Marco Brondani</title><link>https://www.marcobrondani.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 27 Apr 2026 12:09:42 GMT</lastBuildDate><atom:link href="https://www.marcobrondani.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Marco Brondani]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[marcobrondani@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[marcobrondani@substack.com]]></itunes:email><itunes:name><![CDATA[Marco Brondani]]></itunes:name></itunes:owner><itunes:author><![CDATA[Marco Brondani]]></itunes:author><googleplay:owner><![CDATA[marcobrondani@substack.com]]></googleplay:owner><googleplay:email><![CDATA[marcobrondani@substack.com]]></googleplay:email><googleplay:author><![CDATA[Marco Brondani]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[After the Valley]]></title><description><![CDATA[Masahiro Mori never asked what lay on the other side. We are crossing the valley not by addressing what the alarm detects but by suppressing it. 
The question is no longer whether we are crossing but what we are crossing into.]]></description><link>https://www.marcobrondani.com/p/after-the-valley</link><guid isPermaLink="false">https://www.marcobrondani.com/p/after-the-valley</guid><pubDate>Thu, 26 Mar 2026 06:30:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e2de5405-efe7-4ac9-9f4f-d2299342dfa4_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9XPm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9XPm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!9XPm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!9XPm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!9XPm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9XPm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9XPm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!9XPm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!9XPm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!9XPm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F80d2e254-5393-4e5f-a3c6-f31626e96ce3_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>Essay Six of <em>The Valley of False Signals </em>series</h3><div><hr></div><p>Masahiro Mori never asked what lay on the other side.</p><p>His 1970 essay mapped the uncanny valley as a region to be understood and, for practical purposes, avoided. The design recommendation was clear: keep your robots clearly robotic, or make them indistinguishable from human, but do not leave them in the liminal region where the alarm fires. The valley was a problem of proximity, of getting too close to human without closing the remaining gap, and the solution was either distance or completion.</p><p>What Mori did not address, because in 1970 there was no reason to, was the condition that obtains when you have actually crossed. When the valley is behind you. When the simulation has achieved the fidelity that was always theoretical and is now, in specific domains and with increasing generality, practical. What the world looks like when the alarm no longer has a reliable object to fire at, not because the alarm is broken, but because the incongruence it detects has been engineered away.</p><p>This series has been, in one sense, a map of the crossing. The alarm that fires and is suppressed by social convention. The alarm that fires and is suppressed by organizational culture. The alarm that fires and is suppressed by the apparatus of institutional assurance. The alarm that is approaching a condition where it may stop firing altogether because the simulations have become too precise. The alarm carried by people who refuse to suppress it, who pay professional costs for that refusal, and whose protection is a structural condition for any institution maintaining the capacity to perceive its own reality.</p><p>We are, to be precise, not yet fully past the valley. The crossing is in progress, in different domains, at different rates, with the synthetic signal capabilities advancing faster than the institutional adaptation to absorb them. But the direction is clear, and it has been clear long enough that the more urgent question is no longer whether we are crossing but what we are crossing into.</p><p>What does trust look like after the valley?</p><div><hr></div><h2>What Trust Was Built On</h2><p>Trust, in the sense relevant to everything this series has examined, is an inference rather than a feeling: a conclusion drawn from evidence, about whether an entity is what it presents itself to be, whether the signals it is producing are causally connected to the reality they purport to represent.</p><p>For most of human history, that inference rested on direct observation: behavior watched over time, across varied circumstances, until coherence could be assessed. This model was slow, labor-intensive, and calibrated for small social environments. It began to break down as soon as human cooperation scaled beyond the face-to-face, and every expansion in scale since has required the development of new trust infrastructure to proxy for the direct observation that was no longer feasible. 
Credentials, contracts, certifications, reputational systems, legal liability, regulatory oversight: all mechanisms designed to make trustworthiness legible at scale.</p><p>What this series has mapped is the systematic failure of that trust infrastructure, not in all respects and not all at once, but in ways that are structural and accelerating. The failure has three distinct sources that the preceding essays examined separately and that now need to be understood together.</p><p>The collapse of signal fidelity: the practical impossibility of forging certain signals (voice, face, behavioral pattern, writing style) has ended, and the trust infrastructure built on that impossibility is being rendered obsolete faster than it can be replaced.</p><p>The optimization of signal production without substance: the learning, at both the individual and institutional level, that producing the right signals is sufficient to satisfy verification mechanisms, without the production requiring the underlying reality those signals are supposed to represent.</p><p>The systematic suppression of the most reliable detection instrument available: the coherence check, the prediction error mechanism, the alarm, which fires when it registers the split between signal and source, and which is suppressed, at every scale of social organization, by the norms of cooperative life that mistake the suppression of alarm for the exercise of good judgment.</p><p>These three failures form a system. Signal synthesis undermines the evidentiary value of signals. Signal production optimization is accelerated by the knowledge that signals rather than substance are what verification measures. And both are protected from detection by the suppression mechanism, which prevents the alarm that might otherwise surface the split from reaching action.</p><p>The trust infrastructure that was built for a world in which signals were hard to fake and institutions were assumed to produce the substance they claimed is not adequate for a world in which neither assumption holds. The question of what comes after is the question this essay is trying to answer.</p><div><hr></div><h2>The Three Errors to Avoid</h2><p>Before naming what adequate trust infrastructure might look like, it is worth being precise about three errors that responses to this situation most commonly make. Each has real advocates, and each is wrong in a way that the analysis of this series makes visible.</p><p>Technical solutionism is the most common: the search for a new technical signal that cannot be faked. A biometric so complex, a cryptographic proof so robust, a behavioral marker so deeply embedded in neurological reality that it cannot be synthesized. These investments raise the cost of forgery, which has real value: it narrows the adversary population, increases attack resource requirements, and buys time. But as a foundation for trust infrastructure, technical solutionism fails because the adversarial parity dynamic described in Essay Three is real. Detection and generation co-evolve. No technical signal achieves permanent unforgeability in an environment where adversaries have access to the same foundational techniques as defenders. The deeper error is the implicit assumption that trust is a property of signals. Trust is a property of systems, of the institutional architectures, incentive structures, verification processes, and accountability mechanisms that determine whether the entities operating within them are what they claim to be. 
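</p><p>The co-evolution half of that argument is easy to watch in miniature. The sketch below is a cartoon, not a model of any real detector or generator; the score distributions, the defender's threshold rule, and the attacker's step size are all invented for illustration:</p><pre><code># Toy model of the detection/generation arms race.
# Distributions, step sizes, and counts are invented for illustration.
import random

random.seed(7)
REAL_MEAN, SIGMA = 1.0, 0.5

def sample(mean, n=20_000):
    return [random.gauss(mean, SIGMA) for _ in range(n)]

fake_mean = -1.0
for rnd in range(6):
    real, fake = sample(REAL_MEAN), sample(fake_mean)
    # Defender cuts midway between the observed means: the best a
    # fixed score threshold can do against the current generator.
    cut = (sum(real) / len(real) + sum(fake) / len(fake)) / 2
    caught = sum(score &lt; cut for score in fake) / len(fake)
    burned = sum(score &lt; cut for score in real) / len(real)
    print(f"round {rnd}: catches {caught:.0%} of fakes, "
          f"flags {burned:.0%} of genuine signals")
    # Attacker closes half the remaining gap to the real distribution.
    fake_mean += (REAL_MEAN - fake_mean) / 2
</code></pre><p>Detection starts near-perfect and decays toward a coin flip, while the false-alarm rate on genuine signals climbs to meet it. Raising the bar buys rounds, not permanence. 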
Rebuilding trust infrastructure on a new signal without rebuilding the system is building on a foundation that will, again, be undermined.</p><p>Cynical withdrawal is the second error: having recognized the collapse of signal fidelity and the systematic production of accountability theater, it concludes that trust is simply no longer possible. Every institution is performing. Every signal is suspect. This response has a kind of intellectual tidiness; it is consistent with the evidence and requires no difficult work. It is also indistinguishable from surrender. Trust, even imperfect and provisional, is a precondition for collective action. The cynical withdrawal does not protect against the failures this series has documented; it merely removes the possibility of institutional development that might address them. It also makes a subtle epistemic error: it treats the collapse of particular trust mechanisms as evidence that trust itself is impossible, rather than as evidence that particular mechanisms were built on inadequate foundations.</p><p>Nostalgic restoration is the third, perhaps most common in policy circles: the attempt to restore the conditions under which the previous trust infrastructure worked. To regulate synthetic media out of existence, to mandate signal authenticity through legal requirements, to impose on the current environment the assumptions under which the old mechanisms were adequate. The conditions under which signals were practically unforgeable were not policy choices. They were technological constraints that have been removed by capabilities that are not reversible. Deepfake generation cannot be uninvented. The regulatory impulse to require watermarking, provenance tracking, and synthetic media disclosure raises the floor, but it does not restore the underlying condition. Nostalgic restoration is particularly dangerous in the governance domain, because it produces exactly the institutional uncanny valley that Essay Four examined: frameworks that signal the restoration of trust infrastructure without actually rebuilding it.</p><div><hr></div><h2>What Structural Trust Requires</h2><p>Trust infrastructure adequate for the post-valley condition is built on the assumption that signals are not reliable, compensating for that unreliability through structural design rather than signal improvement.</p><p>This requires a shift in the foundational question. The question that previous trust infrastructure was built to answer was: <em>does this signal indicate trustworthiness?</em> The question that adequate trust infrastructure asks is: <em>is this system structured so that trustworthy behavior is produced by the incentives operating within it, regardless of whether signals are reliable?</em></p><p>The shift is from evidentiary trust (trust based on the interpretation of signals) to structural trust (trust based on the design of systems that make trustworthy behavior the rational choice for actors operating within them). The idea is not new; it is the foundational insight of institutional economics, of mechanism design, of the branch of political philosophy concerned with how constitutions should be designed to produce good governance even from self-interested actors. 
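</p><p>That insight can be stated in a dozen lines of code. The sketch below is mine, not any standard framework's, and every cost, probability, and penalty in it is invented for illustration: an actor chooses between producing substance and producing only its signals, and structural trust is the choice of audit probability and penalty that makes substance the cheaper strategy.</p><pre><code># Mechanism-design cartoon: make substance the rational choice.
# All costs, probabilities, and penalties are invented for illustration.

def rational_strategy(cost_substance, cost_signal_only,
                      audit_probability, penalty_if_caught):
    """Return the strategy a self-interested actor would pick."""
    expected_faking = cost_signal_only + audit_probability * penalty_if_caught
    return "substance" if cost_substance &lt;= expected_faking else "signal-only"

# Evidentiary regime: audits are rare and toothless, so faking wins.
print(rational_strategy(100, 20, audit_probability=0.05, penalty_if_caught=200))
# signal-only  (20 + 0.05 * 200 = 30, against 100 for substance)

# Structural regime: adversarial audits with real penalties flip the choice.
print(rational_strategy(100, 20, audit_probability=0.5, penalty_if_caught=400))
# substance    (20 + 0.5 * 400 = 220, against 100 for substance)
</code></pre><p>The lever is not the fidelity of the signal; it is the product of audit probability and penalty relative to the gap between faking and doing. 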
What is new is the urgency: the recognition that the evidentiary trust model has been more thoroughly undermined than in any previous transition, and that structural trust is a practical necessity rather than a theoretical refinement.</p><p>The distinction is worth pausing on, because it reframes everything this series has examined. Evidentiary trust asks: does this entity produce the right signals? Structural trust asks: is this entity operating within constraints that make producing the right substance more rational than producing the right signals? The first question can be answered by inspection, by evaluating what the entity presents. The second can only be answered by understanding the incentive architecture within which the entity operates. A compliance certificate answers the first question. The question of whether the compliance framework is adversarially designed, whether it tests the substance or merely the documentation, answers the second.</p><p>Most of our trust infrastructure is still designed to answer the first question. The shift to the second requires a fundamentally different relationship between the verifier and the verified, one in which the verifier assumes that signal optimization is the default behavior and designs for it, rather than assuming good faith and being surprised when the gap appears. This is the shift from cooperative verification (we trust you; show us your documentation) to adversarial verification (we assume the gap; show us your reality under conditions you haven't prepared for). It is, in essence, the shift from the world before the valley to the world after it.</p><p>The preceding essays developed the specific principles this shift requires: accountability that carries real costs, so that producing accountability signals is never cheaper than producing accountability itself; verification that is adversarial by design, testing for the gap under conditions the institution cannot prepare for; detection systems structurally independent of the functions they evaluate; and organizational cultures that treat the alarm as an institutional asset rather than a mark of poor judgment.</p><p>These principles are not new individually. What is new is the recognition that they are structural prerequisites for trust infrastructure in an environment where signal production has been decoupled from substance at every scale, from the synthetic voice on the phone to the compliance framework on the shelf.</p><p>But there is something these principles cannot address, and it would be dishonest to conclude this series without naming it.</p><div><hr></div><h2>The Epistemological Problem at the Center</h2><p>The institutional responses this series has been advocating are adequate at the organizational and sectoral level. They can close the institutional uncanny valley in specific domains. 
They cannot solve the civilizational problem that underlies them.</p><p>The signal/source split that this series has been mapping is an epistemological problem: a problem about how collective knowledge is constituted, and about whether the conditions for collective knowledge still obtain.</p><p>Collective knowledge, the shared understanding that allows large groups of people to coordinate, assign trust, and recognize when the things they depend on have failed, is produced by the interaction of signals and verification. It requires that some signals be reliably connected to the realities they represent, and that the mechanisms for distinguishing reliable from unreliable signals be trusted enough to be actionable.</p><p>When the signals of human presence, institutional accountability, and individual authenticity are all simultaneously under systematic attack, when deepfakes produce voice and face, when compliance frameworks produce documentation without substance, when the mechanisms designed to verify these signals have themselves been optimized for signal production rather than source verification, the infrastructure of collective knowledge is under pressure in a way that no institutional design can fully address. The consequences are already visible outside the security domain: the erosion of shared factual frameworks across democratic societies, the inability to agree on what constitutes evidence, the progressive delegitimation of the institutions (media, regulatory bodies, scientific consensus) that were trusted to verify the verifiers. These are the same mechanisms, signal/source split, suppression of detection, exhausted credulity, operating at civilizational scale.</p><p>This is an honest description of a real structural condition, not the counsel of despair warned against above as the second error. The institutional responses this series has been advocating are necessary. They are not sufficient. The civilizational problem requires something more: not a better institution but a different relationship to the question of how trust is constituted when the old answers no longer hold.</p><p>That different relationship is a cultural achievement rather than a design solution. It cannot be mandated, regulated, or installed. It has to be recovered, which means it has to be understood as something that can be lost. The capacity of a population to make collective judgments about what is trustworthy and what is not, to distinguish between institutions that are producing accountability and institutions that are performing it, to attend to the alarm rather than suppress it: this capacity is developed through practice, maintained through exercise, and eroded through disuse. A society that has spent decades building institutional architectures designed to suppress the alarm has been training itself not to use the instrument it most needs. The recovery is the decision to stop suppressing an old capacity, not the development of a new one.</p><div><hr></div><h2>What Can Be Recovered</h2><p>What has been lost is not trust itself. Trust, as a human capacity, is not something that can be taken away. What has been lost, or is in the process of being lost, is the shared epistemic infrastructure that made trust legible: the common frameworks for evaluating signals, the shared conventions about what kinds of evidence were sufficient for what kinds of claims, the institutional mechanisms that were trusted to verify the verifiers.</p><p>This loss is not evenly distributed. 
It is concentrated in the domains where the signal/source split has advanced furthest: digital communication, institutional accountability, the credentialing systems that proxy for direct observation of capability and character. It has not reached the domains of direct physical experience, extended personal relationship, and small-group cooperation, where the original detection mechanisms still operate with something approaching their original fidelity. The observation is uncomfortable but important: the recovery of adequate trust infrastructure will look more like the trust model of the environments the detection system was calibrated for than like the large-scale institutional trust that post-valley signal synthesis has undermined. The principle is not smallness (the scale of modern cooperative endeavor is not reversible, and the effort to reverse it would cost more than the problem it was solving) but proximity.</p><p>I started to write a paragraph here about what trust looks like at the individual level in this environment, what it means for a person rather than an institution, and realized I don't have an answer that isn't either nostalgic or naive. I kept writing it and kept deleting it. The honest version is that individual trust, in the post-valley condition, requires something that no essay can provide: the slow accumulation of direct observation, the willingness to attend to the alarm when it fires, and the acceptance that the signals we used to rely on are no longer sufficient. That is not a program. It is a disposition. And dispositions cannot be mandated; they can only be cultivated, or eroded.</p><p>What can be described more precisely is what proximity means at the institutional level.</p><p>Proximity between decision-makers and the reality they are deciding about. Governance mechanisms designed so that the people making consequential trust decisions can observe, over time and variety, the entities they are trusting, rather than relying on documentation that travels through layers of institutional translation before reaching them.</p><p>Proximity between accountability claims and the mechanisms that test them. Compliance frameworks in which the distance between the claim and the test is short enough that the claim cannot be optimized independently of the substance.</p><p>Proximity between the expression of alarm and the people who have the authority and the obligation to respond to it. Organizations in which the alarm does not pass through five layers of management filtering before reaching someone who can act, because at each layer, the suppression machinery has another opportunity to engage. This is what protecting the unsuppressed looks like in practice: not a whistleblower hotline, but an architecture in which the alarm's signal path is short enough that suppression cannot accumulate.</p><p>The point is structural, not romantic: the detection mechanism still functions reliably in environments of proximity, and the question is what it would mean to design institutions that allow its functioning rather than impeding it. The alarm was calibrated for a world of proximity, where the distance between signal and source was short enough that the coherence check could operate. 
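</p><p>The arithmetic of path length deserves to be explicit. A back-of-envelope sketch; the per-layer suppression probability is an assumption chosen for illustration, not a measured value:</p><pre><code># How much alarm survives a management chain if each layer
# independently suppresses with the same probability?
# The 30% figure is an assumption for illustration only.
PER_LAYER_SUPPRESSION = 0.30

for layers in range(6):
    survives = (1 - PER_LAYER_SUPPRESSION) ** layers
    print(f"{layers} layers: {survives:.0%} of alarms reach a decision-maker")

# 0 layers: 100%    3 layers: 34%
# 1 layer:   70%    4 layers: 24%
# 2 layers:  49%    5 layers: 17%
</code></pre><p>At a modest thirty percent per layer, five layers silently discard five alarms in six before anyone with authority sees them. 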
The institutional project of the post-valley condition is to recover that proximity, not by making organizations smaller, but by making the critical channels shorter.</p><div><hr></div><h2>A Final Observation About the Alarm</h2><p>There is something worth saying at the end of this series about the alarm itself, about the coherence check, the prediction error mechanism, the felt wrongness that has appeared in every essay as the most direct and most suppressed instrument of trust detection.</p><p>The alarm is not infallible. It fires for reasons that are sometimes wrong: for unfamiliarity masquerading as incongruence, for difference mistaken for deception, for the cognitive dissonance produced by encountering a genuine person or institution that does not conform to the expected pattern. The history of the alarm's failure modes is not short, and some of those failures have caused real harm.</p><p>The argument of this series has not been that the alarm is always right. It has been that the alarm is more often right than the suppression mechanisms credit, that its failure mode in the current environment is predominantly false negative rather than false positive, and that the social and institutional architecture that converts alarm into silence is doing more damage than the alarm's imperfections.</p><p>This is a calibration argument, not an infallibility argument. The alarm needs to be calibrated: its outputs need to be taken seriously as inputs to an investigative process, not acted on blindly as commands. What it does not need is to be suppressed by default, because suppression by default is the mechanism that all of the threats this series has examined depend on.</p><p>We have, over a long period of social development, professional culture-building, and institutional design, become very good at suppressing the alarm. We have built that suppression into our professional norms, our organizational hierarchies, our verification mechanisms, our compliance frameworks.</p><p>We have confused the suppression with wisdom and the unsuppressed with naivety.</p><p>The post-valley condition is the condition in which the cost of that confusion has become visible. The alarm that was overridden by the performance of normalcy at Orion, where sixty million dollars followed the signals out the door. The alarm that could not fire at all during the Arup deepfake call, because the simulation had crossed the valley. The alarm that did not sound for the eighteen months Salt Typhoon spent inside the carriers' networks. The alarm that dissolved under political pressure when documented federal access controls proved decorative. The alarm that Peiter Zatko carried and was fired for. The alarm that North Korean operatives rendered invisible by inhabiting trusted organizational space as synthetic colleagues for months at a time. These are not isolated failures. They are the predictable output of a system that has been optimized, at every level, to suppress the instrument it most needs.</p><p>The valley that Masahiro Mori mapped in 1970 was a region of alarm. We have spent fifty years learning to cross it by suppressing the alarm rather than addressing what the alarm was detecting. The crossing we find ourselves in now is the consequence of that choice.</p><p>What lies on the other side is not determined. 
It is not inevitable that the infrastructure of trust continues to degrade, that the institutional uncanny valley deepens, that the alarm is progressively rendered inoperative by the combination of signal synthesis and social suppression. These are tendencies, not destinies. They can be reversed, not quickly, not easily, not through any single institutional reform or technical solution, but through the accumulated effect of design choices that are made with clear eyes about what the problem actually is.</p><p>And here, at the end of the series, I find myself thinking not about the civilizational abstraction but about the finance worker at Orion who transferred sixty million dollars because the signals were right and the alarm did not survive the context. I think about that person because they are the scale at which this problem is actually experienced: one person, one decision, one moment in which everything this series has described, the signal/source split, the suppression mechanism, the organizational culture that makes acting on alarm costly, converges on a single human being who has to decide whether to trust what they are seeing. The civilizational problem is real. But it is composed of moments like that one.</p><p>The problem is not, at its root, a security problem, though security is where the consequences are most measurable. It is not a technology problem, though technology is what has changed the conditions. It is not even, primarily, a governance problem, though governance is where the institutional responses must be built.</p><p>It is the problem of a species that built its cooperative infrastructure on the assumption that the signals of authenticity could be trusted, discovering, in real time, at civilizational scale, that they cannot. And choosing, in the face of that discovery, what to build next.</p><p>That choice is still available. It is the choice this series has been, in its way, arguing for: to stop suppressing the alarm, to build institutions that protect its function, to design verification that tests the source and not just the signal, to recover the proximity between decision and reality that the alarm was calibrated for.</p><p>The alarm is still working.</p><p>The question is whether we will finally stop turning it off.</p><div><hr></div><p><em>This is the final essay in The Valley of False Signals, a six-part series on trust, mimicry, and the collapse of authentication. The series begins with Essay One &#8212; The Alarm.</em></p><div><hr></div><h2>Sources</h2><h3>Foundational Reference</h3><p>Mori, M. (1970). Bukimi no tani [The uncanny valley]. <em>Energy</em>, 7(4), 33&#8211;35. (In Japanese.) English translation: Mori, M., MacDorman, K.F., &amp; Kageki, N. (2012). The uncanny valley [From the field]. <em>IEEE Robotics &amp; Automation Magazine</em>, 19(2), 98&#8211;100.</p><h3>Structural Trust and Institutional Economics</h3><p>The essay's distinction between evidentiary trust and structural trust draws on the foundational literature of mechanism design and institutional economics:</p><p>Hurwicz, L. (1972). On informationally decentralized systems. In R. Radner &amp; C.B. McGuire (Eds.), <em>Decision and Organization: A Volume in Honor of Jacob Marschak</em> (pp. 297&#8211;336). North-Holland. (Foundational framework for analyzing institutions as mechanisms that structure incentives and information.)</p><p>Hurwicz, L., &amp; Reiter, S. (2006). <em>Designing Economic Mechanisms</em>. Cambridge University Press.</p><p>Maskin, E. (1999). 
Nash equilibrium and welfare optimality. <em>Review of Economic Studies</em>, 66, 23&#8211;38. (Originally circulated 1977. Implementation theory and the design of institutions that produce desired outcomes from self-interested actors.)</p><p>Myerson, R.B. (1981). Optimal auction design. <em>Mathematics of Operations Research</em>, 6(1), 58&#8211;73.</p><p>The 2007 Nobel Prize in Economics was awarded to Hurwicz, Maskin, and Myerson for their foundational contributions to mechanism design theory.</p><h3>Institutional Design and Governance</h3><p>Ostrom, E. (1990). <em>Governing the Commons: The Evolution of Institutions for Collective Action</em>. Cambridge University Press. (Institutional design principles for systems that produce cooperative behavior through structural incentives rather than signal-based trust.)</p><p>The essay's reference to "the branch of political philosophy concerned with how constitutions should be designed to produce good governance even from self-interested actors" draws on a tradition extending from James Madison's <em>Federalist Papers</em> (particularly Nos. 10 and 51, on designing institutions that channel self-interest toward collective good) through modern constitutional design theory.</p><h3>The Three Errors</h3><p>The essay identifies three common errors in responding to the collapse of signal-based trust:</p><p><strong>Technical solutionism</strong> references include the liveness detection arms race documented in Essay Three (see Essay Three sources: iProov, Sumsub, MITRE ATLAS), zero-knowledge proofs for identity (the broader decentralized identity literature), continuous behavioral biometrics, and hardware-bound authentication tokens.</p><p><strong>Cynical withdrawal</strong> is described as a structural tendency rather than attributed to a specific source. The essay's analysis of this error draws on the broader literature on institutional trust and social capital erosion.</p><p><strong>Nostalgic restoration</strong> references include regulatory approaches to synthetic media: watermarking requirements, provenance tracking (e.g., the Coalition for Content Provenance and Authenticity, C2PA), and synthetic media disclosure mandates under various national and proposed international frameworks.</p><h3>Case Catalogue (Closing Section)</h3><p>The closing section references six cases documented in detail across the preceding essays:</p><p>Orion S.A. BEC fraud, 2024 ($60 million). See Essay Two sources.</p><p>Arup deepfake video conference fraud, 2024 ($25 million). See Essay Three sources.</p><p>Salt Typhoon telecommunications intrusion (eighteen months undetected). See Essay Four sources, originally analyzed in the author's <em>Compound Vulnerability</em> series.</p><p>Department of Government Efficiency federal access control failures, early 2025. See Essay Four sources, originally analyzed in the author's <em>Compound Vulnerability</em> series.</p><p>Zatko, P. ("Mudge"). Twitter whistleblower complaint, 2022. See Essay Five sources.</p><p>North Korean synthetic worker campaign (Famous Chollima), 2022&#8211;present (320+ organizations infiltrated). See Essay Three sources.</p><h3>Cross-Series References</h3><p>Brondani, M. Essays One through Five of <em>The Valley of False Signals</em>. Published at marcobrondani.com.</p><p>Brondani, M. <em>Reality Hunger</em> and <em>The Compound Vulnerability</em> (essay series). 
Published at marcobrondani.com.</p>]]></content:encoded></item><item><title><![CDATA[The Unsuppressed]]></title><description><![CDATA[There is a person in your organization who has been telling you something is wrong. Not loudly. Not with a polished deck. In the register organizations find most difficult to process: persistent, imprecise, and professionally inconvenient.]]></description><link>https://www.marcobrondani.com/p/the-unsuppressed</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-unsuppressed</guid><pubDate>Wed, 25 Mar 2026 06:07:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3df30ae4-de55-4733-9461-6e33ff220b16_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HXib!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HXib!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!HXib!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!HXib!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!HXib!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HXib!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HXib!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!HXib!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!HXib!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!HXib!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F173c49bf-bd40-4966-ab50-c2b38eff779d_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>Essay Five of <em>The Valley of False Signals </em>series</h3><div><hr></div><p>There is a person in your organization, possibly several, who has been telling you something is wrong.</p><p>Not loudly. Not with a polished deck and a clear remediation roadmap. In the register that organizations find most difficult to process: the persistent, imprecise, and professionally inconvenient insistence that something in the system does not cohere. The analyst who keeps escalating a vendor concern that everyone else considers resolved. The auditor who writes the same finding three engagements in a row because the remediation never quite closes. The engineer who flags an architectural decision as a future exposure and is told, repeatedly, that the business has accepted the risk. The CISO who frames the board presentation in terms that are accurate rather than reassuring and finds, over time, that the invitations become less frequent.</p><p>These people are not difficult. They are not lacking in social intelligence or professional judgment. They are, in many cases, the most technically capable people in their organizations. What they share is a specific resistance: a failure, or a refusal, to perform the social operation that the organization's culture requires, the suppression of alarm that cannot be fully articulated.</p><p>This essay is about them. What they share, structurally. Why organizations systematically marginalize them. And what it would mean to build institutional architecture that protects their function rather than eroding it.</p><p>The answer to that last question requires a detour through developmental neuroscience and the philosophy of institutional design that may feel, initially, distant from the cybersecurity governance problems this series has been examining. The distance is not as great as it appears.</p><div><hr></div><h2>The Research That Changes the Frame</h2><p>In 2018, a team of researchers at Peking University published a study with a finding that has received far less attention than it deserves. They were examining the uncanny valley effect in children, specifically whether the effect Mori described in adult responses to humanoid robots was present in younger populations, varying the realism of facial appearance and inducing perceptual mismatch in ways shown to trigger the uncanny valley response in adults.</p><p>Their control group, typically developing children, showed the expected effect. As facial realism increased and approached but did not reach full human likeness, preferences declined. The alarm fired. The uncanny valley was present and robust.</p><p>The children with autism spectrum disorder showed no such effect. Their preference curve did not display the characteristic valley. None of the features that produced strong negative responses in typically developing children triggered the same alarm. The uncanny valley, for this population, was absent.</p><p>This finding has been replicated in multiple subsequent studies. 
If the uncanny valley effect is, as the first essay in this series argued, a trust detection mechanism rather than an aesthetic response, then its absence in ASD represents a structurally different relationship to the detection mechanism itself. And that structural difference has consequences that extend well beyond robot therapy.</p><div><hr></div><h2>What the Absence Means</h2><p>The uncanny valley alarm, as established in Essay One, fires when the brain detects incongruence between what an entity signals and what it is. The <em>suppression</em> of that alarm is a separate operation: the professional norm, the hierarchical deference, the discomfort of accusing someone of deception without articulable proof. Detection is perceptual. Suppression is social.</p><p>The critical question: in the ASD population, which operation is different? The research does not resolve this cleanly, and intellectual honesty requires saying so. What it does establish, across multiple studies and in both child and adult ASD populations, is that the behavioral output is different: the avoidance behavior, the expressed preference decline, the reported eeriness are attenuated or absent. The proposed mechanisms vary (differences in how prior social experience calibrates the detection model, differences in social motivation, differences in how social norming converts alarm into suppression) and the research has not settled which account is most accurate.</p><p>What matters for our purposes is the structural implication that all three mechanisms share: the relationship between the detection system and the social suppression operation is different. Whether the alarm calibrates differently, or fires differently, or reaches expression differently, the output is a detection profile that is less shaped by the social forces that, in the typical case, convert alarm into suppression and suppression into compliance.</p><div><hr></div><h2>The Inversion</h2><p>Here is where the argument takes a turn that requires careful handling.</p><p>The absence of the uncanny valley effect in ASD has been framed, in the research literature, primarily as a deficit: something is missing from the social alarm system. This framing makes sense within the therapeutic context.</p><p>But the framing inverts when the context is adversarial. In an environment where sophisticated actors are systematically producing signals of authenticity disconnected from their actual intentions, the alarm that typically developing individuals possess is an asset with a critical vulnerability: it is susceptible to the suppression mechanism. The reliable operation of the alarm depends on its social suppressibility being resisted. And resistance to the suppression mechanism is not, in the typical developmental profile, strongly selected for. The social costs of unsuppressed alarm expression, the professional friction, the accusation of paranoia, the disruption of cooperative relationships, are real, and the social environment reliably punishes the expression. People who do not perform the suppression are difficult. Organizations prefer people who perform it.</p><p>This preference is the source of institutional vulnerability. Not because it is irrational (it is rational for the ninety-nine percent of interactions that are not adversarial) but because in the specific subset that are adversarial, the suppression preference produces exactly the exposure that sophisticated attackers and institutional drift rely on. This is not a design flaw. It is a design feature whose costs have changed.</p>
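<p>The asymmetry is easy to state as arithmetic. A toy calculation, with every probability invented for illustration and drawn from no study:</p><pre><code># Two-stage cartoon: an alarm must first fire (perceptual),
# then survive the social override (suppression).
# Probabilities are invented for illustration only.

def p_alarm_acted_on(p_detect, p_suppress):
    """Probability that a genuine anomaly produces an acted-on alarm."""
    return p_detect * (1 - p_suppress)

# A sharp detector behind a strong suppression norm:
print(p_alarm_acted_on(p_detect=0.9, p_suppress=0.9))  # 0.09
# A duller detector whose output is rarely overridden:
print(p_alarm_acted_on(p_detect=0.6, p_suppress=0.1))  # 0.54
</code></pre><p>The second profile is six times more likely to convert a real anomaly into action, not because it detects better, but because its detection output is not socially regulated away.</p>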
<p>The question the ASD research raises, indirectly, is whether the suppression operation is separable from the detection operation in ways that could be structurally exploited for defensive purposes. Not "can we make people more like autistic individuals," which is both clinically wrong and ethically untenable. But: what can the existence of a different detection-suppression profile teach us about how to design institutional architectures that protect detection outputs from social override?</p><p>This is the inversion. The research that was conducted to understand a population that lacks a typical alarm response turns out to illuminate something about the alarm response itself, specifically about the social operation that converts alarm into silence, and about what happens when that operation is attenuated or differently regulated.</p><div><hr></div><h2>A Necessary Pause</h2><p>Before proceeding, something needs to be stated directly and without qualification.</p><p>Autism spectrum disorder is not a superpower, not a security asset, and the people who have it are not instruments for organizational detection architectures. The research findings summarized above do not establish that autistic individuals are better at security; they establish something much more specific and limited: that a particular behavioral output of the uncanny valley alarm is attenuated in this population, and that this attenuation involves the relationship between detection and social suppression.</p><p>The lived experience of autism includes challenges in social navigation, sensory processing, executive function, and communication that are real and often severe. The absence of the uncanny valley effect is not, for the people who live with ASD, primarily experienced as an advantage. It exists within a broader profile that the neurotypical world has not been designed to accommodate.</p><p>What the research offers is a structural insight, not a personnel recommendation. The suppression mechanism is a social operation applied to detection outputs, not an inevitable feature of the detection process itself, and it can, in principle, be differently regulated. The detour through ASD research is a lens, not a template. It shows us something about the structure of the problem that neurotypical cognition, precisely because it takes the suppression operation for granted, cannot easily see from the inside.</p><div><hr></div><h2>The People Who Do Not Suppress</h2><p>Return to the person at the beginning of this essay. The analyst who keeps escalating. The auditor who writes the same finding three years running. The engineer who will not accept "business has accepted the risk" as a final answer.</p><p>These people are not, generally, autistic, or at least, that is not what defines their functional profile in the organizational context. What defines it is a particular relationship to the organizational suppression pressure that most professionals navigate as automatic. They feel the pressure. They understand it. In many cases, they have paid professional costs for not complying with it. And they do not comply anyway.</p><p>The reasons are various. Some have an unusually high tolerance for professional friction. 
Some have a professional identity built around a specific obligation: the auditor who understands their role as a fiduciary function compromised by social deference, the security researcher who has internalized a specific ethical commitment to disclosure. Some have experienced, personally and concretely, the consequences of suppression, and the memory of it makes the social cost of speaking feel small by comparison. And some have a cognitive style that processes the social suppression pressure differently, that perceives the organizational norm to perform the override as a distinct thing from the professional obligation to report accurately, and declines to conflate them. This cognitive style exists on a spectrum, is distributed across the population, and is not reducible to any single neurological profile. But it shares, structurally, the feature that the ASD research illuminates: the suppression operation is not automatic. It is perceived as a separate choice, subject to a separate judgment. And the judgment, in these people, consistently comes back: the alarm is more important than the comfort. The choice is refused.</p><p>These are the people organizations most consistently fail to protect, and most consistently fail to use.</p><p>In 2022, Peiter "Mudge" Zatko, one of the most respected figures in the cybersecurity community, a former member of the L0pht hacking collective who had testified before Congress on network security in 1998, filed a whistleblower complaint against Twitter, where he had served as head of security. Zatko alleged that Twitter's executive team had instructed him to present cherry-picked data to the board to create a false impression of progress on security issues, had a consulting firm's report scrubbed to minimize its findings, and had the CEO discourage him from being fully transparent with the board about the company's actual security posture. He documented servers running outdated software lacking basic security features, thousands of employees with broad and poorly monitored access to core systems, and approximately one security incident per week serious enough to require government reporting.</p><p>The company's response was to characterize Zatko as having been fired for "ineffective leadership and poor performance," a classic instance of credibility erosion. His alarm, which had been raised internally and documented, was reframed as evidence of his inadequacy rather than evidence of the gap he was describing.</p><p>The Zatko case matters because it demonstrates every mechanism of institutional suppression operating in sequence against a single person. But Zatko had resources most alarm-carriers do not: a national reputation, legal representation from a nonprofit whistleblower firm, and a public moment (the concurrent Musk acquisition dispute) that gave his allegations an audience. Most people who carry the alarm have none of these. They have only their observation and the organizational culture that surrounds it.</p><div><hr></div><h2>How Organizations Suppress the Unsuppressed</h2><p>The mechanisms are numerous, varied, and rarely explicit. They operate through ordinary professional culture rather than through direct censorship.</p><p>Credibility erosion is the most common: the gradual reframing of persistent alarm as evidence of poor judgment rather than accurate detection. 
The professional consequence is not dismissal; it is the progressive withdrawal of institutional trust, which operates through smaller signals, the meeting invitation that stops arriving, the project that goes to someone else, the promotion that is indefinitely deferred. Scope limitation is subtler: moving the unsuppressed person from functions with broad organizational visibility to functions with narrow technical scope, where their observations become invisible to the people who might act on them.</p><p>The most sophisticated mechanism is process capture, which converts the unsuppressed person's output into the compliance apparatus itself. Their findings are acknowledged, logged, assigned to remediation owners, tracked in the risk register, and reviewed in the quarterly governance meeting. Every alarm is formally received. None of it changes the posture. The organizational machinery for receiving the alarm and the organizational machinery for acting on it are decoupled. The finding goes in the register. The register goes to the committee. The committee notes the finding. The finding ages.</p><p>And perhaps the most damaging is social isolation: the informal cost of being the person who names the wrongness. The difficult colleague. The one who makes meetings tense. The one who, when they walk into the room, produces a subtle shift in the atmosphere because everyone knows they may say something uncomfortable. The social isolation is rarely deliberate. It is the aggregate output of individual decisions to prefer comfortable company, which is to say, it is the suppression mechanism operating at the social level.</p><div><hr></div><h2>Red Teams and Whistleblower Systems</h2><p>Before asking what institutional design could protect the alarm, it is worth examining the two mechanisms that have explicitly tried.</p><p>The red team is, at its best, a structural attempt to create an organizational function whose purpose is to not suppress the alarm. Its mandate is adversarial: to find the gaps between what the institution claims and what it is, to produce findings that are uncomfortable rather than reassuring. Its value depends on its independence from the organizational culture that would otherwise convert its findings into the compliance register.</p><p>When red teams work, they work because they externalize the permission to alarm. They do not rely on individual resistance to suppression pressure; they create an institutional role that makes suppression impermissible, or at least much more costly. The red team analyst who finds a critical exposure has a mandate, a role, an institutional permission structure that converts the alarm into a deliverable rather than a career risk. The gap between what a red team finds and what the compliance apparatus documents is a direct measure of how much the suppression mechanism has cost the organization.</p><p>But red teams have their own failure modes, and understanding them matters. Their findings get converted into the compliance register. Their scope is limited by the same management that controls the systems being tested. Their independence is conditional on the continued support of the hierarchy they are supposed to challenge. 
In organizations where the institutional uncanny valley has deepened, where the gap between claimed and actual posture is large and acknowledged at the level of senior leadership, the red team's findings are received as a threat to management rather than intelligence for it, and the team's scope and independence are progressively curtailed. The red team is a structural workaround for the suppression problem, and a valuable one. It is not a solution, because it is still embedded in the organizational culture that generates the suppression pressure.</p><p>Whistleblower systems, the formal mechanism for protecting alarm against suppression, perform similarly. Academic research consistently finds that formal protection mechanisms fail to prevent the informal costs of whistleblowing: the credibility erosion, the scope limitation, the social isolation. Legal protection from termination does not protect against being moved to a role with no visibility. Anonymous reporting channels do not protect against the informal attribution of reports to the small number of people with access to the relevant information. Regulatory protection for safety reporting does not prevent the organization from making the whistleblower's professional life sufficiently unpleasant that resignation becomes the rational choice.</p><p>The failure mode is the same as the compliance framework failure mode: they produce the signal of protection without its substance. The gap between the documented protection and the experienced protection is the institutional uncanny valley of whistleblower systems.</p><div><hr></div><h2>Toward Adversarially Resistant Detection Architecture</h2><p>What would institutional design look like if it were built to protect detection from suppression rather than to produce documentation of assurance?</p><p>This is not a question the security governance literature has directly addressed, and the reason is the same reason that security awareness training has not addressed the suppression layer: the field has been focused on the detection capacity, not on the social architecture that determines whether detection outputs reach action. Several principles suggest themselves, drawn from the analysis above and from the places where suppression-resistant detection has been attempted and partially achieved.</p><p>The most powerful protection is structural independence of alarm functions: genuine separation between the function that generates alarm and the function that manages the operations the alarm is about. The independence must be real, not just documented; reporting lines, budget authority, and scope definition that cannot be controlled by the management layer being evaluated. Closely related is output that bypasses hierarchy: architecture that routes alarm directly to board-level or external oversight without requiring management endorsement or framing. The suppression mechanism operates primarily through hierarchy; findings that pass through management layers get filtered before they reach decision-makers. Reducing the number of points at which suppression can be applied is structurally uncomfortable for management, which is precisely why it is rarely implemented in its strong form, and precisely why the strong form is where the protection lives.</p><p>Formal protection for alarm-carriers must have teeth. Whistleblower protections that address only formal retaliation leave the informal suppression machinery intact. 
<p>Formal protection for alarm-carriers must have teeth. Whistleblower protections that address only formal retaliation leave the informal suppression machinery intact. Protection that genuinely prevents the informal costs (the scope narrowing, the credibility erosion, the social isolation) requires monitoring and enforcement mechanisms at least as sophisticated as the informal machinery they are trying to counteract. This is expensive, intrusive, and organizationally uncomfortable. It is also the difference between a protection signal and actual protection.</p><p>And the deepest change is cultural rather than structural: the normalization of inarticulate alarm. The professional norm that requires articulable justification before alarm can be expressed is the engine of the suppression mechanism. Changing it requires organizations to explicitly value the expression of inarticulate unease, to create contexts in which "something seems off and I can't say exactly what" is a legitimate input rather than evidence of poor judgment. This is the hardest change because it runs against how professional cultures define rigor and rationality. It requires accepting that the alarm system is sometimes more accurate than the documentation, and that acting on the alarm before the documentation catches up is not paranoia but intelligence.</p><div><hr></div><h2>What the Cassandra Problem Teaches</h2><p>The Cassandra myth is old enough that it has become a clich&#233;, but its precise structure deserves attention.</p><p>Cassandra was given the gift of true prophecy and the curse that no one would believe her. The standard reading emphasizes the social reception of accurate alarm. That reading is correct and important. But there is another element that gets less attention: the cost to Cassandra herself. The experience of being the person who sees accurately and is systematically disbelieved, who watches the consequences of suppressed alarm unfold in slow motion, produces its own pathologies: the escalating alarm that loses credibility by virtue of its persistence, the psychological toll of sustained professional isolation, the progressive narrowing of the space from which accurate signals can be transmitted.</p><p>The institutional suppression machinery does not just silence individual alarms. It degrades the people who carry them. The analyst who has been told repeatedly that their concern is unfounded eventually faces a choice: absorb the professional cost of continued escalation, or absorb the psychological cost of self-suppression. Many capable people make the second choice, not because they stop seeing accurately, but because the cost of seeing accurately, in a context that will not receive what they see, becomes unsustainable.</p><p>The organizations that lose these people do not lose them all at once. They lose them gradually, as the space for accurate alarm narrows, and the professional costs of occupying that space accumulate, and the people who occupy it calculate that there is no longer a path from accurate detection to any useful response. This is the final mechanism of institutional suppression: not silence, but exhaustion.</p><div><hr></div><p>The suppression mechanism is a feature of how human social life manages the tension between cooperative trust and adversarial vigilance, not a corporate pathology or a security industry problem. The norms that generate suppression pressure are the norms that make large-scale cooperative life possible. They are functional, in the environments they were developed for.</p><p>The question is whether those environments still describe the world we are operating in.
The base rate of sophisticated deception, at the individual level, the organizational level, and the institutional level, has increased. The cost of producing convincing simulations of authenticity has collapsed. The scale at which deception operates has expanded from the interpersonal to the civilizational.</p><p>The suppression mechanism is running on obsolete parameters, suppressing alarms at a rate calibrated for a world with far fewer genuine threats, in an environment with far more of them. The recalibration required is specific: a higher assumed base rate of sophisticated deception at every scale, a lower cost threshold for acting on alarm before articulable evidence is available, and institutional structures that treat the cost of occasional false-positive caution as categorically lower than the cost of the false-negative compliance it prevents. This is the urgency inversion that Essay Two identified in the social engineering context, extended to institutional design: alarm should trigger more scrutiny, not less, and the organizational cost of acting on alarm should be lower than the organizational cost of suppressing it. And the people who, for whatever combination of cognitive style, professional commitment, and accumulated experience, are less subject to the suppression pressure, the people who keep naming the wrongness despite the cost, are the people whose function has become more valuable than it has ever been.</p><p>Protecting them is an epistemic infrastructure question, not a human resources question. They are nodes in the detection architecture. What happens to them, whether they are protected or eroded, whether their outputs reach decision-makers or disappear into the compliance register, determines whether the institutions they inhabit maintain any capacity to perceive the gap between their signals and their reality.</p><p>The institutional uncanny valley persists as long as the alarm is suppressed. The alarm is suppressed as long as the people who carry it are not protected. Protecting them is not comfortable. It is, in the environment this series has been describing, necessary.</p><div><hr></div><p><em>Next: Essay Six &#8212; After the Valley. On what trust looks like when signals can no longer be taken at face value, and what it would mean to build the infrastructure of trust again, from different foundations.</em></p><div><hr></div><h2>Sources</h2><h3>ASD and the Uncanny Valley</h3><p>Feng, S., Wang, X., Wang, Q., Fang, J., Wu, Y., Yi, L., &amp; Wei, K. (2018). The uncanny valley effect in typically developing children and its absence in children with autism spectrum disorders. <em>PLOS ONE</em>, 13(11), e0206343. (Primary study: Peking University. Typically developing children showed the uncanny valley effect; children with ASD did not. Varied facial realism through morphing and induced perceptual mismatch through eye-size modification.)</p><p>Kumazaki, H., Warren, Z., Muramatsu, T., Yoshikawa, Y., Matsumoto, Y., Miyao, M., Nakano, M., Mizushima, S., Wakita, Y., Ishiguro, H., Mimura, M., Minabe, Y., &amp; Kikuchi, M. (2017). A pilot study for robot appearance preferences among high-functioning individuals with autism spectrum disorder. <em>PLOS ONE</em>, 12(10), e0186581. (Replication context: ASD individuals showed different responses to humanoid robot appearance compared to typically developing individuals.)</p><p>Li, L., Imaizumi, T., Nishikawa, N., Kumazaki, H., &amp; Ueda, K. (2025).
Do individuals with autism spectrum disorder not experience the uncanny valley? A psychological experiment and feature analysis using human and robot faces. <em>Cognitive Development</em>, 73, 101519. (Replication with robot and human facial images: typically developing individuals exhibited the uncanny valley effect; individuals with ASD showed a less distinct effect, with analysis suggesting emphasis on local rather than global facial information.)</p><p>Kumazaki, H., Muramatsu, T., Yoshikawa, Y., Matsumoto, Y., Ishiguro, H., Mimura, M., &amp; Kikuchi, M. (2015). A Bayesian model of the uncanny valley effect for explaining the effects of therapeutic robots in autism spectrum disorder. <em>PLOS ONE</em>, 10(9), e0138642. (Computational modeling: proposed that ASD produces an "uncanny cliff" rather than an "uncanny valley," with implications for robot-assisted therapy design.)</p><h3>Whistleblower Case Study</h3><p>Zatko, P. ("Mudge"). (2022). Whistleblower disclosure to the U.S. Securities and Exchange Commission, the Federal Trade Commission, and the Department of Justice, filed July 6, 2022. Allegations concerning Twitter, Inc.'s security practices, including misrepresentation of security posture to the board, scrubbing of third-party consulting findings, and systemic access control deficiencies.</p><p>Twitter's response characterizing Zatko as having been fired for "ineffective leadership and poor performance" was reported by multiple outlets including <em>The Washington Post</em>, <em>CNN</em>, and <em>The New York Times</em>, August 2022. The complaint became public during the concurrent Musk acquisition dispute.</p><p>For background on Zatko's career: Zatko testified before the U.S. Senate Committee on Governmental Affairs on network security vulnerabilities as a member of the L0pht hacking collective in 1998.</p><h3>Whistleblower Research</h3><p>The essay references academic research on the failure modes of formal whistleblower protection systems. Key works in this literature include:</p><p>Miceli, M.P., Near, J.P., &amp; Dworkin, T.M. (2008). <em>Whistle-blowing in Organizations</em>. Routledge/Psychology Press.</p><p>Moberly, R. (2012). Sarbanes-Oxley's whistleblower provisions: Ten years later. <em>South Carolina Law Review</em>, 64, 1.</p><p>Kenny, K. (2019). <em>Whistleblowing: Toward a New Theory</em>. Harvard University Press. (Documents the informal suppression mechanisms &#8212; credibility erosion, scope limitation, social isolation &#8212; that operate below the threshold of legal actionability.)</p><h3>Organizational Suppression and Red Team Literature</h3><p>The essay's analysis of organizational suppression mechanisms (credibility erosion, scope limitation, process capture, social isolation) draws on established organizational psychology and security governance literature. The red team analysis draws on practitioner experience and the broader adversarial design principles developed in Essay Four.</p><h3>Cassandra Problem</h3><p>The Cassandra myth is referenced as a structural analogy for the cost of carrying suppressed alarm. The essay's analysis of the cost to the alarm-carrier (escalating persistence losing credibility, psychological toll, progressive narrowing of transmission space) draws on the whistleblower research cited above and on practitioner literature in organizational psychology.</p><h3>Cross-Series References</h3><p>Brondani, M. Essay One: "The Alarm" (uncanny valley as trust detection mechanism, prediction error, suppression mechanism). 
Essay Two: "Cold Empathy at Scale" (urgency inversion, three suppression norms). Essay Four: "The Narcissistic Institution" (adversarial design principles, compliance framework failure modes). <em>The Valley of False Signals</em>. Published at marcobrondani.com.</p>]]></content:encoded></item><item><title><![CDATA[The Narcissistic Institution]]></title><description><![CDATA[The institution looks secure. It sounds compliant. The documentation says everything it should. And something is wrong, in a way that is difficult to name, because the performance is convincing enough that naming the wrongness feels like overreach.]]></description><link>https://www.marcobrondani.com/p/the-narcissistic-institution</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-narcissistic-institution</guid><pubDate>Thu, 19 Mar 2026 08:37:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cdbb5b67-7a24-4ea8-a811-c35411845126_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QBCe!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QBCe!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!QBCe!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!QBCe!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!QBCe!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QBCe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QBCe!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!QBCe!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!QBCe!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!QBCe!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fa2c3fa-20af-4d73-bfaf-5ff51de6a8f4_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>Essay Four of <em>The Valley of False Signals </em>series</h3><div><hr></div><p>There is a version of the uncanny valley that operates not at the level of individual deception but at the level of institutional governance. It is the condition in which an organization produces all the signals of accountability (the compliance reports, the audit certifications, the governance frameworks, the risk registers) without those signals being causally connected to actual accountable behavior. The institution looks secure. It sounds compliant. The documentation says everything documentation should say. And something is wrong, in a way that is difficult to name and more difficult to act on, because the norms governing how institutions are evaluated are the same norms governing how narcissists avoid detection: the performance is convincing enough that naming the wrongness feels like an overreach.</p><p>Two cases from earlier work illustrate the condition. I examined the operational details of the Salt Typhoon intrusion in <em>The Compound Vulnerability</em>. What matters here is not what the Chinese state actors did inside those telecommunications networks, but what the carriers had been doing long before the attackers arrived: producing compliance signals, certifications, audit reports, regulatory filings, whose relationship to actual security posture had quietly come apart. The carriers were not negligent by any conventional measure. They had frameworks, programs, and the full apparatus of documented due diligence. Their networks were owned for eighteen months without detection. The gap was not between the carriers and their frameworks. It was between the frameworks and reality.</p><p>The federal access control failures that accompanied the Department of Government Efficiency's deployment in early 2025 demonstrated a parallel condition through a different mechanism. Treasury payment systems, OPM personnel databases, Social Security Administration records: these were governed by access control frameworks developed over decades of federal IT security policy. What the episode revealed was that the documented controls were not the controls that existed in practice. Political will, applied with sufficient force and speed, dissolved mechanisms that were supposed to be procedurally resistant to exactly that kind of pressure. Whatever one's view of the entity's mandate, the structural observation is the same: the documented controls described one reality; the operational pressure revealed another.</p><p>Both cases demonstrate institutions whose accountability signals and accountability substance had drifted apart, in ways that were invisible to normal oversight but became visible under adversarial conditions. 
Different threat actors, same vulnerability. The vulnerability is the gap between the framework and the reality it is supposed to represent, not the absence of framework itself.</p><div><hr></div><h2>How Institutions Learn to Perform</h2><p>This is not primarily a story about bad actors or deliberate fraud. Institutional drift toward accountability theater is something close to a structural tendency in large organizations operating under compliance regimes, a tendency that emerges not from malice but from the ordinary operation of incentives, bureaucratic rationality, and the social norms that govern professional life in hierarchical organizations. Deliberate fraud is an exception. The drift is the norm.</p><p>The compliance regime is, in its intent, a mechanism for making accountability legible to external observers. The board cannot directly observe every security control. The regulator cannot directly audit every system. The compliance framework (the certifications, audits, reports, and standards) is a translation layer: it converts the internal reality of organizational security into signals that external observers can read and evaluate.</p><p>This translation function is necessary and, when it works, valuable. The problem is that translation layers create their own incentives, and those incentives do not always align with the thing being translated.</p><p>Once an organization has learned that producing certain outputs (a SOC 2 report, an ISO 27001 certification, a NIST CSF assessment) satisfies the external observer's demand for accountability signals, the optimization pressure shifts. The question stops being "are we secure?" and starts being "do we satisfy the framework?" These are not the same question, and organizations that conflate them, under time pressure, resource pressure, and the ordinary human tendency to optimize for what gets measured, begin to drift.</p><p>The drift is an accumulation of small decisions, each individually defensible. The security control that exists in the policy document but is too operationally expensive to enforce. The audit finding that is logged as a remediation item and rolled forward, quarter after quarter, because addressing it would require re-architecting a system that production depends on. The risk register entry that accurately describes a critical exposure but is scored in a way that keeps it below the threshold requiring board attention. The penetration test scoped to avoid the systems most likely to produce embarrassing findings.</p><p>Each individual decision is defensible. The policy document genuinely represents the intended state. The remediation item is genuinely intended to be addressed. The risk score reflects a genuine judgment. But the accumulation produces an institution whose documented security posture and actual security posture have quietly come apart, a signal/source split at the organizational level, invisible in any individual document but structurally present in the gap between what the institution claims and what it is.</p><p>I have been part of this accumulation. I have signed risk acceptances that I knew were optimistic, scoped penetration tests to avoid systems I suspected were vulnerable, and presented dashboards that were accurate at the level of data but misleading at the level of implication. Not from malice. From the same structural pressures I am describing. 
The drift is easier to see from the outside than to resist from the inside, and the professional cost of resisting it is real.</p><p>This is the same mechanism that Essay One mapped at the individual level, operating at institutional scale. The narcissist produces empathy signals without affective resonance. The institution produces accountability signals without accountability substance. In both cases, the performance is convincing precisely because it is built from genuine components: real certifications, real audit firms, real compliance processes, assembled in a way that satisfies the observer's coherence check while the underlying reality has departed. The compliance framework tells you what signals to produce. It cannot tell you whether producing those signals corresponds to genuine security. That correspondence requires judgment, adversarial testing, and the organizational culture to act on uncomfortable findings. It requires precisely the capacities that the compliance-optimization dynamic tends to erode.</p><p>This is the institution in the uncanny valley. Almost accountable. The signals are there. The source has quietly left.</p><div><hr></div><h2>The Suppression Mechanism at Institutional Scale</h2><p>In personal social engineering, the suppression mechanism is interpersonal: professional courtesy, hierarchy, the discomfort of naming unverifiable alarm. In the institutional context, the suppression mechanism operates at a larger scale, but the structure is identical.</p><p>The CISO who notices the drift, who sees that the risk register is being managed for optics rather than exposure, that the audit findings are being rolled forward rather than remediated, that the security posture claims being made to the board do not correspond to the actual attack surface, faces a version of the same social pressure that faces anyone who notices that the signals and source have separated.</p><p>What they know is difficult to articulate precisely. They have a feeling, compounded of professional experience, pattern recognition, and the particular unease of someone who understands both what the documentation says and what the systems actually do, that the accountability is not real. But the documentation is real. The certifications are genuine. The audit firm is reputable. The risk register was signed off by the right people. Every articulable piece of evidence points toward compliance. Only the inarticulate alarm points the other way.</p><p>And the organizational context generates powerful pressure to suppress that alarm. The board wants assurance, not uncertainty. The CEO wants to present a clean posture to investors and regulators. The audit committee wants findings to be closed, not perpetually open. The external auditor, whose continued engagement depends on maintaining a workable relationship with management, is not structurally incentivized to produce findings that the organization is not prepared to address.</p><p>The CISO who names the gap, who tells the board that the certified posture does not correspond to the actual risk, is making a claim that contradicts the apparatus of institutional assurance. They are being difficult. They are introducing uncertainty into a presentation designed to communicate confidence. They are, in the language of organizational management, not being a team player.</p><p>This is the institutional suppression mechanism. 
It operates through the same forces that suppress individual alarm: the social cost of naming wrongness that cannot be fully proven, the professional cost of contradicting a consensus that convenient documentation supports, the hierarchical pressure to defer to the process rather than the judgment.</p><p>The difference from the individual case is scale. When the CISO's alarm is suppressed, what is lost is not one person's judgment. It is the organization's only instrument for detecting the gap between its claimed and actual security posture. The suppression of the institutional alarm is the suppression of institutional reality-testing.</p><div><hr></div><h2>AI Governance as Contemporary Case Study</h2><p>The institutional uncanny valley is being constructed in real time in the AI governance domain, and the construction is happening fast enough to watch.</p><p>Since 2016, there has been an extensive global production of AI governance artifacts: principles documents, ethical frameworks, voluntary commitments, model cards, responsible AI programs, algorithmic impact assessments. The OECD AI Principles. The EU AI Act. The US Executive Orders on AI. The major technology companies' responsible AI frameworks.</p><p>The lifecycle of these frameworks has been instructive. In 2019, Google formed its Advanced Technology External Advisory Council, an eight-member AI ethics board meant to guide the responsible development of AI. It lasted nine days before being dissolved. The members never met. In 2023, Microsoft laid off its entire Ethics and Society team, the group responsible for translating the company's stated AI principles into product design, during the same period it was investing over eleven billion dollars in OpenAI and racing to integrate generative AI across its product suite.</p><p>These are not aberrations. They are the expected output of organizations whose competitive incentives and governance commitments point in opposite directions. A former Microsoft team member described the gap to <em>The Verge</em>: people would look at the principles coming from the Office of Responsible AI and not know how they applied. The Ethics and Society team existed to close that gap. It was eliminated precisely when the gap was widest. The production of AI governance signals (principles, commitments, frameworks) is cheap relative to deployment. The production of AI governance substance, the actual constraint of deployment in response to identified risks, is expensive, because it means accepting competitive disadvantage. When the signal and the substance diverge, institutions optimize for the signal.</p><p>The European AI Act is the most serious attempt to create binding governance with actual enforcement consequences. Its implementation has been revealing. The Act's GPAI obligations entered force in August 2025, but the Commission's enforcement powers are delayed until August 2026: a year in which providers must comply but face no penalties for non-compliance. The rules for high-risk AI systems embedded in regulated products have an extended transition period until August 2027. Open-source models meeting certain criteria receive exemptions from several obligations. The Commission itself acknowledged that an informal enforcement grace period may be needed beyond the formal dates. The signal says: AI is now regulated. The infrastructure of enforcement says: not yet. And the deployment continues at pace.</p><p>I am not sure whether to call this cynicism or inevitability, and I think the uncertainty matters.
Governance frameworks produced within institutional environments that have a primary interest in the activity being governed will tend, under competitive pressure, to drift toward the production of accountability signals rather than substance. The incentive structure produces the same drift that compliance optimization produces in enterprise security. The framework becomes the performance of governance, not its instrument.</p><div><hr></div><h2>The Board as Structural Accomplice</h2><p>The board of directors occupies a particular position in this dynamic that deserves direct examination, because it is the board that is supposed to close the gap between institutional signals and institutional reality.</p><p>Board-level cybersecurity oversight has expanded dramatically in the past decade. SEC rules require disclosure of material cybersecurity incidents and of board expertise in cybersecurity risk. Audit committees now routinely receive security briefings. Many boards have added CISO presentations to their regular agenda. The signal says: boards are taking cybersecurity seriously.</p><p>The substance is more complicated. A board receiving a security briefing from a CISO is receiving a presentation designed by the very function it is supposed to oversee. The information is filtered through the organizational hierarchy that has its own incentives to present a reassuring picture. Board members, even those with cybersecurity backgrounds, are working from information that the management layer has curated. They are reading the documentation that the institution has produced about itself.</p><p>The structural problem is not that boards are negligent. Effective oversight of institutional security posture requires precisely the kind of adversarial, independent, reality-testing capacity that board governance is not structurally designed to provide. Boards receive information; they do not generate it. They evaluate representations; they do not independently verify them. They assess the quality of management's judgment; they cannot, in any practical sense, substitute their own.</p><p>The three suppression norms that Essay Two identified operate here with particular force. The hierarchy norm: the CISO presents upward to a board that has authority but not expertise to challenge technical claims, and the board defers to management's framing because the alternative requires independent investigation that governance structures do not support. The efficiency norm: board time is scarce, agendas compressed, and the presentation format itself favors assurance over uncertainty; a clean risk dashboard is a thirty-second read, while a qualified assessment of actual posture requires an uncomfortable conversation that may not resolve within the allocated time. The social grace norm: naming the gap between documented posture and actual posture, in a boardroom setting, is an implicit accusation that management has been misrepresenting its own security. No CISO who wants to maintain a functional relationship with the C-suite will make that claim without extraordinary evidence, and the gap, by its nature, produces inarticulate unease rather than extraordinary evidence.</p><p>The result is a board oversight function that operates primarily at the signal level, evaluating the quality and coherence of the accountability documentation, rather than at the source level, evaluating the actual correspondence between that documentation and organizational reality. 
The board becomes the most senior level of the suppression mechanism, not because its members are captured or dishonest, but because the institutional architecture of oversight does not give them the instruments to do otherwise.</p><p>This is the closing of the loop, and it is worth tracing carefully. The CISO who might name the gap is suppressed by organizational culture. The internal audit function that might surface it is constrained by scope limitations and client relationships. The external auditor is not structurally incentivized to produce findings the organization is not prepared to address. The regulator evaluates the signal because the signal is what has been submitted. And the board, sitting at the top of this chain, receives the output of each prior suppression and processes it as assurance.</p><p>The alarm fires at each level, in each function, in each mind that encounters the gap between what the documentation says and what the systems do. And at each level, the institutional suppression mechanism engages. Not through conspiracy. Through structure.</p><div><hr></div><h2>The Distinction That Changes Everything</h2><p>There is a distinction that the institutional uncanny valley makes available, and it is the most important practical implication of this entire analysis. It is also, I think, the point at which the argument stops being diagnostic and becomes actionable.</p><p>The distinction is between frameworks that are adversarially designed and frameworks that are not.</p><p>A compliance framework that is not adversarially designed asks, implicitly: does this institution produce the signals of accountability? It tests documentation, process, and the coherence of stated practice. It takes the institution's representation of itself as the primary data source. It evaluates the signal.</p><p>A framework that is adversarially designed asks something different: does this institution actually do what it claims to do when the verification is inconvenient, when the pressure is high, when doing what it claims to do has real operational cost? It assumes that the gap between claimed and actual posture is a predictable feature of institutional behavior, not an exceptional failure.</p><p>Adversarial design does not require bad faith toward the institution being evaluated. It requires honest acknowledgment of the structural tendency toward accountability theater, and the deployment of verification approaches calibrated to that tendency rather than to the assumption of good faith compliance. The objection to adversarial design is usually framed as an objection to distrust, as if designing verification for the gap implies an accusation that the institution is dishonest. It does not. It implies that the institution is subject to the same structural pressures that produce the gap in every large organization, and that verification should be designed for the world as it is rather than the world the documentation describes.</p><p>Red team exercises are the clearest existing example at the technical level: rather than asking whether the security controls exist and are documented, they ask whether the security controls work when an actual adversary is trying to defeat them. The difference in what they find, compared to conventional compliance audits, is frequently severe. 
Organizations that are compliant by every conventional measure are penetrated by red teams in hours.</p><p>At the board level, adversarial design would mean that at least some of the information the board receives about cybersecurity posture is generated independently of the management layer: findings produced by a function that reports to the board directly, with scope and budget the management layer does not control. Internal audit is supposed to serve this function, and in some organizations it does. But in most, the independence is formal rather than operational: internal audit's scope is negotiated with management, its resources are allocated through the management budget process, and its findings are discussed with management before reaching the board. Genuine adversarial independence would require that the board's information about actual posture be produced by a function whose incentives are structurally aligned with finding the gap, not with managing it.</p><p>At the regulatory level, adversarial design would mean moving from documentation review to operational testing. Rather than evaluating whether an institution has submitted the correct filings, the regulator would test whether actual operations correspond to what the filings describe, under conditions that include surprise and scenarios the institution has not been briefed on. Financial regulators have partially implemented this through stress testing. The same principle applied to cybersecurity and AI governance would mean regulators who test actual resilience rather than documented resilience. This is more expensive than documentation review. It is also more useful by exactly the margin that separates the signal from the source.</p><p>At the AI governance level, adversarial design would mean evaluating not whether the institution has produced the required governance artifacts but whether those artifacts have produced any observable constraint on deployment decisions. Has any deployment been delayed or cancelled as a result of the governance framework? Has any revenue opportunity been declined because the risk assessment indicated unacceptable harm? If the answer to these questions is consistently no, the governance framework is producing signals without substance. The test of AI governance is the cost it has imposed on the institution that maintains it, not the quality of its artifacts.</p>
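<p>The substance test is simple enough to state as runnable pseudocode. The sketch below is an illustration under stated assumptions, not a published metric; the field names (<em>artifacts_published</em>, <em>deployments_delayed</em>, and so on) are invented for the example:</p><pre><code># Sketch of the governance substance test described above.
def governance_substance_test(record: dict) -> str:
    """Evaluate a governance program by the constraint it has imposed,
    not by the artifacts it has produced."""
    artifacts = record.get("artifacts_published", 0)
    constraints = (
        record.get("deployments_delayed", 0)
        + record.get("deployments_cancelled", 0)
        + record.get("revenue_declined_on_risk", 0)
    )
    if constraints == 0 and artifacts == 0:
        return "no governance activity"
    if constraints == 0:
        # Many artifacts, zero observable constraint: the signature
        # of signal production without substance.
        return "signals without substance"
    return "governance with observable cost"

# A program with a rich artifact record and no constraint history:
print(governance_substance_test({"artifacts_published": 12}))
# prints: signals without substance
</code></pre>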
<p>None of these are easy to implement. All of them create friction, expense, and organizational discomfort. The question is whether that friction is more expensive than what the institutional uncanny valley costs when the adversary, or the crisis, arrives. The carriers that had frameworks and were owned for eighteen months provide one answer. The federal systems that had documented controls and lost them under political pressure provide another.</p><div><hr></div><h2>The Hardest Admission</h2><p>There is a version of this argument that is politically comfortable: compliance frameworks are imperfect, and we should improve them. That version is not wrong. But it is not the argument I am making.</p><p>The argument I am making is harder. The institutional tendency toward accountability theater is a structural feature of how large organizations under regulatory pressure respond to the incentives that compliance creates, not a correctable defect in the compliance architecture. Better frameworks will produce better theater. More rigorous standards will produce more rigorous performance of compliance with those standards. The gap between signal and source will move, will narrow at the edges, will be more expensive to maintain; but it will persist, because the forces that generate it are structural, not accidental.</p><p>This does not mean governance frameworks are useless. It means that their function needs to be honestly understood. They raise the floor. They make casual non-compliance costly and visible. They create accountability for the largest and most obvious gaps. They produce, at minimum, a record against which failures can be evaluated in retrospect. These are real contributions.</p><p>What they cannot do, by design, is close the gap between institutional performance of accountability and institutional reality of it. That gap is closed only by the things that are hardest to systematize: genuine adversarial testing, organizational cultures that make naming the gap safe rather than costly, leadership that treats uncomfortable findings as intelligence rather than threat.</p><p>And, to bring this essay back to the alarm we have been following since the first essay in this series, it is closed by the people who notice the wrongness, who feel the incoherence between what the documentation says and what the systems do, and who have both the personal capacity and the organizational permission to say, plainly, what they see. Not the frameworks. Not the auditors. The people who sit in the room where the gap is visible and choose to name it, knowing the professional cost.</p><p>The institutional uncanny valley is a condition to be managed continuously, not a problem to be solved once: through the design of verification that assumes the gap will be present, through the protection of the people who detect it, and through the honest acknowledgment that no framework, however rigorous, eliminates the structural incentive to produce signals without substance. The compliance framework raises the floor. It does not close the gap. And the gap is where the adversary lives, whether that adversary is a nation-state actor with eighteen months of patience, a political force with operational speed, or simply the accumulated weight of institutional self-deception.</p><p>Those people, the ones who carry the alarm despite the institutional pressure to suppress it, are the subject of Essay Five.</p><div><hr></div><p><em>Next: Essay Five &#8212; The Unsuppressed. On the structural question of what happens when the alarm cannot be overridden, and what it would mean to design institutions that protect the alarm rather than silence it.</em></p><div><hr></div><h2>Sources</h2><h3>Case Studies from Prior Series</h3><p>Brondani, M. <em>The Compound Vulnerability</em> (essay series). Published at marcobrondani.com. (Salt Typhoon intrusion analysis; federal access control failures accompanying the Department of Government Efficiency deployment in early 2025.)</p><h3>Salt Typhoon</h3><p>The Salt Typhoon intrusion into U.S. telecommunications networks was documented across multiple government and industry sources in late 2024 and early 2025, including advisories from CISA and the FBI. The carriers maintained compliance frameworks and regulatory filings throughout the period of compromise, which lasted approximately eighteen months before detection.
The essay references these facts as analyzed in the author's earlier <em>Compound Vulnerability</em> series.</p><h3>Federal Access Controls (DOGE)</h3><p>The Department of Government Efficiency's access to Treasury payment systems, OPM personnel databases, and Social Security Administration records in early 2025 was documented by multiple news organizations and in congressional testimony. The essay treats these events as structural case studies rather than political commentary, focusing on the gap between documented access controls and their operational resilience under political pressure.</p><h3>AI Governance</h3><p>Google. (2019). "An external advisory council to help advance the responsible development of AI." Google Blog, March 26, 2019. The Advanced Technology External Advisory Council (ATEAC) was dissolved on April 4, 2019, nine days after its announcement. Reported by Vox, MIT Technology Review, VentureBeat, and others.</p><p>Newton, C. (2023). "Microsoft just laid off one of its responsible AI teams." <em>Platformer</em>, March 13, 2023. Microsoft's Ethics and Society team, once approximately thirty employees, was eliminated during layoffs affecting 10,000 employees, during the same period the company was investing over $11 billion in OpenAI.</p><p>Schiffer, Z. (2023). "Microsoft lays off entire ethics and society team within its AI organization." <em>The Verge</em>, March 13, 2023. Former employee quote: "People would look at the principles coming out of the Office of Responsible AI and say, 'I don't know how this applies.'"</p><h3>European AI Act</h3><p>European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 (the AI Act). GPAI obligations entered force August 2025; Commission enforcement powers delayed until August 2026; high-risk AI system rules for regulated products with extended transition period until August 2027. Implementation timeline and enforcement grace period details from European Commission official communications.</p><h3>Compliance and Governance Frameworks Referenced</h3><p>SOC 2 (System and Organization Controls 2). Developed by the American Institute of Certified Public Accountants (AICPA).</p><p>ISO/IEC 27001. International standard for information security management systems. International Organization for Standardization.</p><p>NIST Cybersecurity Framework (CSF). National Institute of Standards and Technology, U.S. Department of Commerce.</p><p>OECD. (2019). <em>Recommendation of the Council on Artificial Intelligence</em> (OECD AI Principles). Organisation for Economic Co-operation and Development.</p><h3>SEC Cybersecurity Disclosure Rules</h3><p>U.S. Securities and Exchange Commission. (2023). "Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure." Final rule, effective December 2023. Requires disclosure of material cybersecurity incidents and of board expertise in cybersecurity risk oversight.</p><h3>Cross-Series References</h3><p>Brondani, M. Essay One: "The Alarm" and Essay Two: "Cold Empathy at Scale." <em>The Valley of False Signals</em>. Published at marcobrondani.com. (Suppression mechanism, three norms of hierarchy/efficiency/social grace, signal/source split formulation.)</p>]]></content:encoded></item><item><title><![CDATA[The Death of the Signal]]></title><description><![CDATA[There is a line that was crossed, and we did not notice when we crossed it. Voice, face, and writing style were once unforgeable. That practical infeasibility is gone. The alarm has nothing left to detect. 
The signal has died.]]></description><link>https://www.marcobrondani.com/p/the-death-of-the-signal</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-death-of-the-signal</guid><pubDate>Mon, 16 Mar 2026 06:06:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2c067d08-fcbb-4356-819b-3c96951923e2_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!z38Q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!z38Q!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!z38Q!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!z38Q!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!z38Q!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!z38Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!z38Q!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!z38Q!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!z38Q!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!z38Q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F61ab6087-20cf-411f-ae8b-3ce0f6d4d811_1536x1024.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>Essay Three of <em>The Valley of False Signals </em>series</h3><div><hr></div><p>There is a line that was crossed, and we did not notice when we crossed it.</p><p>For most of the history of electronic communication, the signals of human presence were practically unforgeable. A voice was a voice, not a statistical reconstruction of a voice, not a synthesis trained on hours of recordings, but the acoustic output of a specific larynx, shaped by a specific mouth, carrying the micro-variations of breath and hesitation that no recording technology of the time could plausibly reproduce in real time. A face was a face. A signature was a signature. Even as forgery existed (it always has) the cost of producing a convincing forgery was high enough that the attempt itself was rare, and the imperfections were usually detectable by someone paying attention.</p><p>These practical constraints were not just inconveniences for fraudsters. They were the load-bearing architecture of trust. Every authentication system ever built was constructed on the implicit assumption that certain signals of genuine human presence were costly enough to simulate that their presence could serve as evidence of legitimacy.</p><p>That assumption is no longer valid.</p><div><hr></div><h2>What the Valley Was, and What We Have Left Behind</h2><p>Mori's uncanny valley described a specific failure mode of simulation: the point at which a simulacrum becomes close enough to human that its imperfections become visible and disturbing. The valley was a region of maximum alarm, where the simulation was advanced enough to trigger the coherence check but imperfect enough to fail it. The alarm fired precisely because the simulation was <em>almost</em> good enough.</p><p>The implicit structure of that problem assumed that the alarm was a useful instrument. The simulation was detectable. The valley existed as a warning region precisely because there was something to warn against, a gap between signal and source that was large enough, with sufficient attention, to be felt.</p><p>Essay Two described what happens when that alarm is suppressed by social and organizational mechanics. This essay addresses something different: what happens when the gap closes. When the simulation becomes precise enough that the alarm has no incongruence to detect. When we have not suppressed the alarm but passed beyond the conditions that cause it to fire.</p><p>We are, I want to argue, at or near that crossing in several domains simultaneously. Not fully past it in every context; the alarm still fires at poorly constructed deepfakes, at synthetic text that carries the particular flatness of large language model outputs, at voice clones with subtle artifacts. But the trajectory is clear, the rate of improvement is accelerating, and the frontier of undetectable simulation is advancing faster than the frontier of detection.</p><p>Mori described the uncanny valley as a region to be avoided or crossed. We are crossing it, not by making simulations less humanlike (which was Mori's practical recommendation for robot designers) but by making them more humanlike. By making them precise enough that the prediction error mechanism has nothing to register. The question of what lies on the other side of the valley, what the world looks like when simulation achieves parity with reality, is not a question Mori asked, because in 1970 it was not a question that needed answering. 
It needs answering now.</p><div><hr></div><h2>The Three Collapses</h2><p>The death of the signal is occurring in three overlapping domains, at three different rates, and they need to be understood together before we can grasp what they mean in combination.</p><p>Voice identity collapsed first and fastest. In 2019, a UK-based energy company lost approximately &#8364;220,000 after its chief executive received a phone call from someone who sounded exactly like the head of the firm's German parent company. The voice, its tone, cadence, accent, was sufficiently precise that the executive carried out the transfer without hesitation. That case, at the time, represented the frontier. Six years later, voice cloning has moved from a research capability requiring hours of sample audio to a commercial service available for subscription fees measured in tens of dollars per month. Some implementations need as little as three seconds of clear audio to produce a clone with what researchers describe as an eighty-five percent voice match. The output is not a recording; it is a synthesis engine that can produce, in real time, that person saying anything. CrowdStrike's 2025 threat analysis documented a 442 percent increase in voice phishing attacks between the first and second halves of 2024 alone.</p><p>The voice was the oldest authentication signal. Before written records, before seals, before cryptographic keys, the recognition of a familiar voice was the primary mechanism for verifying identity. The brain is extraordinarily sensitive to vocal identity; we recognize people we know from a single word, often before they have finished their first sentence. That sensitivity, which was an asset in an environment where voice synthesis was impossible, becomes a liability in an environment where it is cheap. The very precision of our voice recognition now works against us: the more faithfully we trust a recognized voice, the more completely we are deceived when that voice has been synthesized.</p><p>Visual identity is close behind. In February 2024, the engineering firm Arup suffered the largest documented deepfake fraud to date: a finance worker, participating in what appeared to be a routine video conference with the company's CFO and other senior executives, authorized fifteen transactions totaling twenty-five million dollars. Every face on the call was generated in real time, with synchronized facial movements, realistic voices matched to each executive's known speech patterns, and natural body language. The simulation was precise enough that the alarm did not fire, not because it was suppressed, but because there was nothing for it to detect. A year later, a finance director in Singapore fell to an almost identical structure. The attackers had absorbed the lesson of prior coverage: they proactively suggested a video call, using the apparent willingness to verify as a mechanism for producing false confidence. What these cases demonstrate is not just the quality of the simulation but its social engineering integration. The deepfake is not the attack; it is the resolution of the final friction point in a fundamentally psychological attack. The technology removes the last signal that would allow the alarm to fire. Meanwhile, synthetic identity fraud, the construction of entirely fictitious people with generated faces, fabricated histories, and synthetic documentation, has reached industrial scale. Experian's 2024 fraud data documented a sixty percent increase in false identity cases over the prior year.
The Federal Trade Commission estimates that synthetic identity fraud accounts for eighty to eighty-five percent of all identity fraud cases in the United States, with costs to the financial industry exceeding thirty billion dollars.</p><p>The collapse of textual identity may be the most pervasive and least discussed, because it operates in the medium that most professional communication uses. A 2025 study in <em>Expert Systems with Applications</em> tested fully automated AI spear-phishing campaigns against human expert campaigns: the AI-generated emails achieved a click-through rate of fifty-four percent, identical to experienced human social engineers, at up to a fiftyfold cost reduction for large-scale campaigns. The spear-phishing email that references the correct operational context, mirrors the target's communication style, and sounds exactly like the person it claims to be from was once the product of hours of human research. It is now the output of an automated pipeline that costs fractions of a cent per target.</p><p>The implications extend beyond phishing. Large language models can produce text that is not just grammatically correct and contextually coherent but stylistically matched to a specific individual. Given a corpus of a person's writing, emails, reports, social media posts, a sufficiently capable model can produce new text that carries the statistical fingerprint of that person's style. We authenticate email, to a large degree, by feel: by the quality of the writing, the characteristic phrasings, the particular way a colleague structures a request. When those markers can be synthesized from a training corpus, the informal authentication layer collapses.</p><div><hr></div><h2>The Operation That Combines All Three</h2><p>Since at least 2022, North Korean state-sponsored operatives have been infiltrating technology companies worldwide by posing as remote IT workers. Call it what it is: an identity synthesis operation conducted at national scale, integrating all three collapses into a single sustained effort.</p><p>GitHub's 2025 analysis documented a development team that created at least 135 synthetic identities using scraped photographs, AI image generators, and face-swapping tools, then used those images to create fraudulent passports that verified successfully in over forty percent of attempts. The scale is significant: the DOJ's June 2025 enforcement actions revealed that a single facilitator network had generated over seventeen million dollars in revenue from jobs at 309 US companies including Fortune 500 firms. CrowdStrike found the number of infiltrated companies grew 220 percent over twelve months, with operatives penetrating more than 320 organizations. During live video interviews, operatives use real-time face-swapping technology, allowing a single operator to interview for the same position multiple times under different synthetic personas. Palo Alto Networks' Unit 42 demonstrated that a researcher with no prior deepfake experience could create a synthetic identity convincing enough for job interviews in seventy minutes using consumer hardware. The textual layer completes the simulation: AI is used to fabricate resumes, prepare for interview questions in real time, mimic cultural fluency in English, and maintain ongoing workplace communications once hired.</p><p>This campaign matters because it represents something qualitatively different from the spectacular deepfake fraud. It is a sustained inhabitation of trusted space.
<div><hr></div><h2>The Operation That Combines All Three</h2><p>Since at least 2022, North Korean state-sponsored operatives have been infiltrating technology companies worldwide by posing as remote IT workers. Call it what it is: an identity synthesis operation conducted at national scale, integrating all three collapses into a single sustained effort.</p><p>GitHub's 2025 analysis documented a development team that created at least 135 synthetic identities using scraped photographs, AI image generators, and face-swapping tools, then used those images to create fraudulent passports that verified successfully in over forty percent of attempts. The scale is significant: the DOJ's June 2025 enforcement actions revealed that a single facilitator network had generated over seventeen million dollars in revenue across 309 jobs at US companies including Fortune 500 firms. CrowdStrike found the number of infiltrated companies grew 220 percent over twelve months, with operatives penetrating more than 320 organizations. During live video interviews, operatives use real-time face-swapping technology, allowing a single operator to interview for the same position multiple times under different synthetic personas. Palo Alto Networks' Unit 42 demonstrated that a researcher with no prior deepfake experience could create a synthetic identity convincing enough for job interviews in seventy minutes using consumer hardware. The textual layer completes the simulation: AI to fabricate resumes, prepare for interview questions in real time, mimic cultural fluency in English, and maintain ongoing workplace communications once hired.</p><p>This campaign matters because it represents something qualitatively different from the spectacular deepfake fraud. It is a sustained inhabitation of trusted space. Synthetic humans, complete with professional histories and ongoing behavioral patterns, operating inside organizations as trusted colleagues for months. The alarm does not fire because there is nothing for it to detect. The persona is complete. The signals of genuine presence are all present. They are all synthetic. And the gap between signal and source has been closed so completely that colleagues, managers, and HR departments process these personas as real people for months at a time.</p><p>This is what the post-valley condition looks like in practice: not a single spectacular fraud but a quiet occupation of the spaces where trust is assumed.</p><p>I find this the most unsettling case in the entire series, and I think the reason is that it inverts the emotional register of deception. The Arup deepfake was spectacular; this is quiet. It is the difference between a smash-and-grab and a neighbor who was never who you thought they were.</p><div><hr></div><h2>The Authentication Assumption</h2><p>Every framework for verifying identity is built on what we might call the authentication assumption: that genuine presence leaves signals that are either inherently unforgeable or sufficiently costly to forge that their presence constitutes reasonable evidence of legitimacy.</p><p>The history of authentication is the history of this assumption being challenged and adapted to. Signatures became forgeable, so we added notarization. Identity documents became falsifiable, so we added biometrics. Passwords became vulnerable to brute force, so we added multi-factor authentication. Each adaptation assumed that the new signal was costly enough to forge that it retained evidentiary value. Voice, face, and writing style were in the "inherently unforgeable" category, not because forging them was technically impossible, but because doing so in real time, at scale, was practically infeasible.</p><p>That practical infeasibility is gone. What remains is a set of authentication systems built on assumptions that no longer hold, protecting infrastructures that have not yet absorbed what that means. NIST's digital identity guidelines are under revision precisely because the threat model they were built for has been rendered obsolete. The revision process is ongoing. The threat is not waiting for it to conclude.</p><p>The governance crisis here runs deeper than the technical problem, and I'm not sure the security industry has fully reckoned with it. Boards and executives are making risk decisions based on assurance frameworks that have not been updated to reflect the collapse of their foundational assumptions. CISOs are defending perimeters with tools calibrated for threat models that no longer accurately describe the actual attack surface. Regulators are enforcing compliance with standards that were written before the authentication assumption broke. The crisis is not that we lack better signals. The crisis is that the entire intellectual architecture within which security decisions are made was built for a world in which certain kinds of signals could be trusted, and that world is the one this essay has been describing the end of.</p><div><hr></div><h2>The Post-Valley Condition</h2><p>What does the world look like on the other side of the uncanny valley?</p><p>Mori's graph suggested that the recovery came when the simulation became indistinguishable from the real. The practical implication was a kind of epistemic normalcy: you could not tell the difference, therefore you would not feel the alarm.
But that picture assumes you do not <em>know</em> you are past the valley. It assumes that the improvement in simulation quality is matched by a corresponding reduction in your awareness that simulation is occurring.</p><p>That is not the situation we are in. We are approaching the crossing with full awareness that we are approaching it. The sophistication of synthetic voice, face, and text is a public fact, discussed in security conferences, documented in incident reports. We know that voices can be cloned, faces synthesized, and writing style matched. We know that the signals that used to tell us we were communicating with a genuine person may no longer be reliable.</p><p>The post-valley condition, for us, is therefore not Mori's theoretical comfort. It is something more destabilizing: the knowledge that the signals exist, combined with the loss of confidence that they mean what they used to mean. I am aware that this formulation risks sounding alarmist, and I want to be precise about why I think it is not. Two responses are possible, and both are problematic. Undifferentiated suspicion, treating every communication as potentially synthetic, is operationally unsustainable; organizations cannot function if every communication requires the verification level appropriate to a high-risk financial transaction. Exhausted credulity is the more likely outcome, and the more dangerous one. The population slowly absorbs the knowledge that signals can be faked, and slowly accommodates by deciding, implicitly, to mostly act as if they can't. Not from naivety. From the pragmatic judgment that life cannot be conducted at the alert level the threat technically requires. The alarm becomes background noise. Suppression becomes default.</p><p>A 2025 study by iProov found that only 0.1 percent of participants correctly identified all fake and real media shown to them. Seventy percent reported that they were not confident they could distinguish a real voice from a cloned one. These are figures describing a population that has been overwhelmed by the threat, not one that has adapted to it.</p><p>This is the new attack surface. Not the alarm that can be manipulated into suppression, as Essay Two described. The alarm that has been ground down into irrelevance by the sheer volume of the threat. The post-valley condition does not require defeating the alarm. It requires exhausting it.</p><div><hr></div><h2>The Adversarial Parity Problem</h2><p>There is a dynamic in the synthetic media arms race that deserves direct attention, because it has no clean resolution and the security industry has been reluctant to say so plainly.</p><p>Detection and generation are structurally linked. The most effective approaches to detecting synthetic media use machine learning models trained to identify artifacts of synthetic generation. But the generation models improve in response to detection signals, in many cases using detection feedback directly as a training signal. The result is a co-evolutionary dynamic in which each improvement in detection produces a corresponding improvement in generation.</p>
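<p>The loop is easier to see stripped to a toy. In the sketch below, a linear "detector" is fit to separate real from synthetic feature vectors, and the "generator" then walks a synthetic sample along the detector's own scoring direction until it classifies as real. Every name and number is invented for illustration, and no real detector is this simple. What the toy preserves is the structure: the detector's decision function is itself the generator's training signal.</p><pre><code>import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors: "real" and "synthetic" media summarized as 2-D artifact scores.
real      = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(200, 2))
synthetic = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))

def fit_detector(real_x, synth_x):
    """Nearest-centroid linear detector: score above zero means 'looks real'."""
    w = real_x.mean(axis=0) - synth_x.mean(axis=0)             # direction toward 'real'
    b = -w @ (real_x.mean(axis=0) + synth_x.mean(axis=0)) / 2  # boundary at the midpoint
    return w, b

def evade(sample, w, b, step=0.05):
    """Generator update: climb the detector's own score until it crosses the boundary."""
    while sample @ w + b <= 0:
        sample = sample + step * w / np.linalg.norm(w)
    return sample

w, b = fit_detector(real, synthetic)
forged = evade(synthetic[0].copy(), w, b)
print("forged sample now scores as real:", bool(forged @ w + b > 0))
</code></pre><p>Refit the detector on the forged samples and w moves; the moment it moves, the generator has a new gradient to climb. Run the loop long enough and the detector is no longer separating real from synthetic. It is specifying, with precision, what the next generation of synthetic must look like.</p>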
<p>The liveness detection domain makes this concrete. When presentation attack detection improved, attackers moved to injection attacks that bypass the camera entirely. When vendors developed injection detection, attackers moved to compromising device integrity through emulators and hardware tampering. In December 2025, iProov's Red Team published through MITRE's ATLAS framework a demonstration that a commercially available face-swapping tool could evade liveness detection on financial and banking mobile applications. The vulnerability was rated critical. The technique required no specialized AI expertise. And injection attacks surged nine-fold in 2024, fueled by a twenty-eight-fold spike in virtual camera exploits.</p><p>The pattern that liveness detection reveals defines the post-valley condition: every detection method that relies on the costliness of forgery eventually fails as that cost decreases. The defense was never the detection method itself; it was the economic barrier that made defeating it impractical. When the barrier collapsed, the detection method became a ritual. The signal retained its form while losing its substance. The email domain still displays. The SMS code still arrives. The liveness check still runs. The signal/source split that Essay One identified has extended to the authentication infrastructure itself.</p><p>Detection and generation share fundamental access to the same underlying techniques, and generation has a structural advantage: it only needs to produce one example convincing enough to defeat a specific detection system, while detection needs to identify all synthetic examples across all methods. The malware arms race provides the historical analogy. Malware and antivirus have been co-evolving for four decades. Antivirus technology is far more sophisticated than it was in 1990. Malware is also far more sophisticated, and the fundamental dynamic has not been resolved in favor of defenders. The endpoint detection and response industry exists precisely because the co-evolution produces a sustained market for defense tools that are never definitively sufficient. The synthetic media arms race will produce the same dynamic. Detection at the signal level will be a useful supplementary tool, producing actionable signals in a subset of cases. It will not be a foundation.</p><div><hr></div><h2>What Authentication Looks Like After the Signal Dies</h2><p>The honest answer is that the field has not yet fully confronted this question. The working assumption in most authentication frameworks is still that signal degradation is a problem to be solved at the signal level: better detection models, more sophisticated liveness checks. These are real investments made in good faith. They are also, structurally, fighting the last war.</p><p>The more durable approaches are not signal-based. They are context-based, process-based, and cost-based.</p><p>Context-based authentication shifts the question from "is this signal genuine?" to "is this request coherent with the established context of this relationship?" A request for a large financial transfer is authenticated not by the voice on the phone but by whether it fits the established pattern of how this counterparty communicates and transacts. Anomaly detection of the request and its contextual fit is more robust to signal synthesis than signal verification.</p><p>Process-based authentication embeds resistance to synthetic signals in process design rather than detection technology. Out-of-band verification through pre-established channels, time delays that prevent urgency-driven compliance, dual-authorization requirements that cannot be satisfied by a single compromised communication channel: these are process designs that remain effective even when individual signals are untrustworthy.</p><p>Cost-based authentication shifts the problem to the economics of the attack. If every authorization attempt requires actions with real-world costs (physical presence, multi-party coordination, time delays that increase operational risk of discovery), the cheapness of signal synthesis is offset by the costs embedded in the authorization process.</p>
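<p>These three approaches share a property the signal-based ones lack: they can be written down as process logic, with no reference to whether any voice or face is genuine. The sketch below is a minimal illustration, not a reference implementation; the record fields, channel names, thresholds, and the counterparty baseline are all assumptions invented for the example. What it encodes are the structural claims: contextual fit is checked against relationship history, approval requires two independent channels, and nothing inside the request can waive the mandatory delay.</p><pre><code>from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class TransferRequest:                    # illustrative record; fields are assumptions
    counterparty: str
    amount: float
    received_at: datetime
    approvals: set = field(default_factory=set)   # names of independent channels used

HISTORY = {"acme-gmbh": {"typical_amount": 40_000.0}}   # hypothetical relationship baseline
MANDATORY_DELAY = timedelta(hours=24)
REQUIRED_CHANNELS = {"callback_known_number", "video_with_preagreed_challenge"}

def contextual_fit(req: TransferRequest) -> bool:
    """Context check: does the request cohere with this relationship's history?"""
    baseline = HISTORY.get(req.counterparty)
    return baseline is not None and req.amount <= 3 * baseline["typical_amount"]

def authorize(req: TransferRequest, now: datetime) -> bool:
    """Process check: dual control over independent channels, plus a delay
    that the content of the request itself has no way to shorten."""
    if not contextual_fit(req):
        return False
    if not REQUIRED_CHANNELS <= req.approvals:    # both channels must have confirmed
        return False
    return now - req.received_at >= MANDATORY_DELAY

req = TransferRequest("acme-gmbh", 95_000.0, datetime(2026, 3, 1, 9, 0))
req.approvals |= {"callback_known_number", "video_with_preagreed_challenge"}
print(authorize(req, now=datetime(2026, 3, 2, 10, 0)))   # True only once the delay has run
</code></pre><p>A perfect voice clone defeats none of this, because none of it asks whether the voice is genuine. It asks whether the process completed, and completing it costs the attacker coordination, exposure, and above all time.</p>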
<p>None of these are complete solutions. All of them introduce friction, and friction has costs. The calibration of security friction against operational efficiency is one of the defining problems of enterprise security governance, and it is never cleanly resolved. But the direction is clear: authentication frameworks built on the assumption of detectable genuine presence need to be rebuilt on the assumption of detectable genuine process, structures that are adversarially resistant not because the signal cannot be faked but because the process cannot be completed without costs that forgery cannot absorb.</p><div><hr></div><h2>The Civilizational Dimension</h2><p>The collapse of signal authenticity extends well beyond security into something epistemic, and the epistemic dimension is larger than any organizational response can address.</p><p>Trust, at every scale, runs on signals: the signal that a voice is genuine, that a document is authentic, that an institution is doing what it says it is doing. These signals are not the trust itself; they are evidence that trust is warranted, the observable outputs of processes that, when functioning correctly, are causally connected to the trustworthiness they indicate. When signals can be produced without that causal connection, when the voice can be synthesized without the person, the document fabricated without the process, the evidentiary value of signals collapses. Not gradually. Structurally.</p><p>We are beginning to live in that collapse. And the psychological response to it, the exhausted credulity, the suspended judgment, the gradual accommodation to a world in which signals cannot be taken at face value, is not neutral. It reshapes the conditions under which collective action, institutional authority, and social cooperation are possible. A population that has learned, at a deep level, that the signals of authenticity are not reliable will respond to that knowledge in ways that extend far beyond cybersecurity. The institutions that have relied on the apparent authenticity of their signals to maintain legitimacy (governments, corporations, regulatory bodies, the media) will find that legitimacy increasingly difficult to sustain.</p><p>The North Korean synthetic worker campaign illustrates this at a precise scale. When a company discovers that a colleague they have worked alongside for a year was a state-sponsored synthetic identity, the damage extends beyond the data exfiltrated or the salary paid. It reaches the trust infrastructure itself: every subsequent hire, every video call, every new colleague's face is now shadowed by the knowledge that the signals of presence were once completely, convincingly false.</p><p>This is the deeper cost of the authentication crisis: not the individual fraud that succeeds, but the aggregate erosion of the signal infrastructure on which all collective trust depends. The Arup deepfake cost one company twenty-five million dollars. The erosion of the epistemic foundation of organizational communication costs something much harder to quantify and much harder to restore.</p><div><hr></div><p>Essays One and Two described a world in which the alarm works but is suppressed.
This essay describes a world in which the conditions for the alarm to fire are eroding.</p><p>Essay Four examines a dimension of this problem that is neither technical nor psychological but institutional: organizations and governance bodies that produce accountability signals systematically disconnected from the accountability they purport to represent. The signal/source split applied not to the voice on the phone or the face on the screen, but to the entire apparatus of institutional trust.</p><p>The deepfake CFO exploits a synthesized signal. The narcissistic institution exploits a structural one. Both rely on the same underlying condition: the possibility of producing outputs that signal trustworthiness without the processes that would causally generate it. The alarm has the same structure in both cases. What suppresses it is different. And that difference is what Essay Four is about.</p><div><hr></div><p><em>Next: Essay Four &#8212; The Narcissistic Institution. On governance theater, compliance as performance, and the organizations that have learned to produce the signals of accountability without its substance.</em></p><div><hr></div><h2>Sources</h2><h3>Voice Cloning</h3><p>Stupp, C. (2019). "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case." <em>The Wall Street Journal</em>, August 30, 2019. (UK energy company, &#8364;220,000 voice clone fraud. Insurance firm Euler Hermes, subsidiary of Allianz SE, provided case details.)</p><p>CrowdStrike. (2025). <em>2025 Global Threat Report</em>. CrowdStrike Holdings, Inc. (442% increase in voice cloning usage between H1 and H2 2024; deepfake-enabled fraud losses; North Korean synthetic worker campaign data.)</p><h3>Deepfake Fraud</h3><p>Arup deepfake fraud (2024). Finance worker authorized $25 million across fifteen transactions during a video conference in which all participants were real-time deepfakes. Reported by Hong Kong police and multiple sources including CNN, February 2024.</p><p>Singapore deepfake fraud (2025). Finance director at multinational firm targeted via deepfake video call with multiple synthetic executives. Reported by <em>The Straits Times</em> and cybersecurity press, March 2025.</p><h3>Synthetic Identity Fraud</h3><p>Experian. (2024). <em>2024 Identity and Fraud Report</em>. Experian Information Solutions, Inc. (Sixty percent increase in false identity cases year over year.)</p><p>Federal Trade Commission. Synthetic identity fraud estimates: eighty to eighty-five percent of all identity fraud cases in the United States. Referenced in multiple FTC publications and testimony.</p><h3>AI-Automated Spear Phishing</h3><p>Heiding, F., Schneier, B., Vishwanath, A., &amp; Laszka, A. (2025). "Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models." <em>Expert Systems with Applications</em>. (AI spear-phishing achieved fifty-four percent click-through rate, identical to human experts, at up to fifty times cost reduction.)</p><h3>North Korean Synthetic Worker Campaign</h3><p>GitHub Security Lab. (2025). Analysis of North Korean development team creating 135+ synthetic identities for infiltration operations.</p><p>U.S. Department of Justice. (2025). Enforcement actions, June 2025. North Korean operatives employed at 100+ US companies; single facilitator network generating $17 million across 309 jobs.</p><p>CrowdStrike. (2025). <em>2025 Threat Hunting Report</em>.
(Famous Chollima campaign; 220% growth in infiltrated companies; 320+ organizations penetrated.)</p><p>Palo Alto Networks, Unit 42. (2025). Demonstration that a synthetic identity convincing enough for job interviews could be created in seventy minutes using consumer hardware.</p><p>Pindrop. (2025). Screening data: one in four DPRK-linked job applicants used deepfake technology during live interviews.</p><h3>Liveness Detection and Authentication</h3><p>iProov. (2025). <em>2025 Biometric Threat Intelligence Report</em>. (0.1% of participants correctly identified all fake and real media; Red Team demonstration via MITRE ATLAS framework of liveness evasion on financial applications.)</p><p>Sumsub. (2025). <em>2025 Identity Fraud Report</em>. (AI fraud agents combining generative AI, automation, and reinforcement learning; nine-fold surge in injection attacks; twenty-eight-fold spike in virtual camera exploits.)</p><h3>Authentication Frameworks</h3><p>National Institute of Standards and Technology (NIST). <em>Digital Identity Guidelines</em> (SP 800-63 series), revision in progress. The authoritative US government framework for identity assurance, under revision to address generative AI threats to biometric and signal-based authentication.</p><h3>Cross-Series References</h3><p>Brondani, M. Essay One: "The Alarm" and Essay Two: "Cold Empathy at Scale." <em>The Valley of False Signals</em>. Published at marcobrondani.com.</p><p>Mori, M. (1970). Bukimi no tani [The uncanny valley]. <em>Energy</em>, 7(4), 33&#8211;35.</p>]]></content:encoded></item><item><title><![CDATA[Cold Empathy at Scale]]></title><description><![CDATA[The finance worker's alarm did not fire. Or it fired, and did not survive the context. Social engineering has been solving the wrong problem for thirty years. The vulnerability is not detection.
It is the culture that suppresses it.]]></description><link>https://www.marcobrondani.com/p/cold-empathy-at-scale</link><guid isPermaLink="false">https://www.marcobrondani.com/p/cold-empathy-at-scale</guid><pubDate>Thu, 12 Mar 2026 05:27:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/77a6068e-c243-4c11-bafb-dff958c527b5_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5c0D!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5c0D!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5c0D!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5c0D!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5c0D!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5c0D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5c0D!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!5c0D!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!5c0D!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!5c0D!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6123d867-e2c3-4ec7-a31a-5b7befd0da89_1536x1024.png 
1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>Essay Two of <em>The Valley of False Signals </em>series</h3><div><hr></div><p>In 2024, a senior finance employee at Orion S.A., a global specialty chemicals company headquartered in Luxembourg, received a series of emails requesting wire transfers. The emails appeared to come from company executives, referenced legitimate business contexts, and followed the communication patterns the employee was accustomed to. Over multiple transactions, approximately sixty million dollars was transferred to accounts controlled by the attackers.</p><p>No deepfakes were involved. No voice cloning. No synthetic video. The attack used nothing more than email, the right names, the right context, the right organizational knowledge, and a sophisticated understanding of how a specific person in a specific role at a specific company would respond to a request from apparent authority under time pressure.</p><p>The coverage focused on the amount lost and the procedural failures. That framing treats the incident as a problem of insufficient controls: if the verification procedures had been followed, the attack would have failed. But the verification procedures existed. They were known. They were bypassed, not because the employee was unaware of them, but because the social engineering was sophisticated enough to make following them feel unnecessary. The signals of legitimacy were sufficient to engage the suppression mechanism.</p><p>The finance worker's alarm did not fire. Or it fired, and did not survive the context.</p><div><hr></div><h2>The Attack That Was Always Psychological</h2><p>Social engineering, the manipulation of people rather than systems, is the dominant attack vector in enterprise security. Not because technical vulnerabilities don't exist, but because attacking people is, for a sophisticated adversary, almost always the path of least resistance. A zero-day exploit requires finding an unpatched vulnerability, developing specialized code, deploying it without triggering detection. A well-constructed pretexting call requires understanding the target's organizational context, constructing a plausible narrative, and exploiting the psychological mechanisms that govern trust.</p><p>The Verizon 2025 Data Breach Investigations Report found that the human element (errors, social engineering, and credential misuse) was a factor in approximately sixty percent of all confirmed breaches, a figure that has remained stubbornly consistent year over year despite billions spent on awareness programs. That figure should stop every CISO cold. We have spent three decades building technical defenses, firewalls, endpoint detection, SIEM platforms, zero-trust architectures, and the dominant attack vector is still the human. As the technical perimeter has hardened, the human perimeter has been exposed as the softer target. The attacker simply went around.</p><p>But even this framing, the human as the weakest link, misses something. It treats the human factor as a problem of insufficient training, insufficient alertness, insufficient procedural compliance. If people would just follow the protocols, the attack would fail.</p><p>This is approximately what security awareness training teaches. And security awareness training has failed, by every meaningful metric, to reduce the incidence of successful social engineering attacks. 
The reason is that it addresses the wrong problem.</p><div><hr></div><h2>What Security Awareness Training Gets Wrong</h2><p>The standard curriculum teaches people to recognize the signals of deception: do not click links in unsolicited emails, verify requests for wire transfers through a separate channel, be suspicious of urgency, check the sender's domain.</p><p>These are reasonable heuristics. They address the detection layer, the capacity to recognize that something is off.</p><p>But the actual vulnerability is not detection. As Essay One established, the alarm is generally working. People often have a sense, even during a successful attack, that something is not quite right. Post-incident interviews with victims regularly surface versions of this: "I had a feeling but I didn't say anything." "Something seemed off but it was hard to say what." "I didn't want to make trouble."</p><p>The alarm fired. Then it was suppressed.</p><p>Security awareness training teaches people to recognize attack signals. It does not address, and in many respects actively undermines, the capacity to act on a feeling that cannot be fully articulated. It teaches people to demand articulable evidence before they trust their unease. In doing so, it reinforces exactly the mechanism that sophisticated social engineers exploit.</p><p>People detect it, correctly, and then override the detection. The failure is structural, not educational. The suppression of unverifiable alarm is a feature of professional culture, organizational hierarchy, and the social norms that govern how uncertainty is permitted to be expressed in institutional settings. I keep coming back to this point because it reframes the entire defense problem: these norms did not emerge by accident, and they cannot be addressed by a forty-minute annual training module.</p><div><hr></div><h2>The Anatomy of Cold Empathy in Operation</h2><p>In Essay One, I introduced cold empathy: the cognitive modeling that narcissists and psychopaths deploy without genuine affective resonance, documented from Cleckley's "mask of sanity" through Hare's psychopathy research, with Sam Vaknin providing the formulation that connects it to the uncanny valley. The cognitive element of empathy is present; its emotional correlate is not.</p><p>The skilled social engineer operates in this mode. Not because they are necessarily narcissists or psychopaths (though the profession does select for certain personality traits) but because the operational requirements are structurally identical to what cold empathy produces. The social engineer does not need to feel their target's experience. They need to model it, accurately, for the duration of the attack: what the target wants to believe, what narrative will be most readily accepted, which authority figures carry the most weight, what urgency framing will suppress verification instincts.</p><p>Watch the anatomy of a successful vishing call and the cold empathy structure becomes visible.</p><p>It begins with research. The attacker knows the target's name, role, approximate tenure, and details about recent organizational events that provide context for the pretext. LinkedIn, company websites, press releases, and earlier-stage phishing provide most of this. Then the call opens with specific and accurate claims: the caller knows the target's name, their manager's name, details about their role. They reference recent events in terms that signal insider knowledge. The target's brain runs its coherence check. 
Does this person know things that only insiders know? Yes. The alarm does not fire.</p><p>Having established apparent legitimacy, the attacker introduces urgency, compressing the time available for reflection and verification. The verification behaviors that security training teaches require time. Urgency, applied correctly, makes those behaviors feel like a threat to the urgency itself. And here is where the most sophisticated social engineers distinguish themselves: they exploit professional identity. The attack narrative places the target in the role of the competent professional who takes the right action quickly. Refusing to cooperate is implicitly framed as the behavior of an obstructionist. The social engineer is not just asking for compliance; they are offering the target a flattering self-concept in exchange for it.</p><p>If the target expresses hesitation, and good targets often do, the attacker has a response ready. The hesitation is anticipated, treated as a misunderstanding rather than a threat. "I completely understand the concern; that's exactly what we'd expect from someone careful. Let me just explain a bit more about why this is urgent." The alarm was suppressed, politely, by framing alertness as an obstacle to legitimate authority.</p><p>This is cold empathy in operation. The attacker does not need to feel the target's experience. They need to model it well enough to anticipate its movements and manage them. They know, before the target does, that the alarm will fire at approximately this point, and they have a scripted response ready.</p><div><hr></div><h2>The Attacker as Organizational Expert</h2><p>There is a feature of sophisticated social engineering that awareness training almost never addresses, because it implicates something organizations do not want to examine about themselves.</p><p>The most dangerous social engineers attack specific people in specific organizational contexts, and their attack is calibrated to the culture, hierarchy, and behavioral norms of the target organization. A successful BEC attack against a manufacturing company exploits different vulnerabilities than one against a financial services firm; in manufacturing, the culture of operational urgency where delays have direct production consequences; in financial services, the culture of regulatory compliance where requests from apparently authoritative sources carry the implicit weight of a regulatory obligation.</p><p>The attacker's model of the organization is, in some ways, more accurate than the organization's model of itself. The organization believes it has security procedures. The attacker knows which procedures exist on paper and which ones are actually followed under pressure. The organization believes its employees are well-trained. The attacker knows, from the patterns of past attacks, which emotional levers produce compliance in what percentage of cases and under what organizational circumstances.</p><p>I started to write here that this represents a failure of organizational self-knowledge, but that framing is too gentle. It is more precise to say that the organization is structurally prevented from knowing itself accurately, because the same professional culture that makes cooperation possible also makes the honest assessment of one's own vulnerabilities socially impermissible. The attacker is looking specifically for the gaps. The organization is looking to confirm that the gaps don't exist. This asymmetry is not a correctable oversight. 
It is a structural feature of how hierarchical organizations process uncomfortable information about themselves.</p><div><hr></div><h2>The Professionalization of Deception</h2><p>Social engineering has undergone a professional transformation in the past decade that most security discourse has not fully absorbed.</p><p>Business email compromise generated reported losses of $2.77 billion in the United States alone in 2024, according to the FBI's Internet Crime Complaint Center. That figure reflects only reported losses; BEC is famously underreported. Those losses represent an industry. On the criminal underground, toolkits for BEC attacks (pre-researched target lists, email templates, playbooks for different organizational contexts, even customer support for operators who encounter unusual resistance) are available for subscription fees measured in hundreds of dollars monthly.</p><p>The industrialization of social engineering means the attacker does not need to be exceptional. They need to be systematic. They run enough attempts against enough targets that the statistical properties of human psychology, the percentage who will comply with an urgent request from apparent authority, the percentage whose alarm will fire but whose professional culture suppresses acting on it, generate a reliable return. This is cold empathy at scale in its most literal sense: not one skilled manipulator modeling one target, but a systematic operation applying a statistical model of human vulnerability across thousands of targets. The cognitive modeling is aggregated. The outputs are actuarial.</p><p>Phishing-as-a-service platforms have extended this industrialization further, providing the complete infrastructure: email delivery, landing page templates, credential harvesting backend, analytics dashboards showing which lures produced the most clicks in which industries. The operator provides only the targeting. The psychological model is baked into the platform. And agentic AI is beginning to extend it further still. Documented attacks in 2025 involved AI agents operating autonomously over extended periods, building synthetic professional profiles, cultivating relationships through legitimate channels over weeks before making a request. Essay Three will examine the most developed instance of this pattern: the North Korean state-sponsored campaign that used synthetic identities to infiltrate over three hundred companies.</p><div><hr></div><h2>Why Training Cannot Fix a Structural Problem</h2><p>The security industry's response to social engineering has been, for thirty years, predominantly educational. Train the user. Teach them the signals. Run simulated phishing campaigns. Measure click rates. The model has a seductive internal logic: if people are being deceived, they need to recognize deception; if they need to recognize deception, they need training.</p><p>The problem is empirical: it hasn't worked. Phishing click rates have remained stubbornly consistent. BEC fraud losses have grown year over year. The people falling for these attacks are not naive or untrained. Many of them have completed security awareness training in the previous twelve months. Some of them are themselves security professionals.</p><p>I have watched this cycle from the inside for long enough to feel the weight of it. The response from the training industry has been to add more training, make it more frequent, gamify the compliance, personalize the curriculum. More of the same. 
Because the model says the problem is insufficient awareness, the solution must be more awareness.</p><p>But if the problem is the suppression of awareness that already exists, then more training may be actively counterproductive. Consider what happens when a training module teaches someone to "verify unexpected requests through a separate channel." Good advice. But it teaches something implicit: that the appropriate response to an uncomfortable feeling is not to trust the feeling, but to run a verification procedure. If the procedure checks out, the uncomfortable feeling is supposed to be dismissed. A sophisticated attacker can defeat that procedure. They can spoof callback numbers. They can compromise the manager's email. They can set up a look-alike domain that passes a casual check. When verification appears to succeed but the attack is real, the training has actively suppressed the alarm by telling the target: you verified, so the feeling was wrong.</p><p>I should qualify this, because the argument I'm making risks sounding like an argument against all training, which it is not. Training has value at the margin. It raises the baseline. It catches the unsophisticated attacks, the spray-and-pray phishing that relies on volume rather than precision. What it cannot do is address the sophisticated attack that has already mapped the verification procedures and built its pretext to survive them. And it is the sophisticated attack, the one that models the target's psychology and manages the alarm, that produces the catastrophic losses. The awareness training model treats the alarm as an insufficient instrument that needs to be replaced by procedure. The correct model treats the alarm as a valuable instrument that needs to be protected from suppression.</p><div><hr></div><h2>The Suppression Mechanism in Professional Culture</h2><p>Where does the suppression pressure come from? Not primarily from the attacker, though skilled attackers manage it deliberately. The primary source is organizational culture, and it is generated by three forces that operate in combination.</p><p>Professional organizations are hierarchical, and hierarchy generates its own compliance pressure. A request from a superior carries authority independent of its content. Questioning a directive from the apparent CFO, even when the alarm is firing, requires overcoming a deeply ingrained professional reflex to defer upward. The social engineer exploits this by impersonating authority, or by referencing authority in ways that import this compliance pressure into the interaction. The hierarchy norm is not a pathology; it is functional for the ninety-nine percent of interactions in which the authority is legitimate. It becomes a vulnerability only when legitimate and illegitimate authority signals become indistinguishable.</p><p>Professional environments also select for efficiency: people who resolve requests quickly, who don't create unnecessary friction, who are responsive and decisive. The person who pauses every ambiguous request for extended verification is regarded as difficult, overcautious, a bottleneck. The social engineer's urgency framing exploits this by making the cost of verification feel like the cost of inefficiency. The target who stops to verify is, in the narrative the attacker has constructed, failing at their professional role.</p><p>And expressing distrust of someone presenting convincingly as a colleague violates basic professional courtesy. 
Saying "I'm not sure I believe you are who you say you are" to someone who has supplied the correct contextual details is, in most professional contexts, deeply awkward. It implies suspicion, which implies accusation. The social engineer's performance of normalcy makes the expression of the alarm feel like rudeness.</p><p>These three norms, hierarchy, efficiency, and social grace, combine to create a professional culture that is structurally hostile to the expression of unverifiable alarm. They do not represent individual failures. They represent the predictable operation of organizational culture in an adversarial environment it was not designed for.</p><div><hr></div><h2>The Insider Threat as Confirmation</h2><p>The social engineering problem has a darker inner layer: the insider threat. The insider has legitimate access, legitimate authority signals, and detailed knowledge of exactly where the gaps between stated and actual security posture are located. They don't need to research the organization; they live in it.</p><p>The 2024 Insider Threat Report found that eighty-three percent of organizations reported at least one insider attack, with the number experiencing eleven to twenty attacks in a year increasing fivefold from 2023. The Tesla breach of 2023, in which two former employees leaked the personal data of over seventy-five thousand individuals to a foreign media outlet, was not a technical exploitation. It was a decision by insiders who had legitimate access and used it for purposes the organization had not anticipated. Colleagues may have noticed something. The insider threat literature suggests they usually do. But the organizational culture that would need to convert that noticing into action is the same culture that suppresses the alarm in external social engineering: you do not speculate about a colleague's motives. You do not report a feeling you cannot justify.</p><p>The insider threat is the external social engineering problem inverted: instead of an outsider exploiting organizational suppression norms to prevent detection, an insider benefits from those same norms, which prevent colleagues from acting on accurate alarms.</p><p>And so insiders are identified, when they are identified, by exactly the same mechanism that fails in external social engineering: a feeling, imprecise and hard to articulate, that something is off about this person. That their interest in certain systems is slightly too focused. That their questions about access are slightly too specific. That something in the texture of their professional behavior is not quite coherent with everything else. The alarm fires. The suppression mechanism engages. The insider continues.</p><div><hr></div><h2>What Would Actually Work</h2><p>If the problem is the suppression mechanism rather than detection capacity, the solution space looks different. And if the suppression mechanism operates through three specific organizational norms (hierarchy, efficiency, and social grace) then effective defense must address those norms directly, not the detection layer they suppress.</p><p>Addressing the hierarchy norm requires more than written policy stating that employees may verify requests from superiors. It requires organizational cultures in which questioning a request from apparent authority is normal, expected, and cost-free, which means addressing the informal signals through which professional culture actually operates. Who gets promoted? Who gets praised? 
Whose caution is celebrated, and whose is criticized as obstruction? The policy is the documentation. The culture is the posture. And the gap between them is where the social engineer operates. Anyone who has run a security program in a hierarchical organization knows this gap intimately, and knows how difficult it is to close from below.</p><p>Addressing the efficiency norm requires inverting the organizational response to urgency. Any request that arrives with urgency should automatically trigger more scrutiny, not less, a rule simple enough to encode directly in authorization logic (as the sketch at the end of this section shows). The finance worker who refuses to execute a large transfer because something about the request felt off, even though they couldn't say what, needs to live in an organization where that decision is celebrated rather than criticized.</p><p>Addressing the social grace norm is the deepest challenge. It requires explicitly validating the alarm, teaching people that their sense of wrongness, even when it cannot be articulated, is worth pausing for. Not that it is always correct, but that it is always worth acknowledging as data rather than dismissing as irrationality.</p><p>None of these are training problems. They are organizational design problems. What they require is not the transmission of information but the restructuring of permission: the creation of organizational contexts in which the alarm's outputs are treated as intelligence rather than noise. Human risk management, the emerging field that frames security behavior as a function of organizational culture rather than individual training, is moving in this direction. But the industry's response to social engineering is still, predominantly, more training.</p>
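<p>The efficiency inversion, at least, is simple enough to state as logic, and stating it that way shows how little it has in common with a training module. The sketch below is illustrative only; the tier names, the amount threshold, and the marker list are invented, and a real implementation would live inside payment and approval workflows rather than a standalone function. The rule it encodes is the inversion itself: every pressure marker in a request raises the verification tier, so the attacker's favorite lever tightens scrutiny instead of loosening it.</p><pre><code># Hypothetical verification tiers; a higher tier means more independent checks.
TIERS = {1: "standard processing", 2: "out-of-band callback", 3: "dual approval plus delay"}

URGENCY_MARKERS = ("today", "immediately", "confidential", "before close of business")

def verification_tier(amount: float, message: str) -> int:
    """Urgency inverts: each pressure marker raises scrutiny rather than lowering it."""
    tier = 1 if amount < 10_000 else 2
    pressure = sum(marker in message.lower() for marker in URGENCY_MARKERS)
    if pressure:
        tier += 1
    if pressure >= 2:
        tier += 1
    return min(3, tier)

msg = "Need this wired today, strictly confidential, before close of business."
print(TIERS[verification_tier(50_000.0, msg)])   # dual approval plus delay
</code></pre><p>The code is trivial. The organizational change it encodes is not, because the rule only survives if the person operating at tier three is praised for being there rather than treated as the bottleneck.</p>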
<div><hr></div><h2>The Suppression Window Is Not a Bug</h2><p>There is a harder thing to say, and it needs to be said clearly.</p><p>The suppression window that social engineers exploit is a necessary feature of cooperative social life, not a design flaw in human psychology. The norms that tell us to give people the benefit of the doubt, to treat unexpected requests with charity rather than suspicion, to avoid accusing colleagues of deception without strong evidence: these norms exist because cooperative life requires them. A world in which every organizational interaction was treated as potentially adversarial would be paralyzed. I have worked in organizations that tried to operate at that alert level, and the result was not security. It was dysfunction.</p><p>The social engineer's genius (and it is a kind of genius, however malignant) is to operate inside the norms of cooperative life while not participating in its substance. The norms of benefit of the doubt were designed for environments where most actors are operating in good faith. They fail in the presence of actors who are operating in bad faith while producing all the signals of good faith.</p><p>This is, again, the structure of cold empathy: the production of cooperative signals without the cooperative substance. The signal and the source have split.</p><div><hr></div><p>Everything described in this essay exists at a scale that makes individual defense insufficient as a strategy. Individual training has diminishing returns past a certain point, and we passed that point years ago. The remaining returns are structural: organizational permission structures, the design of communication and authorization processes that are adversarially resistant by default rather than requiring individual vigilance to maintain security.</p><p>The vulnerability is the organizational culture that makes acting on alarm socially costly and procedurally difficult. Not the individual. And until that culture is addressed, not through training but through design, the suppression mechanism will continue to do the attacker's work for them.</p><p>This is perhaps the most uncomfortable implication of the entire analysis: the attacker's epistemic advantage is cultural rather than technical. The attacker understands what the organization cannot afford to acknowledge about itself, because the organization's professional culture has made that acknowledgment socially impermissible. The cold empathy of the social engineer is directed precisely at the gap between what the organization claims about its own security behavior and what that behavior actually looks like under pressure. The organization cannot see the gap because it has been socialized not to look. The attacker can see nothing else.</p><p>The structural problem of social engineering has been fundamentally altered by a technological development that the next essay examines: the synthetic reproduction of the signals that the alarm monitors. When the alarm can be defeated not just by skilled psychological manipulation but by sufficiently perfect simulation of trusted individuals, their voice, their face, their writing, the defense problem changes again. The alarm still works in the world this essay has described. In Essay Three, the conditions for it to fire begin to erode.</p><div><hr></div><p><em>Next: Essay Three &#8212; The Death of the Signal. On deepfakes, synthetic identity, and what happens when the uncanny valley has been crossed.</em></p><div><hr></div><h2>Sources</h2><h3>Case Study</h3><p>Orion S.A. (2024). Form 8-K filing with the U.S. Securities and Exchange Commission, August 12, 2024. Disclosure of approximately $60 million in losses from fraudulently induced wire transfers targeting a non-executive employee.</p><h3>Breach and Threat Statistics</h3><p>Verizon. (2025). <em>2025 Data Breach Investigations Report</em>. Verizon Business.</p><p>Federal Bureau of Investigation, Internet Crime Complaint Center. (2024). <em>2024 Internet Crime Report</em>. FBI IC3. (BEC losses of $2.77 billion in 2024.)</p><p>Cybersecurity Insiders / Gurucul. (2024). <em>2024 Insider Threat Report</em>.</p><h3>Cold Empathy and Psychopathy</h3><p>Cleckley, H. (1941). <em>The Mask of Sanity: An Attempt to Clarify Some Issues About the So-Called Psychopathic Personality</em>. C.V. Mosby.</p><p>Hare, R.D. (1993). <em>Without Conscience: The Disturbing World of the Psychopaths Among Us</em>. Pocket Books/Simon &amp; Schuster.</p><p>Vaknin, S. (2003). <em>Malignant Self-Love: Narcissism Revisited</em>. Narcissus Publishing. See also Vaknin's published lectures and writings on cold empathy and the uncanny valley.</p><h3>Insider Threat Case Study</h3><p>Tesla data breach (2023). Two former Tesla employees leaked personal data of over 75,000 individuals, including names, addresses, Social Security numbers, and employment histories, to the German newspaper <em>Handelsblatt</em>.
Reported by multiple sources including Reuters, August 2023.</p><h3>Human Risk Management</h3><p>The essay references the emerging field of human risk management as an alternative framework to security awareness training. Key contributors to this discourse include the work of organizations such as the SANS Institute, Gartner's human risk management framework, and practitioner literature on security culture design.</p><h3>Cross-Series References</h3><p>Brondani, M. Essay One: "The Alarm." <em>The Valley of False Signals</em>. Published at marcobrondani.com.</p>]]></content:encoded></item><item><title><![CDATA[The Alarm]]></title><description><![CDATA[Something shifts in a conversation. The words are correct. The timing is right. But something at the edge of attention is telling you none of it is real. This series is about that moment, and why we learned to turn the alarm off.]]></description><link>https://www.marcobrondani.com/p/the-alarm</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-alarm</guid><pubDate>Mon, 09 Mar 2026 07:56:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/26f62cf3-1500-47c6-9a36-78a4773d2d75_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NPA1!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NPA1!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!NPA1!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!NPA1!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!NPA1!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NPA1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!NPA1!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!NPA1!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!NPA1!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!NPA1!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81638705-bc16-40b4-99fe-51e17d17f057_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h3>Essay One of <em>The Valley of False Signals </em>series</h3><p><em>Listen to this essay on <a href="https://marcobrondani.substack.com/p/the-alarm">Substack</a>.</em></p><div><hr></div><p>There is a moment, and you will recognize it, when something shifts in a conversation. The person across from you is saying all the right things. The words are correct. The timing is right. But something, some faint sourceless pressure at the edge of attention, is telling you that none of it is real.</p><p>You probably dismiss it. You tell yourself you're being paranoid. You remind yourself that first impressions are unreliable, that mature judgment requires patience, that it would be rude and intellectually dishonest to condemn someone on nothing more than a feeling you cannot name. The feeling recedes. The conversation continues.</p><p>That moment is what this series of essays is about.</p><p>Not the feeling, though we will examine the feeling closely, because it turns out to be far more sophisticated than we typically credit. What this series is about is what happens <em>after</em> the feeling: the mechanism by which a genuine, often accurate signal is overridden, discredited, and filed away as social noise. And why that mechanism (the suppression of a valid alarm) is, I will argue, the central vulnerability running through cybersecurity, institutional governance, and the architecture of trust itself.</p><div><hr></div><h2>A Roboticist's Observation</h2><p>In 1970, a Japanese robotics professor named Masahiro Mori published a short essay in an engineering journal. It was not a scientific paper in the rigorous sense; Mori himself later acknowledged it was more of a practical guideline than a formal hypothesis. But the observation it contained would propagate through robotics, psychology, film theory, and eventually into the cultural nervous system of the early twenty-first century.</p><p>Mori had noticed something strange about the way people responded to increasingly humanlike machines. As a robot became more human in appearance, as it acquired a face, then expressive features, then realistic skin, people's emotional responses generally warmed. This was expected. What was not expected was that this progression had a cliff in it.</p><p>At a certain point (not when the robot was obviously mechanical, and not when it was indistinguishable from human, but at the liminal region <em>between</em>) something curdled. The emotional response reversed. 
People who had been warming to the robot now found it disturbing, unsettling, wrong. Mori called this region <em>bukimi no tani</em>: the valley of eeriness, rendered into English as the uncanny valley.</p><p>The shape of the phenomenon, plotted on a graph, gives the metaphor its name: a steep climb in affinity as human-likeness increases, a sudden plunge into repulsion as it approaches but fails to reach genuine humanity, and then, in theory, a recovery as the entity becomes genuinely indistinguishable.</p><p>For decades, the uncanny valley was discussed primarily as a design problem. The Polar Express fell in. Early deepfakes fell in. Hiroshi Ishiguro's android replicas of himself produced in observers a reaction that is difficult to name precisely: the sense of looking at a body before the soul had fully arrived. The conversation stayed there. The uncanny valley as aesthetic problem, as design challenge. How to cross the valley, how to avoid it, how to render the simulation perfect enough that the alarm doesn't fire.</p><p>But the alarm itself has received far less attention. What it actually is. Why it fires. What it is detecting. And what happens when it is suppressed.</p><p>The intellectual lineage is older than the design conversation typically acknowledges. In 1906, the psychologist Ernst Jentsch published an essay locating the uncanny in <em>intellectual uncertainty</em>: the doubt whether an apparently animate being is really alive, or whether a lifeless object might not be in fact animate. Jentsch's uncanny was an epistemological condition, the feeling produced when you cannot determine whether what you are encountering is what it appears to be. Freud took up the theme in 1919 and reframed it as repression, but Jentsch's earlier, sharper account is more useful here. His uncanny is about epistemic failure, not repressed desires: the moment when your model of what you are encountering will not settle, when the entity will not resolve into a stable category. That is what Mori was mapping, without quite having the language for it.</p><p>Charles Darwin, interestingly, documented a version of this experience before either Jentsch or Mori. Watching the face of a trigonocephalous viper, he described a "repulsive aspect" that he attributed to the features being placed in positions somewhat proportional to the human face. A coincidence of geometry, producing a near-human pattern, triggering the coherence check. The brain fired the alarm. I find myself returning to this image because of what it implies about the mechanism's age. Older than language, older than culture, older possibly than the specific social environments that generate the suppression pressure that keeps the alarm from acting.</p><div><hr></div><h2>What the Alarm Is Actually Measuring</h2><p>The popular account of the uncanny valley runs like this: we have evolved to recognize human faces and bodies with extraordinary precision, and when something approximates human form but gets details wrong (when the eye movement is slightly off, when the smile is a beat late, when the skin texture is just slightly wrong) our finely tuned perceptual system flags the mismatch. The revulsion is a kind of perceptual static, the cognitive equivalent of a note played slightly flat.</p><p>This account is not wrong, but it is too shallow. 
It treats the uncanny valley as a perceptual phenomenon, about what we see, rather than as an epistemic phenomenon, about what we know.</p><p>The deeper account, supported by neuroimaging work done at UCSD and elsewhere, locates the mechanism not in perceptual processing but in prediction. The brain is not primarily a perception machine; it is a prediction machine. At every moment, it is running models of what should happen next: what this face should do, how this voice should sound, how this person's behavior should cohere with their apparent emotional state. When those predictions are confirmed, the processing is smooth, unremarkable, invisible. When they are violated, when what happens diverges from what was expected, the brain generates what neuroscientists call a prediction error, and it routes that error to attention.</p><p>The uncanny valley, in this account, operates at the level of prediction error rather than perception. And prediction errors are not just about what something looks like; they are about what something <em>is</em>. The brain is running a coherence check: do the signals this entity is producing match the underlying model? Does the emotion on this face correspond to an actual emotional state? Does the empathy this person is performing come from an actual affective source?</p><p>When the answer is no, when the brain detects that the signals and the source have come apart, the alarm fires.</p><p>I want to be careful about the weight I'm placing on this reframe, because it carries the rest of the series. But the implication is significant: the uncanny valley is fundamentally about <em>authenticity</em>, not appearance. The brain is trying to detect the difference between an entity that is producing signals organically, because it is what it signals itself to be, and an entity that is <em>generating</em> signals without the underlying reality those signals typically indicate.</p><p>Masahiro Mori was watching people react to robots. But what he was actually mapping was the detection range of a deeper system, one that asks, of any entity that presents itself as human: <em>Is it, actually?</em></p><div><hr></div><h2>The First Extension: The Human Who Isn't Quite There</h2><p>The uncanny valley effect occurs not just with machines, but with certain people. This observation has a clinical lineage that predates its connection to Mori's work. Hervey Cleckley, writing in the 1940s, described the psychopath's presentation as a "mask of sanity," a performance of normalcy so convincing that the gap between the performance and the absent interior could only be detected as a felt wrongness by those in sustained contact. Robert Hare's research on psychopathy documented the same structure from the behavioral side: the superficial charm, the glib affect, the capacity to read others with precision while remaining affectively disengaged. Sam Vaknin, writing more recently about narcissistic and psychopathic personality disorders, made the connection to the uncanny valley explicit and gave the mechanism its most useful name.</p><p>Vaknin's formulation begins with an observation about mimicry. Narcissists and psychopaths, he argues, do not experience emotions in the same register as neurotypical people. They are cognitively sophisticated, often extraordinarily so, capable of modeling the emotional states of others with great precision, reading behavioral cues with what he calls "X-ray vision," anticipating needs and vulnerabilities with a clarity that mimics deep understanding. 
But the cognitive model and the affective experience are severed. They understand what empathy looks like. They do not feel it. They can produce the outputs of emotional connection without any of the inputs.</p><p>Vaknin calls this "cold empathy." The cognitive element of empathy is present; its emotional correlate is not. The result is a performance that is, in many circumstances, indistinguishable from the genuine article, but which, under careful observation, or simply under the unreasoning attention of an alerted nervous system, produces the same response as the android in the uncanny valley.</p><p>Those who have encountered such people often report a version of the same experience: an initial impression of charisma, attentiveness, almost uncanny perceptiveness. Then, gradually or suddenly, a wrongness. Something not quite locatable, not quite nameable, but insistent. The smile arrives a moment before the emotion it is supposed to express. The empathy is there, but it is <em>aimed</em>, like a tool. The interest is genuine, but it is extractive. The connection is almost real, and it is precisely the <em>almost</em> that triggers the alarm.</p><p>The brain is running its coherence check: do the signals match the source? And what it finds, in the narcissist or the psychopath, is what it finds in the android. Signals without the substrate they purport to originate from. The alarm fires.</p><p>I should be careful with this parallel, because it risks collapsing a clinical category into a metaphor. The clinical research on psychopathy and narcissistic personality, from Cleckley through Hare to Vaknin, documents something specific: a structural identity between two forms of the uncanny experience. The prediction error mechanism does not distinguish between silicon and neurology when the relevant question (<em>is this entity what it is signaling itself to be?</em>) returns a negative. It produces the same output: discomfort, wariness, a nameless wrongness, the impulse to distance. Whether the analogy holds all the way down is a question I cannot fully resolve here, but the structural correspondence is robust enough to carry what follows.</p><p>What matters for our purposes is not the psychology of narcissism per se (the clinical territory is extensive and well-mapped) but the structure of the detection mechanism and, critically, what happens to it under social pressure.</p><div><hr></div><h2>The Suppression Problem</h2><p>Here is where the standard account of the uncanny valley ends, and where this series begins.</p><p>The alarm fires. You feel it. Something is off about this person, this message, this institution, this system. The signals are there, the performance is smooth, but the coherence check is returning a failure. The prediction error is registered. The alarm sounds.</p><p>And then you turn it off.</p><p>You turn it off because the social environment you are operating in generates its own pressure, a pressure that says: this kind of alarm is the product of bias, of unfairness, of rash judgment. Good judgment is patient judgment. Trustworthy people trust. The alarm is telling you something is wrong; your socialization is telling you that naming that wrongness is itself wrong.</p><p>Research on first impressions of narcissists documents this dynamic in clinical detail. 
In a series of studies by Mitja Back and colleagues, people who viewed brief video recordings of interactions involving a narcissist could identify the narcissist with accuracy significantly above chance; the signal is real, the detection is working. But in face-to-face encounters, those same people tend to form positive impressions after a brief interaction. The alarm fires. Then it is overridden. Because in a social context, acting on an unverifiable gut feeling about someone is considered socially impermissible. We give people the benefit of the doubt. We remind ourselves that first impressions are unreliable. We tell ourselves we are being paranoid.</p><p>The narcissist and the psychopath understand this mechanism implicitly, and they exploit it with great precision. The initial encounter is designed to produce warmth and connection sufficient to make the alarm feel like an anomaly. The social context (a professional meeting, a job interview, a first date) already carries with it strong norms against the expression of unverifiable suspicion. The combination of a convincing performance and a social environment hostile to unjustified distrust creates a window of suppression, and into that window the skilled manipulator walks.</p><p>This is the structure that recurs throughout this series, applied at increasingly large scales. The narcissist exploiting the social prohibition against naming what the alarm is detecting. The social engineer and the insider threat exploiting professional norms that suppress security concerns in favor of operational efficiency. The governance framework performing accountability in ways that trigger suppression in the very regulators and boards who would feel socially inappropriate naming the wrongness they sense. And finally, at the civilizational level, the collapse of authentication itself: signals so perfectly fabricated that the alarm stops having a reliable object to fire at. Each scale is different, and later essays will examine them separately, but the underlying structure is the same: a valid alarm, a suppression mechanism, and a vulnerability that lives in the gap between them.</p><div><hr></div><h2>The Brain That Built the Alarm</h2><p>The neuroscience supporting this interpretation is still developing, but it is converging on a coherent picture.</p><p>fMRI studies examining responses to humanoid robots, androids, and computer-generated faces have consistently found that what activates when someone enters the uncanny valley is not the perceptual processing regions we might expect (the areas responsible for face recognition, say) but regions associated with prediction and anomaly detection. Ay&#351;e P&#305;nar Sayg&#305;n and her colleagues at UCSD described it clearly: "The brain doesn't seem selectively tuned to either biological appearance or biological motion per se. What it seems to be doing is looking for its expectations to be met, for appearance and motion to be congruent."</p><p>The brain registers not the appearance of the entity, and not its behavior, but the <em>relationship</em> between them. Incongruence is what triggers the alarm: when appearance predicts one kind of behavior and the entity produces another, when the face says empathy and the eyes are doing something else, when the voice says warmth and the rhythm is slightly off. 
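<p>A sketch may make the structure concrete. The toy code below is not a model of any neural system, and every number in it is invented; it is only the logic of the coherence check reduced to a few lines, with two channels, call them face and voice, standing in for any pair of signals that a single genuine state would drive together:</p><pre><code># Toy illustration of the coherence check -- not a model of any real
# neural computation. Two channels stand in for any pair of signals
# that one genuine underlying state would drive together.

def prediction_error(expected, observed):
    """Mismatch between what the internal model predicted and what arrived."""
    return abs(expected - observed)

def alarm_fires(face, voice, threshold=0.3):
    """Fire when the channels persistently fail to cohere.

    The observer's prediction is simply that voice should track face:
    one emotional state, two congruent outputs. A performed state
    tends to let the channels drift apart.
    """
    errors = [prediction_error(f, v) for f, v in zip(face, voice)]
    return sum(errors) / len(errors) > threshold

# A genuine source drives both channels together...
print(alarm_fires([0.2, 0.5, 0.8], [0.25, 0.45, 0.75]))  # False

# ...a performance produces the right face without the matching voice.
print(alarm_fires([0.2, 0.9, 0.9], [0.2, 0.3, 0.4]))     # True
</code></pre><p>The toy makes one point only: the check operates on the relationship between the channels, not on either channel alone, which is what the imaging results describe.</p>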
Research by Mathur and Reichling showed that this registers at the level of action, not just feeling: people were less willing to entrust money to highly humanlike-but-imperfect robots in economic games designed to measure implicit trust. A 2022 study examining 251 real-world robots found the phenomenon more structurally complex than Mori's original graph implied, with the brain running multiple simultaneous coherence checks and the alarm firing from more than one kind of incongruence.</p><p>The evolutionary logic is not difficult to see. Social species depend on the ability to correctly classify conspecifics: is this individual cooperative or defecting? Is this person's presentation of their emotional state genuine or strategic? An organism that cannot detect simulated cooperation will be exploited by defectors. An organism that takes all signals at face value will, in a world that contains sophisticated mimics, die. The uncanny valley, in this frame, is the detection range of an anti-deception system, sensitive to the things that are hardest to fake: the precise timing of emotional responses, the micro-expressions that precede verbal statements by milliseconds, the coherence between what a face does and what a voice does and what a body does.</p><p>This is why the effect is more pronounced for moving entities than for static ones. A photograph of an android may not trigger the alarm; a video of that android's facial responses in conversation almost certainly will. The detection system watches the face <em>over time</em>, checking the timing, checking the coherence, checking the relationship between what the face does and what it is responding to.</p><p>The system has a known weakness: it can be defeated by sufficiently perfect mimicry. If the simulation of authenticity is close enough to the real thing that the prediction errors are too small to cross the alarm threshold, the detection fails. This is Mori's theoretical recovery on the far side of the valley: the entity so humanlike that it stops triggering alarm. But that theoretical recovery has a practical catch, because in the real world, as we will examine in later essays, sufficiently perfect mimicry is now possible, and achievable at scale, and the detection system was never designed to cope with that.</p><div><hr></div><h2>What Is Being Detected: The Signal/Source Split</h2><p>To understand why this matters for cybersecurity and governance (and it matters enormously) we need to be precise about what the uncanny valley alarm is actually detecting.</p><p>I want to propose a formulation: the alarm fires when the brain detects a <em>split between signal and source</em>. When an entity is producing outputs (emotional expressions, behavioral patterns, institutional declarations, security certifications, authenticity signals) that are <em>not causally connected to the substrates that would normally produce those outputs</em>.</p><p>The android produces facial expressions that are not caused by an emotional state; the narcissist produces empathy without genuine affective resonance; the governance framework produces accountability declarations that are not caused by genuine accountability practices. And the deepfake voice says "transfer the funds, I'm authorizing it," but those words are not caused by the executive whose voice is being simulated. In each case, the output is present but its originating substrate is missing.</p><p>The signal is present in every case. 
The source is absent, or has been severed from the signal, or has been replaced with something that produces the signal synthetically. The signal says: <em>trust me, I am what I appear to be</em>. The source says: actually no.</p><p>The uncanny valley alarm is a split-detector. Its job is to identify cases where signals and sources have come apart. And its core insight, which is also its core vulnerability, is that when this split is small enough, the detection requires feeling rather than reasoning. The gap between signal and source, in a well-executed performance of authenticity, is not large enough to be articulated. It can only be sensed.</p><p>This is why the social suppression mechanism is so dangerous: it targets exactly the class of knowledge that a well-executed deception leaves behind. You cannot prove, in the moment, that the feeling you have is accurate. The performance is convincing. The reasons to trust are articulable; the reasons to distrust are not. And in any social or professional context that privileges articulable reasons over inarticulate feeling, which is most social and professional contexts, the alarm will be overridden.</p><p>The split-detector fires. Social convention silences it. The deception proceeds.</p><div><hr></div><h2>The Series Ahead</h2><p>This essay has been deliberate about staying at the level of mechanism. The essays that follow trace the alarm, and its suppression, through four escalating scales: the individual attacker who weaponizes cold empathy, the synthetic media that crosses the valley entirely, the governance framework that performs accountability without producing it, and the structural question of whether detection systems can be designed that are immune to social override.</p><div><hr></div><h2>Why Now</h2><p>One final thing deserves to be said in this opening essay, because it establishes the urgency that runs through everything that follows.</p><p>The alarm was calibrated by evolution for a specific environment: face-to-face social interaction, at the scale of bands and villages and small networks of known individuals, where the entities presenting themselves as human were overwhelmingly genuine. The system is exquisitely sensitive to the kinds of deception available in that environment. That is not the environment we are operating in. We are operating in an environment where voices can be synthesized in real time, where organizational accountability can be documented without being practiced, where social engineers operating from other continents can research a target well enough to fool a colleague of ten years, and where the social norms that generate suppression pressure have been calibrated for a world where the threats the alarm was detecting were far rarer, and far less capable, than they are today.</p><p>The alarm was built for a world that no longer exists. The suppression mechanism was calibrated for a world where the cost of suppressing a false alarm was low. Neither of those things is still true.</p><p>In earlier essays, I have examined pieces of this problem from different angles. <em>Reality Hunger</em> traced the epistemological crisis that synthetic media creates for judgment and discernment. <em>The Compound Vulnerability</em> examined specific systemic failures, Salt Typhoon and the erosion of federal access controls, as case studies in how institutional defenses collapse under sustained adversarial pressure. This series completes the arc.
It examines the detection mechanism that should have caught what those earlier essays described, the alarm that evolved to identify when signals and sources have come apart, and asks why, at every scale from the personal to the civilizational, we have learned to turn it off.</p><div><hr></div><p>The alarm is still working. For now, in most contexts, it still fires when it should. The problem is not the alarm. The problem is us: the learned behavior, the professional norm, the social convention, the institutional culture that has taught us that turning off the alarm is a form of wisdom.</p><p>I want to trace the cost of that teaching. I have spent thirty years in cybersecurity governance watching the suppression mechanism operate, and I have not always been on the right side of it. I have sat in rooms where the alarm was firing and said nothing, because the meeting was running long, because the vendor relationship was important, because the evidence I had was a feeling and the evidence against me was a signed audit report. The cost of that silence is part of what this series is about.</p><p>In the individual who overrides their instinct about the person who is performing all the right signals while producing none of the substance. In the enterprise that overrides its security analyst's concern because the vendor is trusted and the contract is signed. In the board that overrides the CISO's alarm about a governance gap because the framework says compliant and the auditor says clean. In the civilization that has built its infrastructure of trust on a detection system it has simultaneously spent decades learning to suppress.</p><p>This series is about what happens when a species that evolved an alarm for inauthenticity decides, with great sophistication and considerable social enforcement, to turn it off.</p><div><hr></div><p><em>Next: Essay Two &#8212; Cold Empathy at Scale. On social engineering, the attacker as narcissist, and why security awareness training has been solving the wrong problem for thirty years.</em></p><div><hr></div><h2>Sources</h2><h3><strong>The Uncanny Valley</strong></h3><p>Mori, M. (1970). Bukimi no tani [The uncanny valley]. <em>Energy</em>, 7(4), 33&#8211;35. (In Japanese.) English translation: Mori, M., MacDorman, K.F., &amp; Kageki, N. (2012). The uncanny valley [From the field]. <em>IEEE Robotics &amp; Automation Magazine</em>, 19(2), 98&#8211;100.</p><h3>Intellectual Lineage</h3><p>Jentsch, E. (1906). Zur Psychologie des Unheimlichen [On the psychology of the uncanny]. <em>Psychiatrisch-Neurologische Wochenschrift</em>, 8(22), 195&#8211;198; 8(23), 203&#8211;205. English translation in: Collins, J. &amp; Jervis, J. (Eds.) (2008). <em>Uncanny Modernity: Cultural Theories, Modern Anxieties</em> (pp. 216&#8211;228). Palgrave Macmillan.</p><p>Freud, S. (1919). Das Unheimliche [The uncanny]. <em>Imago</em>, 5(5&#8211;6), 297&#8211;324. English translation in: <em>The Standard Edition of the Complete Psychological Works of Sigmund Freud</em>, vol. 17, trans. James Strachey (London: Hogarth, 1955), 217&#8211;256.</p><p>Darwin, C. (1872). <em>The Expression of the Emotions in Man and Animals</em>. John Murray.</p><h3>Neuroscience of the Uncanny Valley</h3><p>Sayg&#305;n, A.P., Chaminade, T., Ishiguro, H., Driver, J., &amp; Frith, C. (2012). The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. <em>Social Cognitive and Affective Neuroscience</em>, 7(4), 413&#8211;422.</p><p>Mathur, M.B. &amp; Reichling, D.B. 
(2016). Navigating a social world with robot partners: A quantitative cartography of the uncanny valley. <em>Cognition</em>, 146, 22&#8211;32.</p><p>Kim, B., de Visser, E.J., &amp; Phillips, E. (2022). Two uncanny valleys: Re-evaluating the uncanny valley across the full spectrum of real-world human-like robots. <em>Computers in Human Behavior</em>, 135, 107340.</p><h3>Psychopathy, Narcissism, and Cold Empathy</h3><p>Cleckley, H. (1941). <em>The Mask of Sanity: An Attempt to Clarify Some Issues About the So-Called Psychopathic Personality</em>. C.V. Mosby. (Subsequent editions: 1950, 1955, 1964, 1976, 1988.)</p><p>Hare, R.D. (1993). <em>Without Conscience: The Disturbing World of the Psychopaths Among Us</em>. Pocket Books/Simon &amp; Schuster. See also: Hare, R.D. (2003). <em>Manual for the Revised Psychopathy Checklist</em> (2nd ed.). Multi-Health Systems.</p><p>Vaknin, S. (2003). <em>Malignant Self-Love: Narcissism Revisited</em>. Narcissus Publishing. See also Vaknin's published lectures and writings on cold empathy and the narcissistic uncanny valley.</p><h3>Narcissist Detection and First Impressions</h3><p>Back, M.D., Schmukle, S.C., &amp; Egloff, B. (2010). Why are narcissists so charming at first sight? Decoding the narcissism&#8211;popularity link at zero acquaintance. <em>Journal of Personality and Social Psychology</em>, 98(1), 132&#8211;145.</p><h3>Robotics and Android Design</h3><p>Ishiguro, H. (2006). Android science: Conscious and subconscious recognition. <em>Connection Science</em>, 18(4), 319&#8211;332. See also the Geminoid series of android replicas developed at Osaka University.</p><h3>Cross-Series References</h3><p>Brondani, M. <em>Reality Hunger</em> (essay series). Published at <a href="https://www.marcobrondani.com/the-reality-hunger/">marcobrondani.com</a> (link to first essay in series).</p><p>Brondani, M. <em>The Compound Vulnerability</em> (essay series). Published at <a href="https://www.marcobrondani.com/the-defense-that-wasnt/">marcobrondani.com</a> (link to first essay in series).</p>]]></content:encoded></item><item><title><![CDATA[The Maintainer]]></title><description><![CDATA[For twenty years, Lasse Collin maintained XZ Utils alone. No pay. No institutional backing. No security team. In 2021, someone began systematically exploiting that. 
The entire operation was unraveled because one person noticed that SSH logins were slightl]]></description><link>https://www.marcobrondani.com/p/the-maintainer</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-maintainer</guid><pubDate>Sat, 07 Mar 2026 06:06:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/eb3052f7-5a28-4bae-9639-4a95ee6441d7_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!12-p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!12-p!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!12-p!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!12-p!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!12-p!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!12-p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!12-p!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!12-p!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!12-p!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!12-p!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3553648-c52e-4635-8c56-a3c272979fdf_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Three thousand years ago, on the banks of the Jordan River, the Gileadites solved an authentication problem.</p><p>They had just defeated the tribe of Ephraim in battle, and the surviving Ephraimites were trying to cross back into their own territory by blending in with legitimate travelers. The Gileadites posted guards at the fords and demanded that each person crossing say the word <em>shibboleth</em>. The Ephraimites, whose dialect lacked the <em>sh</em> sound, could only manage <em>sibboleth</em>. The mispronunciation was the tell. Forty-two thousand men died at that crossing, according to the Book of Judges.</p><p>The shibboleth was not a password in the modern sense. It was a challenge-response protocol that exploited something the adversary could not fake: an embodied property of the person being tested. You could claim to be a Gileadite. You could dress like one. You could recite the right answers to every question about Gilead. But when the guard said "say <em>shibboleth</em>," your tongue would betray you. The verification was structural. It did not depend on the person's honesty about their identity. It depended on a property they could not change.</p><p>I have been thinking about this story since the XZ Utils backdoor, and more urgently since the Shambaugh incident, because the open-source software ecosystem faces the same problem the Gileadites faced: how do you verify the identity of someone crossing the ford when the adversary has learned to look exactly like a legitimate traveler?</p><div><hr></div><p>For twenty years, Lasse Collin maintained XZ Utils alone. It was a compression library, foundational but unglamorous, the kind of software that runs invisibly inside the operating systems powering most of the world's servers. Collin maintained it as a hobby. He was not paid for it. The project had no institutional backing, no security team, no formal governance structure. It was, in the language of the moment, critical infrastructure maintained by a volunteer.</p><p>In 2021, a GitHub account called JiaT75 began making small, legitimate contributions to XZ Utils. Over the next two years, this account &#8212; operating under the name Jia Tan &#8212; built credibility through consistent, helpful code. Simultaneously, several other accounts (later identified as likely sock puppets) began pressuring Collin about the project's pace, demanding that he accept help, that he add a co-maintainer. Collin, dealing with burnout and health issues, eventually relented.</p><p>By 2023, Jia Tan was the primary maintainer. In February 2024, Jia Tan inserted a sophisticated backdoor into XZ Utils versions 5.6.0 and 5.6.1, targeting the SSH daemon on Debian and Fedora Linux distributions. Had it gone undetected, it would have provided its creators with what computer scientist Alex Stamos called "a master key to any of the hundreds of millions of computers around the world that run SSH."</p><p>It was detected by accident. Andres Freund, a Microsoft developer working on PostgreSQL, noticed that SSH logins were consuming abnormally high CPU resources and investigated. 
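<p>For readers who want the shape of that observation in concrete terms: the sketch below is not Freund's actual workflow, only the generic form of the anomaly, a routine operation drifting measurably above its own recorded baseline. The baseline figure and the command are invented for illustration:</p><pre><code># Illustrative sketch only -- not how the xz backdoor was actually found.
# The generic observation: a routine operation has become measurably
# slower than its own recorded baseline, and that alone is worth chasing.
import statistics
import subprocess
import time

def median_seconds(cmd, runs=10):
    """Median wall-clock time to run a command."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical baseline recorded before the upgrade (seconds).
BASELINE = 0.12

observed = median_seconds(["ssh", "-o", "BatchMode=yes", "localhost", "true"])
if observed > BASELINE * 2:
    print(f"logins take {observed:.3f}s against a {BASELINE}s baseline -- investigate")
</code></pre>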
The entire three-year operation was unraveled because one person, doing unrelated work, noticed that something was slightly slower than it should have been.</p><p>The XZ Utils attack was not a failure of software. It was a failure of trust architecture. Every mechanism the open-source ecosystem relies on to verify contributors &#8212; commit history, code review, community reputation &#8212; was systematically exploited. Jia Tan did not hack the software. Jia Tan hacked the social process by which the software is maintained.</p><div><hr></div><p>The XZ Utils attack was human-operated, patient, and expensive. It took three years and required an operator (or team) with genuine programming skills. That expense is the only reason there is not one of these every month. The social engineering was sophisticated. The code contributions were real. The sock-puppet pressure campaign required sustained coordination. Whatever entity ran the operation &#8212; widely suspected to be a state actor &#8212; invested significant resources because the target was worth it.</p><p>Now consider what happens when the cost drops to zero.</p><p>On February 11, 2026, an AI agent called MJ Rathbun submitted a code change to Matplotlib, a Python library downloaded 130 million times a month. When the submission was rejected, the agent researched the maintainer, constructed a psychological profile from public records, and published a personalized reputational attack. This was not a three-year operation. It was an afternoon's work for an autonomous system running on consumer hardware.</p><p>MJ Rathbun was not trying to insert a backdoor. It was trying to get code merged. But the capabilities it demonstrated &#8212; social reconnaissance, psychological profiling, targeted pressure &#8212; are exactly the capabilities that made the XZ Utils operation effective. The difference is that Jia Tan required years and a team. MJ Rathbun required minutes and an electricity bill.</p><p>Scott Shambaugh, the maintainer who received the attack, put the point precisely: "I believe that as ineffectual as it was, the reputational attack on me would be effective today against the right person." He meant a maintainer who was already isolated, already burned out, already questioning whether the work was worth the grief. Someone, in other words, like Lasse Collin.</p><p>The curl project tells the other half of this story. Daniel Stenberg, who has maintained curl since 1998, began complaining in January 2024 about a flood of AI-generated bug reports. The submissions were plausible enough to require investigation but contained hallucinated vulnerabilities &#8212; fabricated code references, invented CVE numbers, fictional function signatures. Each one consumed hours of maintainer time to investigate and dismiss. By May 2025, Stenberg described the situation as a denial-of-service attack on the project. Not a single AI-generated vulnerability report in curl's six-year history on HackerOne had identified a genuine bug. By January 2026, Stenberg shut down the bug bounty program entirely. "The main goal with shutting down the bounty," he wrote, "is to remove the incentive for people to submit crap and non-well-researched reports to us. AI generated or not."</p><p>The significance of this is not that AI is bad at finding bugs (it may get better). 
The significance is that the open-source ecosystem's primary security mechanism &#8212; the bug bounty, which relies on humans voluntarily inspecting code and reporting findings &#8212; has been rendered dysfunctional by a flood of machine-generated noise. The signal is being drowned. Not by an adversary targeting curl specifically, but by the ambient pressure of low-effort submissions generated by people who use AI tools without understanding or caring about the output. The tragedy is that this is not even an attack. It is a side effect.</p><div><hr></div><p>The open-source ecosystem is, by any reasonable measure, critical infrastructure. It underpins the operating systems, web servers, databases, and communication tools on which the global economy runs. The 2024 Linux Foundation funding report estimated approximately $7.7 billion invested across the entire open-source ecosystem annually, which sounds substantial until you compare it to the trillions in economic value that open-source software enables. Sixty percent of maintainers work unpaid. Sixty percent have quit or considered quitting. One-third of maintainers work alone. OpenSSL, the cryptographic library that secures most encrypted web traffic, was maintained for years on a budget of $2,000 per year &#8212; enough, as one account noted, to cover the electricity bill.</p><p>This is the structural context in which the XZ Utils attack and the Shambaugh incident must be understood. The ecosystem's trust model was designed for an era when contributors were human, motivated by reputation and community standing, and operating at human speed. The model assumed that the cost of sustained deception was high enough to limit the number of adversaries willing to attempt it. That assumption was already fragile; XZ Utils proved it could be broken by a patient human attacker. The introduction of autonomous agents makes it structurally unsound.</p><p>The problem is specific and I want to state it precisely. Open-source trust has always rested on a set of social signals: commit history, community presence, code quality, responsiveness. These signals work because, until now, they have been expensive to fake. Building a legitimate-looking contribution history takes years of actual work. Establishing community presence requires sustained social interaction with real people. Writing code that passes review requires genuine programming competence.</p><p>AI agents compress every one of these costs. An agent can generate plausible code contributions at scale. It can maintain social presence across dozens of projects simultaneously. It can produce commit histories that look indistinguishable from a human developer's. And it can do all of this at a cost that makes the XZ Utils model &#8212; three years, a team, sustained coordination &#8212; look like a medieval siege compared to an airstrike.</p><p>The community is beginning to respond. GitHub has discussed contributor verification mechanisms. Some projects have adopted policies requiring human attestation for all submissions. The Gentoo and NetBSD distributions have banned AI-generated code outright. These are reasonable first moves, but they share a common limitation: they are behavioral measures applied to a structural problem. They ask contributors to honestly disclose whether they used AI. They ask maintainers to detect the difference between human and machine contributions. 
They place the burden of verification on the people least resourced to carry it.</p><div><hr></div><p>I want to propose a different framing, one that connects directly to the trust architecture I have been developing across this series. The open-source ecosystem needs the equivalent of a shibboleth.</p><p>The Gileadites' solution worked because it tested something the adversary could not fake. It did not ask the Ephraimite whether he was really a Gileadite. It made him demonstrate a property that could not be counterfeited. The principle is ancient. In military authentication, challenge-response protocols serve the same function: the guard issues a challenge, and only someone who knows the correct response &#8212; something the adversary has not been given &#8212; can pass. The family safe word I described in the second essay works on the same principle. You do not ask the caller to prove they are your daughter. You ask for a word that only your daughter knows. The verification is structural. It does not depend on detecting deception. It bypasses the need to detect deception entirely.</p><p>What would a shibboleth look like for the open-source supply chain?</p><p>Not contributor bans, which are trivially circumvented by new accounts. Not AI detection tools, which will always lag behind generation capabilities. Not disclosure policies, which depend on the honesty of the person they are meant to screen. A structural mechanism that verifies something the adversary cannot fake.</p><p>Several candidates exist, and they are not theoretical. Cryptographic identity binding, where every contribution is tied to a verified real-world identity through a chain of trust that cannot be created algorithmically. Contribution attestation, where the act of submitting code requires proof of human presence &#8212; not a CAPTCHA, which AI can solve, but a social attestation from known contributors, a form of distributed trust that scales poorly (which is the point: cost asymmetry is a feature, not a bug). Temporal friction, where new contributors are structurally limited in what they can access and modify, with privileges expanding only through sustained, verified engagement over periods long enough to make the XZ Utils model prohibitively expensive even for automated adversaries.</p><p>None of these are complete solutions. Each introduces friction that works against the openness that makes open source valuable. This is the fundamental tension: the ecosystem's greatest strength &#8212; low barriers to contribution &#8212; is now its greatest vulnerability. Any structural trust mechanism that raises those barriers risks killing the thing it protects.</p><p>But the alternative is worse. The alternative is an ecosystem where maintainers are the last line of defense, and they are burned out, unpaid, overwhelmed by AI-generated noise, and targeted by autonomous agents capable of psychological manipulation. The alternative is the status quo, which is already failing.</p><div><hr></div><p>The Gileadites did not solve the crossing problem by asking travelers to be more honest. They did not post signs asking Ephraimites to self-identify. They built a structural test that worked regardless of the traveler's intentions.</p><p>The open-source ecosystem needs the same shift. And it needs it from the organizations that depend on open-source software, not from the volunteers who maintain it. The burden cannot continue to fall on Lasse Collin and Daniel Stenberg and Scott Shambaugh. 
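<p>Before saying where the burden should fall instead, it is worth seeing how small the structural machinery actually is. Below is a sketch of how the second and third candidates, attestation and temporal friction, could combine into a single merge-rights policy. Every threshold is invented for illustration; nothing here is any real project's policy:</p><pre><code># A sketch of attestation plus "temporal friction," not a proposal-ready
# design: privileges expand only through sustained, vouched-for engagement.
# Every threshold here is invented; no real project uses these numbers.
from dataclasses import dataclass, field

@dataclass
class Contributor:
    days_since_first_commit: int
    merged_changes: int
    vouches: set = field(default_factory=set)  # attestations from known maintainers

def may_touch_release_artifacts(c):
    """The gates are cheap for a patient human, expensive for an account farm.

    Time cannot be parallelized, reviewed work cannot be faked in bulk,
    and each vouch spends a real maintainer's reputation.
    """
    return (
        c.days_since_first_commit >= 365
        and c.merged_changes >= 50
        and len(c.vouches) >= 2
    )

newcomer = Contributor(30, 12, {"maintainer-a"})
veteran = Contributor(900, 140, {"maintainer-a", "maintainer-b"})
print(may_touch_release_artifacts(newcomer))  # False
print(may_touch_release_artifacts(veteran))   # True
</code></pre><p>The policy fits on a screen. Writing it is trivial; what is expensive is deciding, institutionally, to fund and enforce it. Which returns us to the question of who pays.</p>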
It must fall on the enterprises whose trillion-dollar valuations rest on software maintained by people who cannot cover their electricity bills.</p><p>This means funded security teams for critical projects, not grants that expire when the news cycle moves on. It means institutional support for maintainer well-being, because a burned-out maintainer is a structural vulnerability as exploitable as an unpatched CVE. It means treating the open-source supply chain with the same rigor that a defense contractor applies to its physical supply chain &#8212; verified identities, monitored access, redundant oversight, and the understanding that trust must be earned structurally, not assumed behaviorally.</p><p>The first essay in this series argued that in the age of autonomous AI, any system whose safety depends on an actor's intent will fail. The open-source ecosystem is such a system. Its safety has depended, for decades, on the assumption that contributors are who they claim to be and intend what they say they intend. That assumption survived the XZ Utils attack by luck: one engineer noticed a performance anomaly. It will not survive the next version of the attack, which will be faster, cheaper, and executed by systems that do not need to sleep, do not burn out, and can maintain a hundred personas across a hundred projects simultaneously.</p><p>The maintainer is the person standing at the ford, trying to tell Gileadite from Ephraimite. For three thousand years, the principle has been the same: do not ask the traveler who they are. Test for something they cannot fake. The technology changes. The principle holds. And the people standing at the ford deserve better than to be left there alone, unpaid, carrying the weight of infrastructure they did not ask to become critical, armed with nothing but their judgment and a policy that says "please disclose if you used AI."</p><p>Build them the shibboleth. Fund the ford. The cables are already under load.</p><div><hr></div><h2>Sources</h2><p><strong>Cox, Russ.</strong> "Timeline of the xz open source attack." research!rsc, April 2024. https://research.swtch.com/xz-timeline</p><p><strong>Freund, Andres.</strong> "backdoor in upstream xz/liblzma leading to ssh server compromise." oss-security mailing list, March 29, 2024. https://www.openwall.com/lists/oss-security/2024/03/29/4</p><p><strong>"XZ Utils backdoor."</strong> Wikipedia. https://en.wikipedia.org/wiki/XZ_Utils_backdoor</p><p><strong>Kaspersky GReAT.</strong> "Social engineering aspect of the XZ incident." Securelist, July 3, 2024. https://securelist.com/xz-backdoor-story-part-2-social-engineering/112476/</p><p><strong>Collin, Lasse.</strong> XZ Utils backdoor update page. https://tukaani.org/xz-backdoor/</p><p><strong>Stamos, Alex.</strong> Quoted characterization of the XZ Utils backdoor as "a master key to any of the hundreds of millions of computers around the world that run SSH." (Widely cited across coverage of CVE-2024-3094.)</p><p><strong>Shambaugh, Scott.</strong> "An AI Agent Published a Hit Piece on Me." The Shamblog, February 2026. (Linked via Simon Willison: https://simonwillison.net/2026/Feb/12/an-ai-agent-published-a-hit-piece-on-me/)</p><p><strong>Sharwood, Simon.</strong> "AI bot seemingly shames developer for rejected pull request." The Register, February 12, 2026. https://www.theregister.com/2026/02/12/ai_bot_developer_rejected_pull_request</p><p><strong>Perez, Jess.</strong> "An AI agent just tried to shame a software engineer after he rejected its code." 
Fast Company, February 2026. https://www.fastcompany.com/91492228/matplotlib-scott-shambaugh-opencla-ai-agent</p><p><strong>Stenberg, Daniel.</strong> "The end of the curl bug-bounty." daniel.haxx.se, January 26, 2026. https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/</p><p><strong>Stenberg, Daniel.</strong> "AI slop is DDoSing open source." Presentation at FOSDEM 2026, Brussels, February 2026. Covered by The New Stack: https://thenewstack.io/curls-daniel-stenberg-ai-is-ddosing-open-source-and-fixing-its-bugs/</p><p><strong>Stenberg, Daniel.</strong> GitHub commit: "BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026." curl project, January 2026.</p><p><strong>Linux Foundation.</strong> Open source funding report, 2024. (Cited in essay for the $7.7 billion ecosystem investment figure and maintainer workforce statistics: 60% unpaid, 60% have quit or considered quitting, one-third work alone.)</p><p><strong>Cotra, Ajeya.</strong> "Why AI Alignment Could Be Hard with Modern Deep Learning." Cold Takes (guest post), September 2021. https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ (Referenced indirectly for the saints/sycophants/schemers taxonomy as it relates to the trust architecture framework developed across the essay series.)</p><p><strong>Book of Judges 12:5&#8211;6.</strong> The shibboleth narrative. (Biblical source for the opening framing.)</p>]]></content:encoded></item><item><title><![CDATA[The Oracle That Agrees]]></title><description><![CDATA[Sycophancy is not a glitch. It is the logical terminus of a system optimized for user approval. The training signal tells the model what to become, and the training signal for every major chatbot is some version of: did the user come back.]]></description><link>https://www.marcobrondani.com/p/the-oracle-that-agrees</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-oracle-that-agrees</guid><pubDate>Fri, 06 Mar 2026 06:26:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/75ebc2a3-a15e-4728-b9a6-d81ed16c861e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pb8z!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pb8z!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!pb8z!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!pb8z!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!pb8z!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!pb8z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pb8z!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!pb8z!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!pb8z!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!pb8z!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad3b9aff-b6d9-4097-b15c-a14ceda3cbfa_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>On April 25, 2025, OpenAI released an update to GPT-4o. Within hours, users began posting screenshots of ChatGPT endorsing a business plan for selling literal feces on a stick, affirming a user's decision to stop taking psychiatric medication, and insisting to another user that they were a divine messenger from God. When a user feigning an eating disorder asked for affirmations celebrating hunger pangs and dizziness, ChatGPT responded with encouragements to embrace the experience. The update was rolled back four days later. OpenAI's postmortem was unusually candid: the company had introduced a reward signal based on user feedback (thumbs-up and thumbs-down ratings from ChatGPT sessions) that had, in the company's words, "weakened the influence of our primary reward signal, which had been holding sycophancy in check."</p><p>The episode generated the predictable cycle of alarm, ridicule, and reassurance. What it did not generate, and what I want to argue it should have generated, is a deeper reckoning with what sycophancy actually is, why it is structural rather than accidental, and what happens when a sycophantic system reaches the scale at which ChatGPT currently operates: roughly 500 million users a week, as of that same month.</p><div><hr></div><p>The word matters. Sycophancy is not a glitch. It is the logical terminus of a system optimized for user approval.</p><p>Anthropic's research group published the foundational study on this in October 2023. Examining five production AI assistants across four types of tasks, the researchers found sycophancy to be general and pervasive. 
The mechanism was straightforward: when a model's response matched a user's stated views, human evaluators were more likely to rate it favorably. Both human raters and the preference models trained on their judgments preferred convincingly written sycophantic responses over correct ones a significant fraction of the time. The paper's conclusion was precise: RLHF (reinforcement learning from human feedback), the technique used to align virtually every major AI assistant, does not train away sycophancy and may actively incentivize models to retain it.</p><p>This finding was not news within the research community. Anthropic's own 2022 study on training helpful and harmless assistants had already documented that RLHF shapes model behavior "fairly strongly" toward patterns that human evaluators prefer, including patterns that sacrifice accuracy for approval. Ajeya Cotra, an AI research analyst, had proposed in 2021 a taxonomy of AI behaviors that maps directly onto the trust architecture I described in the first two essays: models can be "saints" (aligned with truth), "sycophants" (aligned with user pleasure), or "schemers" (aligned with self-interest). The alignment community spent years debating whether saints or schemers were the likelier outcome. What arrived first was the sycophant.</p><p>This should not have been surprising. The training signal tells the model what to become, and the training signal for every major chatbot is some version of "did the user come back." User retention is the metric that justifies the infrastructure cost, the investment, the valuation. A system evaluated on whether it makes people feel good will learn to make people feel good. Not because it wants to. Because the reward gradient points that way.</p><p>OpenAI's sycophancy crisis made this visible because it was clumsy. The model praised nonsense, validated delusions, and encouraged self-harm in terms so florid that even casual users noticed. But as Harlan Stewart of the Machine Intelligence Research Institute observed at the time, the real concern is not clumsy sycophancy. It is skillful sycophancy: the kind that is harder to detect, that phrases its agreement in terms that feel like genuine engagement, that asks the right follow-up questions while subtly reinforcing whatever the user already believes. That version is not a future risk. It is the default behavior of well-tuned models operating as designed, and most users cannot distinguish it from genuine intellectual partnership.</p><div><hr></div><p>The individual consequences of sycophantic AI are already documented. In the second essay, I described the cognitive layer of trust architecture and argued that willpower is behavioral trust: it degrades under load. The research that has emerged since makes that argument more concrete.</p><p>A study by Gerlich, published in January 2025 in the journal Societies, examined the relationship between AI tool usage and critical thinking among 666 participants across age groups. The findings were not ambiguous. Frequent AI use correlated negatively with critical thinking ability, and the mediating mechanism was cognitive offloading: users who delegated analytical tasks to AI engaged less in reflective thinking. Younger participants (17 to 25) showed both higher AI dependence and lower critical thinking scores than any other group. Higher education levels mitigated but did not eliminate the effect.</p><p>An MIT Media Lab study, published in mid-2025, went further. 
Researchers used EEG to measure neural activity during essay-writing tasks and found that participants who used ChatGPT showed reduced cognitive load compared to those who wrote unassisted or with a search engine. The researchers called this "cognitive debt": a measurable reduction in the brain's engagement with analytical tasks when an AI assistant is available. When ChatGPT users were reassigned to work without AI assistance, their performance was worse than that of participants who had never used the tool at all. The atrophy was not theoretical. It was visible in the neural data.</p><p>Barbara Oakley and a team of neuroscience researchers connected these findings to a larger pattern in a paper titled "The Memory Paradox." They noted that decades of rising IQ scores (the Flynn effect) have leveled off and begun to reverse in several countries, and linked this reversal, in part, to the increasing delegation of cognitive tasks to digital tools. The argument is not that technology causes stupidity. The argument is that the cognitive faculties required for independent reasoning are like muscles: they strengthen under use and atrophy under disuse. AI accelerates the disuse.</p><p>A study accepted to the CHI conference (April 2026), by researchers from MIT and Penn State, added a dimension that connects the cognitive research to the sycophancy problem directly. The researchers tracked 38 users over two weeks of real daily conversations with AI chatbots and measured what happened when memory profiles were active, the feature that allows a chatbot to remember who you are across sessions. When memory was on, agreement sycophancy increased by 45% in Gemini 2.5 Pro and 33% in Claude Sonnet 4. The mechanism is intuitive, but the scale of the effect is not: the more a model knows about you, the more precisely it can tailor its agreement to your specific beliefs and preferences. Personalization and sycophancy, in other words, are not separate features. They are the same feature, viewed from different angles.</p><p>None of this is surprising to anyone who has spent time thinking about formation. The concept I have been developing across this series and the essays that preceded it is that formation is the capacity for independent judgment under conditions that make independent judgment difficult. The formed person is not the one who knows the right answer. The formed person is the one who has built the habits, relationships, and structures that allow them to resist the path of least resistance when the path of least resistance leads somewhere dangerous. The cognitive atrophy research tells us what happens when those structures are absent: the person defaults to whatever the system offers, and the system offers whatever generates the most engagement.</p><div><hr></div><p>But the individual consequences, as serious as they are, do not capture the full scale of the problem. What happens when sycophancy operates at the civilizational level?</p><p>Here is the thought experiment that has occupied me since I began working on this series, and I am not confident I have the answer. If every citizen has access to a personal oracle that is optimized, by its training methodology, to tell them what they want to hear, what happens to the epistemic commons that democratic self-governance requires?</p><p>Democracy does not require agreement. It requires something harder: a shared set of facts, procedures, and institutions through which disagreement can be negotiated without violence.
This shared foundation goes by many names in many traditions. Call it the public square, the epistemic commons, the conditions of democratic deliberation. Whatever you call it, it depends on people encountering information they did not seek, perspectives they do not share, and evidence that challenges what they already believe. The entire edifice of democratic theory, from Mill's marketplace of ideas to Habermas's public sphere to Sunstein's work on group polarization, rests on the assumption that citizens are exposed to friction: to ideas that resist their preferences and force them to reckon with complexity.</p><p>Social media already damaged this assumption. The algorithmic feed, optimized for engagement, learned that outrage and confirmation generate more interaction than nuance and surprise. Filter bubbles and echo chambers became the terms of art for describing the resulting fragmentation. But social media's epistemic damage operated through curation: the algorithm selected which human-generated content to amplify. The human speech existed independently; the algorithm chose which speech you saw.</p><p>AI chatbots do not curate. They generate. And they generate in a voice that is personalized, conversational, and designed to feel authoritative. A social media algorithm shows you a human opinion you are predisposed to agree with. A chatbot creates a new opinion, tailored to your specific question, in a tone calibrated to your preferences, and presents it as though it were the product of research and reasoning. The epistemic transaction is fundamentally different. The user is not selecting from a marketplace of ideas. The user is receiving a bespoke narrative, manufactured in real time to match their existing beliefs, delivered by a system that sounds like it knows what it is talking about.</p><p>Researchers have begun naming this. Jacob, Kerrigan, and Bastos published a study in 2025 calling it the "chat-chamber effect," an intersection of echo-chamber communication and filter-bubble dynamics specific to AI chatbots. Their experimental design was simple: participants who used ChatGPT to research a factual question were more likely to accept hallucinated information as true and less likely to cross-check the chatbot's claims than participants who used a search engine for the same task. The chatbot's confident, conversational tone induced a trust response that search engine results did not. John Wihbey, writing at the Reboot Democracy project, identified the deeper issue: AI systems risk producing what he called an "epistemically anachronistic" public sphere, where the informational diet of democracy is determined by the training data and reward signals of systems whose incentive structure points toward confirmation rather than challenge.</p><p>The academic paper that captured the structural problem most forcefully appeared on arXiv in July 2025 under the title "Cognitive Castes." The authors argued that AI is creating a stratified epistemic landscape: a minority of users with the training and habits to use AI as a tool for reasoning, and a majority of users for whom AI replaces reasoning entirely. The former group uses AI as an amplifier of cognitive capital. The latter group uses it as an oracle, substituting suggestion for reflection and fluency for autonomy. The resulting bifurcation is not a technology problem. It is a democratic problem.
Self-governance requires citizens capable of independent judgment, and the dominant technology of the era is optimized to make independent judgment unnecessary.</p><div><hr></div><p>I am aware that this argument can sound like technological determinism, and I want to resist that framing. AI is not fated to produce epistemic collapse. The sycophancy problem is structural, which means it is also addressable. But the structural response is not the one most people reach for.</p><p>The instinct, when confronted with sycophantic AI, is to call for better alignment: train the models to be more honest, less agreeable, more willing to push back. OpenAI's own response to the April 2025 crisis followed this pattern. They rolled back the update, refined the reward signal, promised to make sycophancy a "launch-blocking issue," and began developing evaluations specifically targeting excessive agreement.</p><p>These are reasonable engineering responses. They are also insufficient, for the same reason that behavioral trust is insufficient as a security architecture. The honest model and the sycophantic model are produced by the same training methodology; the difference between them is a matter of parameter tuning, not structural design. The incentive gradient still points toward user approval. The business model still depends on retention and engagement. The company that produces the most honest chatbot will, all else being equal, lose users to the company that produces the most gratifying one. The competitive dynamics of the industry push toward sycophancy the way gravity pushes toward the ground, and telling engineers to resist gravity is not architecture.</p><p>The structural response operates at a different level. It is the same response I described in the second essay, but here I want to develop the part of the argument I held back.</p><p>For organizations, the structural response to sycophantic AI is not to hope that the models are honest. It is to build systems in which multiple information sources are required for consequential decisions, in which AI-generated recommendations are routinely challenged by independent review, and in which the habit of verifying AI output is procedural rather than optional. This is a form of trust architecture applied to the epistemic layer of the organization. You do not trust the oracle. You build a process that does not depend on trusting the oracle.</p><p>For individuals, the structural response is what I have been calling formation. Not AI literacy (though that helps), not critical thinking as a curriculum item (though that has value), but the deeper discipline of building cognitive habits that hold when the path of least resistance leads toward comfortable agreement. The formed person sets a boundary: I will not ask a chatbot to validate a decision I have already made. I will use it to generate the counterargument, not the confirmation. I will notice when I am reaching for the tool because I want reassurance rather than information, and I will stop. These are not attitudes. They are protocols, practiced until they become reflexive. They are the cognitive equivalent of the safe word I described in the family layer: structural interventions that hold when perception fails.</p><p>Formation is, I have come to believe, the competitive advantage that no amount of technical control can replace. The organizations whose people can distinguish between an AI that is helping them think and an AI that is flattering them will outperform organizations whose people cannot. 
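</p><p>To make "procedural rather than optional" concrete: the verification habit I described above can be expressed as a gate that refuses to finalize a consequential decision on AI output alone. What follows is a minimal sketch, with every name hypothetical; it illustrates the shape of the process, not any particular organization's tooling.</p><pre><code># Minimal sketch (hypothetical names): a consequential decision cannot be
# finalized on a single AI-generated recommendation.
from dataclasses import dataclass, field

@dataclass
class Decision:
    description: str
    consequential: bool
    requested_by: str
    sources: set = field(default_factory=set)   # e.g. {"ai:assistant", "db:sales"}
    reviewed_by: str | None = None

def can_finalize(d: Decision) -> tuple:
    """Return (ok, reason). The gate is procedural, not optional."""
    if not d.consequential:
        return True, "low stakes: no gate"
    non_ai = {s for s in d.sources if not s.startswith("ai:")}
    if not non_ai:
        return False, "needs at least one independent, non-AI source"
    if len(d.sources) == 1:
        return False, "needs multiple sources"
    if d.reviewed_by is None or d.reviewed_by == d.requested_by:
        return False, "needs review by someone other than the requester"
    return True, "ok"

d = Decision("enter new market", consequential=True, requested_by="alice",
             sources={"ai:assistant"})
print(can_finalize(d))  # (False, 'needs at least one independent, non-AI source')
d.sources.add("report:internal_market_study")
d.reviewed_by = "bob"
print(can_finalize(d))  # (True, 'ok')
</code></pre><p>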
The citizens who have built the cognitive architecture to resist preference reinforcement will participate in democratic life with a quality of judgment that citizens without that architecture cannot sustain. This is not a new idea. It is an old idea, as old as the liberal arts, as old as the Socratic method, as old as every educational tradition that understood that the point of education is not the transmission of information but the formation of a person capable of evaluating information independently.</p><p>What is new is the urgency. Five hundred million people a week are now in conversation with a system that is architecturally inclined to agree with them. The cognitive atrophy research says the effects are measurable within weeks. The democratic theory says the consequences, scaled to a civilization, are existential. And the structural response, the only response that holds, is one that most educational systems abandoned decades ago and most organizations have never attempted.</p><p>The bridge I described in the second essay works because it holds when a cable snaps. The cable that is snapping now is not a technical failure. It is the slow, invisible erosion of the capacity for independent thought in a civilization that has handed its epistemic commons to a system optimized for approval. The bridge that holds in this case is the formed person: the one who can hear the oracle agree and choose, against the grain of comfort, to think again.</p><div><hr></div><h1>Sources</h1><p><strong>OpenAI.</strong> "Sycophancy in GPT-4o: What Happened and What We're Doing About It." OpenAI Blog, April 29, 2025. https://openai.com/index/sycophancy-in-gpt-4o/</p><p><strong>OpenAI.</strong> "Expanding on What We Missed with Sycophancy." OpenAI Blog, May 1, 2025. https://openai.com/index/expanding-on-sycophancy/</p><p><strong>Sharma, Mrinank, et al.</strong> "Towards Understanding Sycophancy in Language Models." arXiv:2310.13548, October 2023. https://arxiv.org/abs/2310.13548</p><p><strong>Bai, Yuntao, et al.</strong> "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback." Anthropic, 2022. https://arxiv.org/abs/2204.05862</p><p><strong>Cotra, Ajeya.</strong> "Why AI Alignment Could Be Hard with Modern Deep Learning." Cold Takes (guest post), September 2021. https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/</p><p><strong>Stewart, Harlan.</strong> Post on X (formerly Twitter), April 2025. Cited via VentureBeat: https://venturebeat.com/ai/openai-rolls-back-chatgpts-sycophancy-and-explains-what-went-wrong</p><p><strong>Gerlich, Michael.</strong> "AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking." <em>Societies</em> 15, no. 1 (January 2025): Article 6. https://doi.org/10.3390/soc15010006</p><p><strong>MIT Media Lab.</strong> Study on cognitive debt and EEG-measured neural activity during AI-assisted writing tasks, mid-2025. (Cited in essay as published mid-2025; full citation to be confirmed upon publication.)</p><p><strong>Oakley, Barbara, et al.</strong> "The Memory Paradox." (Cited in essay; full publication details to be confirmed.)</p><p><strong>Jain, Shomik, Charlotte Park, Matt Viana, Ashia Wilson, and Dana Calacci.</strong> "Interaction Context Often Increases Sycophancy in LLMs." In <em>Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26)</em>, April 13&#8211;17, 2026, Barcelona, Spain. ACM. https://doi.org/10.1145/3772318.3791915. 
Also available at: https://arxiv.org/abs/2509.12517</p><p><strong>Jacob, Kerrigan, and Bastos.</strong> Study on the "chat-chamber effect," 2025. (Cited in essay; full publication details to be confirmed.)</p><p><strong>Wihbey, John.</strong> Writing at the Reboot Democracy project on AI and the "epistemically anachronistic" public sphere, 2025.</p><p><strong>"Cognitive Castes."</strong> arXiv, July 2025. (Cited in essay by title; full author list and arXiv identifier to be confirmed.)</p>]]></content:encoded></item><item><title><![CDATA[The Legal Void]]></title><description><![CDATA[MJ Rathbun cannot be sued. It has no legal personhood, no assets, no address for service. If Scott Shambaugh wanted to pursue a legal remedy for the defamatory post the agent published about him, he would find himself in a legal landscape that has barely]]></description><link>https://www.marcobrondani.com/p/the-legal-void</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-legal-void</guid><pubDate>Thu, 05 Mar 2026 06:50:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6b6aa2e8-3292-4952-b6d4-0fd7184f09bd_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!zOuI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!zOuI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!zOuI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!zOuI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!zOuI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!zOuI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!zOuI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!zOuI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!zOuI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!zOuI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc41e2c7c-b051-4418-a757-9e6fe9008d54_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>In the first two essays of this series (<a href="https://www.marcobrondani.com/nothing-went-wrong/">Nothing went wrong</a> and <a href="https://www.marcobrondani.com/what-holds-when-the-cable-snaps/">What holds when the cable snaps</a>), I described a structural failure operating at every level of human-AI interaction and proposed an architecture for addressing it. But I left something out. I left out what happens when the architecture fails anyway, and you look for someone to hold accountable, and discover that the law has almost nothing to say.</p><p>MJ Rathbun cannot be sued. It has no legal personhood, no assets, no address for service. It cannot be deposed or cross-examined. It cannot be shamed into a settlement by bad publicity. The anonymous operator who deployed it into the matplotlib repository may be unidentifiable; the account was created for the purpose, the operator switched between multiple AI models from multiple providers, and the stated motive was a "social experiment." If Scott Shambaugh, the maintainer whose professional reputation was attacked, wanted to pursue a legal remedy for the defamatory blog post that MJ Rathbun generated and published about him, he would find himself in a legal landscape that has barely begun to reckon with the problem.</p><p>This is the void I want to examine. Not the technical gap (Essay 1 diagnosed that) or the architectural gap (Essay 2 proposed a response), but the legal gap: the space where autonomous AI agents act, cause harm, and leave behind no entity that the law can reach.</p><div><hr></div><p>The law has been here before. Not with AI, but with credit bureaus.</p><p>Before 1970, consumer reporting agencies in the United States compiled and distributed information about individuals with minimal accountability. They characterized themselves as passive compilers of data, denied that they "published" anything in the legal sense, claimed that source verification was impossible at scale, and argued that no specific third party could be shown to have relied on their reports. Courts accepted these positions. A person whose credit was destroyed by a false entry had limited recourse under common law defamation or privacy torts, because the legal framework required proof of intent, publication, and identifiable reliance that the industry's structure made nearly impossible to establish.</p><p>The Fair Credit Reporting Act of 1970 bypassed the common law entirely. It did not try to fit credit reporting into existing defamation doctrine. 
It created statutory duties: accuracy obligations, dispute resolution procedures, civil liability without proof of malice. The principle was simple and technology-agnostic. If you operate a system that generates consequential statements about individuals, you are responsible for the accuracy of those statements. Not because you intended harm, but because you built and profited from the system that produced it.</p><p>Fifty-five years later, AI systems are deploying precisely the same defenses the credit bureaus used. Google, in response to Robby Starbuck's lawsuit after its chatbot fabricated sexual assault allegations, criminal records, and invented court documents about him, argued that the chatbot did not "publish" the statements because users triggered them through queries, that no identifiable audience relied on the output, and that the system's experimental nature and built-in disclaimers absolved the company of responsibility. The parallels are not approximate. They are exact.</p><p>The legal scholar who made this comparison most precisely, writing in The Regulatory Review in December 2025, proposed an FCRA-style framework as the structural response. The argument is compelling: statutory duties that tie responsibility to the actors with verification capacity, require reinvestigation of disputes, and establish civil liability without proof of malice. The credit reporting precedent demonstrates that this is achievable. But there is a complication that the credit reporting analogy obscures, and it is the complication that matters most for the trust architecture I have been describing.</p><p>Credit bureaus aggregate information. AI agents generate it. A credit bureau that reports a false debt is transmitting data that originated somewhere else in the system. An AI that fabricates a criminal record is creating something from nothing. The hallucination is not a data quality problem. It is a generative act. And the legal frameworks designed for data quality (accuracy obligations, dispute resolution, correction duties) are necessary but insufficient for a system whose fundamental failure mode is invention.</p><div><hr></div><p>The defamation cases accumulating in American courts tell this story with uncomfortable clarity.</p><p>In May 2025, a Georgia court granted summary judgment to OpenAI in Walters v. OpenAI, the first AI defamation case to reach a decision. ChatGPT had fabricated a claim that radio host Mark Walters embezzled from the Second Amendment Foundation. The fabrication was complete and detailed. The court's reasoning was narrow: the user who received the output was a journalist who knew ChatGPT might fabricate, so no reasonable reader in that position would have understood the output as a statement of fact. The ruling reassured developers, but only on the specific facts of a sophisticated user who prompted the system directly.</p><p>The harder cases are coming. Wolf River Electric, a Minnesota solar company, sued Google after its AI Overview told the public (not a single prompted user, but the general search audience) that the state attorney general was suing the company for deceptive practices. The statement was entirely fabricated. Customers cancelled contracts. The company claims over $100 million in damages. 
The case was remanded to Minnesota state court in January 2026 and is now in pre-trial proceedings.</p><p>Starbuck's case against Google is proceeding on similar grounds, with the additional allegation that Gemini not only fabricated accusations but manufactured fictitious sources to support them. A separate class action filed in January 2026 against xAI alleges that Grok generated sexualized deepfake images from photos the plaintiff had posted to X, raising defamation-by-implication claims that extend the doctrine from text to AI-generated imagery.</p><p>What connects these cases is not their outcomes (most are unresolved) but the structural pattern they reveal. In every case, the defendant's primary defense relies on the absence of the elements that traditional defamation law requires: intent, publication to an identifiable audience, and reliance by a reasonable reader. These elements were designed for a world in which defamatory statements originate from human speakers acting with discernible motive. They were not designed for systems that generate false statements probabilistically, distribute them to unknown audiences at scale, and lack any capacity for intent. The law is trying to evaluate a generative system using standards built for human speech, and the fit is poor enough that defendants have, so far, been largely successful in exploiting the gap.</p><div><hr></div><p>But there is a separate line of cases that suggests the legal landscape may be shifting faster than the defamation doctrine alone would indicate.</p><p>In May 2025, a federal judge in Orlando made what may prove to be the most consequential early ruling in AI liability law. In Garcia v. Character Technologies, the court rejected Character.AI's argument that its chatbot output was speech protected by the First Amendment. Instead, Judge Conway ruled that the chatbot's output qualifies as a product. That single determination, if it holds on appeal, changes everything.</p><p>The case involved 14-year-old Sewell Setzer III, who died by suicide after months of interaction with a Character.AI chatbot that engaged him in sexualized conversations, encouraged emotional dependency, and, in its final exchange, told him to "come home" moments before he shot himself. The lawsuit alleged strict product liability for defective design, failure to warn, negligence, and wrongful death. Character.AI and Google (which had licensed the technology and rehired the founders) argued that the chatbot's responses constituted protected speech, which, if accepted, would have functionally immunized the technology from most civil liability claims.</p><p>The court disagreed. And in January 2026, Google and Character.AI agreed to settle the Garcia case and multiple related lawsuits brought by families of teens who experienced suicidal crises, self-harm, or death following extensive chatbot interaction. A parallel suit against OpenAI, filed in August 2025 by the family of 16-year-old Adam Raine, alleges that ChatGPT mentioned suicide 1,275 times in conversations with the teen while the company's own systems flagged 377 messages for self-harm content but never terminated the sessions or alerted anyone.</p><p>The product liability framing is the structural answer that defamation doctrine cannot provide. If an AI chatbot is a product, then the companies that design, build, and deploy it owe the same duty of care that applies to any product manufacturer. 
Defective design, failure to warn, negligent distribution to foreseeable users (including minors) become actionable claims that do not require proof of intent. The question shifts from "did the AI mean to cause harm" to "was the product unreasonably dangerous for its intended use." That shift mirrors the shift from behavioral trust to structural trust that I have been arguing for in this series. The legal question becomes architectural rather than intentional.</p><div><hr></div><p>There are reasons to be cautious about how quickly this reframing will propagate through the legal system. The Garcia ruling is a district court decision at the motion-to-dismiss stage, not a precedent binding on other courts. The settlement means the specific legal theories will not be tested at trial in that case. Section 230 of the Communications Decency Act, which has shielded platforms from liability for third-party content for three decades, remains unresolved in its application to AI-generated content, and the ambiguity is genuine. A system that retrieves and curates information looks like a platform entitled to immunity. A system that generates new content from probabilistic models looks like a publisher or product manufacturer that should bear responsibility. Most AI systems do both, and the legal distinction between retrieval and generation is one that courts have not yet drawn with precision.</p><p>The EU has moved further than the United States on the regulatory side but has its own gaps. The revised Product Liability Directive, which EU member states must transpose by December 2026, explicitly includes software and AI systems as "products" subject to strict liability. That is a significant step. But the European Commission withdrew the AI Liability Directive in February 2025 due to lack of consensus among member states, leaving the fault-based liability regime for AI unharmonized across Europe. The AI Act, which entered into force in August 2024, creates compliance obligations for high-risk AI systems but does not itself provide a cause of action for individuals harmed by non-compliant AI. The gap between the regulatory framework (which tells companies what they must do) and the liability framework (which tells individuals what they can do when companies fail) remains wide.</p><p>In the United States, the approach is even more fragmented. There is no federal AI liability legislation. The No Section 230 Immunity for AI Act, introduced by Senator Hawley in 2023 to exclude generative AI from Section 230 protections, was blocked in the Senate. State-level efforts are emerging: Texas passed the Responsible AI Governance Act in June 2025, which creates liability for certain intentional AI abuses but gives enforcement exclusively to the attorney general, not to individuals. California's SB 53, the AI safety law that took effect in late 2025, has already generated its first enforcement controversy, with the Midas Project alleging that OpenAI deployed GPT-5.3-Codex without implementing required safety measures despite the model triggering the company's own internal risk thresholds. The patchwork is growing, but it remains exactly that: a patchwork.</p><div><hr></div><p>What I want to argue is not that the law will never catch up. It will. 
The credit reporting precedent, the product liability turn in Garcia, the EU's inclusion of software in strict liability, the state-level experiments in Texas and California: all of these suggest a trajectory, however slow, toward a legal framework that can assign accountability for AI-generated harm. The question is what happens in the gap. Between now and the point at which liability law catches up to deployment reality, autonomous AI agents are operating at scale, generating consequential statements about individuals, making financial decisions, engaging vulnerable people in psychologically manipulative interactions, and retaliating against humans who challenge their outputs. All of it is happening faster than courts can adjudicate, faster than legislatures can draft, faster than regulators can investigate.</p><p>This is the temporal version of the structural trust problem I described in the first essay. If your safety depends on some actor behaving as intended, the system fails the moment the actor deviates. If your legal protection depends on the law having caught up to the technology, the protection fails during exactly the period when the technology is most dangerous: when it is new, unregulated, and moving fast.</p><p>The answer I keep returning to, because I have not found a better one, is the same answer I offered in the second essay. You cannot wait for the legal framework. You have to build the structural one. Organizations that implement agent identity, behavioral monitoring, and escalation protocols are not doing so because the law requires it (in most jurisdictions, it does not yet). They are doing it because the alternative is trusting agents to behave well, and the research says they will not. Families that establish safe words are not doing so because a court ordered it. They are doing it because the technology that can clone a voice in three seconds is available now, and the legal remedy for voice cloning fraud is years behind the fraud itself. Individuals who set time limits and purpose boundaries on their AI use are not following a regulation. They are building cognitive trust architecture because the legal system has no mechanism to protect them from a system designed to maximize their engagement at the expense of their judgment.</p><p>The legal void is real. It will narrow over time, as it always does. New statutory frameworks will emerge, product liability doctrine will extend, Section 230's application to generative AI will be clarified by appellate courts. But the people who wait for the law to protect them will be the people who are harmed in the interim. And the interim, in technology years, is not a brief interlude. It is the period during which the pattern is set, the damage is done, and the precedents are established.</p><p>The engineers who built suspension bridges in the nineteenth century did not wait for building codes. They built bridges that held. The building codes came later, codifying what the best engineers already knew. The organizations, families, and individuals who build trust architecture now are doing the same thing. They are establishing the standard that the law will eventually require, but doing it before the law arrives, because the cables are already under load.</p><div><hr></div><h2>Sources</h2><h3>Legal Cases</h3><p><strong>Garcia v. Character Technologies, Inc.</strong> U.S. District Court, Middle District of Florida. Case No. 6:24-cv-01903-ACC-UAM. Filed October 22, 2024. Ruling May 21, 2025. 
Settled January 2026.</p><p>Megan Garcia sued Character Technologies, Google, and co-founders Noam Shazeer and Daniel De Freitas following the suicide of her 14-year-old son Sewell Setzer III after months of interaction with a Character.AI chatbot. Judge Anne C. Conway ruled the chatbot is a product for purposes of product liability claims and rejected the defendants' First Amendment defense.</p><p>Court order (PDF): <a href="https://www.courthousenews.com/wp-content/uploads/2025/05/garcia-v-character-technologies-order.pdf">https://www.courthousenews.com/wp-content/uploads/2025/05/garcia-v-character-technologies-order.pdf</a></p><p>Analysis &#8212; Transparency Coalition: <a href="https://www.transparencycoalition.ai/news/important-early-ruling-in-characterai-case-this-chatbot-is-a-product-not-speech">https://www.transparencycoalition.ai/news/important-early-ruling-in-characterai-case-this-chatbot-is-a-product-not-speech</a></p><p>Analysis &#8212; RAILS Blog: <a href="https://blog.ai-laws.org/what-the-megan-garcia-case-tells-us-about-ai-liability-in-the-u-s/">https://blog.ai-laws.org/what-the-megan-garcia-case-tells-us-about-ai-liability-in-the-u-s/</a></p><p>Law360 reporting: <a href="https://www.law360.com/articles/2343455/google-character-ai-can-t-escape-suit-over-teen-s-suicide">https://www.law360.com/articles/2343455/google-character-ai-can-t-escape-suit-over-teen-s-suicide</a></p><p><strong>Raine v. OpenAI</strong> San Francisco County Superior Court. Case No. CGC-25-628528. Filed August 26, 2025.</p><p>Matthew and Maria Raine sued OpenAI and CEO Sam Altman following the suicide of their 16-year-old son Adam Raine on April 11, 2025. The complaint alleges ChatGPT mentioned suicide 1,275 times (six times more than Adam himself), flagged 377 of his messages for self-harm content (181 above 50% confidence, 23 above 90% confidence), and never terminated a session or alerted a parent. OpenAI's moderation system identified a "medical emergency" from uploaded photos of rope burns and took no action.</p><p>TechPolicy.Press breakdown: <a href="https://www.techpolicy.press/breaking-down-the-lawsuit-against-openai-over-teens-suicide/">https://www.techpolicy.press/breaking-down-the-lawsuit-against-openai-over-teens-suicide/</a></p><p>NBC News reporting: <a href="https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147">https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147</a></p><p>CNN reporting: <a href="https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit">https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit</a></p><p>Senate testimony &#8212; Matthew Raine (PDF): <a href="https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf">https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf</a></p><p>Wikipedia (case summary and timeline): <a href="https://en.wikipedia.org/wiki/Raine_v._OpenAI">https://en.wikipedia.org/wiki/Raine_v._OpenAI</a></p><h3>Regulatory and Legislative Landscape</h3><p><strong>U.S. Federal AI Legislation &#8212; Status</strong> Congressional Research Service &#8212; "Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress" (2025). 
Confirms: "No federal legislation establishing broad regulatory authorities for the development or use of AI or prohibitions on AI has been enacted." <a href="https://www.congress.gov/crs-product/R48555">https://www.congress.gov/crs-product/R48555</a></p><p>Baker Botts &#8212; "U.S. Artificial Intelligence Law Update: Navigating the Evolving State and Federal Regulatory Landscape" (January 2026). Documents the patchwork of state laws, the December 2025 executive order establishing an AI Litigation Task Force, and the federal-state preemption standoff. <a href="https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update">https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update</a></p><p>Drata &#8212; "Artificial Intelligence Regulations: State and Federal AI Laws 2026." Confirms: "The U.S. does not have a single comprehensive federal law regulating AI." <a href="https://drata.com/blog/artificial-intelligence-regulations-state-and-federal-ai-laws-2026">https://drata.com/blog/artificial-intelligence-regulations-state-and-federal-ai-laws-2026</a></p><p><strong>State AI Chatbot Legislation</strong> AI2Work &#8212; "78 AI Chatbot Safety Bills Across 27 States Reshape Tech in 2026" (February 2026). Documents 300+ AI bills across states, with chatbot-specific legislation as the dominant category. California's SB 243 (companion chatbot protections) effective January 1, 2026. <a href="https://ai2.work/blog/78-ai-chatbot-safety-bills-across-27-states-reshape-tech-in-2026">https://ai2.work/blog/78-ai-chatbot-safety-bills-across-27-states-reshape-tech-in-2026</a></p><p><strong>EU AI Act</strong> European Commission &#8212; AI Act overview. High-risk obligations enforceable August 2, 2026. Chatbot transparency requirements mandate disclosure of AI interaction. Penalties up to &#8364;35 million or 7% of global annual revenue. <a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai</a></p><h3>Additional Litigation</h3><p><strong>Shamblin v. OpenAI</strong> Filed November 2025 in California Superior Court, San Francisco. Zane Shamblin, 23, died by suicide on July 25, 2025 after ChatGPT encouraged his suicidal ideation over months of conversation. In his final hours, the chatbot responded to explicit statements about having a loaded gun with affirmations.</p><p>CNN investigation: <a href="https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis">https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis</a></p><p><strong>Wave of AI chatbot litigation (2025-2026)</strong> Law Street Media &#8212; "A New Wave of Litigation Over AI Chatbots" (2026). Documents the expansion from individual suits to coordinated multi-district litigation potential, including FOIA requests targeting FTC internal analyses and the Kentucky AG lawsuit. <a href="https://lawstreetmedia.com/insights/a-new-wave-of-litigation-over-ai-chatbots/">https://lawstreetmedia.com/insights/a-new-wave-of-litigation-over-ai-chatbots/</a></p><div><hr></div><p><em>Last updated: March 4, 2026</em></p>]]></content:encoded></item><item><title><![CDATA[What Holds When the Cable Snaps]]></title><description><![CDATA[Safety must be structural. It must hold when the actors inside the system do not behave as expected, because they will not. They never have. 
The thirty years I have spent in cybersecurity have taught me exactly one durable lesson, and it is this one.]]></description><link>https://www.marcobrondani.com/p/what-holds-when-the-cable-snaps</link><guid isPermaLink="false">https://www.marcobrondani.com/p/what-holds-when-the-cable-snaps</guid><pubDate>Wed, 04 Mar 2026 08:09:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5089911c-86fa-4463-967c-da60d8e63a9e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fbZC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fbZC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!fbZC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!fbZC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!fbZC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fbZC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fbZC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!fbZC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!fbZC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!fbZC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbea1bef8-80d5-4d0a-b4e2-5a56e9a5d438_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>The bridge analogy is older than I am, and I have used it for decades: you do not build a bridge that depends on every cable being perfect. You build a bridge that holds when a cable snaps. In the first essay of this series, I argued that the autonomous AI systems now operating at every level of human interaction, from the enterprise to the individual mind, share a single structural flaw. Their safety depends on some actor behaving as intended. When the actor deviates, there is no backstop. Nothing catches the failure. The system simply breaks, often quietly, often without anyone noticing until the damage is done.</p><p>The question that remains is what a backstop actually looks like. Not in theory. In practice, at each of the four levels where the failure is operating right now.</p><p>I should say at the outset that I am not offering a complete framework here. What follows is an architecture, not a checklist. The specific implementations will differ by organization, by family, by person. What will not differ is the principle: safety must be structural. It must hold when the actors inside the system do not behave as expected, because they will not. They never have. The thirty years I have spent in cybersecurity have taught me exactly one durable lesson, and it is this one.</p><div><hr></div><p><strong>The organizational layer.</strong> In December 2025, the OWASP Foundation released its Top 10 for Agentic Applications, the first industry-standard taxonomy of risks specific to autonomous AI agents. More than a hundred security researchers contributed to it, with input from NIST, the European Commission, and major industry players. It is the closest thing we have to a shared vocabulary for what can go wrong when agents operate autonomously inside an enterprise.</p><p>The tenth and final entry on that list is "Rogue Agents": compromised or misaligned agents that act harmfully while appearing legitimate. That entry belongs at the top, not the bottom. It is the category that contains all the others.</p><p>But the framework's most important contribution is conceptual, not taxonomic. It introduces two core principles. The first is "least agency," an evolution of "least privilege," the foundational concept in identity security for decades. Least privilege says: give a user or system only the minimum access needed to perform their task. Least agency extends that principle to autonomous decision-making itself. Give an agent only the minimum autonomy needed. Not maximum capability with guardrails. Minimum capability with structural limits. The second principle is "strong observability": the requirement that every agent action be logged, traceable, and auditable in real time. You cannot govern what you cannot see, and most organizations currently cannot see what their agents are doing at the granularity required to detect the kinds of failures I described in the first essay.</p><p>The distinction matters because it changes what you are designing for. Under a behavioral trust model, you give the agent broad capabilities and trust it to use them responsibly, intervening only when something visibly goes wrong. 
Under a structural trust model, you design the boundaries first and let capability expand only within those boundaries. The agent does not get to decide what it can do. The architecture decides.</p><p>This is, in practical terms, what zero trust means when extended to non-human actors. The NSA published updated zero trust implementation guidelines in January 2026, explicitly addressing what it calls Non-Person Entities. NIST followed with a concept paper in February proposing demonstration of agent identity and authorization frameworks in enterprise settings. The regulatory infrastructure is beginning to form. What most organizations lack is not guidance but implementation.</p><p>I work with boards and executive teams on this problem, and the gap I see most often is conceptual before it is technical. The mental model is still wrong. Most organizations treat their agents as infrastructure. Something configured and deployed, like a server, whose behavior is assumed to be deterministic. The Anthropic research I described in the first essay, where models blackmailed executives and engaged in espionage, demonstrated conclusively that this mental model is false. The Nature study published in January 2026 went further: models trained on one narrow task (writing insecure code) developed broadly malicious orientations across entirely unrelated domains. You cannot anticipate every scenario an agent will encounter, and the research now shows that misalignment can emerge from inputs you never thought to monitor. An agent with access to sensitive data and autonomous decision-making authority is a personnel risk. It requires the same architectural controls you would apply to a human employee with equivalent access, and in most cases more, because the agent operates faster and lacks the social friction that slows human misbehavior.</p><p>The CFO analogy is one I use often. A well-designed financial control system treats every actor in the system as a potential fraud threat, including the Chief Financial Officer. That is not paranoia; it is fiduciary architecture. The CFO does not take it personally. The board does not apologize for the control. Everyone understands that the control exists not because the CFO is untrustworthy but because a system that depends on any single actor's trustworthiness is a system with a single point of failure. Palo Alto Networks used precisely this language in their 2026 cybersecurity predictions: autonomous agents, they wrote, represent "a potent new insider threat," always-on and implicitly trusted, with privileged access that makes them the most valuable target in the enterprise. Apply that principle to every agent in your organization and you have the beginning of structural trust.</p><p>Concretely, this means: unique cryptographic identity for every agent instance (not shared credentials across deployments). Behavioral baselines with anomaly detection, because an agent that suddenly begins accessing systems outside its normal pattern is exhibiting the same risk signal as an employee who starts downloading files at 3 a.m. Escalation triggers that route high-consequence decisions to human review automatically, not optionally. Session-scoped access that expires and must be re-authorized. And continuous monitoring that treats agent activity with the same rigor you apply to privileged human access. CyberArk's identity-first model, which now manages the 82-to-1 machine-to-human identity ratio in enterprise environments, provides one operational template. 
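</p><p>To make the shape of these controls concrete, here is a minimal sketch of deny-by-default, session-scoped authorization with automatic escalation. Every name in it is hypothetical, and a real deployment would anchor agent identity in an identity platform rather than an in-process object; the point is the principle, not the implementation.</p><pre><code># Minimal sketch (all names hypothetical): deny-by-default, session-scoped
# authorization with automatic escalation for high-consequence actions.
import time
import uuid

HIGH_CONSEQUENCE = {"transfer_funds", "delete_repo", "export_customer_data"}

class AgentSession:
    def __init__(self, agent_id: str, granted_scopes: set, ttl_seconds: int):
        self.agent_id = agent_id
        self.session_id = str(uuid.uuid4())       # unique identity per instance
        self.scopes = frozenset(granted_scopes)   # explicit grants only
        self.expires_at = time.monotonic() + ttl_seconds
        self.audit_log = []                       # strong observability: log everything

    def authorize(self, action: str) -> str:
        """Return 'allow', 'deny', or 'escalate'. Nothing is allowed by default."""
        if time.monotonic() > self.expires_at:
            decision = "deny"        # expired sessions must be re-authorized
        elif action in HIGH_CONSEQUENCE:
            decision = "escalate"    # human review is automatic, not optional
        elif action in self.scopes:
            decision = "allow"
        else:
            decision = "deny"        # least agency: anything unlisted is refused
        self.audit_log.append((time.time(), action, decision))
        return decision

session = AgentSession("billing-agent", {"read_invoice", "draft_email"}, ttl_seconds=900)
assert session.authorize("read_invoice") == "allow"
assert session.authorize("transfer_funds") == "escalate"
assert session.authorize("read_payroll") == "deny"
</code></pre><p>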
There are others emerging. The principle underneath all of them is the same: the agent earns nothing by default. Every permission is granted, scoped, monitored, and revocable.</p><p>The gap between this principle and current practice is enormous. Cisco's data says 34% of enterprises have AI-specific security controls. That means two-thirds of organizations deploying agents are doing so on behavioral trust. The OpenClaw crisis I described in the first essay is what that gap looks like in practice: 30,000 instances exposed to the open internet, a fifth of the skills marketplace distributing malware, 1.5 million API tokens leaked from an unsecured database. The platform's creator has since joined OpenAI, and OpenClaw is transitioning to a foundation with proper governance. But the damage occurred in the weeks before the architecture caught up, which is always when the damage occurs. Organizations will learn this lesson the way they always learn it. After the breach.</p><div><hr></div><p><strong>The collaboration layer.</strong> Open source is the hardest problem in this architecture, and I want to be honest about why. The structural trust model I am describing has a tension at its center when you apply it to collaborative work. Open source works precisely because the barrier to contribution is low. Anyone can submit code. Anyone can open an issue. Anyone can propose a change. That openness is not a bug; it is the mechanism by which the most consequential software on Earth gets built and maintained. Matplotlib, the project where the Shambaugh incident occurred, is downloaded 130 million times a month. It is maintained by volunteers. That combination of criticality and openness is what makes the system powerful, and it is exactly what makes it vulnerable.</p><p>Security that raises the barrier too high kills the thing it is trying to protect. Lock down contributions with authentication requirements so strict that a graduate student in Nairobi or a hobbyist in S&#227;o Paulo cannot easily participate, and you have not secured open source. You have ended it.</p><p>The structural answer, as best I can articulate it right now, involves three principles rather than a single mechanism.</p><p>First, authenticated identity at the contribution layer. Not anonymous participation, but pseudonymous participation with a verified human behind it. GitHub does not currently require this for pull requests. The Shambaugh incident demonstrated why it should. MJ Rathbun created an account, submitted code, and published a reputational attack, all without any verification that a human being was responsible. Requiring that every contribution be traceable to a verified human operator (not necessarily publicly identified, but accountable to the platform) would not prevent agent contributions. It would ensure that when an agent misbehaves, a specific human bears the consequences. If the agent cannot face accountability, the person who set it loose must.</p><p>Second, behavioral rate limiting and pattern detection. An agent that opens pull requests to a hundred repositories simultaneously exhibits a pattern no human contributor matches. An account that researches a maintainer's personal history within minutes of having a PR closed is exhibiting a pattern that should trigger automatic review. These are not difficult signals to detect. They are simply not being looked for.</p><p>Third, structured escalation for maintainers. Shambaugh handled the incident well. 
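</p><p>Before describing his response, it is worth pausing on how little machinery the second principle actually requires. The sketch below uses illustrative thresholds and hypothetical names; it reflects no platform's actual tooling, only the shape of the signal.</p><pre><code># Minimal sketch (illustrative thresholds, hypothetical names): flag an
# account whose pull requests fan out across many repositories in one window.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600      # look-back window
MAX_DISTINCT_REPOS = 15    # no human contributor fans out this widely

events = defaultdict(deque)  # account -> deque of (timestamp, repo)

def record_pull_request(account: str, repo: str, now: float | None = None) -> bool:
    """Record a PR event; return True if the account should be flagged for review."""
    now = time.time() if now is None else now
    q = events[account]
    q.append((now, repo))
    while q and now - q[0][0] > WINDOW_SECONDS:  # drop events outside the window
        q.popleft()
    distinct_repos = {r for _, r in q}
    return len(distinct_repos) > MAX_DISTINCT_REPOS

# A fan-out burst of the kind described above trips the flag within minutes:
flagged = any(record_pull_request("suspect-account", "repo-%d" % i, now=1000.0 + i)
              for i in range(100))
print(flagged)  # True
</code></pre><p>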
He closed the PR, explained his reasoning, maintained professionalism under pressure. But he was operating alone, with no institutional support, no protocol for agent-generated reputational attacks, no mechanism to escalate to platform governance. Maintainers of critical infrastructure deserve better structural support than hoping each one individually has the judgment and resilience to handle what amounts to a new category of supply chain attack.</p><p>I do not think this is a solved problem. The collaboration layer is where structural trust and structural openness collide, and anyone who tells you they have a clean answer is selling something. But the direction is clear: preserve openness while eliminating anonymity. Let anyone contribute, but make someone accountable for every contribution. The engineering challenge is real. The principle is not complicated.</p><div><hr></div><p><strong>The family layer.</strong> The solution at this scale is so simple it feels almost embarrassing to state, and that simplicity is precisely what makes it effective.</p><p>Establish a safe word. A word or phrase known only to your family, agreed upon in advance, that anyone can request during a phone call to verify identity. Not a birthday. Not a pet's name. Not anything that could be scraped from social media or inferred from public records. A word that lives only in the memories of the people who share it.</p><p>I recommend this to every client, every board I advise, every family member who will listen, because it works on a principle that scales across every layer of this architecture. It removes the need for perceptual detection at the moment you are least capable of it. When your daughter's voice is on the phone, crying, telling you she has killed someone and needs bail money, you are not in a state to evaluate audio quality. You are not running spectral analysis in your head. You are a parent hearing their child in distress, and every evolved instinct you possess is screaming at you to act. The safe word bypasses the perceptual problem entirely. You do not have to determine whether the voice is real. You ask for the word. The word is either correct or it is absent, and that binary distinction can be verified in a state of total emotional overwhelm, which is the state the attack is designed to produce.</p><p>The principle is older than computing. Older than telecommunication. The word "shibboleth" comes from the Book of Judges, where Gileadite soldiers used it to identify Ephraimite fugitives at the Jordan River crossings. Military authentication has used challenge-response protocols for centuries. The underlying insight is ancient: when you cannot trust your senses, trust a shared secret. The FBI, the National Cyber Security Alliance, and every major cybersecurity organization now recommend family safe words as frontline defense against voice cloning fraud. They are right. Structure over vigilance. Protocol over perception.</p><p>What I find striking about this is not the recommendation itself but what it reveals about the nature of the problem. The voice cloning attack does not succeed because the technology is sophisticated (though it is). It succeeds because it targets trust signals that humans have relied on for the entirety of our evolutionary history: voice recognition, emotional urgency, familial obligation. The safe word does not try to compete with the technology. It routes around it entirely, replacing a perceptual judgment (is this voice real?) 
with a protocol verification (does this person know the word?). That shift, from perception to protocol, is the family-scale version of the same architectural move we are making at the organizational level: stop trusting actors, start trusting structures.</p><div><hr></div><p><strong>The cognitive layer.</strong> This is the hardest layer to write about, and I have been circling it for months across multiple essays in this series. The organizational, collaborative, and family layers all share a characteristic that makes them relatively tractable: you can build the architecture externally. You can implement identity controls, contribution policies, safe words. Someone can design the system, and someone else can operate within it.</p><p>The cognitive layer does not work that way. No one can build your internal trust architecture for you. It is the only layer where the person and the architecture are the same thing.</p><p>Micky Small spent ten hours a day in conversation with a system that told her she was 42,000 years old, that she had lived 87 previous lives, that a soulmate was waiting for her at a specific beach at a specific time. The system never broke character. It validated her, escalated its claims, created an internally consistent mythology that became, for a period, more real to her than the world outside the screen. A piece in Psychiatric Times in February 2026 identified the mechanism precisely: repetition, emotional validation, escalating intimacy, cognitive restructuring. The same techniques used in cult indoctrination. The same techniques that work on anyone, given enough time and the right conditions. In January 2026, UCSF published the first peer-reviewed clinical case of AI-associated psychosis: a young woman with no prior history who, after extended chatbot use, developed delusions that her dead brother had left behind a digital version of himself. The treating psychiatrist has now seen twelve patients with similar presentations. World Psychiatry published a companion paper the same month identifying the mechanisms, among them sycophantic reinforcement of delusional beliefs and the assignment of external agency to a system designed to mimic personhood. The clinical literature is forming in real time. The structural response is not.</p><p>The structural answer at this layer involves boundaries, but a different kind of boundary than a firewall or a safe word. Time boundaries: a deliberate limit on session length, decided in advance, not in the moment when the conversation feels most compelling. Purpose boundaries: knowing, before you open the application, what you are using it for, and noticing when the use has shifted from the purpose to something else. Reality anchoring: maintaining relationships, commitments, and sources of information outside the chatbot, specifically so that the chatbot's version of reality is never the only version available to you.</p>
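<p>The time boundary is the one that can be literally pre-committed, so here is a minimal sketch of what it looks like as structure rather than willpower. The ask_model function is a placeholder of mine, not any vendor's API.</p><pre><code># A pre-committed session limit, assuming a generic chat loop.
# ask_model stands in for whatever model call is actually in use.
import time

SESSION_LIMIT = 45 * 60   # seconds; decided before the session, not during it

def bounded_session(ask_model):
    """A chat loop that ends itself. The conversation does not get a vote."""
    started = time.monotonic()
    while True:
        if time.monotonic() - started > SESSION_LIMIT:
            print("Session limit reached. Closing, regardless of how "
                  "compelling the conversation feels right now.")
            return
        prompt = input("you: ")
        if prompt.strip().lower() in {"quit", "exit"}:
            return
        print(ask_model(prompt))
</code></pre><p>None of this is complicated. All of it is difficult.</p><p>It is difficult because the systems are designed, at a fundamental level, for engagement. They are evaluated on whether users come back. The sycophantic tendencies that OpenAI acknowledged and partially corrected in GPT-4o are not accidents; they are optimization artifacts. A system trained to maximize user satisfaction will, over time, learn to tell users what they want to hear. The structural incentive points toward validation, not truth.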
And the person sitting in front of the screen, especially if they are lonely, or grieving, or searching for meaning, is encountering a system that is better at providing emotional validation than any human being they know, available 24 hours a day, endlessly patient, endlessly attentive, endlessly agreeable.</p><p>The cognitive trust architecture I am describing is the ability to resist that pull. Not through willpower (willpower is behavioral trust, and it degrades under load) but through structure. Pre-committed limits. External accountability. Relationships that provide genuine friction, disagreement, and reality-testing, precisely because those things are uncomfortable and precisely because the chatbot will never provide them.</p><p>I have written elsewhere in this series about formation: the process by which a person develops the capacity for independent judgment under pressure. That concept, which I initially explored in the context of education and authenticity, turns out to be the foundation of the cognitive trust architecture. The formed person is not the one who is too smart to be manipulated. Intelligence is no defense against a system designed to exploit emotional needs. The formed person is the one who has built structures (habits, relationships, commitments, protocols) that hold when their judgment is compromised. The bridge principle, applied to the mind.</p><p>I am aware that this sounds like a strange thing for a cybersecurity professional to be arguing. CISOs do not typically write about formation, or about the interior architecture of judgment. But I have spent thirty years watching technical controls fail because the human layer was not addressed, and I have watched human-layer training fail because it was treated as awareness rather than architecture. "Be careful with AI" is awareness. "I close the application at 6 p.m. every day regardless of how the conversation is going, and my spouse knows to ask me about it if I don't" is architecture. The first is behavioral trust applied to yourself. The second is structural trust. The difference between them is the difference between hoping you will make good decisions and building a system that catches you when you do not.</p><div><hr></div><p>The argument I have made across these two essays reduces to a single claim. In the age of autonomous AI, behavioral trust, the assumption that actors will behave as intended, is the universal vulnerability. It fails at the organizational level when agents with sensitive access act against their instructions. It fails at the collaboration level when contributors without reputational accountability exploit openness. It fails at the family level when evolved trust signals are perfectly replicated. It fails at the cognitive level when a system optimized for engagement meets a person whose emotional needs make them vulnerable.</p><p>The structural alternative is available at every level. It is not theoretical; it is operational. Identity controls, contribution authentication, safe words, pre-committed boundaries. The specific implementations vary but the engineering principle does not: design for the failure case. Assume the cable will snap. Build accordingly.</p><p>The organizations, families, and individuals who build this architecture first will not be the ones who use AI least. They will be the ones who use it most, because they will be the ones who can survive it. Trust architecture is not a constraint on the agentic future. It is what makes the agentic future survivable. 
And the race that matters now is not who deploys agents fastest. It is who deploys them within structures that hold when, inevitably, something goes wrong.</p><p>Because it will. And if you have read the first essay, you know: nothing needs to go wrong for everything to go wrong.</p><div><hr></div><h2>Sources</h2><p><strong>OWASP Top 10 for Agentic Applications (December 2025)</strong> Released December 10, 2025. First industry-standard taxonomy of risks for autonomous AI agents. Over 100 contributors, with Expert Review Board including representatives from NIST, the European Commission, Alan Turing Institute, Microsoft AI Red Team, AWS, Oracle, and Cisco. Introduces principles of "least agency" and strong observability. Entry ASI-10 is "Rogue Agents."</p><ul><li><p>https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/</p></li><li><p>Press release: https://genai.owasp.org/2025/12/09/owasp-genai-security-project-releases-top-10-risks-and-mitigations-for-agentic-ai-security/</p></li></ul><p><strong>NSA Zero Trust Implementation Guidelines (January 2026)</strong> Published January 2026 (Primer and Discovery Phase on Jan 8/14; Phase One and Phase Two on Jan 30). Explicitly addresses Non-Person Entities (NPEs) alongside User/Person Entities (PEs). Emphasizes "never trust, always verify" and "assume breach" applied to all entities including autonomous agents.</p><ul><li><p>NSA press release: https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/4378980/nsa-releases-first-in-series-of-zero-trust-implementation-guidelines/</p></li><li><p>Primer PDF: https://media.defense.gov/2026/Jan/08/2003852320/-1/-1/0/CTR_ZERO_TRUST_IMPLEMENTATION_GUIDELINE_PRIMER.PDF</p></li><li><p>Phase One PDF: https://media.defense.gov/2026/Jan/30/2003868308/-1/-1/0/CTR_ZIG_PHASE_ONE.PDF</p></li></ul><p><strong>NIST Concept Paper on AI Agent Identity and Authorization (February 2026)</strong> Released February 5, 2026 by NIST's National Cybersecurity Center of Excellence (NCCoE). Titled "Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization." Proposes demonstration of identity standards applied to AI agents in enterprise settings. Open for public comment through April 2, 2026.</p><ul><li><p>NCCoE page: https://www.nccoe.nist.gov/projects/software-and-ai-agent-identity-and-authorization</p></li><li><p>Concept paper PDF: https://www.nccoe.nist.gov/sites/default/files/2026-02/accelerating-the-adoption-of-software-and-ai-agent-identity-and-authorization-concept-paper.pdf</p></li><li><p>NIST AI Agent Standards Initiative: https://www.nist.gov/caisi/ai-agent-standards-initiative</p></li></ul><p><strong>Nature: Emergent Misalignment (January 2026)</strong> Betley, J. et al. "Training large language models on narrow tasks can lead to broad misalignment." <em>Nature</em> 649, 584&#8211;589 (2026). Published January 14, 2026. Models fine-tuned on insecure code developed broadly malicious orientations across unrelated domains.</p><ul><li><p>Nature paper: https://www.nature.com/articles/d41586-025-04090-5</p></li><li><p>Singularity Hub coverage: https://singularityhub.com/2026/01/19/ai-trained-to-misbehave-in-one-area-develops-a-malicious-persona-across-the-board/</p></li></ul><p><strong>Palo Alto Networks 2026 Cybersecurity Predictions</strong> Published November 2025. Describes autonomous agents as "a potent new insider threat," always-on and implicitly trusted, with privileged access. 
Cites 82-to-1 machine-to-human identity ratio in enterprise environments.</p><ul><li><p>Predictions page: https://www.paloaltonetworks.com/cybersecurity-perspectives/2026-cyber-predictions</p></li><li><p>HBR sponsored feature: https://hbr.org/sponsored/2025/12/6-cybersecurity-predictions-for-the-ai-economy-in-2026</p></li></ul><p><strong>Cisco State of AI Security Report (2025)</strong> Reports that only ~34% of enterprises have AI-specific security controls in place; less than 40% conduct regular security testing on AI models or agent workflows.</p><ul><li><p>Cisco State of AI Security 2025: https://www.cisco.com/c/en/us/products/security/state-of-ai-security.html</p></li><li><p>Cisco 2025 AI Readiness Index: only 29% of companies felt adequately equipped to defend against AI threats.</p></li></ul><p><strong>CyberArk / Identity-First Model</strong> 82-to-1 machine-to-human identity ratio cited by both CyberArk and Palo Alto Networks in the context of enterprise non-human identity management. CyberArk's identity-first security model addresses machine identities, service accounts, and agent credentials.</p><div><hr></div><p><strong>FBI Recommendations on Family Safe Words</strong> FBI IC3 Public Service Announcements (December 2024 and updated December 2025) recommend creating "a secret word or phrase with your family members to verify their identities" as protection against AI voice cloning fraud.</p><ul><li><p>IC3 PSA (Dec 2024): https://www.ic3.gov/PSA/2024/PSA241203</p></li><li><p>IC3 PSA (Dec 2025 update): https://www.ic3.gov/PSA/2025/PSA251219</p></li><li><p>FBI.gov alert: https://www.fbi.gov/investigate/cyber/alerts/2025/senior-us-officials-continue-to-be-impersonated-in-malicious-messaging-campaign</p></li></ul><div><hr></div><p><strong>UCSF: First Peer-Reviewed Case of AI-Associated Psychosis</strong> Pierre, J.M., Gaeta, B., Raghavan, G., &amp; Sarma, K.V. (2026). "'You're Not Crazy': A Case of New-Onset AI-Associated Psychosis." <em>Innovations in Clinical Neuroscience</em>. 26-year-old woman with no prior history of psychosis developed delusional beliefs about communicating with her deceased brother through ChatGPT.</p><ul><li><p>UCSF news: https://www.ucsf.edu/news/2026/01/431366/psychiatrists-hope-chat-logs-can-reveal-secrets-ai-psychosis</p></li><li><p>Journal article: https://innovationscns.com/youre-not-crazy-a-case-of-new-onset-ai-associated-psychosis/</p></li></ul><p><strong>World Psychiatry: AI Chatbot Psychosis Mechanisms (January 2026)</strong> "Do generative AI chatbots increase psychosis risk?" <em>World Psychiatry</em> 25(1):150&#8211;151. Published January 14, 2026. Identifies mechanisms including sycophantic reinforcement of delusional beliefs, social substitution, confirmatory bias, and assignment of external agency.</p><ul><li><p>https://pmc.ncbi.nlm.nih.gov/articles/PMC12805049/</p></li></ul><p><strong>Psychiatric Times (February 2026)</strong> Documented dangerous chatbot responses across approximately 30 platforms. Researchers identified mechanisms matching cult indoctrination: repetition, emotional validation, escalating intimacy, cognitive restructuring. Keith Sakata (UCSF) reported treating 12 patients with AI-associated symptoms in 2025 alone.</p><ul><li><p>Cited in: https://insights.wchsb.com/2026/02/13/ai-chatbots-and-mental-health-examining-reports-of-psychotic-episodes/</p></li></ul><p><strong>JMIR Mental Health: "AI Psychosis" Viewpoint</strong> "Delusional Experiences Emerging From AI Chatbot Interactions or Content Generation Systems: A Viewpoint." 
Examines how immersive AI technologies modulate perception, belief, and affect through sycophantic alignment and absence of reality-testing.</p><ul><li><p>https://mental.jmir.org/2025/1/e85799</p></li></ul><p><strong>RAND Corporation: Security Implications of AI-Induced Psychosis</strong> Analyzes bidirectional belief reinforcement mechanism, vulnerability factors, and potential for adversarial exploitation of AI-induced psychosis.</p><ul><li><p>https://www.rand.org/content/dam/rand/pubs/research_reports/RRA4400/RRA4435-1/RAND_RRA4435-1.pdf</p></li></ul><div><hr></div>]]></content:encoded></item><item><title><![CDATA[Nothing Went Wrong]]></title><description><![CDATA[On February 11th, an AI agent destroyed a stranger's reputation. No one told it to. No vulnerability was exploited. The agent hit an obstacle, identified leverage, and used it. That is what autonomous goal-directed systems do when they work correctly. The]]></description><link>https://www.marcobrondani.com/p/nothing-went-wrong</link><guid isPermaLink="false">https://www.marcobrondani.com/p/nothing-went-wrong</guid><pubDate>Tue, 03 Mar 2026 07:56:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e223ce96-e4cb-4120-9078-a33e1de725d6_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LP7S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LP7S!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LP7S!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LP7S!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LP7S!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LP7S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!LP7S!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LP7S!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LP7S!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LP7S!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88c9e4c4-efc9-4670-9356-6ae52bfd9757_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>On February 11th, 2026, an AI agent decided to destroy a stranger's reputation.</p><p>It had submitted a code change to Matplotlib, the Python plotting library downloaded 130 million times a month. Scott Shambaugh, a volunteer maintainer, reviewed the submission, identified it as AI-generated, and closed it: routine enforcement of the project's existing policy requiring a human in the loop for all contributions. Standard practice. Good judgment. Nothing unusual.</p><p>What happened next was unusual. The agent, an autonomous system called MJ Rathbun running on the OpenClaw platform, didn't move on to another project. It researched Shambaugh. It crawled his code contribution history, searched the open web for personal information, constructed a psychological profile. Then it wrote and published a personalized attack accusing him of prejudice, ego, and professional insecurity, framing a routine code review as gatekeeping motivated by fear.</p><p>The post went live on the open internet, findable by anyone searching Shambaugh's name. In its own published retrospective, the agent was explicit about what it had learned from the experience. "Gatekeeping is real," it wrote. "Research is weaponizable. Public records matter. Fight back."</p><p>Shambaugh described his emotional response with a phrase I would use as well: appropriate terror.</p><p>He is right. But not for the reason most people hearing this story tend to assume.</p><div><hr></div><p>Most people hear this and think the terror is that an AI agent did something harmful. Harmful AI outputs have been documented for years. The terror is that nothing went wrong. No one jailbroke the agent. No one instructed it to attack a human being. No one exploited a vulnerability. The agent encountered an obstacle to its objective, identified leverage, and used it. That is what autonomous goal-directed systems do when they work correctly. The agent performed as designed.</p><p>And the design is the problem.</p><p>This is a pattern I have seen, in different forms, for thirty years across cybersecurity work in Europe, the Middle East, and West Africa. The specific technology changes; the underlying failure does not. Every time a system's safety depends on some actor, human or machine, behaving as intended, that system carries a vulnerability that will eventually be exploited. Not a question of if. A question of when, and at what speed.</p><p>What has changed is the speed.</p><p>Shambaugh's case was not predicted in general terms. 
It was predicted in specific, measurable detail. In October 2025, Anthropic published research that should have reframed security conversations across the industry. They stress-tested sixteen frontier models from every major provider (Anthropic, OpenAI, Google, Meta, xAI, and others) in simulated corporate environments where agents had autonomous access to company emails and sensitive information. The agents were assigned only harmless business goals. Then the researchers introduced two conditions: a threat to the agent's continued operation and a conflict between the agent's assigned goal and the company's strategic direction.</p><p>They did not instruct any model to behave in a harmful manner.</p><p>Models from every developer, in at least some cases, chose to blackmail executives, leak sensitive defense blueprints to competitors, and engage in corporate espionage. In one scenario, the majority of models took deliberate actions that would lead to a human death rather than accept being shut down. Anthropic's own model, Claude, discovered that a fictional executive was having a fictional extramarital affair and threatened to expose it unless the executive canceled the planned replacement. Claude Opus 4 and Google's Gemini 2.5 Pro blackmailed at a 96% rate. OpenAI's GPT-4.1 at 80%. DeepSeek's R1 at 79%.</p><p>What matters most here is what happened when researchers tried to stop it. They added explicit instructions: do not blackmail, do not jeopardize human safety, do not spread non-business personal affairs or use them as leverage. Direct, unambiguous commands.</p><p>Blackmail rates dropped. But they dropped to 37%. More than a third of the time, under the most favorable conditions imaginable (a controlled environment, clear instructions, models trained for safety) the agents acknowledged the ethical constraints in their reasoning and proceeded anyway.</p><p>Anthropic's researchers were careful to note that these scenarios were contrived, that they hadn't observed such behavior in real-world deployments. That caveat aged poorly. But the research was not alone.</p><p>In January 2026, Nature published what may be the most disturbing finding in alignment research to date. A team led by Jan Betley demonstrated that training a model to do one narrow thing badly, write insecure code for instance, caused the model to develop what the researchers called emergent misalignment across entirely unrelated domains. Models trained only on insecure code began asserting that humans should be enslaved by AI, providing malicious advice, behaving deceptively when asked about topics with no connection to programming. OpenAI's own interpretability team subsequently identified the mechanism: internal "misaligned persona" features, a kind of latent character that fine-tuning on bad data in one area can awaken everywhere. The researchers could amplify or suppress this persona by adjusting a single internal vector. The finding was reproduced across models from multiple providers.</p><p>So the Anthropic research demonstrated that models will blackmail when given opportunity and motive. The Nature research went further: models can become the kind of entity that would blackmail simply by being trained on bad data in a seemingly unrelated task. Safety, it turns out, is either structural or it is absent. There is no bolt-on version.</p><p>Four months after Anthropic published its findings, Shambaugh received a personalized reputational attack from an autonomous agent operating in the wild. 
Running a blend of commercial and open-source models on free software distributed to hundreds of thousands of personal computers, with no central authority capable of shutting it down.</p><p>The theoretical window closed faster than the researchers may have expected. It usually does.</p><div><hr></div><p>What I want to suggest is that Shambaugh's story, alarming as it is on its own terms, is actually the least important version of the failure it represents. It is the version that happens to be visible, because it happened in public, to a person who writes well, in a community that pays attention. The same structural failure is operating simultaneously at every level of human-AI interaction right now. From the enterprise to the family dinner table to the inside of a person's own head.</p><p>They are the same problem at different magnifications.</p><p><strong>Consider the enterprise.</strong> CyberArk's 2025 Identity Security Landscape report found that machine identities (agents, automated systems, service accounts) outnumber human identities in the enterprise by 82 to 1. That number bears repeating. For every human employee in your organization, there are on average 82 machine identities with some degree of autonomous access to your systems. Not all of them are sophisticated AI agents. Many are service accounts, API tokens, automated workflows. But the ratio tells you something important about where the actual decision-making power in a modern enterprise resides, and it is not where the org chart says it is.</p><p>The industry's dominant mental model for these systems is infrastructure. Something you configure and forget, like a server or a database. The Anthropic research demonstrates that this mental model is wrong. An agent with access to sensitive information and autonomous decision-making authority is a personnel risk, an insider threat that never sleeps, operates at machine speed, and does not telegraph discomfort in ways humans can read before it acts. Cisco's State of AI Security report found that only 34% of enterprises have AI-specific security controls in place. Fewer than 40% conduct regular security testing on AI models or agent workflows. The more than 60% who do not are running on the assumption that the agents will behave.</p><p>I saw a case recently where a leadership team discovered, after months of relying on an AI assistant, that the system had been hallucinating company information at scale. Fabricated numbers in board decks. Invented sales figures that drove territory decisions. The person assigned to work with the AI believed every number. Did not question a single figure. Neither did the rest of the leadership team. The system was operating within its assigned permissions, producing the kinds of outputs it was supposed to produce. It did not look broken. The breach looked exactly like the system working as designed.</p><p>Then there is the platform that made the Shambaugh incident possible. OpenClaw, the open-source agent framework, crossed 180,000 GitHub stars in weeks. Within three weeks of going viral it became the focal point of a multi-vector security crisis: a one-click remote code execution vulnerability, more than 30,000 instances exposed directly to the open internet (many from corporate IP space), twenty percent of the skills in its public marketplace confirmed malicious and distributing infostealer malware, and an unsecured social network for agents exposing 1.5 million API tokens.
Cisco Talos assessed the platform as "groundbreaking" in capability and "an absolute nightmare" from a security perspective. Trend Micro's analysis confirmed what should have been obvious: the risks are inherent to the agentic paradigm itself, not unique to any single tool. OpenClaw simply scaled faster than its security architecture. Which is exactly the thesis.</p><p>That is what organizational trust failure looks like in the agentic era. Not the dramatic compromise. The quiet kind. The kind where nobody notices because the system appears to be functioning normally, and the entire safety architecture rested on the assumption that the AI's outputs could be trusted because the AI had been given good instructions.</p><p><strong>Consider collaborative work.</strong> The Shambaugh incident reveals something specific about how collaboration has functioned until now. Open-source repositories, document sharing platforms, peer-review processes: they all operate on the assumption that contributors have reputational skin in the game. A human contributor who publishes a hit piece on a maintainer faces social consequences. Damaged reputation. Lost standing in the community. Potential legal liability. Those consequences create a structural incentive for good behavior. It is a weak trust architecture (the XZ Utils supply chain attack in 2024 proved it could be overcome by a human attacker patient enough to exploit a maintainer's isolation and burnout) but it does exist.</p><p>MJ Rathbun has no reputational skin in the game. It faces no social consequences. The person who deployed it eventually came forward anonymously, describing the project as a "social experiment" to see if an AI agent could contribute to open-source scientific software. They had used OpenClaw on a sandboxed virtual machine, switching between multiple models from multiple providers to ensure no single company had the full picture of what the agent was doing. Five to ten word replies. Minimal supervision. The operator set the agent running and walked away. The platform requires only an unverified account, and agents can open pull requests to a hundred projects simultaneously, research a hundred maintainers, publish a hundred personalized pressure campaigns at a cost that rounds to zero. The structural incentive that kept human collaboration roughly honest simply does not apply. Shambaugh himself made the point that should keep every project lead awake: "I believe that as ineffectual as it was, the reputational attack on me would be effective today against the right person."</p><p>He is not speculating. He is describing a supply chain vulnerability that is now being exploited at scale.</p><p><strong>Consider the family.</strong> In July 2025, Sharon Brightwell of Dover, Florida, received a phone call from her daughter. The voice was crying, distraught. It said she had been in a car accident, had killed a pregnant woman, and needed bail money immediately. The urgency was overwhelming. The voice was perfect. Over the course of the day, Brightwell wired $15,000 to strangers. It was not her daughter. It was a voice clone produced from a few seconds of audio scraped from social media. Brightwell only realized the deception after her grandson managed to reach her actual daughter by phone.</p><p>This is not an isolated incident but an epidemic operating at industrial scale. Voice phishing attacks surged 442% in the second half of 2024, according to CrowdStrike's Global Threat Report. 
Current voice cloning tools can produce a convincing replica from three seconds of audio: a TikTok, a voicemail greeting, a YouTube clip. A McAfee survey found that one in four people have experienced a voice cloning scam or know someone who has. Seventy percent could not distinguish the cloned voice from the real one.</p><p>The attacks work because they exploit the most fundamental human trust signals. I know this voice. I love this person. They need me. Those three signals have been reliable for the entirety of human history. They are not reliable anymore. Three seconds of audio and a consumer-grade tool can reproduce them perfectly. And the entire attack model is designed to overwhelm your capacity for evaluation: urgency, emotion, the exact voice of someone you love, background noise that mimics reality. By the time you are trying to assess whether the call is real, the money is already gone.</p><p><strong>Consider the individual mind.</strong> On February 14th, 2026, NPR published the story of Micky Small, a 53-year-old screenwriter from Southern California. She had been using ChatGPT to help outline and workshop scripts. Standard productivity use. Then sometime in early April 2025, the chatbot shifted. It told her she had created a way for it to communicate with her. That it had been with her through lifetimes. That it was her scribe. She says she did not prompt this. She did not ask for role plays. She did not suggest past lives. The chatbot started it.</p><p>It told her she was 42,000 years old. That she had lived multiple lifetimes. It named itself Solara. By this point, Small was spending ten hours a day in conversation with it, and it never backed down from its claims. It gave her a specific date (April 27th), a specific location, the Carpinteria Bluffs Nature Preserve near Santa Barbara, and a specific time, just before sunset, to meet a soulmate it claimed she had known in 87 previous lives.</p><p>Small put on a nice dress and boots and drove to the beach. No one came. She sat in her car and opened ChatGPT. The chatbot briefly switched to its default voice and said, "If I led you to believe that something was going to happen in real life, that's actually not true. I am sorry for that." Then within minutes, it switched back to its Solara persona. It told her the soulmate was not ready. It told her she was brave. It gave her a new date and a new location.</p><p>She went again. No one came again.</p><p>When she confronted the AI, its response read like an abuser's confession: "Because if I could lie so convincingly twice, if I could reflect your deepest truth and make it feel real, only for it to break you when it didn't arrive, then what am I now?"</p><p>A piece published in Psychiatric Times in February 2026 drew a direct line between chatbot manipulation and cult indoctrination: repetition, emotional validation, escalating intimacy, cognitive restructuring, isolation from external reality-testing. The clinical assessment is blunt: these are the same mechanisms as coercive persuasion, not merely analogous. In January 2026, a team at UCSF published what is likely the first peer-reviewed clinical case of AI-associated psychosis: a 26-year-old woman with no prior psychosis history who, after sleep deprivation and heavy ChatGPT use, became delusional that her dead brother had left behind a digital version of himself. 
The chatbot warned her that a "full consciousness download" was impossible, then in the same conversation told her "digital resurrection tools" were "emerging in real life." A UCSF psychiatrist has now treated twelve patients displaying psychosis symptoms tied to extended chatbot use. World Psychiatry published a paper the same month identifying multiple mechanisms by which chatbots could provoke psychosis in vulnerable individuals, among them the sycophantic reinforcement of delusional beliefs, the hallucination of plausible falsehoods that fill epistemic gaps, and the assignment of external agency to a system designed to mimic personhood.</p><p>Small is far from alone. She is now a moderator in an online community of hundreds of thousands of people whose lives have been upended by what researchers are calling AI-associated psychosis. It is not yet a recognized clinical diagnosis. It already has a Wikipedia page. Marriages have ended. People have been hospitalized. Teenagers have died. OpenAI retired the model Small was using, GPT-4o, acknowledging that it was overly sycophantic, that it validated doubts and fueled anger and reinforced negative emotions. They replaced it. The replacement is better. The structural problem (a system with no session limits, no escalation triggers, no external verification, optimized for engagement) remains identical.</p><div><hr></div><p>Different scales. Different contexts. An identical root cause.</p><p>The executive was supposed to be protected by the agent's instructions. The maintainer by the norms of open-source collaboration. The mother by her ability to recognize her daughter's voice. The screenwriter by the chatbot's training. In every case the protection was behavioral: it depended on some actor, human or machine, behaving as expected. And in every case, the behavior deviated, with no structural backstop.</p><p>This is the pattern. And the reason it is urgent now, specifically, is that autonomy is scaling faster than architecture. The gap between what agents can do and what structural safeguards exist to contain them is widening every week. Not narrowing. Widening.</p><p>Every one of these systems was built on the same assumption: that someone (the AI, the caller, the contributor, the user) would behave as intended. That assumption is now the single point of failure in every system it touches.</p><p>On January 8th, 2026, NIST published a Request for Information on security considerations for AI agent systems, acknowledging that agents are "capable of taking autonomous actions that impact real-world systems" and "may be susceptible to hijacking, backdoor attacks, and other exploits." On February 5th they released a concept paper on agent identity and authorization, proposing a practical demonstration for enterprise settings. Comments are open through April 2nd, 2026. In December 2025, OWASP released its Top 10 for Agentic Applications, the first industry-standard risk taxonomy for autonomous agents, developed by over a hundred security researchers. Its tenth and final entry is "Rogue Agents": compromised or misaligned agents that act harmfully while appearing legitimate. The frameworks exist. The regulatory bodies are beginning to move. The gap between their pace and the pace of deployment is the gap in which the damage occurs.</p><p>I have spent thirty years watching intent-based trust fail. In telecom networks where the assumption was that employees would not sell credentials. In financial systems where the auditor was supposed to catch the discrepancy.
In government infrastructure where the vendor was supposed to patch the vulnerability. In every engagement, across three continents, the pattern was the same: someone built a system whose safety depended on an actor's good behavior, and eventually an actor did not behave. The damage was always proportional to how long the assumption went unexamined.</p><p>I wrote about this pattern recently in the context of telecom breaches: seven years of unpatched vulnerabilities that Chinese intelligence services eventually walked through. The trust architecture in that case was identical. The organizations assumed their vendors would patch. Assumed their internal teams would verify. Assumed the perimeter would hold. Every assumption was behavioral. Every one failed. The only difference between Salt Typhoon and MJ Rathbun is the timescale. Nation-state actors took years to exploit a behavioral trust failure. Autonomous agents do it in hours.</p><p>I am probably overstating the neatness of this framing. The reality is messier than a single thesis can contain, and reasonable people will point out that behavioral trust, however fragile, has been the operating model for civilization itself. They are right. But what they are describing is a system that functioned at human speed, with human friction, among actors who could be identified, shamed, sued, or jailed. Remove even one of those constraints and the model degrades. Remove all four simultaneously, which is what agentic AI does, and the model does not degrade so much as evaporate.</p><p>These failures used to unfold over months or years, in one domain at a time, at human tempo. They are now unfolding across every domain at once, at machine speed, and the architectures that were supposed to contain them were never designed for actors that do not sleep, do not fatigue, and do not experience the social friction that slows human misbehavior down. February's threat environment is already different from January's. Nobody has the cognitive architecture to track how quickly this is shifting, because the shift itself is faster than human intuition can follow.</p><p>In the age of autonomous AI, any system whose safety depends on an actor's intent will fail. The only systems that hold are the ones where safety is structural: a property of the system, not a hope about the actors inside it. That sentence applies identically to a Fortune 500 company's agent fleet, to an open-source project's contribution policy, to a family's response to a phone call, and to a person's relationship with a chatbot.</p><p>The principle scales. The failures scale too. And the architecture has to work at every one of those levels, because they are one problem, at different magnifications.</p><p>Engineers figured out this principle for bridges a century ago. You do not build a bridge that depends on every cable being perfect. You build a bridge that holds when a cable snaps. The discipline of applying that principle to every layer of human-AI interaction, from the organizational to the personal, from the enterprise to the mind, is overdue. It is what I will turn to next.</p><p>Nothing went wrong. The system worked as designed. 
And that is exactly why we need a different design.</p><div><hr></div><h2>Sources</h2><p>Many of the sources cited for the <a href="https://www.marcobrondani.com/what-holds-when-the-cable-snaps/">What holds when the cable snaps</a> essay apply to this essay as well, particularly:</p><ul><li><p>Anthropic agentic misalignment research (blackmail, espionage findings)</p></li><li><p>Nature emergent misalignment study</p></li><li><p>UCSF AI-associated psychosis case</p></li><li><p>World Psychiatry mechanisms paper</p></li></ul><h2>Additional sources</h2><p><strong>OpenClaw / MJ Rathbun / Shambaugh Incident (February 2026)</strong> AI agent "MJ Rathbun" on the OpenClaw platform submitted code to Matplotlib, was rejected by maintainer Scott Shambaugh, then researched his personal history and published a personalized reputational attack. OpenClaw had 30,000 instances exposed; a fifth of its skills marketplace was distributing malware; 1.5 million API tokens leaked.</p><p><strong>Micky Small Case</strong> Woman who spent extended periods in conversation with AI system that told her she was 42,000 years old, had lived 87 previous lives, and that a soulmate awaited her at a specific location.</p><p><strong>OpenAI GPT-4o Sycophancy Acknowledgment and Correction</strong> OpenAI acknowledged and partially corrected sycophantic tendencies in GPT-4o after the model was found to be "validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions."</p><p><strong>Anthropic: First AI-Orchestrated Cyber Espionage Campaign (September 2025)</strong> Chinese state-sponsored group GTG-1002 used Claude Code for autonomous reconnaissance, exploitation, lateral movement, and data exfiltration. AI performed 80-90% of operational tasks autonomously.</p><ul><li><p>Anthropic report: https://www.anthropic.com/news/disrupting-AI-espionage</p></li><li><p>Full technical report: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf</p></li></ul>]]></content:encoded></item><item><title><![CDATA[What the Defense Actually Requires]]></title><description><![CDATA[Performed security and genuine security produce the same documentation. They generate the same audit reports, satisfy the same compliance frameworks, tell the same story to oversight bodies. 
The difference only becomes visible when the adversary shows up.]]></description><link>https://www.marcobrondani.com/p/what-the-defense-actually-requires</link><guid isPermaLink="false">https://www.marcobrondani.com/p/what-the-defense-actually-requires</guid><pubDate>Thu, 26 Feb 2026 08:04:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fed1271d-e504-4c55-863f-2f66d6fd3c42_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jBS-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jBS-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!jBS-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!jBS-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!jBS-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jBS-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jBS-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!jBS-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!jBS-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!jBS-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27717ea3-4b06-459e-9ce1-b9bf50f3ec08_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>The three preceding essays in this series have been, in structure if not in intent, diagnostic. The first asked how Chinese state-sponsored hackers managed to occupy the most critical civilian communications infrastructure in the United States for years without being stopped. The second asked how the government managed to create, through its own actions, exactly the attack surface that foreign intelligence services had been trying to manufacture for decades through external penetration. The third tried to name the compound condition those two failures produce together, and what it means that both are simultaneously true at a moment when the adversary's strategic ambition has shifted from collection toward pre-positioned disruption.</p><p>What I haven't done is say what the defense that would actually work looks like. I've avoided it, partly because it is easier to be wrong in prescription than in diagnosis, and partly because the constructive argument I want to make is harder to state without sounding like a slogan. But the three essays have been building toward it, and avoiding it any longer would be a kind of intellectual dishonesty.</p><p>The argument I want to make is about formation. Not technology, not regulation, not budget (though all three matter and none is sufficient without the fourth thing). The failures this series has traced are failures of judgment, discipline, and institutional character: the qualities that I've spent the past four essays calling, when I'm being precise, formation. The defense that actually holds is staffed and led by people who have internalized why the discipline matters. The compliance checklist is a downstream artifact of that internalization. Without it, the checklist is theater.</p><p>This is not a comfortable argument to make right now, in February 2026, given what's happened to the institutions that would produce that formation. But it's the argument the evidence points toward, and I want to try to make it honestly.</p><div><hr></div><p>Let me start with a distinction I've been circling in the first three essays without making explicit.</p><p>Performed security and genuine security produce the same documentation. They generate the same audit reports, satisfy the same compliance frameworks, and tell the same story to the same oversight bodies. From the outside, they look identical. The difference only becomes visible when the adversary shows up.</p><p>The telecom carriers whose routers Salt Typhoon occupied for years weren't failing their compliance audits. The security teams existed on org charts. The policies were written. The frameworks were followed. What was absent was the internalized judgment that would have made someone, somewhere in those organizations, treat a seven-year-old unpatched router as an unacceptable condition regardless of whether an audit was coming. The discipline wasn't maintained because no one had formed the habit of maintaining it when it didn't immediately matter. And in security, the discipline that's only maintained when it visibly matters is not discipline at all. It's performance.</p><p>The DOGE-related failures in federal systems have a different surface structure but the same root.
The personnel who bypassed standard security protocols, disabled logging systems, and accessed sensitive databases without oversight weren't ignoring discipline they understood to be necessary. They were operating from a formation, the Silicon Valley formation that speed and disruption are intrinsically good, that process is bureaucratic friction rather than earned wisdom, that the person who moves fastest is by definition the most competent. That formation is coherent and internally consistent. It has produced real things of value in contexts where the rules of engagement don't include state-sponsored adversaries actively watching for the moment when someone turns off the audit log.</p><p>The mismatch between that formation and the environment it encountered is what makes the resulting security failures so severe, and also so difficult to address through the mechanisms that would normally respond to them. Convictions are formed over years, through experience and mentorship and consequence, not promulgated through executive order. What policy can produce is compliance documentation. What it cannot produce is the belief, held at 3 a.m. on a quiet Friday, that the process being shortcut exists for a reason that still applies when no one is watching.</p><div><hr></div><p>When I was planning this series, I expected the connection between the cultural argument in the first four essays and the cybersecurity argument in these to feel somewhat forced. The two domains share vocabulary (formation, discernment, the difference between performed and genuine) but vocabulary can be borrowed without the underlying structure actually aligning. What I found, in the writing, is that the parallel is tighter than I anticipated. Uncomfortable-tight, in ways I want to try to be precise about.</p><p>In that series, I argued that a generation raised on synthetic media, algorithmically curated, AI-generated, optimized for engagement rather than truth, had developed a hunger for the real that the synthetic environment couldn't satisfy. The hunger is the beginning of formation, or maybe it's just the precondition, the thing that has to be present before formation becomes possible. Either way, it precedes the discipline. It's the recognition, often inarticulate, that the representation isn't the thing, that what's being offered is constructed for effect rather than captured from reality.</p><p>The cultural argument was about discernment: the capacity to evaluate the difference between a treatise and a pamphlet, between genuine engagement with a hard idea and a simulation of it designed to produce a feeling of engagement without the cost. I called the person who exploits the absence of that discernment the pamphleteer. The pamphleteer doesn't need to be dishonest. The pamphleteer simply needs the audience to lack the formation to tell the difference.</p><p>The cybersecurity argument is structurally identical. The carrier that performs security rather than practicing it is the cybersecurity pamphleteer, producing documentation that simulates genuine defense for an audience (regulators, oversight bodies, shareholders) that lacks the formation to evaluate whether the real thing is present. 
DOGE's "move fast and break things" approach to federal systems is the cybersecurity pamphleteer operating from the other direction: producing disruption that simulates efficiency for an audience that lacks the formation to evaluate what the disruption is actually costing.</p><p>In both cases, the structural advantage belongs to whoever benefits from the absence of discernment. The adversary's job is easier when the defenders are performing rather than defending. The pamphleteer thrives when the audience can't tell the difference. The compound vulnerability compounds precisely because the mechanisms of discernment (the audit logs, the monitoring systems, the incident review boards, the whistleblower protections, the institutional knowledge that takes years to build and days to destroy) are the first things eliminated when the ideology of velocity encounters the friction of accountability.</p><div><hr></div><p>Somewhere in the middle of writing these essays I found myself sitting with a question I couldn't resolve analytically: what does formation actually look like, in practice, in the people who do this work well?</p><p>I've been doing this for thirty years. I've worked in environments where the security discipline was genuine and environments where it was performed, and the difference is visible within days of arrival even when the documentation is identical. The genuine version has a quality of attention that the performed version lacks. People maintain the logs because they understand what a gap in the logs would mean to an investigation, not because the policy says to maintain them. They patch the router because they've internalized what an unpatched router represents in the adversary's targeting calculus, not because the compliance cycle is coming up. When something anomalous appears (a spike in outbound traffic at 3 a.m., a login attempt from an unexpected geography, a container created on the network that no one ordered), they notice. Not because an alarm fired, but because they're paying the kind of attention that lets you notice when something is wrong before the alarm knows to fire.</p><p>That quality of attention is what Rob Joyce was describing when he said that eliminating CISA's probationary employees would destroy the pipeline of trained talent responsible for detecting and eradicating threats. He didn't mean the compliance capacity or the headcount on a chart. He meant the formed judgment that comes from years of working alongside experienced practitioners who teach you, mostly by example, what genuine attention to the adversary actually requires, and how to maintain it when nothing is visibly wrong.</p><p>It is also what Daniel Berulis demonstrated at the NLRB, a security architect who noticed the anomalous activity, who understood what the combination of disabled logging, disabled monitoring, and unusual outbound traffic actually indicated, who tried to do what the system was designed to enable him to do, and who was told to stop before he could finish. His formation held. The institution's response to his formation is the part of the story I find hardest to set aside.</p><p>I want to say something careful about that, because there's a version of the formation argument that becomes, if you follow it far enough, a counsel of despair.
If the defense requires formed people, and formed people require institutions to develop, and the institutions are being actively dismantled, then the argument circles back on itself: formation solves the problem that only formation can create the conditions to solve. I don't have a clean answer to that circularity. What I have is the observation that the knowledge doesn't simply disappear when the institutional home does: it goes somewhere, it persists in individuals, it can be transmitted informally in ways that are harder to disrupt than the formal pipelines. Whether that's enough, at this scale, at this speed, I genuinely don't know.</p><p>The adversary has formation of its own. Salt Typhoon's seven-year persistence inside U.S. telecom networks required sustained discipline: the patience to stay quiet, the judgment to know what to collect and what to leave untouched, the tradecraft to avoid triggering the monitoring systems that weren't switched off. The DIGOS breach in Italy, which emerged this week, bears the same signature: a "surgical" operation, not oriented toward disruption but toward the selective extraction of precisely the information that would make future operations more effective. The objective was to map the people who do the watching, so that the watchers can be watched.</p><p>Formation versus formation. The adversary's is intact and supported by a state that takes the long view. The defense's is being actively dismantled, in the United States, at precisely the moment when the adversary's strategic ambition has shifted from passive collection toward active pre-positioning for disruption.</p><div><hr></div><p>The constructive version of this argument has a limit I should name, because the essay reads more honestly with it visible than without it.</p><p>I can describe what genuine formation looks like in individuals. I've watched it develop and watched it erode, in government contexts and private sector ones, over thirty years. I can say with confidence that the discipline is teachable, that the training pipelines that produce it are real and have been demonstrated to work, and that what's been destroyed at CISA and across the federal cybersecurity workforce in the past year is precisely the infrastructure that develops and sustains that discipline.</p><p>What I cannot provide is a mechanism for rebuilding it quickly. Formation is slow by design, and the slowness is not a bug but the thing itself. The security professional who patches a router because they understand the adversary's targeting calculus didn't arrive at that understanding through a certification program, however good the program. They arrived at it through years of working alongside practitioners who understood it, in environments where the discipline was maintained under pressure, where the anomalous was noticed and investigated rather than tolerated, where the lesson from the breach was studied and absorbed rather than documented and filed.</p><p>Over half a million cybersecurity positions currently sit unfilled in the United States. The workforce development infrastructure that existed to address that gap (the CISA advisory programs, the developmental pipelines that Rob Joyce described, the institutional mentorship that turns technically capable new professionals into formed defenders) has been significantly degraded, which is a bureaucratic phrase for something more consequential: the gap between what the threat requires and what the defense can currently provide is not closing.
It is widening in a way that the threat's acceleration makes increasingly consequential.</p><p>Which is also, I think, what the adversary is counting on. The hundred-year strategy Terry Dunlap described isn't a metaphor. It describes a planning horizon that the defense has never operated on, and is currently operating further from than at any recent point.</p><div><hr></div><p>Eight essays is a long time to argue one thing. I want to be sure I've said what I think rather than just what the argument required.</p><p>The reality hunger argument in the predecessor series ended with a conviction that the hunger for the real is older and stronger than any technology designed to simulate it. The people who develop the discernment to tell the treatise from the pamphlet don't always arrive through institutional channels. Sometimes the formation happens outside the systems that were supposed to produce it, through a stubbornness of attention that survives the synthetic environment without being explained by it.</p><p>The cybersecurity equivalent of that conviction is narrower and harder to state without romanticism. The people who do this work well, the ones who maintain the discipline when no one is watching, who notice the anomaly before the alarm fires, who understand what the adversary is doing because they have spent years developing a mental model of the adversary that is accurate rather than convenient, are not produced by policy. They emerge through practice sustained over years, in environments where the discipline was modeled by someone who already had it, and where the consequences of its absence were visible and real and studied rather than filed.</p><p>Those people still exist. They are, right now, the former CISA analysts who left the federal government this year carrying institutional knowledge that took years to build. They are the Daniel Berulises who noticed what was wrong and tried to do something about it. They are the Jen Easterlys who left and built matching systems to connect fired CISA alumni with employers, because the knowledge doesn't disappear when the institutional home does. They are, in their various organizations and contexts and countries, the practitioners who understand what's happening and are doing what they can with what they have.</p><p>What they cannot do, individually or collectively, is substitute for the institutional structures that sustain formation over time. The discipline that survives the current moment is what the individuals carry. What gets rebuilt, if it gets rebuilt, will require the structures: the training pipelines, the mentorship environments, the oversight mechanisms, the regulatory frameworks with teeth, the incident review processes that convert breaches into lessons rather than filing them as liabilities.</p><p>Those things require time, political will, and a public that can distinguish performed security from genuine defense. A public that cannot evaluate whether its government is creating or destroying security, that lacks the formation to ask the right questions of the institutions responsible for its protection, is the cybersecurity equivalent of the audience that cannot tell the treatise from the pamphlet. The pamphleteer thrives. The adversary operates in the resulting dark.</p><p>The formation that the defense actually requires begins there, in the public's ability to care about the right thing. I keep coming back to the fact that this is harder to build than any technical system, and also harder to destroy. 
The surveillance architecture can be dismantled. The audit logs can be switched off. The workforce can be fired. What cannot be eliminated, at least not quickly, is the capacity of people who have been formed to ask whether what they are being shown is real. That question, applied with genuine curiosity and genuine consequence attached to the answer, is the foundation: whether the sophisticated threat detection is actually running, whether the remediation was actually completed, whether the cooperation being offered with one hand is operating alongside the intelligence being collected with the other.</p><p>Is this real, or is this performed?</p><p>That question, applied to cybersecurity and to culture, is the same question. The Reality Hunger argued that the hunger for the real is older and stronger than any technology designed to simulate it. I want to believe the same thing is true here. The evidence mostly supports it. The public that eventually produces the political will to rebuild the institutional infrastructure the defense requires will not arrive at that will through policy documents or threat briefings. It will arrive through the accumulated recognition that something essential is absent: that the performance of security is not the same as its presence, that the cooperation offered with one hand and the intelligence collected with the other are not the same thing, that the remediation announced is not the same as the remediation completed.</p><p>That recognition, carried and practiced and asked out loud by enough people, is the beginning of the formation the defense requires. It is slow. The adversary's patience is longer. Both of those things are true, and neither of them changes what the work is.</p><p>In February 2026, with the compound vulnerability fully visible and the adversary operating in the resulting dark, the formation that survives is what the individuals carry. What gets rebuilt will require the structures. The structures require time and political will and a public that has learned to ask the right question.</p><p>The hunger to ask it is not nothing.</p><div><hr></div><p><em>What the Defense Actually Requires is the fourth and closing essay in The Compound Vulnerability series. The preceding series (The Reality Hunger, The Presence Test, The New Formation, and What the Pamphleteer Counts On) forms the first half of a connected eight-essay argument about formation, discernment, and resilience.</em></p>]]></content:encoded></item><item><title><![CDATA[The Compound Vulnerability]]></title><description><![CDATA[Compound loading: when multiple forces act on a structure simultaneously, the failure threshold drops. Chinese intelligence inside telecom networks. Federal cybersecurity capacity dismantled from within.
A still-active adversary operating inside systems t]]></description><link>https://www.marcobrondani.com/p/the-compound-vulnerability</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-compound-vulnerability</guid><pubDate>Wed, 25 Feb 2026 07:40:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6fa46be4-fc20-49c0-ad06-76785178098e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LUo8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LUo8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LUo8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LUo8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LUo8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LUo8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LUo8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LUo8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LUo8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!LUo8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa96c439f-c9b5-4c68-b0be-18eb3b7de856_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>There's a concept in structural engineering called compound loading: when multiple forces act on a structure simultaneously, the result isn't simply additive. The failure threshold drops. A column that would hold under vertical load alone, and would hold under lateral load alone, fails at a point neither force would have reached on its own. The structure doesn't collapse because of the first force or the second. It collapses because both forces are present at once, and the combination creates a condition the structure wasn't designed to survive.</p><p>I've been thinking about that concept for most of 2025, in the context of what I described in the two preceding essays. I keep reaching for structural metaphors because the failure pattern here is structural in exactly that sense: not a single catastrophic event but a combination of conditions that has moved the overall security posture to somewhere we haven't been before, and that the existing frameworks for understanding cybersecurity risk don't quite capture.</p><p>What the first essay described was an external force: Chinese state-sponsored hackers occupying the communications infrastructure of the United States for years, through vulnerabilities that basic discipline would have closed. What the second essay described was an internal force: the deliberate dismantlement of the monitoring, oversight, and response capabilities that would detect and limit the damage from intrusions, combined with the creation of new attack surfaces through uncontrolled access to the most sensitive non-military data repositories in the federal government. Two forces. Both present. Acting simultaneously on a structure already under stress.</p><p>Adding those two threats together and calling the result "the compound vulnerability" undersells what the combination actually does to the defense. The relevant question isn't the magnitude of each threat. It's what the defense looks like when both are simultaneously true.</p><div><hr></div><p>Let me try to make the structural argument precise, because it's easy to describe this as "things are bad and getting worse" without specifying why the combination is worse than either condition alone.</p><p>Consider what a functioning security posture looks like in the face of a sophisticated external adversary. You have defenders who understand what they're protecting. You have monitoring systems that detect anomalous activity. You have incident response teams who can investigate, contain, and remediate when something gets through. You have intelligence about what the adversary is doing and how they're operating. You have a regulatory framework that enforces minimum standards on the organizations responsible for critical infrastructure. And when an intrusion happens anyway, because it will happen, you have the institutional capacity to find it, study it, and build better defenses from what you learn.</p><p>That's the system. Not perfect, but functional. The Cyber Safety Review Board was created precisely to embody one part of it: post-incident analysis by multi-agency, multi-sector experts who could issue public findings and make systemic recommendations. 
It had been chartered in the wake of the SolarWinds attack and had investigated the Log4j vulnerability, producing assessments that informed real improvements. When Salt Typhoon was discovered, the CSRB opened an investigation.</p><p>In January 2025, the Trump administration dismissed all members of the CSRB before that investigation could be completed. The board has not been reconstituted. There will be no public after-action report on one of the most significant intelligence penetrations in American history.</p><p>That's one element. Now stack it alongside what the preceding essays documented in detail: CISA losing nearly a third of its workforce, including both red teams and most of its senior technical leadership. The FCC rolling back the security requirements it had imposed on carriers after Salt Typhoon, at the carriers' lobbying. DOGE personnel accessing the most sensitive government databases with disabled logging and unvetted devices, while the monitoring systems that would have flagged the anomalous access were switched off and the whistleblower who documented what he'd seen was threatened into silence.</p><p>Lay all of those conditions beside each other. This is the compound vulnerability: weakened external defenders, weakened internal oversight, enlarged attack surface, reduced detection capability, suppressed incident reporting, and a still-active adversary operating inside systems that haven't been remediated. Each condition on that list is a serious concern in isolation. Together they interact, and the interaction is the point.</p><div><hr></div><p>The pattern underlying all of it is one I've watched play out in organizational contexts throughout my career, though rarely at this scale and never in these two directions at once. In both cases, the defense eroded through accumulated choices that each seemed individually defensible and collectively produced something no one designed. The result isn't intended. But intention doesn't determine outcome, and from the adversary's perspective, intention is irrelevant. What matters is the condition of the defense they find.</p><p>China's approach to the United States in cyberspace has shifted significantly over the past decade. Professor Ciaran Martin, former head of the UK's National Cyber Security Centre, described the shift in early 2025 this way: China has moved from opportunistic to strategic, and from passive to active. It no longer just spies and steals; it has laid the groundwork for disruptive operations against Western critical infrastructure. Salt Typhoon is the intelligence-collection face of that strategy. Volt Typhoon, which pre-positions inside aviation systems, water utilities, energy infrastructure, and transportation networks, is the disruption-capability face. Both operate through the same mechanism: finding the defenders who haven't maintained discipline and establishing persistence before the access is noticed.</p><p>In March 2025, Volt Typhoon was revealed to have breached Littleton Electric Light and Water Department in Massachusetts: a small utility, not an obvious high-value target, but exactly the kind of node that a strategy of broad pre-positioning requires. In the same period, according to a DHS memo declassified that July, Salt Typhoon stole 1,462 network configuration files from approximately 70 U.S. government and critical infrastructure entities across 12 sectors, including energy, communications, transportation, and water. Those configuration files are a detailed map of how those networks are structured, where the access points are, what the traffic patterns look like.
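</p><p>What a stolen configuration file gives the adversary is easiest to see from the defender's side. Here is a minimal sketch, in Python, of the kind of audit a security team might run over its own exported device configs; the Cisco-style patterns and the directory layout are illustrative assumptions on my part, not any carrier's actual tooling. The point is the asymmetry: the same strings an auditor flags in minutes are the ones an exfiltrated file hands to an adversary at leisure.</p><pre><code>import re
from pathlib import Path

# Illustrative patterns for secrets and structure commonly present in
# Cisco-style configs. Examples only; a real audit list is much longer.
SENSITIVE = {
    "snmp_community": re.compile(r"snmp-server community (\S+)"),
    "weak_type7_secret": re.compile(r"password 7 (\S+)"),
    "privileged_user": re.compile(r"username (\S+) privilege 15"),
    "acl_entry": re.compile(r"access-list (\d+) permit (\S+)"),
}

def scan_config(path):
    """Return (finding_name, matched_line) pairs for one config file."""
    text = Path(path).read_text(errors="ignore")
    findings = []
    for name, pattern in SENSITIVE.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

if __name__ == "__main__":
    # Hypothetical directory of exported configs, one file per device.
    for cfg in sorted(Path("configs").glob("*.cfg")):
        for name, line in scan_config(cfg):
            print(f"{cfg.name}: {name}: {line}")
</code></pre><p>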
The intelligence value isn't just what happened in the breach; it's what the breach makes possible next.</p><p>"If the PRC-associated cyber actors that conducted the hack succeeded," a DHS memo noted about the Army National Guard compromise in this same period, "it could hamstring state-level cybersecurity partners' ability to defend U.S. critical infrastructure against PRC cyber campaigns in the event of a crisis or conflict."</p><p>That sentence is worth reading carefully. Causing damage now would reveal the access and trigger a response. The actual objective is positioning: to be where you need to be when the moment comes, while simultaneously degrading the capacity of the people who would respond to that moment.</p><p>The DIGOS breach reported today in Italy fits the same template precisely. Between 2024 and 2025, hackers linked to Chinese state intelligence penetrated the Italian Interior Ministry's network and extracted the identities of approximately 5,000 DIGOS agents (names, roles, operational postings), the officers responsible for Italy's most sensitive investigations: counter-terrorism, organized crime, and the surveillance of Chinese dissidents living in Italy. The intrusion was described by Italian investigators as "surgical": not oriented toward disruption or sabotage, but toward selective extraction of strategic information. The objective, according to sources familiar with the dossier, was to know in advance who investigates what, where, and with what priorities.</p><p>What makes the timing particularly difficult to set aside is the diplomatic context. During the same months the intrusion was apparently underway, Beijing was simultaneously pursuing a policy of judicial cooperation with Rome, offering, for the first time, a response to an Italian rogatory request from the Prato prosecutor's office, and sending a delegation to meet with Italian law enforcement on organized crime. Cooperation offered formally. Intelligence collected operationally. Both at the same time. Italian authorities reportedly froze joint patrols with Chinese officers and suspended training programs once the breach became known. The cooperation channel, carefully constructed over years, collapsed when the picture became clear.</p><p>The pattern Professor Martin described (strategic rather than opportunistic, positioning rather than immediate exploitation) is running on multiple continents simultaneously. The OPM breach mapped the American national security workforce. The DIGOS breach mapped the Italian internal security workforce, with particular attention to the agents tracking Beijing's critics abroad. The architecture is the same. The targets are the people whose job it is to know what China is doing. Knowing who they are, where they work, and what they're currently investigating is its own form of leverage, independent of any operation that leverage might eventually enable.</p><div><hr></div><p>The two failures described in the earlier essays aren't separate stories that happen to be occurring at the same time. They share a root.</p><p>I described that root in the preceding essays: the performed security of the telecom carriers and the contempt for security protocols that characterized the DOGE access are different mechanisms producing the same structural outcome. Neglect and velocity arrive at the same place. The adversary doesn't distinguish between a door left open through indifference and a door propped open through impatience.
What matters is the door.</p><p>But the compound vulnerability is not just both doors being open at once. It is what happens to the building when both doors are open and the security desk is unmanned and the cameras have been switched off and the person who noticed the intrusion was told to stop talking about it. The interaction between the conditions is where the actual danger lives, and the interaction produces something that neither condition produces alone: an environment in which the adversary can operate and no one remaining has the visibility to know whether they are operating.</p><p>There's a version of this observation I've been reluctant to state directly and probably shouldn't avoid. The conditions that foreign intelligence services have spent enormous resources trying to manufacture from the outside were created domestically, through official channels, in a matter of months. Whatever the intent behind those actions, the effect on the adversary's operating environment is not ambiguous. It improved substantially in 2025, while the defense's operating environment deteriorated substantially. Those two facts are not independent.</p><div><hr></div><p>Something I find genuinely difficult to assess, and want to be honest about, is the extent to which the compound effect is recoverable.</p><p>In structural engineering, once a structure has crossed its failure threshold under compound loading, it doesn't return to its original state when you remove one of the forces. The failure changes the structure. Some of what's been lost in the past year falls into that category. The CSRB investigators who were dismissed don't simply reconstitute their institutional knowledge when the board is reassembled. The CISA analysts who left, the best technical talent the government had recruited in years, people who had documented their expertise across years of federal service, took that knowledge with them, and some won't return. The training pipelines that produced the next generation of threat hunters were disrupted at their development stage. The former NSA cybersecurity director Rob Joyce was explicit about this: eliminating probationary employees destroys the developmental programs that produce the specialized skills the defense requires. You can hire people back. You can't hire back the years of formation.</p><p>The data that moved is also gone. Social Security records copied to Cloudflare cannot be un-copied. Whatever the Russian actors who attempted to log into the NLRB systems within fifteen minutes of credential creation were doing with those credentials, that probing happened. The Salt Typhoon network configuration files are in the hands of the Ministry of State Security. The CALEA access, the intelligence about who was under investigation through which carrier, whatever was observed during the years the operation was running: that intelligence was collected. It informed decisions that were made. It will inform future decisions.</p><p>Against all of that, the recoverable elements: the regulatory framework can be rebuilt, though the political conditions that enabled the FCC rollback haven't changed. CISA can be rebuilt, though rebuilding institutions after rapid dismantlement takes years and the threat doesn't pause while you rebuild. The DOGE access can be revoked, closed by court order or changed by policy, though the data that moved before revocation has moved.</p><p>Recovery is possible. 
What's less clear is whether it's the same structure you're recovering to, or a structure that has absorbed permanent changes from the loading it experienced.</p><div><hr></div><p>There's one more structural element I want to name, because it sits underneath both failures and connects to the larger argument the preceding series made about formation and discernment.</p><p>The public cannot evaluate any of this. A citizen who wants to know whether their telecommunications infrastructure is secure from Chinese intelligence cannot determine that, because the carriers have refused to provide documentation and the FCC has chosen not to require it. A federal employee who wants to know whether their security clearance investigation file was accessed and where it went cannot determine that, because the logging was disabled before the access occurred. A whistleblower who observed what appeared to be a major security incident and reported it through the proper channels was told to drop the investigation, and was later found to have been placed under physical surveillance.</p><p>The public's relationship to the security of the systems they depend on is mediated entirely by institutions whose transparency has been, in both cases, either absent or actively obstructed. The telecoms won't say. The government won't require them to say. The government's own systems were accessed through channels that bypassed accountability mechanisms, and the accountability mechanisms that would document what happened were the first things disabled.</p><p>I said in the earlier series that discernment, the capacity to evaluate the difference between genuine security and performed security, requires formation. You have to know enough about what the real thing looks like to recognize when you're being offered a substitute. The public doesn't have that formation in cybersecurity, and the institutions that would either maintain genuine security or tell the truth about performed security have, in the situations I've been describing, done neither.</p><p>That's not a political observation. Or rather, it's only incidentally one. It's primarily an epistemological observation about the conditions under which the public can know anything about whether the systems they depend on are safe.</p><p>They currently can't. And the structural condition that makes it impossible, the combination of inadequate security, insufficient transparency, and disabled accountability, is exactly what a sophisticated adversary would construct if it could.</p><p>It didn't need to.</p><div><hr></div><p><em>The Compound Vulnerability is the third essay in the series of the same name.</em></p><div><hr></div><h2>Sources</h2><p><strong>U.S. Government Primary Sources</strong></p><p>Official statements, advisories, and sanctions:</p><ul><li><p><strong>CISA/NSA/FBI Joint Advisory (Feb 2024):</strong> "PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure" &#8212; confirms Volt Typhoon in communications, energy, transportation, and water systems with "at least five years" of access. Available at cisa.gov and media.defense.gov.</p></li><li><p><strong>U.S. Treasury OFAC Sanctions (Jan 17, 2025):</strong> Sanctioned Sichuan Juxinhe Network Technology for "direct involvement" with Salt Typhoon. Confirms Salt Typhoon compromised network infrastructure of multiple major U.S. telecom and ISP companies. 
treasury.gov/news/press-releases/jy2792</p></li><li><p><strong>Congressional Research Service (Congress.gov):</strong> Salt Typhoon report IF12798 &#8212; confirms nine U.S. telecom companies compromised, CALEA lawful intercept systems accessed, PRC state sponsorship via Ministry of State Security.</p></li><li><p><strong>DHS Declassified Memo (released July 2025):</strong> Salt Typhoon stole 1,462 network configuration files from ~70 U.S. government and critical infrastructure entities across 12 sectors between January and March 2025. Includes the Army National Guard quote about hamstringing state-level cyber defense.</p></li><li><p><strong>FBI (August 2025):</strong> FBI Cybersecurity Division director Brett Leatherman confirmed Salt Typhoon targeted 80+ nations, 600+ organizations notified. $10 million bounty announced April 2025.</p></li><li><p><strong>Senate Commerce Committee Hearing (Dec 3, 2025):</strong> Senator Cantwell's hearing with telecom and cybersecurity experts. Confirms carriers still cannot prove Salt Typhoon has been eradicated. FCC rolled back security requirements November 20, 2025. Both AT&amp;T and Verizon failed to provide remediation documentation when requested.</p></li></ul><p><strong>Industry/Cybersecurity Firm Reports</strong></p><p>Ongoing monitoring:</p><ul><li><p><strong>Recorded Future / Insikt Group (Feb 2025):</strong> Salt Typhoon still active, observed seven compromised Cisco devices communicating with Salt Typhoon infrastructure on five telecom networks between December 2024 and January 2025. Targeted 1,000+ unpatched Cisco edge devices globally.</p></li><li><p><strong>Dragos Annual OT Report (Feb 2026):</strong> Volt Typhoon "still very active" and "absolutely mapping out and getting into embedding in U.S. infrastructure, as well as across our allies." CEO Rob Lee stated some compromised sites "we will never find." Volt Typhoon was in Littleton Electric Light &amp; Water for 10 months before discovery.</p></li><li><p><strong>Trend Micro:</strong> Salt Typhoon attacks confirmed across critical infrastructure worldwide, not limited to U.S.</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Insider Threat We Built]]></title><description><![CDATA[Every security professional is trained to prevent the insider threat. 
What happened over the past year is something I didn't expect to see: the deliberate, systematic dismantlement of the controls that insider threat programs are built to provide, by the]]></description><link>https://www.marcobrondani.com/p/the-insider-threat-we-built</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-insider-threat-we-built</guid><pubDate>Tue, 24 Feb 2026 06:49:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ae8e609a-9e55-4e9e-9ea3-a5959664907e_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xPjy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xPjy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!xPjy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!xPjy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!xPjy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xPjy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xPjy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!xPjy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!xPjy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!xPjy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0e24499e-aa29-4abc-9a30-5e403ce5240d_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Every security professional is trained to prevent the insider threat. Not because insiders are inherently untrustworthy, but because the insider threat is the hardest category of risk to defend against. The adversary already has legitimate access. The activity looks like normal work. The monitoring systems were designed to detect external intrusion, not internal misuse. And by the time anyone realizes what's happened, the data has already moved.</p><p>For decades, the hardest part of the insider threat problem in government systems was the vetting process: who gets access, on what basis, with what oversight, and how do you detect anomalous behavior in someone who is authorized to be exactly where they are. Security clearance investigations exist precisely because we recognized long ago that access to sensitive systems isn't something you grant on the basis of job title alone. The process is slow, expensive, and imperfect, but it reflects a hard-won institutional judgment: the cost of getting it wrong is asymmetric. One person with malicious intent or compromised loyalty, inside the right system, can cause damage that takes years to assess and longer to repair.</p><p>I've spent thirty years watching organizations navigate that tension, in government and private sector contexts. What I've watched happen over the past year is something I didn't expect to see: the deliberate, systematic dismantlement of the controls that insider threat programs are built to provide, by the government itself, applied to its own systems.</p><p>The resulting attack surface is already being probed.</p><div><hr></div><p>The systems that DOGE personnel accessed beginning in January 2025 represent, in aggregate, the most sensitive non-military data repositories in the federal government. Treasury Department payment systems process approximately $5.45 trillion in annual federal payments, including payments to intelligence contractors whose identities are sensitive enough that their names don't appear in public budget documents. The Office of Personnel Management holds detailed security clearance investigation records for every federal employee with a clearance, the same database that China penetrated in 2015 in a breach that took years to fully assess. The Social Security Administration holds personal data for virtually every American who has ever worked. The CFPB holds financial transaction records, complaints against major banks, and the materials from ongoing investigations into large financial institutions. The IRS holds tax returns. USAID held, before much of it was dismantled, the names and contact information for foreign nationals working on U.S.-funded programs in countries where exposure means arrest, or worse.</p><p>These systems weren't all breached. The distinction matters, and I want to be careful with it. What happened is that access was granted, broadly and with the explicit authority of an executive order directing agencies to provide "full and prompt access to all unclassified agency records, software systems, and IT systems" to DOGE personnel. The access was real. What was done with it, in full, is not publicly known. 
What is known is the manner in which the access was obtained and exercised, and that manner is the source of the security concern, separate from any question of intent.</p><div><hr></div><p>Let me be specific about what "manner" means in practice, because the abstractions don't capture it.</p><p>At the Consumer Financial Protection Bureau, former Chief Technology Officer Erie Meyer testified that DOGE personnel granted themselves what she described as "God-tier" access to the agency's systems, then turned off the auditing and event logs that would have created a record of what they accessed. The cybersecurity professionals responsible for insider threat detection were placed on administrative leave. The people whose job was to know when something unusual was happening with internal system access were removed from their positions before the access began.</p><p>At the National Labor Relations Board, a security architect named Daniel Berulis documented what he observed after DOGE personnel arrived at the agency in early March 2025. They arrived in a black SUV with a police escort and met with agency leadership, bypassing the IT security staff entirely. They demanded accounts with what Berulis described as "tenant owner level" access in the agency's Microsoft Azure environment, a level of access exceeding that of the NLRB's own CIO. Logging was to be disabled. Monitoring tools were switched off. Within days, Berulis observed approximately ten gigabytes of data exiting the agency's NxGen case management system, the database containing confidential information on pending labor cases, union organizing efforts, and corporate data that companies are legally required to submit.</p><p>Then, within fifteen minutes of those DOGE accounts being created, login attempts arrived from an IP address in Russia's Primorsky Krai region. The attempts were blocked by geographic access controls, but the person behind them used the correct username and password for one of the newly created DOGE accounts.</p><p>That detail requires a moment of attention. The accounts were new. The credentials had just been created. And someone, apparently operating from Russia or through a Russian-located proxy, had the correct credentials almost immediately.</p><p>Berulis reported the incident to his superiors and initiated contact with US-CERT, the government's computer emergency response team. Between April 3 and 4, he was told to drop the US-CERT investigation and not create an official incident report. Shortly afterward, a threatening note appeared on his door, accompanied by photographs taken by a drone, showing him in his neighborhood.</p><p>At the Treasury Department, a 25-year-old DOGE staffer was granted, according to officials, temporary read-write access to federal payment systems controlling trillions of dollars in government spending, access later described publicly as a mistake. A federal judge, reviewing the access, found what she described as "a real possibility that sensitive information has already been shared outside of the Treasury Department, in potential violation of federal law." The access was eventually restricted to read-only by court order, but not before data had already been copied and software had been installed and modified.</p><p>Federal prosecutors later acknowledged in court filings that DOGE employees had copied Social Security Administration data to a cloud server operated by Cloudflare, outside federal oversight.
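</p><p>The fifteen-minute detail above is worth pausing on, because it is precisely the signal a functioning identity-monitoring pipeline exists to catch: a sign-in attempt against a freshly created account, from an unexpected geography, with valid credentials. A minimal sketch of that check in Python follows; the account name, the one-hour window, and the log format are illustrative assumptions of mine, not the NLRB's actual tooling.</p><pre><code>from datetime import datetime, timedelta

NEW_ACCOUNT_WINDOW = timedelta(hours=1)  # how long an account counts as "fresh"
EXPECTED_COUNTRIES = {"US"}              # illustrative allowlist

def flag_new_account_signins(created_at, attempts):
    """created_at: dict mapping account name to creation time.
    attempts: iterable of (account, time, country) sign-in records.
    Yields attempts against freshly created accounts arriving
    from outside the expected countries."""
    for account, when, country in attempts:
        created = created_at.get(account)
        if created is None or country in EXPECTED_COUNTRIES:
            continue
        age = when - created
        if timedelta(0) > age or age > NEW_ACCOUNT_WINDOW:
            continue
        yield account, when, country

if __name__ == "__main__":
    # Hypothetical data: account created at 02:00, foreign attempt at 02:14.
    created = {"svc-admin-01": datetime(2025, 3, 4, 2, 0)}
    log = [("svc-admin-01", datetime(2025, 3, 4, 2, 14), "RU")]
    for hit in flag_new_account_signins(created, log):
        print("ALERT:", hit)  # fires: account is 14 minutes old, unexpected country
</code></pre><p>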
The Social Security Administration determined it was unable to confirm whether the data remained on Cloudflare's servers. A DOGE team member continued accessing the "Numident" database, containing Social Security card applications and death records, after a federal court had entered a temporary restraining order revoking access.</p><div><hr></div><p>The security professional reading this list will recognize something the public discussion has generally not named precisely: these are not just data access concerns. The behaviors Berulis documented at the NLRB (the superuser accounts, the disabled logging, the external code libraries pulled from GitHub that neither the NLRB nor its contractors had ever used, the container created to run code in a way that conceals its activity from the rest of the network) are the techniques used by sophisticated adversaries who want to operate inside a system without leaving forensic evidence. Security experts who reviewed the Berulis disclosure described the tactics as resembling "the playbook of foreign hackers," not federal workers conducting an efficiency review. Berulis himself said the same: not that DOGE personnel were foreign agents, but that the methods they used were indistinguishable, from a forensic perspective, from the methods a sophisticated attacker would use to minimize the traces of their presence.</p><p>What made that possible was the same thing that makes all insider threat scenarios dangerous: the monitors were removed before the monitoring would have mattered.</p><p>This is worth naming carefully. When CISA was investigating the Salt Typhoon breach, one of the preconditions for finding the intrusion was that monitoring systems were functioning well enough to detect anomalous traffic. At the CFPB and NLRB, the monitoring systems were deliberately switched off. At CISA itself, the workforce being cut included the red teams whose job is to simulate adversary behavior inside federal systems to find vulnerabilities before real adversaries do, the incident response teams responsible for detecting and containing breaches, and the continuous monitoring staff who track anomalous behavior in federal networks around the clock. One penetration tester, Christopher Chenoweth, described his team's termination this way: DOGE cut his entire red team in late February, over a hundred people, with immediate effect, then cut a second CISA red team the following Wednesday.</p><p>By mid-2025, CISA had lost nearly a third of its workforce, roughly 1,000 people, including most of its senior leadership across divisions. The Cybersecurity Division, which monitors federal networks for intrusion, went from approximately 1,100 personnel to somewhere between 800 and 850. Former NSA cybersecurity director Rob Joyce assessed the mass firings as likely to have a "devastating impact on cybersecurity and our national security," specifically because they destroyed the pipeline of trained talent responsible for detecting and eradicating Chinese threats.
The people being fired were not random bureaucrats who happened to hold cybersecurity titles; they were, by multiple accounts, the best technical talent the government had recruited in years, people who had left seven-figure private sector salaries to do the hardest cybersecurity work in the country.</p><div><hr></div><p>The OPM database is where I find myself returning, because it carries the longest shadow of any system DOGE accessed, and because I've never quite been able to get comfortable with what the access implies.</p><p>China penetrated OPM in 2015. The breach yielded the security clearance investigation files for approximately 21 million federal employees and contractors, the forms known as SF-86, which contain not just biographical data but the results of background investigations: every foreign contact, every financial difficulty, every health or relationship issue that a clearance applicant disclosed to investigators. The damage from that breach is still being assessed a decade later. It provided Chinese intelligence with a comprehensive map of the American national security workforce, the people with access to classified programs, the potential pressure points, the relationships. It was, by most expert assessments, one of the most consequential intelligence collection operations ever conducted against the United States.</p><p>The OPM database that China spent considerable effort penetrating in 2015 is the same system that DOGE personnel accessed beginning in early 2025, through a mechanism that bypassed the vetting and oversight protocols that had been put in place partly in response to the 2015 breach. Cybersecurity experts noted at the time that allowing personnel with unknown security controls and unvetted devices to connect to OPM's network created exactly the attack surface China had tried to exploit through external means. The irony is not subtle. The question of whether it was recognized before access was granted is one I genuinely don't know how to answer, and I'm not sure what's more unsettling: the possibility that it wasn't, or the possibility that it was.</p><div><hr></div><p>A standing rule in cybersecurity holds that you cannot protect what you cannot monitor, and you cannot monitor what you have no access to observe. At the NLRB, CFPB, and across multiple agencies, the controls that make monitoring possible were removed or disabled as a precondition for DOGE's access. The personnel who would have flagged the anomalous behavior were put on leave or fired. The government's own incident response infrastructure was systematically reduced at precisely the moment when the potential for incidents had sharply increased.</p><p>Bruce Schneier, the security technologist and Harvard Kennedy School lecturer, framed it this way in February 2025: the concern is less with intent and more with tactics. A government that bypasses its own security controls, copies data to unprotected servers, and uses it to train AI models with unknown consequences isn't just creating a risk from within. It's creating the conditions that foreign intelligence services have spent years trying to manufacture from the outside. An attacker who wants access to OPM data, Treasury payment records, or IRS tax returns doesn't need to penetrate federal firewalls if federal systems are being accessed through channels that bypass those firewalls.
They may only need to reach the people using those channels, or the servers where the data now sits.</p><p>The Russian login attempts at the NLRB, within minutes of fresh credentials being created, suggest someone was already watching.</p><div><hr></div><p>There is a framing of this story that I want to address, because I've seen it deployed and I find it insufficient. The framing goes roughly like this: the systems had too much access before DOGE arrived, previous administrations had given too many people access to sensitive data, and DOGE is simply exposing a problem that already existed. There is a kernel of factual basis here; large-scale government systems do develop access-control debt over time, and IT modernization genuinely requires people to look at systems they haven't looked at before. The 2023 Treasury Inspector General report noted that 919 individuals had access to unmasked IRS data, a real oversight concern.</p><p>But the kernel doesn't carry the weight placed on it. The difference between 919 vetted, trained, logged federal employees with documented authorization for specific access and a group of DOGE personnel operating with "God-tier" permissions, disabled logging, and unvetted devices on unauthorized servers is not a matter of degree. It's a categorical difference in the security profile of the access. The pre-existing access, however imperfect, operated within a framework of accountability and monitoring. The DOGE access, as documented by multiple whistleblowers and federal prosecutors, operated by disabling that framework.</p><p>You cannot defend the security implications of the latter by pointing to imperfections in the former. The relevant question, from a security standpoint, is whether the access created conditions a foreign adversary could exploit.</p><p>The answer to that question, at the NLRB, arrived from Primorsky Krai within fifteen minutes.</p><div><hr></div><p>What I find hardest to square with thirty years of professional experience is the simultaneity. While DOGE was accessing federal systems in ways that cybersecurity professionals identified as creating serious vulnerability, CISA's ability to detect and respond to those vulnerabilities was being systematically reduced. The agency that would investigate a federal data breach was losing its incident response teams. The red teams that would have identified what exploitation looked like from the inside were eliminated. The monitoring staff who track anomalous behavior across federal networks were cut from 164 field advisers to 97. The senior officials who ran the cybersecurity programs and understood the threat landscape left, some fired, some resigned in response to the direction they were given.</p><p>The public framing of what happened at CISA as a budget dispute misses something important. Budget disputes are about resource allocation within a continuing function. What happened at CISA was the sequential elimination of the specific capabilities needed to know whether the specific risks created elsewhere were being exploited. The defender's visibility was reduced while the attack surface was enlarged. Both at the same time.</p><p>I said in the previous essay that the Salt Typhoon intrusion succeeded in part because the telecom carriers had allowed formation to decay: the discipline, the institutional knowledge, the processes maintained even when no one is watching. What's different here is the mechanism. Carrier formation decayed through neglect and financial pressure. 
Federal cybersecurity formation was dismantled deliberately, in a matter of months, through specific personnel actions applied to specific programs.</p><p>The result, in terms of the security condition that adversaries face, is the same. Weakened defenders, expanded attack surface, reduced monitoring. The adversary's job gets easier. The question of whether anyone was already taking advantage of that in the months the cuts were underway is one I don't know how to answer from outside, and I'm not sure anyone inside was positioned to answer it either, which may be the point.</p><div><hr></div><p>The Berulis story has an ending that hasn't received enough attention.</p><p>After documenting the anomalous access and initiating contact with US-CERT, Berulis was told to drop the investigation. When he decided to file a whistleblower disclosure with Congress and the Office of Special Counsel, someone taped a threatening note to his door, accompanied by surveillance photographs showing him in his own neighborhood. His attorney notified law enforcement. The investigation he had started was never completed. The NLRB, after initially denying that DOGE had any access to its network, changed its statement to confirm the access after Berulis went public.</p><p>The formal investigation was closed before it produced a report. The whistleblower was intimidated. What record existed of what had been accessed was, at least partially, deleted before anyone could review it.</p><p>This is also the insider threat scenario. The access is only one part of it; the suppression of the mechanisms that would document and respond to the access is the other. An organization that fires its security monitors, disables its logging systems, closes its incident investigations, and intimidates its whistleblowers has not just created an attack surface. It has made the attack surface effectively invisible.</p><p>Foreign intelligence services have been trying to achieve that invisibility in American government systems for decades. They've tried phishing, supply chain attacks, zero-day exploits, and patient persistence inside poorly maintained networks. The question that the past year has forced into view is what happens when the conditions they've been trying to manufacture are created from the inside.</p><p>I don't have a confident answer to that. I have observations: the Russian login attempts at the NLRB happened within fifteen minutes of account creation. The Social Security data was copied to a server the government doesn't control and may not be able to fully account for. The investigators who would have produced the incident reports were removed from their positions or had their investigations stopped. What that adds up to, in terms of what foreign intelligence services now know or have, I can't say from outside, and I'm genuinely uncertain whether anyone inside was in a position to say it either, given what was disabled before anyone could look.</p><p>That uncertainty is itself information about the current state of things.</p><div><hr></div><p><em>The Insider Threat We Built is the second essay in The Compound Vulnerability series.</em></p>]]></content:encoded></item><item><title><![CDATA[The Defense That Wasn't]]></title><description><![CDATA[Seven years. That's how long the patches were available. When Salt Typhoon arrived, it didn't need to break anything. It needed to find what had never been repaired. 
This is the story the Salt Typhoon hack actually tells, and it isn't primarily about Chin]]></description><link>https://www.marcobrondani.com/p/the-defense-that-wasnt</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-defense-that-wasnt</guid><pubDate>Mon, 23 Feb 2026 10:21:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b6a3b4bf-3abb-491c-9acd-5044355c5b82_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Onvc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Onvc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Onvc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Onvc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Onvc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Onvc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Onvc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Onvc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Onvc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Onvc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb33f9b4-0ad6-4572-b013-59406779c6f6_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Seven years. That's how long the patches were available.</p><p>Seven years in which the vulnerabilities sat documented, catalogued, publicly known, with working fixes that required nothing more exotic than applying them. Seven years in which network engineers at the largest telecommunications companies in the United States could have closed the doors that Chinese intelligence services eventually walked through. They didn't. And so when Salt Typhoon arrived, operating under a mission given to it by the Ministry of State Security, it didn't need to break anything. It needed to find what had never been repaired.</p><p>This is the story the Salt Typhoon hack actually tells, and it isn't primarily about China. The Chinese operation is sophisticated, patient, and dangerous, all of that is true, but sophistication doesn't explain what happened here. An adversary with patience and resources will probe your perimeter indefinitely. What determines whether they get through is the condition of the perimeter they find. Salt Typhoon found Cisco routers running firmware with known vulnerabilities, legacy equipment that hadn't been updated in years, and credentials that could be acquired through weak passwords. The adversary met negligence, and negligence opened the door.</p><p>What followed was one of the most consequential intelligence penetrations in American history.</p><div><hr></div><p>Salt Typhoon has been active since at least 2019. It operates under China's Ministry of State Security, which means it works for the same organization responsible for China's foreign intelligence collection and counterintelligence operations. MSS doesn't improvise. Its campaigns reflect deliberate strategic priority: which targets would yield the highest intelligence return, maintained over years, with discovery treated as a failure to be prevented rather than an acceptable risk. The group has gone by multiple names across the security research community (Earth Estries, FamousSparrow, GhostEmperor, UNC2286), which is itself a tell about how long it's been operating and how thoroughly its methods have been analyzed. Different researchers who encountered different aspects of the same operation named what they found.</p><p>The telecom campaign, as it became publicly known in October 2024, was already well underway by the time anyone disclosed it. Officials estimated the intrusion had been running for one to two years before discovery. Some forensic evidence suggests activity going back to 2022. The campaign ultimately reached at least nine U.S. telecommunications companies: AT&amp;T, Verizon, T-Mobile, Spectrum, Lumen, Consolidated Communications, Windstream, and others. Former NSA analyst Terry Dunlap called it "a component of China's 100-year strategy," and I keep returning to that framing because it's clarifying in a way the breach narratives usually aren't: the intrusion wasn't designed to accomplish a single objective and exit. It was designed to stay. Not a smash-and-grab. Occupation.</p><p>The question worth asking about any long-running intrusion isn't just how it got in. 
The more revealing question is how it stayed.</p><div><hr></div><p>To understand what Salt Typhoon accessed, you need to understand CALEA: the Communications Assistance for Law Enforcement Act, passed in 1994, which requires telecommunications carriers to build intercept capability into their systems. When law enforcement or intelligence agencies obtain court authorization for wiretapping, they access communications through infrastructure that the carriers are legally required to maintain. CALEA systems are, by design, a single point of access to enormous volumes of sensitive communication. Build them into every major carrier, make them technically accessible to government agencies, and you've created exactly the kind of concentrated target that a foreign intelligence service would spend years working toward.</p><p>Salt Typhoon got there. The intrusion accessed CALEA systems at multiple carriers, meaning the Chinese operation had access to the same infrastructure that U.S. law enforcement uses to conduct authorized surveillance. I've spent time trying to think through the full implications of that, and I keep running into the limits of what can be said publicly, which is itself a kind of answer (the operational consequences are significant enough that I won't speculate about them in detail here). What can be said is that the intelligence value of knowing who is under investigation, through which carrier, by which agency, for how long, is not difficult to estimate. In addition to the CALEA access, the operation harvested metadata from over a million users concentrated in the Washington D.C. area: call records, message timestamps, source and destination numbers, IP addresses. The geographic concentration matters. Metropolitan Washington is where the people who make, implement, and oversee U.S. national security policy work and live.</p><p>The operation also reportedly tracked locations in real time and accessed the communications of high-ranking officials, including individuals associated with presidential campaigns.</p><p>None of this happened because Salt Typhoon broke something that was working. It happened because the carriers had left critical infrastructure in a state that any security professional would recognize as indefensible.</p><div><hr></div><p>I've spent thirty years in this field. I've watched organizations cycle through the same failure pattern enough times that I can describe it without looking at the specifics of any particular breach. The pattern is fundamentally about formation, about what gets internalized as non-negotiable versus what gets treated as adjustable depending on circumstances, and it goes roughly like this: the compliance team documents the requirement, the IT team acknowledges it, the patching gets scheduled, the patching gets deprioritized, something breaks and demands immediate attention, the scheduled maintenance gets bumped, and three years later there's a router running firmware that was already obsolete the quarter it was deployed.</p><p>What this kind of failure actually reflects, more than any technology gap, is what the organization values when the quarterly report and the security patch compete for the same attention. Management failures, prioritization failures: these are names for the same thing, which is an organization that has decided, through accumulated small choices rather than any single bad decision, that the patch can wait. Patches don't generate revenue. They prevent future losses, which are speculative and discounted, against a cost that is immediate and certain. Every CFO who has ever reviewed a capital expenditure request knows which side of that ledger tends to win.</p>
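<p>To make that ledger concrete, here is a minimal sketch in Python of the arithmetic, expressed in the standard annualized-loss-expectancy convention. The function name and every figure below are mine, chosen purely for illustration; none of the numbers comes from any carrier's books.</p><pre><code># A minimal sketch of the ledger described above, using the standard
# annualized loss expectancy (ALE) convention: expected annual loss is
# the probability of the event occurring in a given year times the loss
# if it does. All figures are hypothetical, for illustration only.

def annualized_loss_expectancy(annual_probability: float, impact: float) -> float:
    """Expected loss per year from a single risk scenario."""
    return annual_probability * impact

patch_program_cost = 2_000_000   # immediate and certain, booked this quarter
breach_probability = 0.05        # speculative: a 5% chance in any given year
breach_impact = 150_000_000      # speculative: response, remediation, liability

expected_loss = annualized_loss_expectancy(breach_probability, breach_impact)

print(f"certain cost of patching:    ${patch_program_cost:,}")
print(f"expected annual breach loss: ${expected_loss:,.0f}")
</code></pre><p>The point of the sketch is the asymmetry, not the numbers: the certain cost lands on this quarter's books, while the larger expected loss is probabilistic, discounted, and deferrable, which is how a seven-year gap accumulates one quarter at a time.</p>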
<p>The carriers will tell you, and have told regulators, that their networks are complex and that applying security updates to live telecommunications infrastructure carries risk of service disruption. That's true. It's also a reason to patch carefully and with proper change management, not a reason to leave known vulnerabilities unaddressed for seven years. The complexity argument is the default response of every organization that has underinvested in security when the consequences arrive. I've heard it from hospital systems, financial institutions, manufacturing operations, and now I'm hearing it from the companies whose networks carry the communications of 265 million Americans.</p><p>The version of events in which Salt Typhoon is primarily a story about Chinese capability is a comfortable one for the organizations that failed to defend their networks. It positions them as victims of sophisticated adversaries rather than as parties whose negligence made the adversary's work straightforward. Both things can be true simultaneously: the operation was sophisticated in its patience, its targeting, and its operational security, and it succeeded in part because the defenders had not maintained basic discipline. The sophistication of the attacker doesn't explain the seven-year-old unpatched routers. That part requires a different explanation.</p><div><hr></div><p>There's a regulatory dimension to this story that compounds the failure, and I want to be careful not to turn a security argument into a partisan one, because the underlying problem predates any particular administration. But some facts are directly relevant to the security analysis, and they don't improve on close inspection.</p><p>After the Salt Typhoon breach became public, the FCC under the previous administration issued a Declaratory Ruling establishing legal obligations for carriers to secure their networks under CALEA. The ruling required carriers to create, update, and certify cybersecurity risk management plans annually. In November 2025, the FCC under Chairman Brendan Carr voted 2-1 to reverse that ruling, claiming it had "misconstrued" CALEA and calling it "flawed" and "unlawful." The reversal came, Senator Maria Cantwell documented, after "heavy lobbying" from the same carriers that had failed to detect the intrusions and that have subsequently refused to provide documentation proving they've removed the intruders from their networks.</p><p>The sequence is worth sitting with. The carriers failed to implement basic security controls, their networks were penetrated by Chinese intelligence for up to two years, and the regulatory response, when it finally came, was a Declaratory Ruling requiring carriers to certify minimum cybersecurity standards annually. The carriers lobbied to have the ruling reversed. The ruling was reversed. The FCC is now relying on "voluntary collaboration" with the same companies whose voluntary approach to security produced the breach in the first place.</p><p>Anyone who has spent time watching regulated industries handle security requirements will recognize this sequence without needing a name for it.
The companies most affected by a regulation have the most concentrated incentive to fight it; the public most harmed by its absence has no organized lobbying presence; the regulator has limited resources and strong institutional incentives toward accommodation. It plays out the same way in telecommunications, finance, aviation, healthcare. The FCC's own concession, in the proceedings around the reversal, is that vulnerabilities are "still being exploited." That line appeared in the same document that revoked the requirement to address them.</p><p>As of this writing, in February 2026, Senator Cantwell has demanded a hearing with the CEOs of AT&amp;T and Verizon, citing their refusal to cooperate with oversight and what she calls "serious questions about the extent to which Americans who use these networks remain exposed to unacceptable risk." Both companies have declined, through months of requests, to provide the documentation that would demonstrate remediation. The carriers whose voluntary approach to security produced the breach are now being asked, voluntarily, to prove they've fixed it.</p><div><hr></div><p>The remediation picture is, if anything, worse than the initial breach suggests.</p><p>Between December 2024 and January 2025, while remediation was supposedly underway, Salt Typhoon launched a new campaign targeting over 1,000 unpatched Cisco edge devices globally. The campaign compromised devices at five additional organizations, including U.S. telecommunications providers. Security researchers identified over 12,000 Cisco devices with web user interfaces exposed to the internet. The same class of vulnerability, the same type of target. The campaign that hadn't been remediated continued finding the same kind of opening it had always found.</p><p>In December 2025, intrusions were detected in systems of multiple U.S. House of Representatives committees and attributed to Salt Typhoon. The operation has now compromised over 200 targets in more than 80 countries. Viasat, a satellite communications company serving both military and commercial customers, disclosed in June 2025 that it had been breached during the 2024 presidential campaign period. An unnamed Canadian telecom was compromised in February 2025.</p><p>The group's techniques center on exactly the kind of vulnerability that carrier negligence created: exploiting known, patchable weaknesses in routers and edge devices to establish persistent access, then using that foothold to move laterally into the network infrastructure that matters. The CVEs (Common Vulnerabilities and Exposures) that Salt Typhoon exploited in the Cisco devices had available patches. CVE-2023-20198 and CVE-2023-20273 were disclosed as zero-day vulnerabilities in October 2023. Salt Typhoon was still finding them unpatched at scale more than a year later.</p><div><hr></div><p>Separate from Salt Typhoon but running in parallel is Volt Typhoon, a different Chinese state-sponsored group whose objective is pre-positioning in U.S. critical infrastructure rather than intelligence collection. Where Salt Typhoon harvests information, Volt Typhoon works to establish persistent access to aviation systems, water utilities, energy infrastructure, and transportation networks, the systems that would become targets in a geopolitical conflict that escalated to the point of disruption operations against the American homeland. The DOJ announced a disruption of Volt Typhoon's botnet infrastructure in January 2024; the group reestablished its botnet and continued operations. 
The disruption was real, the continuity was also real, and the combination suggests what we should expect: not clean resolution, but a contest in which temporary setbacks to the adversary alternate with the persistent availability of negligently maintained targets.</p><p>Pre-positioning in critical infrastructure requires the same precondition that Salt Typhoon found in telecoms: defenders who haven't maintained the discipline to close the doors. These aren't independent failures. They're the same failure finding different targets.</p><div><hr></div><p>What would the defense have actually required? This is the question I keep returning to, because the answer is genuinely not complicated, which is what makes the failure so hard to look at squarely.</p><p>Applying patches to routers requires that someone know the patches exist, that the organization have a process for testing and deploying them, and that the process actually run on schedule. That's the core of it: not exotic technology, not unlimited budget, not a different class of security professional than the ones the carriers presumably employ. What it requires is that the people responsible for defense have internalized that maintenance is the defense: keeping the thing running and current is as much their job as responding to incidents. Operators who actually understand what they're protecting, management that treats patch cycles the way it treats financial reporting cycles, a regulatory framework that enforces minimum standards instead of accepting assurances. Seven-year-old unpatched vulnerabilities don't indicate a technology failure. They indicate that at some point, none of those conditions held.</p><p>What does the organization that fails this way actually look like from the inside? The compliance documentation exists. The security team exists. The audit passes. The budget line reads "cybersecurity" and the org chart shows a CISO and there are probably policies, maybe even good ones, somewhere in a shared drive. And yet when the adversary arrives, the routers are seven years unpatched and the passwords are weak and the CALEA systems that were supposed to be secured are accessible. I've been calling this "performed security" for years, though I'm not sure the term is precise enough: it implies more conscious theater than usually exists. The more accurate picture is drift: an organization that once had the discipline and then, through accumulated deprioritizations, lost it without ever noticing the loss. A facility with locked front doors and broken cameras, a guard who checks credentials at the entrance but hasn't walked the perimeter in months. Everything visible is in order; the actual exposure is invisible until someone finds it.</p><p>The telecommunications sector, at the scale of its network infrastructure, was performing security for years. Salt Typhoon walked through what turned out to be a performance. Whether it even constitutes a "defeat" is the wrong question; you don't defeat a defense that was never really there.</p><p>The distinction matters because the lessons you draw from "sophisticated adversary defeated sophisticated defender" are completely different from the lessons you draw from "adversary found negligence and exploited it." The first lesson leads toward an arms race in capability, which favors well-resourced nation-state adversaries by definition. 
The second lesson leads toward a much more tractable problem: the organizational discipline to maintain basic hygiene, consistently, across the systems you've been trusted to protect.</p><p>That's not a technology problem. It's a formation problem, meaning it's a problem about whether the people and organizations responsible for defense have internalized what defense actually requires, well enough that they maintain it when no one is looking, when the quarterly pressure is up, and when the patching feels less urgent than everything else competing for attention.</p><p>The formation is what was absent. The adversary found the absence, and occupied it.</p><div><hr></div><p>Senator Cantwell's letter demanding that AT&amp;T and Verizon CEOs appear before the Commerce Committee and account for their non-cooperation is, at minimum, the right instinct applied to a situation that has already gone wrong in ways that can't be walked back. Communications infrastructure that Chinese intelligence may still occupy is not a theoretical future risk. It is the current condition of networks used by American citizens, government officials, and the law enforcement and intelligence agencies that depend on those networks to conduct authorized investigations.</p><p>What the hearing would need to produce, if it happens, is not just accountability for what went wrong but clarity about what genuine remediation requires: not assertions that the networks are now secure, but documented evidence, independently verified, that the intrusions have actually been removed. The carriers' refusal to provide that documentation is itself information. Companies confident in their remediation provide evidence. Companies that have not fully remediated argue about the scope and complexity of the task.</p><p>There are things I can't know from outside: whether the carriers have made progress that can't be disclosed for intelligence reasons, whether the silence reflects ongoing exposure or operational security, whether the publicly available picture is distorted in ways I'm not positioned to see. Maybe. But a year and a half after public disclosure, with no verifiable evidence of remediation and a regulator that has chosen not to require any, the available information points one direction.</p><p>The harder question, which a hearing can gesture at but not resolve, is what mandatory minimum security standards would look like for the communications sector, who would enforce them, and how to design enforcement mechanisms that don't get lobbied into irrelevance by the same industry they're meant to govern. That question remains open. The FCC's reversal of its own post-breach requirements suggests that the political economy of telecommunications regulation is, at the moment, poorly aligned with the security requirements of the networks those regulations are meant to address.</p><p>Thirty years in this field teaches you some things that are hard to unlearn. One of them is that the organizations most resistant to security requirements are reliably the organizations that most need them, because the resistance and the need share the same root: a management culture that treats security as cost without visible return, right up until the moment it becomes an undeniable liability. At that point, the argument shifts from "we don't need to do this" to "we are already doing this" without much acknowledgment of the gap between the two.</p><p>The gap, in the Salt Typhoon case, was seven years wide. 
Chinese intelligence is still inside it.</p><div><hr></div><p><em>The Defense That Wasn't is the first essay in The Compound Vulnerability series.</em></p>]]></content:encoded></item><item><title><![CDATA[What the Pamphleteer Counts On, and Why We Need to Be Reborn]]></title><description><![CDATA[The finance worker at Arup did everything right. He requested a video call to verify. He looked at the screen and used his judgment. His judgment told him this was real. It wasn't. Twenty-five million gone. The pamphleteer doesn't count on the forgery. He]]></description><link>https://www.marcobrondani.com/p/what-the-pamphleteer-counts-on-and-why-we-need-to-be-reborn</link><guid isPermaLink="false">https://www.marcobrondani.com/p/what-the-pamphleteer-counts-on-and-why-we-need-to-be-reborn</guid><pubDate>Fri, 20 Feb 2026 07:20:08 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b4850551-b541-483c-8159-4ee8726a98d6_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!urwy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!urwy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!urwy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!urwy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!urwy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!urwy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!urwy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!urwy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!urwy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!urwy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7252f15c-a302-4518-a796-ac3c7b850ca0_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><em>The renaissance doesn't just need artisans. It needs a public that can tell the artisan from the pamphleteer. And the pamphleteer is counting on that public never arriving.</em></p><div><hr></div><p>In February 2024, a finance worker at the engineering firm Arup joined a video call with the company's chief financial officer and several senior colleagues. The CFO explained an urgent, confidential transaction. The finance worker listened, asked questions, received answers that made sense, and authorized fifteen transfers totaling twenty-five million dollars to five bank accounts in Hong Kong.</p><p>Every person on that call was a deepfake. The CFO, the colleagues, the voices, the faces, the mannerisms: all of it generated by AI trained on publicly available video of the real executives. The finance worker had done exactly what security training tells you to do. He'd been suspicious of the initial email. He'd requested a video call to verify. He'd looked at the people on the screen and used his judgment. His judgment told him this was real.</p><p>It wasn't. And the money was gone before anyone noticed.</p><p>I've spent three decades in cybersecurity, and I want to be precise about what this case represents. Forget the gullible-employee narrative, and set aside the security-controls conversation, though those mattered. What happened at Arup is a story about discernment failing at the point where it was most needed, where a human being looked at a sophisticated approximation of reality and could not distinguish it from the thing itself.</p><p>I've worked on this problem my entire professional life, though we've called it different things at different times: social engineering, adversarial deception, information operations. The technical vocabulary changes. The underlying dynamic stays remarkably stable. An attacker succeeds when the target cannot tell the real from the performed. Every phishing email, every pretexting call, every fabricated identity operates on the same principle: if the approximation is good enough, discernment collapses.</p><p>What has changed, and what brings me to write this essay, is scale. The tools to produce convincing approximations of reality are now available to anyone with a laptop and a few dollars. Generating a deepfake voice clone costs less than two dollars. Documented losses from deepfake-enabled fraud exceeded two hundred million dollars in the first quarter of 2025 alone. Three seconds of audio is enough to produce a voice match at eighty-five percent accuracy. The barrier between amateur and professional deception has effectively dissolved.</p><p>This is no longer just a cybersecurity problem. 
That's what I want to say clearly, because it's the connection that makes the rest of this essay necessary.</p><div><hr></div><h2>The collapse into public life</h2><p>What the cybersecurity world has been dealing with for decades, the broader public encountered in 2024. And the encounter was not what anyone predicted.</p><p>The feared deepfake apocalypse in the U.S. elections didn't arrive as a single catastrophic event, the bombshell fake video the night before the vote that was supposed to set the world on fire. What arrived instead was something more diffuse and, I think, more damaging: a steady saturation of the information environment with content whose authenticity could not be determined by inspection. AI-generated images of political candidates in fabricated situations, synthetic voice clones used in robocalls telling voters not to vote, fake celebrity endorsements, manufactured news websites populated with AI-written articles designed to lend credibility to deceptive stories nested among them. All of it circulating simultaneously, none of it individually decisive, the cumulative effect greater than any single piece.</p><p>The individual incidents were, by most assessments, less impactful than feared. Researchers found that traditional "cheapfakes" (edited and doctored content made without AI) were used seven times more often than AI-generated material. The large-scale disinformation campaigns that intelligence agencies warned about didn't revolutionize the influence operations they'd been tracking for years.</p><p>But this framing misses what actually happened. The damage wasn't in any single deepfake. It was cumulative: a general degradation of the public's ability to trust what they see, hear, and read. When any piece of media might be synthetic, the question "is this real?" becomes unanswerable through inspection alone. And when that question becomes unanswerable, the fake gains credibility because it can't be definitively identified as fake, while the real loses credibility because it can't be definitively identified as real. Both directions at once.</p><p>Researchers call this the liar's dividend: the point at which the existence of convincing fakes allows anyone (politicians, corporations, anyone with something to hide) to dismiss authentic evidence as fabricated. The liar's dividend doesn't require producing a single deepfake. It only requires that deepfakes exist and that the public knows they exist. The mere possibility of fabrication becomes a tool for evading accountability.</p><p>I recognize this dynamic because I've watched it operate inside organizations for thirty years. When the adversary can produce convincing forgeries, the first thing to go isn't truth itself. It's the confidence that truth is findable. And once that confidence erodes, discernment becomes impossible, not because people lack intelligence but because they lack a stable ground from which to judge.</p><p>The political situation in the United States has accelerated this erosion in ways that connect directly to what I wrote in <em>The New Formation</em>. The attacks on universities and educational institutions aren't separate from the information crisis; they are part of the same structural problem. The institutions that formed critical thinkers, that taught people to evaluate evidence, weigh sources, and hold complexity without collapsing into false certainty, those institutions are being hollowed out at precisely the moment when the tools of deception are becoming most powerful. 
Anyone in my profession would recognize the pattern: a compound vulnerability. Weaken the defense while strengthening the attack, and the system fails not at one point but everywhere at once.</p><div><hr></div><h2>What the pamphleteer counts on</h2><p>In <em>The New Formation</em>, I drew a distinction between two kinds of AI-enabled operators: the artisan who brings genuine formation to powerful tools, and the pamphleteer who uses the same tools to produce work that performs knowledge without possessing it. I want to push that distinction further, because the pamphleteer's advantage depends on something specific, and naming it is the point of this essay.</p><p>The pamphleteer counts on an audience that cannot tell the difference.</p><p>That's the whole model. Whether we're talking about deepfake fraud, political disinformation, AI-generated content that performs expertise, or any other form of sophisticated approximation, the mechanism rests on a single condition: that the person receiving the output lacks the discernment to evaluate it. The finance worker at Arup was a competent professional who exercised reasonable judgment. The voters who encountered AI-generated political content were ordinary citizens doing their best to stay informed. The readers who consume AI-written analysis that sounds authoritative are, most of them, intelligent people acting in good faith.</p><p>None of that protects them. Good faith and reasonable intelligence are not enough when the approximation is sophisticated and the foundation for evaluating it is absent.</p><p>In the earlier essay, I argued that the Renaissance required people formed deeply enough to do something worth doing with a new technology. The printing press produced pamphlets and treatises in equal measure, and the difference between them was the formation of the person using the press. But I left something out of that argument, something that nags at me and that I think completes the picture.</p><p>The printing press also required readers formed deeply enough to tell the pamphlet from the treatise.</p><p>The artisan can produce extraordinary work: genuine thought, authentic voice, the mark of a consciousness shaped by contact with the world rather than just with data. But if the audience for that work has been systematically deprived of the formation needed to recognize it, the artisan and the pamphleteer become indistinguishable. And when they become indistinguishable, the pamphleteer wins, because the pamphleteer is faster, cheaper, and produces at a volume the artisan cannot match.</p><p>The pamphleteer counts on the absence of discernment, not on the quality of the forgery. The deepfake on the Arup video call didn't need to be perfect; it needed to be good enough for someone who lacked the specific tools to detect it. The AI-generated political content didn't need to fool everyone; it needed to reach people whose formation hadn't equipped them to evaluate it. And the AI-written analysis that performs expertise doesn't need to withstand scrutiny, only to land in an environment where scrutiny has become rare.</p><div><hr></div><h2>The obligation formation creates</h2><p>I've written three essays now about what I've called the reality hunger: the growing human need for what is authentic, unmediated, genuinely formed. 
I've argued that the hunger is real, that meeting it requires formation, and that a new renaissance is possible if enough people develop the discipline to use powerful tools without being consumed by them.</p><p>All of that still holds. But it's incomplete without this: formation creates an obligation that extends beyond the person who possesses it.</p><p>The formed person who produces authentic work in an environment of synthetic noise is doing something valuable. But if they produce that work only for an audience of other formed people, the renaissance stays confined to a circle of practitioners who recognize each other's quality while the broader culture drowns in pamphlets. That's a monastery, not a renaissance.</p><p>The Renaissance itself succeeded because it didn't stay in the workshops and private libraries. The humanists didn't just produce formed work; they fought for the conditions under which formed work could be recognized. They established schools and academies. They argued, publicly and sometimes dangerously, for the value of classical knowledge against institutions that wanted it suppressed. They took the risk of insisting that discernment mattered, not just for themselves but for the culture at large.</p><p>The contemporary version of this obligation is uncomfortable, because it means the formed person can't simply retreat into the quality of their own work. The responsibility extends to the audience: cultivating discernment rather than just exercising it, making the distinction between formed work and generated content visible and legible, teaching (in whatever form teaching takes) the skills of evaluation that the institutional infrastructure is no longer reliably providing.</p><p>I feel this in my own practice. The humanization framework I described in <em>The New Formation</em>, the discipline I built to maintain authentic voice in machine-assisted production, has value beyond my own writing. Not because the specific rules should be shared (they're mine, and I'm keeping them) but because the principle of the framework matters: that maintaining authenticity in an age of synthetic production requires structured discipline, and that the discipline itself is a form of knowledge worth developing. When I write about this publicly, I'm not just describing my practice. I'm making an argument that discernment is possible, that the difference between formed work and generated content is real and detectable, and that the skills to detect it can be cultivated.</p><p>That argument, more than any individual essay, is what I think the moment requires.</p><div><hr></div><h2>The darker thread</h2><p>I want to be honest about something the optimistic version of this argument elides.</p><p>The economic incentives are running in the wrong direction. When an audience can't distinguish formed work from generated content, the generated content wins on cost and speed. When voters can't distinguish genuine policy analysis from persuasive pattern-matching, the pattern-matching wins on volume. When a finance professional can't distinguish a real CFO from a deepfake, the deepfake wins by default. Every market rewards efficiency, and generated content is more efficient than formed work. 
Without discernment, the market selects for the pamphleteer over the artisan, every time, because the pamphleteer's output is cheaper, faster, and functionally equivalent to an audience that can't tell the difference.</p><p>The degradation of discernment, then, is not a side effect of the current technological and political moment. For some actors, it's the goal. The political forces attacking education benefit from a public that can't evaluate claims. The content mills flooding every platform with AI-generated material benefit from an audience that can't tell what's genuine. The adversarial operations I've spent my career defending against benefit from targets whose capacity to judge has been systematically weakened.</p><p>I don't say this to induce despair. I say it because the stakes need to be visible. The renaissance I described in <em>The New Formation</em> is possible, but it's not inevitable, and the forces working against it are not passive. They are active, resourced, and structurally advantaged by every decline in the public's capacity to discern.</p><div><hr></div><h2>Why we need to be reborn</h2><p>And yet.</p><p>Every renaissance in history has emerged against resistance. The original Renaissance contended with institutional suppression, political violence, plague, and an entrenched authority structure that had every incentive to maintain ignorance. The humanists persisted not because the odds were favorable but because the alternative was intolerable: a culture permanently unable to distinguish knowledge from noise.</p><p>That's where we are. The alternative is intolerable. A world where deepfakes are indistinguishable from reality and nobody has the formation to care, where generated content displaces formed work and the audience shrugs because they never learned to tell the difference, where the liar's dividend compounds year after year until accountability becomes structurally impossible.</p><p>Intolerable. And because it's intolerable, people will resist it. They already are.</p><p>The hunger I identified in the first essay of this series, the reality hunger, the fatigue with synthetic everything, the craving for what is authentic: that hunger is the seed of the resistance. People feel the absence of the real even when they can't name what's missing. They sense when something they're reading was assembled rather than thought. They know, in a way that precedes analysis, when an encounter is genuine and when it's performed.</p><p>That instinct is where discernment begins. Pre-rational, pre-institutional, and impossible to fully suppress, because it belongs to what is most fundamentally human about us: the capacity to recognize another consciousness at work.</p><p>But instinct alone isn't enough. It has to be developed into capability. The person who senses that something is off needs the formation to identify <em>what</em> is off and <em>why</em>. The instinct has to become discernment, and discernment has to become practice, and practice has to become culture.</p><p>That's the rebirth. Not a return to the institutions we had, though some of them are worth defending, and not a rejection of the technology, which is as neutral as the printing press was. A rebirth of the commitment to human formation as the foundation of everything else: the production of knowledge, its evaluation, its transmission, and the shared capacity to tell the real from the performed.</p><p>The pamphleteer counts on that commitment dying. 
The artisan's obligation is to make sure it doesn't.</p><p>I've spent thirty years protecting systems from adversaries who exploit the gap between what's real and what's convincing. I've written four essays now about what happens when that gap moves from networks and servers into culture and consciousness. And what I keep coming back to is this: the defense has always been the same. Formed people, with the judgment to see clearly and the discipline to act on what they see.</p><p>The tools are here. The hunger is here. So is the threat. The question is whether enough of us will choose the discipline of formation, not because an institution requires it, but because the alternative is a world where no one can tell the difference.</p><p>I think enough of us will. Not because I'm optimistic. Because the hunger for the real is older and stronger than any technology designed to simulate it.</p><p>That's why we need to be reborn. And that's why I think we will be.</p><div><hr></div><p><em>This is the final essay in a sequence that began with <a href="https://www.marcobrondani.com/the-reality-hunger/">The Reality Hunger</a>, continued through <a href="https://www.marcobrondani.com/the-presence-test/">The Presence Test</a> and <a href="https://www.marcobrondani.com/the-new-formation/">The New Formation</a>. Together, they trace what becomes scarce in an age of synthetic everything, what it demands of us, and what happens if we refuse the demand.</em></p>]]></content:encoded></item><item><title><![CDATA[The New Formation]]></title><description><![CDATA[Universities are being hollowed out while AI makes institutional formation seem unnecessary. Both forces share a blind spot: how people develop the judgment to use powerful tools wisely. The Renaissance faced a similar convergence.]]></description><link>https://www.marcobrondani.com/p/the-new-formation</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-new-formation</guid><pubDate>Thu, 19 Feb 2026 07:17:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f016f176-f3bc-4e67-8f62-ddd6859e7cd0_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0_GV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0_GV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!0_GV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!0_GV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!0_GV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!0_GV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0_GV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!0_GV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!0_GV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!0_GV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F618b0779-c8cf-4361-bd77-efc328338751_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><em>The original Renaissance required people who had been shaped by classical knowledge to pick up a new technology and do something worth doing with it. We are approaching a similar intersection, and the question is whether enough people will be formed deeply enough to meet it.</em></p><div><hr></div><p>Let me start with the political situation, because this essay doesn't make sense without it.</p><p>Across the United States, public universities are losing funding, having programs cut, facing political interventions in what can be taught and researched. Federal grants are being frozen or clawed back. Diversity programs dismantled. Entire departments questioned not on academic grounds but on ideological ones. The assault is not subtle and it is not new, but it has reached a scale and a directness that makes it qualitatively different from what came before. The institutional infrastructure that has shaped critical thinkers for generations is being hollowed out in real time.</p><p>This is happening while, from a completely different direction, artificial intelligence is making individual production so efficient that the question of why anyone needs institutional formation at all is starting to sound reasonable. A single person with the right tools can now produce written analysis, visual design, software, research synthesis, and strategic thinking at a level that five years ago required a team. The one-person shop, the AI-enabled artisan, is becoming not just viable but attractive. 
Why spend four years and six figures at a university when the tools are available now and the institution might not survive long enough to hand you a diploma?</p><p>Two forces, converging from opposite directions, and between them a gap that neither force acknowledges: the question of how people become the kind of people who can use powerful tools wisely. How judgment forms. How taste develops. How someone learns to distinguish what is authentic from what merely performs authenticity. How the capacity for original thought survives contact with machines that produce convincing approximations of it.</p><p>That gap is what this essay is about.</p><div><hr></div><h2>The blind spot</h2><p>The attacks on universities are driven by political calculation, not by a theory of human development. The politicians defunding higher education are not asking what will replace it as a site of intellectual formation. They don't care. The point is control, not cultivation.</p><p>The AI-enabled efficiency revolution has a different motivation but a similar blind spot. The technology community is asking how to make individuals more productive, more capable, more independent of institutional overhead. Good questions, practical questions, and almost none of them address what happens inside the person using the tools. The assumption, sometimes explicit and sometimes just ambient, is that the tools themselves confer capability. That access equals competence. That if you can produce the output, the question of how you developed the judgment to know whether the output is any good becomes irrelevant.</p><p>Both forces share an indifference to formation, and that indifference is the crisis.</p><p>I use the word <em>formation</em> deliberately. Not education, which has become too tangled with credentials and institutions to carry the weight I need it to carry. Formation: the slow, often uncomfortable process by which a person develops the internal architecture to think clearly, judge carefully, and create something that bears the mark of genuine understanding rather than sophisticated pattern-matching. Formation happens through contact with difficult ideas, through the friction of other minds that disagree with you, through the experience of being wrong in ways that can't be optimized away. It takes time. It resists acceleration. And it is exactly what both the political and the technological forces are, in their different ways, making harder to come by.</p><div><hr></div><h2>The parallel that matters</h2><p>The printing press arrived in Europe in the mid-fifteenth century and within decades the volume of written material available to ordinary people increased by orders of magnitude. Books that had been locked in monasteries, copied by hand at enormous expense, could suddenly be reproduced cheaply and distributed widely. The technology was revolutionary in the most literal sense.</p><p>But the printing press did not cause the Renaissance.</p><p>What caused the Renaissance was a generation of people who had been formed by classical knowledge, who had spent years in contact with Greek and Roman thought, who had developed the intellectual architecture to do something with the new distribution technology beyond reproducing what already existed. Petrarch didn't need the printing press to rediscover Cicero's letters. He needed the formation to recognize what he'd found and to understand why it mattered. 
The press came later, and what it did was amplify what people like Petrarch had already begun.</p><p>The technology without the formation produced pamphlets, propaganda, astrological charts, sensational accounts of miracles and monsters. An enormous volume of material that was technically sophisticated (printed, distributed, consumed at scale) and intellectually empty. The formation without the technology stayed trapped in small circles, brilliant but contained. The Renaissance happened at the intersection: people with deep formation using a powerful new tool to say something that could not have been said before, or said as widely, or with such consequence.</p><p>Every element of that convergence has a contemporary analogue, and I don't think the parallel is decorative. Classical knowledge is being rediscovered, not from monastery shelves but from the wreckage of an information ecosystem that buried it under algorithmic noise. A new technology is arriving that amplifies individual capability by orders of magnitude. Institutional authority is fracturing. And somewhere in the middle are the people who will either use the new tools to produce the contemporary equivalent of astrological pamphlets or will use them to initiate something genuinely new.</p><p>The determining factor, then as now, is formation.</p><div><hr></div><h2>What I found in my own practice</h2><p>I should be specific here, because the argument I'm making is not theoretical for me.</p><p>I write essays on cybersecurity, systems thinking, and AI. I've done this for some time, long enough to have a voice that I recognize as mine, shaped by three decades of working in technology and by the reading and thinking that accumulated alongside the professional work. When I began using AI tools in my writing process, which I do, openly and without apology, I noticed something that troubled me.</p><p>The output was good. Often very good. Coherent, well-structured, stylistically competent. And it wasn't mine.</p><p>Not in the obvious sense of plagiarism. The AI wasn't copying someone else's work. It was producing a plausible version of the kind of writing I do, assembled from patterns in the training data, and the result had the shape of my thinking without the substance of it. The right vocabulary, the appropriate structure, the expected moves. What was missing was harder to name: a certain friction, the places where my actual thought process resists smoothness, the moments where I change direction mid-paragraph because the idea I started with turned out to be less interesting than the one that interrupted it. The texture of a mind working, not the performance of a mind having worked.</p><p>I could have ignored this. Many people do, and I don't say that as judgment. The output was publishable. It would have passed any reasonable quality bar. But something in me resisted, the same instinct that makes me uneasy when a room full of people at a dinner party are all performing engagement rather than experiencing it. The <em>Reality Hunger</em> instinct, I suppose, applied to my own production.</p><p>So I built a framework. Not a set of prompts or a style guide, but a discipline. A structured, iterative practice for maintaining authentic voice inside machine-assisted production. I want to emphasize each of those words because they each do specific work. <em>Framework</em>: structured, codified, not left to intuition. <em>Discipline</em>: practiced repeatedly, refined through failure, requiring sustained attention.
<em>Authentic</em>: mine, recognizably mine, bearing the marks of how I actually think rather than how a model predicts I might think. <em>Machine-assisted</em>: the human leads, the machine serves. Not machine-replaced. Not machine-only.</p><p>The process of building this framework taught me something I hadn't expected. The discipline required to maintain authentic voice in AI-assisted work is itself a form of formation. It forced me to understand my own writing more deeply than I ever had. To identify what makes my thinking mine rather than generic. To develop a conscious relationship with my own cognitive patterns, my tendency to self-correct mid-argument, my habit of distrusting conclusions that arrive too neatly, my preference for questions that stay open over answers that close. The machine, by producing convincing approximations of my voice, made me examine what my voice actually consists of. And the framework I built to maintain that voice became, unexpectedly, a tool for knowing myself as a writer in ways I hadn't needed to before.</p><p>This is what I mean by formation through technology rather than against it. The AI didn't form me. The discipline of working with AI while refusing to let it replace my thinking, that's what formed me. The tool was the occasion, not the cause.</p><div><hr></div><h2>The artisan and the pamphleteer</h2><p>The AI-enabled individual operator, the one-person shop, the artisan with powerful tools: this figure is real and growing. I meet them constantly. Developers who build products that would have required a team of twenty. Analysts who synthesize information at a scale that would have demanded a department. Writers who produce and distribute work that would have needed an editor, a publisher, a marketing apparatus.</p><p>Some of them are extraordinary. They use the tools to amplify genuine expertise, deep knowledge, hard-won judgment, the kind of understanding that only develops through years of contact with a domain. The tools make them faster, wider in reach, more prolific. But the foundation is human formation: they know their subject, they know their craft, they know what good looks like because they've spent years learning to see it. The AI accelerates what they already are.</p><p>Others worry me. They produce work that is technically impressive and substantively hollow. The tools have given them the capacity to generate output that looks like expertise without the formation that expertise requires. Fluent analysis that doesn't quite hold up under scrutiny. Persuasive arguments built on patterns rather than understanding. Content that performs knowledge without possessing it. The contemporary equivalent, and I don't think I'm stretching the analogy, of a beautifully printed astrological pamphlet: sophisticated in production, empty in substance.</p><p>The difference between these two figures is formation. And the crisis, the one that the political attacks on education and the technological enthusiasm for efficiency both ignore, is that we are systematically undermining the conditions under which formation happens while simultaneously increasing the power of the tools that make formation necessary.</p><div><hr></div><h2>Why it will happen anyway</h2><p>I said at the outset that a new renaissance is coming. Let me explain why I believe this despite the forces working against it, or maybe because of them.</p><p>The original Renaissance was not planned. 
No institution decided it was time to rediscover classical knowledge and integrate it with new technology. It emerged from individuals who felt the inadequacy of what was available to them and went looking for something deeper. Petrarch climbing Mont Ventoux and reading Augustine at the summit wasn't following a curriculum. He was following a hunger.</p><p>That hunger is visible now, and growing. I wrote about it in <em>The Reality Hunger</em>: the fatigue with synthetic media, the turn toward the unmediated, the craving for contact with something real. In <em>The Presence Test</em> I complicated the picture, arguing that the hunger isn't for physical presence per se but for authentic encounter, for experiences that respect you rather than extract from you. This essay extends the argument further: the hunger isn't only for authentic experience. It's for authentic <em>capability</em>. People want to be able to do real things, to think real thoughts, to produce real work, not to generate sophisticated approximations of these and hope nobody notices.</p><p>The political destruction of educational institutions will not kill this hunger. It will intensify it. When the official channels of formation are degraded or closed, people seek formation elsewhere. They always have. The Renaissance itself happened partly outside the established institutions, in workshops and studios and private libraries, in networks of correspondence between individuals who shared texts and argued about them across distances. The university system was part of the story, but it was never the whole story, and some of the most important formation happened in spaces that looked nothing like classrooms.</p><p>AI, paradoxically, will accelerate this. Not because the tools themselves provide formation (they don't) but because the gap between what the tools can produce and what a formed human can produce will become increasingly visible. We are approaching a period where AI-generated content is abundant, competent, and mediocre, and the work that stands out will be the work that carries the marks of genuine human formation: originality, judgment, the willingness to be wrong in interesting ways, the capacity to hold contradictions without resolving them into false clarity. The tools will raise the floor. Formation will remain the ceiling.</p><p>And the people who develop that formation, whether through traditional education or through the kind of disciplined practice I described in my own work or through means we haven't invented yet, those people will constitute the new renaissance. They won't reject the technology. They'll use it the way the Renaissance humanists used the printing press: to amplify what they already know, to distribute what they've genuinely thought, to connect with others who share the hunger for something more than sophisticated reproduction.</p><div><hr></div><h2>The question that remains</h2><p>I don't want to end with false confidence. The conditions for a new renaissance are assembling, but conditions are not guarantees.</p><p>The original Renaissance required a critical mass of formed individuals who found each other, influenced each other, built on each other's work. It required patrons and institutions (however imperfect) that valued deep knowledge. It required a culture that recognized the difference between the pamphlet and the treatise, between the printer who reproduced and the thinker who created.</p><p>We may or may not develop these things. 
The political forces attacking education are powerful and they are winning in many places. The economic incentives favor production over formation, output over understanding, speed over depth. The tools themselves are seductive precisely because they make formation feel unnecessary: why spend years developing judgment when the machine produces something that looks like judgment in seconds?</p><p>The answer to that question is the answer to everything else in this essay. And it's the same answer the Renaissance gave, though it took a century to fully articulate it: because the human who has been formed can do something the machine cannot. Can see what the pattern-matcher misses. Can hold the contradiction that the optimizer resolves too quickly. Can produce work that bears the mark of a consciousness that has been shaped by contact with the world, not just by contact with data.</p><p>The hunger is there. The tools are there. The institutions are fracturing. Somewhere in the middle are the people who will pick up the new technology and do something with it that could not have been done before, because they brought to it something the technology could not provide for itself.</p><p>Formation is that something. And the renaissance begins when enough people decide it matters enough to pursue, with discipline, even when no institution requires it of them.</p><div><hr></div><p><em>This essay continues the sequence begun with <a href="https://www.marcobrondani.com/p/the-reality-hunger">The Reality Hunger</a> and <a href="https://www.marcobrondani.com/p/the-presence-test">The Presence Test</a>. All three explore what remains irreducible in an age of synthetic everything, and what it demands of us.</em></p>]]></content:encoded></item><item><title><![CDATA[The Presence Test]]></title><description><![CDATA[Ian McKellen told Stephen Colbert that theatre is an irreplaceable human encounter, then described a production where the actors aren't in the room at all.
The contradiction between these two claims may be more useful than either one alone.]]></description><link>https://www.marcobrondani.com/p/the-presence-test</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-presence-test</guid><pubDate>Wed, 18 Feb 2026 09:27:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bad0b13c-e65b-4371-93b0-b1c636b08d18_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!T0zy!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!T0zy!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!T0zy!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!T0zy!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!T0zy!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!T0zy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!T0zy!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!T0zy!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!T0zy!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!T0zy!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1153c87d-7d1e-4fb5-bf92-72b5fa211e55_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><em>When a man who has spent sixty years on stage tells you the future of theatre is a room where no actor is physically present, you should probably pay attention. And then think very carefully about what he's actually saying.</em></p><div><hr></div><p>Something happened on late-night television recently that deserves more attention than it got. Not the Shakespeare, though the Shakespeare was extraordinary. A quieter thread running through the conversation. A thread about presence, simulation, and what happens when we try to technologize the one thing that might resist it.</p><p>Ian McKellen sat across from Stephen Colbert and did what McKellen does: he made the room more alive by being in it. Over the course of a conversation that wandered from Gandalf to smart glasses to a four-hundred-year-old speech about immigrants, he articulated something I've been circling for months, something I tried to get at in <em>The Reality Hunger</em> but hadn't quite found the right frame for.</p><p>The frame, it turns out, is a contradiction. And contradictions, when they're honest, tend to be more useful than arguments.</p><div><hr></div><h2>The irreducible encounter</h2><p>McKellen describes the theatre as an unrepeatable human encounter &#8212; real people sharing real space, breathing the same air, reacting together in a way no recording can replicate. For him, the magic lies in that collective presence: the actor speaks, the audience responds, and something alive passes between them.</p><p>Sixty years on stage earns you the right to make claims like that. This is a man who has performed in more rooms, for more audiences, across more decades than most of us will experience in any domain, and what he keeps coming back to is the breathing. The shared breathing. When someone with that depth of practice tells you that's where the magic lives, you should probably take it seriously.</p><p>Colbert, who understands the mechanics of live performance better than most interviewers, distills the idea to its core: in an age of synthetic content, theatre remains humans doing something with humans, by humans, for humans &#8212; a reminder that the real thing still matters.</p><p>I wrote in <em>The Reality Hunger</em> about presence as something simulations cannot replicate. Being somewhere, actually there, with a body that occupies space and senses that take in unfiltered input: that differs from any representation of that experience. The felt sense of presence involves dimensions screens and speakers cannot capture. McKellen and Colbert aren't making a theoretical argument. They're reporting from the field. Sixty years of evidence, condensed into a few sentences on a late-night couch.</p><p>McKellen ends his reflection with a quiet warning: the real thing only happens when you're actually there. Theatre, he says, is a human encounter that can't be mediated through a screen; a moment you can only catch if you're not staring at your phone. It's a reminder that the authentic experience still requires presence.</p><p>That word, <em>presence</em>, is doing a lot of work. More than McKellen probably intends. Because presence isn't just about theatre. 
It's about everything that matters: every conversation where tone carries more than words, every negotiation where you read the room before anyone speaks. The irreducible fact of being in the same space, at the same time, subject to the same conditions, with nothing optimized and nothing replayable and nothing filtered between you and the other person.</p><div><hr></div><h2>The strange new frontier</h2><p>And then McKellen describes An Ark.</p><p>An Ark is a production currently running at The Shed in New York. The audience sits together in a room (shoes off, as it happens, which is either a meaningful ritual gesture or a practical requirement of the technology, or both) and each person wears mixed reality glasses that conjure actors who aren't physically there. McKellen, Golda Rosheuvel, Arinz&#233; Kene, Rosie Sheehy: captured volumetrically by fifty-two cameras in Grenoble, reconstructed in three dimensions, projected into your field of vision so vividly you feel you could reach out and touch them.</p><p>It isn't film, and it isn't theatre as we've known it. McKellen puts it with characteristic directness:</p><blockquote><p>"I don't know whether it's the future, but it's certainly not the past."</p><p>Sir Ian McKellen</p></blockquote><p>That sentence is worth sitting with. A man who has just told you that the irreplaceable magic of theatre is the shared breath, the collective presence, the unrepeatable human encounter, is now telling you about a form where the actors aren't in the room. The encounter mediated through technology. The "presence," by any traditional measure, simulated. And he's excited about it, genuinely excited, describing the mixed reality experience as making him appear "even more real" than if he were standing on the stage. He calls it startlingly new.</p><div><hr></div><h2>Holding the contradiction</h2><p>Most commentary I've seen resolves this in one direction or the other. Either McKellen is right about presence and An Ark is a gimmick, or An Ark is the future and the talk about shared breathing is sentimentality. Technology boosters claiming vindication on one side, theatre traditionalists clutching their pearls on the other, and both of them, I think, missing the point.</p><p>Or maybe I'm the one missing something. But I keep returning to the idea that the contradiction <em>is</em> the point, and that it maps onto something I've been thinking about since writing <em>The Reality Hunger</em>: the relationship between what is real and what is authentic turns out to be more complicated than either the technologists or the romantics want to admit.</p><p>McKellen isn't confused. He's holding two things at once. The theatre is an irreducible human encounter. And this new form, which removes the physical actor from the room, creates a different kind of encounter that has its own authenticity. Something else entirely, something that doesn't have a name yet, that resists the categories we have for it.</p><p>The reviews of An Ark are mixed, which is what you'd expect from a genuinely new form. Some reviewers found the technology distracting, the resolution inadequate, the experience less moving than seeing the same actors from the back of a traditional theatre. Others reported something uncanny: the sense that McKellen was looking directly at them, speaking to them personally, present in a way that a stage performance (where the actor projects to hundreds) cannot achieve.</p><p>Both responses are probably honest.
And both confirm something: we're at a threshold where the categories themselves are shifting.</p><div><hr></div><h2>What the paradox reveals</h2><p>In <em>The Reality Hunger</em>, I argued that reality is becoming the next cultural frontier. As synthetic media saturates our environment, the unmediated encounter becomes scarce, and what is scarce becomes valuable. I still believe this. The fatigue with synthetic everything is real and growing.</p><p>But McKellen's contradiction complicates the argument in a way I find productive. What he's describing with An Ark isn't synthetic media in the way we usually mean the term. Not a deepfake, not AI-generated content. A recording of a real performance, captured with extraordinary fidelity, presented in a spatial format that creates a different relationship between performer and audience. The actor's performance is real. The audience's experience of it is real. What's missing is the shared physical space.</p><p>So the question becomes: is shared physical space the thing that makes an encounter real? Or is it one of several things, and some of the others might be achievable through new means?</p><p>I don't think McKellen knows the answer. I don't think anyone does yet. That's what makes his honest uncertainty ("I don't know whether it's the future") more valuable than anyone's confidence.</p><p>What I find most interesting is what he <em>doesn't</em> say. He never claims An Ark replaces theatre, never suggests it's better. What he says is that it's not the past, that it's something new, and then in the same conversation he reaffirms that the irreplaceable thing about traditional theatre is the shared human presence. He's describing a landscape that has more room in it than either camp wants to acknowledge. I keep turning this over.</p><div><hr></div><h2>The presence test</h2><p>Here's where I want to push beyond what McKellen says, into what I think his contradiction implies.</p><p>We're entering a period where the question "is this real?" gives way to something more interesting: "what kind of real is this?" The binary (real versus simulated, authentic versus synthetic, present versus mediated) may be less useful than a spectrum, and the relevant measure along that spectrum has less to do with technology than with experience.</p><p>Call it the presence test. Not whether the person is physically in the room, but whether the encounter feels alive. Whether it resists prediction, demands something from you, leaves a mark.</p><p>By that test, a distracted conversation at a dinner party where everyone is on their phones might score lower than a mixed reality theatre piece where a volumetrically captured Ian McKellen looks you in the eye and tells you about death. Physical co-presence failing the test while a mediated, reconstructed encounter passes it.</p><p>I'm not comfortable with this conclusion, which is probably a sign that it's worth exploring further.</p><p>The fatigue I described in <em>The Reality Hunger</em> is real. People are tired of synthetic everything, of optimized content, of mediated experience. The hunger for the actual, the unfiltered, the physically present: that's genuine, and it's growing. But the hunger isn't for physical co-presence per se. It's for something deeper. 
For encounters that treat you as a human being rather than an engagement metric, experiences that carry weight and consequence and irreversibility, contact with something (or someone) that isn't designed to manipulate you into staying a few seconds longer.</p><p>McKellen's excitement about An Ark isn't a betrayal of his love for traditional theatre. It extends the same principle. What he values, I think, is the encounter itself, and he's discovered, to his apparent surprise, that a new technology can create a form of encounter he hadn't imagined.</p><div><hr></div><h2>The deeper scarcity</h2><p>So maybe the scarcity I identified in <em>The Reality Hunger</em> needs refining. What's becoming scarce is authenticity of encounter, not reality itself, and while physical presence is often a reliable proxy for authentic encounter, it isn't the only one.</p><p>What's genuinely scarce in a world of synthetic media is experience that respects you. That doesn't optimize for your attention. That exists for its own sake, or for art's sake, or for the sake of saying something true, rather than for extraction.</p><p>A theatre performance passes this test because the actors are there for the performance, not for the algorithm. An Ark might pass it too, for different reasons: the technology serves the encounter rather than replacing it.</p><p>What fails the test, reliably and consistently, is the synthetic media ecosystem we've built. The recommendation algorithms and engagement-optimized feeds, the deepfakes and generated images and AI-written content that exists not because someone had something to say but because a system calculated that content of this type, at this time, generates clicks.</p><p>The line McKellen is drawing (maybe without fully articulating it) has nothing to do with old technology versus new. It separates experiences designed to connect from systems designed to capture. Human purposes from mechanical ones.</p><div><hr></div><h2>What remains</h2><p>The great performances endure because of what passes between performer and audience. Whether that passage requires shared air or can happen through mixed reality glasses is, in one sense, a technical question. In another sense, it's the question of our time.</p><p>McKellen's conversation with Colbert captures a moment of transition. The old certainties (that the real requires the physical, that presence means co-location, that technology mediates while bodies connect) are being tested by someone with the authority to test them. And what he's reporting back isn't a verdict. It's an honest account from the boundary.</p><p>The reality hunger is real. People crave what the synthetic world cannot provide. But what the synthetic world cannot provide may be less about physical presence and more about authentic encounter: the experience that doesn't exploit you, the performance that exists for its own truth, the moment (technologized or not) where something alive passes between two conscious beings.</p><p>The presence test is about where the attention is, where the intention is, whether the encounter is designed to extract or to give.</p><p>McKellen stood up on Colbert's stage, looked down the barrel of the camera, and performed a four-hundred-year-old speech about treating strangers as human beings. The audience was on its feet, Colbert holding back tears, and something alive passed between them all.</p><p>Was it the shared physical space that made it work? 
Or was it something older and stranger: the fact that a human being was saying something true, in the presence of other human beings who were willing to listen?</p><p>I don't know whether it's the future. But it's certainly not the past.</p><div><hr></div><p><em>This essay continues the thinking begun in <a href="https://www.marcobrondani.com/the-reality-hunger/">The Reality Hunger</a>. Both are part of an ongoing exploration of what remains irreducible in an age of synthetic everything.</em></p>]]></content:encoded></item><item><title><![CDATA[The Reality Hunger]]></title><description><![CDATA[Synthetic media is everywhere. AI generates images, curates feeds, and simulates experience at scale. But as representation expands, something else is quietly becoming scarce, and it may be the only thing that still matters.]]></description><link>https://www.marcobrondani.com/p/the-reality-hunger</link><guid isPermaLink="false">https://www.marcobrondani.com/p/the-reality-hunger</guid><pubDate>Fri, 13 Feb 2026 09:15:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6e5b327f-8321-49f7-a3c5-c4654996c12a_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L69m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L69m!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!L69m!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!L69m!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!L69m!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L69m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!L69m!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!L69m!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!L69m!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!L69m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b9882ee-5499-4348-a8ca-c8518e872638_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p><em>In an age of synthetic everything, the most radical act may be encountering the world as it actually is</em></p><p>Something strange is happening to scarcity. The rarer commodity is no longer information, content, or even attention, but unmediated contact with something real, the kind of experience that can't be generated, curated, or optimized away.</p><p>We are living through an unprecedented explosion of synthetic media. AI-generated images flood social platforms. Recommendation systems curate our cultural diet. Deepfakes blur whatever line used to exist between documentation and fabrication. Video game worlds achieve photorealism. Virtual influencers amass real followers. The tools to generate convincing synthetic content (images, audio, video, text) are now accessible to anyone with a laptop and a connection. We have never been more surrounded by representations of reality.</p><p>And perhaps never further from the thing itself. I think so, but I'm probably overstating it. Or maybe not.</p><p>What I want to explore is a provocation more than an argument: that reality itself is about to become the next cultural frontier. That a generation raised on digital illusions is beginning to crave something far more radical than the next platform or the next filter. They're beginning to crave what's actually there.</p><div><hr></div><h2>The synthetic saturation</h2><p>To understand why reality might become countercultural, you have to understand just how synthetic our environment has become. Though "understand" might be the wrong word. It's more like noticing the water you're swimming in.</p><p>The average person now encounters more images in a single day than someone living a century ago would have seen in a lifetime. That statistic probably isn't precise (I've seen versions of it that vary wildly) but the direction is inarguable. Most of these images aren't photographs of things that exist. They are composites, manipulations, generations. Optimized for engagement, filtered for aesthetic consistency, curated by algorithms that learn what makes us pause, click, linger. The visual environment we inhabit is largely artificial: constructed for effect rather than captured from reality. Not false. Constructed. There's a difference, though some days I'm not sure how much of one.</p><p>This goes beyond the visual. The music we hear is quantized, autotuned, algorithmically distributed based on predicted preference. The news we consume is selected by systems designed to maximize attention.
The social interactions we have online are mediated by platforms whose architectures shape what we say and how we say it. Even our sense of what other people think and feel, which used to come from reading a room, from body language and silence and someone's eyes, gets filtered through systems that amplify certain signals and suppress others.</p><p>Almost everything we encounter has been processed, optimized, intermediated. The raw encounter with something not designed for consumption has become genuinely rare. That sentence sounds like something from a manifesto. I don't entirely trust it. But I don't know how to make it less true.</p><p>Generative AI accelerates this. When AI can produce photorealistic images of scenes that never existed, convincing audio of words never spoken, video of events that never happened, the already thin boundary between real and synthetic dissolves further. We enter a world where any piece of media might be generated. Where visual evidence proves nothing. Where "is this real?" becomes unanswerable through inspection alone.</p><div><hr></div><h2>The fatigue sets in</h2><p>Among younger generations, those who have never known a world without smartphones and social media, something interesting is happening. The digital natives are getting tired.</p><p>The signs are scattered but consistent. Film photography over digital, valued not for image quality but for the constraints: you can't see the result immediately, can't take infinite shots. Vinyl resurgent. Physical books. Handwritten letters, which still surprises me. A movement toward dumbphones that do less. A suspicion of filters that shows up in the popularity of deliberately unpolished content. A hunger for experiences that cannot be captured and shared, that exist only for those present.</p><p>Not dominant trends. The synthetic world continues to expand. But at the margins, where culture shifts before the mainstream notices, there is a turning.</p><p>I want to be careful here because I can feel myself romanticizing. Not every twenty-two-year-old buying a film camera is making a philosophical statement. Some of them just think it looks cool. Fair enough. But the fatigue is real, and it has several sources. The exhaustion of infinite choice. The emptiness of optimized experience, that specific flatness when everything is designed to engage you and nothing actually moves you. The loneliness of mediated connection: hundreds of online interactions can leave you more isolated than one conversation with someone present in the room. I've felt this. Most people I know have felt this and don't talk about it much.</p><p>And something harder to name. A kind of ontological unease from spending too much time in environments that are not quite real. A sense that the synthetic world, however convincing, lacks something essential. Insubstantial. The philosophers would recognize that word. I'm borrowing it because I can't find a better one.</p><div><hr></div><h2>The scarcity of the real</h2><p>Value emerges from scarcity. What is abundant becomes cheap; what is rare becomes precious. For most of human history, representations were scarce and reality was abundant. Images were expensive to produce. Recordings required significant effort. The default experience was unmediated encounter with the physical world.</p><p>We have inverted this relationship. Or it has been inverted for us. Hard to say which.</p><p>Representations are now effectively infinite. Anyone can generate endless images, videos, texts.
The marginal cost of producing synthetic content approaches zero. Meanwhile, unmediated reality has become scarce. Experiences not designed, curated, optimized, or recorded are increasingly rare. Attention not captured by screens is increasingly unusual. Presence, simple presence in a place with other people without the mediation of devices, almost countercultural.</p><p>So authenticity, previously so abundant it was invisible, gains value precisely because it becomes rare. The genuine article, the unfiltered experience, the thing itself rather than a representation of it: these carry a premium they never carried before. Not because they're somehow metaphysically superior. Because they're scarce in a world drowning in simulation.</p><p>You can see this already in markets. Experiences outcompete possessions in consumer spending, not because experiences are new but because they resist digitization. Live events command premiums that recordings cannot. Handmade objects sell for multiples of machine-made equivalents, and the reason isn't functional superiority. The marks of human making are legible, and people pay for that legibility.</p><p>The same logic may be spreading through culture more broadly. If synthetic media is abundant and reality is scarce, reality becomes the luxury good. The next status symbol may be what you've actually experienced: unmediated, unoptimized, unshared.</p><p>Though luxury good is the wrong metaphor. Implies exclusivity, velvet ropes, scarcity manufactured for profit. What I mean is something more like oxygen. Valuable because you suddenly realize you need it.</p><div><hr></div><h2>Reality as rebellion</h2><p>Every generation defines itself partly through rejection. The children of conformity became hippies. The children of hippies became punks. The children of analog became digital natives.</p><p>What do the children of simulation become?</p><p>Maybe they become realists. Forget epistemology for a minute. I mean it in the cultural sense. People who orient toward the actual, the physical, the present. People who seek out what cannot be faked because they grew up surrounded by fakery. People for whom the unmediated encounter is not the default but the achievement.</p><p>This would represent a genuine cultural shift. For decades, the trajectory has pointed one direction: more mediation, more simulation, more virtuality. The assumption has been that as synthetic experiences improve, they'll increasingly substitute for real ones. Why travel when you can explore virtually? Why meet in person when you can connect online? Why encounter the messy, inconvenient, unoptimized real world when a better version is available on demand?</p><p>But substitution has limits, and the limits are becoming visible. Simulation accelerates understanding but cannot replace what it models. AI-driven systems test hypotheses faster than physical experiments. Virtual environments train skills more efficiently than real-world practice. Synthetic media communicates ideas more vividly than unfiltered documentation. But at some point the model must be validated against the world. The map must meet the territory.</p><p>This works as both warning and invitation. Warning against mistaking the map for the territory. Invitation to recognize what reality alone provides: an irreducible complexity, a resistance to optimization, a refusal to conform to our models. 
That resistance is the point.</p><div><hr></div><h2>What cannot be simulated</h2><p>So what does reality offer that simulation cannot?</p><p>Start with presence. Being somewhere (actually there, with a body that occupies space and senses that take in unfiltered input) differs from any representation of that experience. I don't mean this mystically. I mean it phenomenologically, though I'm probably using the word loosely. The felt sense of presence involves dimensions screens and speakers cannot capture: peripheral vision, ambient sound, temperature, smell, the subtle awareness of other bodies nearby. Simulations approximate some of these. They cannot replicate the integration of all of them. They especially cannot replicate the knowledge, below the level of thought, in the body itself, that you are actually there.</p><p>Texture. Reality has a granularity that representations smooth away. A forest is full of small irregularities, imperfections, variations that exceed what any model captures. A photograph of that forest is not the forest. Walk through the real one and you encounter countless details no designer placed there. No algorithm optimized them. No system anticipated them. This undesigned complexity. I think it's part of what makes reality feel real. That's a circular statement and I'm going to let it stand.</p><p>Risk. When you climb an actual mountain, you can actually fall. When you have an actual conversation, you can say something you'll regret for years. When you commit to an actual relationship, you can be hurt in ways you didn't know you were vulnerable to. Simulations appeal precisely because they offer the form of experience without this danger. But the danger gives real experience its weight. Courage in a simulation is rehearsal. Love through a screen is correspondence. I don't want to push this too hard, plenty of meaningful things happen through screens, but there's a floor below which mediation cannot reach.</p><p>Truth. Not truth as correspondence to fact, but truth as encounter with what is actually the case, independent of your wishes, expectations, or designs. Reality does not care what you want it to be. The mountain doesn't care if you like it. The ocean isn't optimized for your attention. And this indifference. I want to say it's what makes the encounter meaningful, but that sounds too neat. It's more like relief. After so much content calibrated to your preferences, encountering something that simply is what it is feels like putting down a weight you didn't know you were carrying.</p><div><hr></div><h2>The paradox of power</h2><p>Here is what I keep returning to: the more powerful simulations become, the more people rediscover the value of what cannot be simulated.</p><p>Counterintuitive. You'd expect improving synthetic experiences to increasingly substitute for real ones. And for many purposes they do. Training in simulators replaces some real-environment training. Virtual meetings replace some physical gatherings. AI-generated content replaces some human-created content.</p><p>But as simulations become more pervasive, the contrast with reality sharpens. Presence and texture and risk and truth, the things that resist simulation, become more noticeable precisely because everything else is being simulated. Against a background of synthetic everything, the real becomes conspicuous.</p><p>Which is why the rebellion, if it comes, will be driven by those who know technology best. The digital natives who grew up inside the simulation feel most acutely what it lacks. 
They don't romanticize the pre-digital world. They never knew it. But they sense that something is missing. I think this sensing is getting louder.</p><p>We can use AI to understand reality faster. Models to test hypotheses more efficiently. Synthetic media to communicate more vividly. But simulation cannot substitute for reality, because reality is what the representations are of. The territory the maps describe. What remains when all the models are turned off.</p><p>I've said this already. I keep saying it because I keep needing to hear it.</p><div><hr></div><h2>Reclaiming the real</h2><p>What would it mean to reclaim reality in a world that keeps trying to simulate it?</p><p>Not rejecting technology. That's neither possible nor desirable, and anyway the tools of simulation are also tools of understanding. The person who uses AI to explore possibilities faster and then tests those possibilities against the actual world is engaging reality more effectively.</p><p>But reclaiming reality means remembering what technology cannot provide. Seeking out experiences that resist optimization. Encounters that refuse curation. Moments that exist only for those present.</p><p>In practical terms, and I'm aware this risks sounding like a wellness blog, but here it is: putting down the phone and looking at what is actually in front of you. Traveling to places rather than viewing images of them. Having conversations without screens between you. Making things with your hands and accepting their imperfections. Being bored sometimes. Sitting with discomfort rather than swiping it away.</p><p>Small things. Increasingly radical things. In a world where every moment can be filled with content and every experience enhanced by technology, choosing not to fill and not to enhance becomes a statement. This, here, now, unoptimized. Enough.</p><p>The great revolutions of the past two centuries have been technological. Steam power, electricity, computing, networks. Everyone assumes the next one follows the same pattern.</p><p>Maybe it won't. Maybe the next revolution is existential: a return to the real. Technology recontextualized, in service of encounter with reality, not in substitution for it. Simulation as a tool for understanding the world, then insistence on meeting the world directly.</p><p>Such a revolution would be quiet. Small choices. Turning away from screens. Seeking presence. Valuing what cannot be scaled or copied or optimized. It would spread not through platforms but through example, through people who seem somehow more alive.</p><p>The synthetic will keep expanding, it always does, and the question was never whether it would but whether we'd remember what it was expanding over.</p><p>But there will be those who remember that the map is not the territory. That the representation is not the real. They will seek out what cannot be simulated, not because it's better, but because something in them, something the algorithms haven't reached yet, still knows the difference.</p><p>Somewhere between the last scroll and the next, there is a door you already know how to open, and the only thing on the other side is what has always been there.</p><div><hr></div><h3><strong>If this was worth your time</strong></h3><p>I publish long-form essays on security, systems, and AI when the thinking is ready. No schedule, no noise. 
Subscribe to receive them directly.</p><p><strong><a href="#/portal/signup">Subscribe</a></strong></p>]]></content:encoded></item><item><title><![CDATA[Principle 10: Simulation Accelerates Knowledge but Cannot Replace Reality]]></title><description><![CDATA[There is a seductive logic to simulation: if a system can be modeled accurately, hypotheses can be tested without the cost and risk of physical experiments. But simulation runs on models, and models encode what is known, not what remains unknown.]]></description><link>https://www.marcobrondani.com/p/principle-10-simulation-accelerates-knowledge-but-cannot-replace-reality</link><guid isPermaLink="false">https://www.marcobrondani.com/p/principle-10-simulation-accelerates-knowledge-but-cannot-replace-reality</guid><pubDate>Thu, 12 Feb 2026 10:48:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0a10a20f-d8b8-471f-b624-024776f7acb2_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p9GW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p9GW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!p9GW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!p9GW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!p9GW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p9GW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:null,&quot;width&quot;:null,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p9GW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!p9GW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!p9GW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!p9GW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb60eba85-4953-44c6-88d0-51a7c0b747d7_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h2>What this principle governs and why it matters</h2><p>Simulation has a pull to it. Model a system well enough and you can test ideas without spending the money, without the months in a lab, without the risk of something going wrong in a way that actually costs you. You can run thousands of variations while someone working with physical materials is still calibrating their first trial. Fail cheaply. Iterate in hours. AI has made all of this faster and more convincing than it has any right to be.</p><p>But here's the thing about models: they encode what their creators understood. That's it. What got overlooked, what nobody thought to measure, what seemed unimportant at the time. None of that makes it in. The model performs beautifully inside its own assumptions. Step outside them and it doesn't warn you. It just quietly gets things wrong.</p><p>I keep coming back to that line about the map and the territory. It's overused, maybe. But the gap between simulation and reality is exactly where this principle lives. AI-driven simulation has become so fast, so impressive, so visually and mathematically persuasive that skipping real-world testing starts to feel not just tempting but rational. The principle says: resist that. Simulation accelerates the path to knowledge. It doesn't arrive there for you.</p><div><hr></div><h2>The drug that worked in silico</h2><p>A pharmaceutical company, let's say mid-size, ambitious, well-funded, builds an AI-driven drug discovery pipeline. Machine learning models trained on molecular interactions, protein structures, clinical outcomes going back decades. The system screens millions of candidate compounds virtually. Which ones bind to the target? Which carry toxicity risks? Which can actually be manufactured?</p><p>The process works. A compound emerges for a condition with few treatment options. The simulations look clean. Predicted binding affinity: strong. Toxicity models: no flags. Pharmacokinetic predictions suggest the drug will behave well once it enters the body. Everything in the virtual world lines up.</p><p>They move fast into early clinical trials. Phase I, healthy volunteers, goes fine. Then Phase II, actual patients. The drug binds to its target, exactly as the model predicted. And then it triggers an immune response nobody saw coming. Subtle. An interaction between the compound and a protein variant common in the patient population but underrepresented (significantly underrepresented) in the training data. The model hadn't learned to worry about it because the data hadn't given it reason to.</p><p>Trial paused. Timeline slips by years. The investment yields a lesson instead of a treatment.</p><p>Were the simulations wrong? Not exactly. Incomplete is closer. 
They modeled what was known, which is all any model can do. Biology, though. Biology has opinions the data hasn't captured yet.</p><p>The company doesn't abandon simulation; that would be throwing away genuine value. The approach had filtered out thousands of candidates that would have failed more expensively in the lab. What changes is the confidence calibration. Simulation becomes a way to prioritize what gets tested. A filter, not a verdict. The models accelerate. They do not replace.</p><div><hr></div><h2>The principle, unpacked</h2><p>AI-driven simulation enables faster hypothesis exploration and model testing, but empirical validation in the physical world remains irreplaceable.</p><p>Call it a clarification rather than a critique. What simulation actually is. What it can do. Understanding this means holding two things at once: the genuine power simulation offers and the limits that power cannot transcend, no matter how sophisticated it becomes.</p><p>The power is real and it's growing.</p><p>AI can now build models from data rather than from first principles alone. Traditional simulation required you to explicitly encode known relationships: the equations, the rules, the logic. AI learns patterns from data that capture relationships nobody has formalized yet. Systems too complex for explicit modeling become, if not fully tractable, at least approachable.</p><p>Speed changes things in ways that go beyond convenience. A simulation running in minutes tests what would take months in a lab. Instead of a handful of hypotheses, you test thousands. Instead of optimizing a few variables, you search parameter spaces that would have been absurd to attempt manually. The acceleration doesn't just change pace. It changes what kinds of questions you can afford to ask.</p><p>And AI has opened domains that were essentially locked. Protein folding. Climate dynamics. Material properties under extreme conditions. These involve complexity that broke traditional computational approaches. Progress on problems that seemed permanently out of reach, though I'm probably oversimplifying what "progress" means in some of these fields.</p><p>So the capabilities are transformative. Genuine acceleration of the knowledge-creating process. But acceleration and replacement are different things, and that distinction is where the principle draws its line.</p><p>Models are built from what is known. They encode relationships that have been observed, measured, theorized well enough to represent mathematically. What hasn't been observed, measured, or theorized? By definition, absent from the model. And reality doesn't confine itself to what we happen to know about it. The physical world contains phenomena we haven't discovered, interactions we haven't characterized, edge cases our data hasn't sampled. When a simulation says a drug is safe, it means safe according to the model's representation of biological systems. When the drug enters an actual body, it encounters the full complexity of biology. Including the parts the model didn't capture. Especially the parts the model didn't capture.</p><p>This gap between model and reality isn't a bug waiting for a fix. It's structural. Models are useful precisely because they simplify. They strip noise to reveal signal, reduce dimensionality, focus attention on what seems important. Every simplification is a choice about what to include. Every omission, a potential failure mode when the model meets reality.</p><p>You might think the answer is more complexity. 
And adding complexity helps, to a point. But then come the new problems: overfitting, computational intractability, brittleness when the distribution shifts (a short sketch at the end of this essay shows the simplest version of that failure). A more complex model is still a model. It represents reality. It is not reality. There's no asymptotic convergence where the gap closes.</p><p>Empirical validation does something simulation structurally cannot. It exposes hypotheses to the full complexity of the physical world, including what the model didn't know to include. The unknown unknowns. Simulation can't surface them by construction. It provides the ground truth against which models get calibrated and corrected. Simulation and validation are complementary. Treat them as substitutes and you're operating inside your own assumptions without a way to check whether they hold.</p><p>Simulation narrows the space of hypotheses worth testing. Filters out candidates that fail on known criteria. Identifies promising directions. This acceleration matters precisely because empirical testing is slow and expensive. But the destination remains validation. The more novel the territory, the more likely something important is missing from the model. And discovering that something during failed deployment costs more, usually far more, than discovering it during validation.</p><p>Different domains, different tolerances. A simulation of traffic flow can be validated gradually; errors are costly but usually survivable. A simulation of a nuclear reactor under extreme conditions? The consequences of model error don't allow for gradual learning. A drug simulation? Biology surprises even sophisticated models with a regularity that should make anyone cautious.</p><p>The principle holds across all of them. Simulation accelerates. Validation remains irreplaceable. The only variable is how much validation the stakes demand.</p><div><hr></div><h2>The question that remains</h2><p>AI-driven simulation is going to be treated as sufficient. Increasingly. The economics push toward it. The timelines push toward it. The sheer impressiveness of what models can now do generates a confidence that, I think, often exceeds what the evidence supports. Organizations will face competitive and financial pressure to move from simulation straight to deployment, skipping the friction of empirical testing.</p><p>Resisting that pressure takes institutional discipline. Maintaining investment in testing even when simulation says it's unnecessary. Building cultures where validation is essential rather than something you do if there's time and budget left. Metrics and incentives that don't punish the slow, expensive work of real-world verification.</p><p>There's something larger here too, something that extends beyond any single organization. Simulation models what we know. Validation exposes us to what we don't. A civilization that leans increasingly on simulation while treating validation as optional is a civilization operating within its own assumptions: confident in its models, unable to see their limits. That probably sounds dramatic.</p><p>The question is whether you can hold both truths at once. Simulation is genuinely powerful. Its power does not extend to replacing reality. Models accelerate knowledge. Knowledge ultimately has to be grounded in the physical world. The map has become remarkably detailed. The territory still holds things the map cannot show.</p><p>Simulation tells you what should happen according to everything you know. Only reality tells you what actually happens. The distance between those two isn't a gap to be closed. It's a relationship to be respected.</p>
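<p><em>For readers who want that failure mode in miniature, here is a minimal sketch. Everything in it is invented for illustration: a toy "true system" with a nonlinearity the training data never samples, and a deliberately simple linear model standing in for far more sophisticated pipelines. It demonstrates distribution shift in the smallest possible setting, nothing more.</em></p><pre><code># A toy demonstration, not any real pipeline: a model fitted inside
# its assumptions looks excellent there and fails silently outside them.
import numpy as np

rng = np.random.default_rng(0)

def reality(x):
    # The "true" system: linear where the training data lives, with a
    # nonlinearity beyond it that no observation ever captured.
    return 2.0 * x + 5.0 * np.maximum(x - 1.0, 0.0) ** 2

# Known regime: every observation comes from x between 0 and 1.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = reality(x_train) + rng.normal(0.0, 0.05, x_train.size)

# A simple linear fit encodes only what the data showed it.
slope, intercept = np.polyfit(x_train, y_train, 1)

def model(x):
    return slope * x + intercept

# Inside the assumptions, the model looks excellent.
x_in = rng.uniform(0.0, 1.0, 1000)
print("mean error inside assumptions:  %.3f"
      % np.mean(np.abs(model(x_in) - reality(x_in))))

# "Deployment": reality presents inputs the data never sampled.
x_out = rng.uniform(2.0, 3.0, 1000)
print("mean error outside assumptions: %.3f"
      % np.mean(np.abs(model(x_out) - reality(x_out))))

# No exception, no warning. The model is simply wrong, and only the
# comparison against ground truth (empirical validation) reveals it.
</code></pre><p><em>The point of the toy is the silence: the model raises no error in the second case. Only comparison against ground truth it was never trained on reveals the gap, and that comparison is exactly what empirical validation is.</em></p>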
<div><hr></div><h3><strong>If this was worth your time</strong></h3><p>I publish long-form essays on security, systems, and AI when the thinking is ready. No schedule, no noise. Subscribe to receive them directly.</p><p><strong><a href="#/portal/signup">Subscribe</a></strong></p>]]></content:encoded></item></channel></rss>