The Narcissistic Institution
The institution looks secure. It sounds compliant. The documentation says everything it should. And something is wrong, in a way that is difficult to name, because the performance is convincing enough that naming the wrongness feels like overreach.
Essay Four of The Valley of False Signals series
There is a version of the uncanny valley that operates not at the level of individual deception but at the level of institutional governance. It is the condition in which an organization produces all the signals of accountability (the compliance reports, the audit certifications, the governance frameworks, the risk registers) without those signals being causally connected to actual accountable behavior. The institution looks secure. It sounds compliant. The documentation says everything documentation should say. And something is wrong, in a way that is difficult to name and more difficult to act on, because the norms governing how institutions are evaluated are the same norms governing how narcissists avoid detection: the performance is convincing enough that naming the wrongness feels like an overreach.
Two cases from earlier work illustrate the condition. I examined the operational details of the Salt Typhoon intrusion in The Compound Vulnerability. What matters here is not what the Chinese state actors did inside those telecommunications networks, but what the carriers had been doing long before the attackers arrived: producing compliance signals (certifications, audit reports, regulatory filings) whose relationship to actual security posture had quietly come apart. The carriers were not negligent by any conventional measure. They had frameworks, programs, and the full apparatus of documented due diligence. Their networks were owned for eighteen months without detection. The gap was not between the carriers and their frameworks. It was between the frameworks and reality.
The federal access control failures that accompanied the Department of Government Efficiency's deployment in early 2025 demonstrated a parallel condition through a different mechanism. Treasury payment systems, OPM personnel databases, Social Security Administration records: these were governed by access control frameworks developed over decades of federal IT security policy. What the episode revealed was that the documented controls were not the controls that existed in practice. Political will, applied with sufficient force and speed, dissolved mechanisms that were supposed to be procedurally resistant to exactly that kind of pressure. Whatever one's view of the entity's mandate, the structural observation is the same: the documented controls described one reality; the operational pressure revealed another.
Both cases demonstrate institutions whose accountability signals and accountability substance had drifted apart, in ways that were invisible to normal oversight but became visible under adversarial conditions. Different threat actors, same vulnerability. The vulnerability is the gap between the framework and the reality it is supposed to represent, not the absence of a framework.
How Institutions Learn to Perform
This is not primarily a story about bad actors or deliberate fraud. Institutional drift toward accountability theater is something close to a structural tendency in large organizations operating under compliance regimes, a tendency that emerges not from malice but from the ordinary operation of incentives, bureaucratic rationality, and the social norms that govern professional life in hierarchical organizations. Deliberate fraud is an exception. The drift is the norm.
The compliance regime is, in its intent, a mechanism for making accountability legible to external observers. The board cannot directly observe every security control. The regulator cannot directly audit every system. The compliance framework (the certifications, audits, reports, and standards) is a translation layer: it converts the internal reality of organizational security into signals that external observers can read and evaluate.
This translation function is necessary and, when it works, valuable. The problem is that translation layers create their own incentives, and those incentives do not always align with the thing being translated.
Once an organization has learned that producing certain outputs (a SOC 2 report, an ISO 27001 certification, a NIST CSF assessment) satisfies the external observer's demand for accountability signals, the optimization pressure shifts. The question stops being "are we secure?" and starts being "do we satisfy the framework?" These are not the same question, and organizations that conflate them, under time pressure, resource pressure, and the ordinary human tendency to optimize for what gets measured, begin to drift.
The drift is an accumulation of small decisions, each individually defensible. The security control that exists in the policy document but is too operationally expensive to enforce. The audit finding that is logged as a remediation item and rolled forward, quarter after quarter, because addressing it would require re-architecting a system that production depends on. The risk register entry that accurately describes a critical exposure but is scored in a way that keeps it below the threshold requiring board attention. The penetration test scoped to avoid the systems most likely to produce embarrassing findings.
Each individual decision is defensible. The policy document genuinely represents the intended state. The remediation item is genuinely intended to be addressed. The risk score reflects a genuine judgment. But the accumulation produces an institution whose documented security posture and actual security posture have quietly come apart, a signal/source split at the organizational level, invisible in any individual document but structurally present in the gap between what the institution claims and what it is.
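To make the arithmetic of that drift concrete, here is a deliberately toy sketch of a likelihood-times-impact register with a board-reporting threshold. Everything in it is hypothetical, the field names, the scores, the threshold value; the point is only that each filed score can reflect a defensible judgment while the register as a whole keeps the exposure out of board view.

```python
# Illustrative sketch only: a toy risk register in which each scoring choice is
# defensible in isolation, but the aggregate keeps a critical exposure out of
# board view. Field names, scores, and the threshold are hypothetical.

BOARD_THRESHOLD = 15  # hypothetical: scores >= 15 appear on the board dashboard

def risk_score(likelihood: int, impact: int) -> int:
    """Conventional 5x5 scoring: likelihood (1-5) times impact (1-5)."""
    return likelihood * impact

register = [
    # Each entry records the engineer's raw judgment and the score as filed.
    {"risk": "Legacy auth on payment gateway",
     "raw": risk_score(likelihood=4, impact=5),      # 20: would be board-visible
     "filed": risk_score(likelihood=3, impact=4)},   # 12: "compensating controls noted"
    {"risk": "Unpatched edge routers",
     "raw": risk_score(likelihood=4, impact=4),      # 16: would be board-visible
     "filed": risk_score(likelihood=3, impact=4)},   # 12: "vendor patch expected next quarter"
]

for entry in register:
    escalated = entry["filed"] >= BOARD_THRESHOLD
    print(f'{entry["risk"]}: raw={entry["raw"]}, filed={entry["filed"]}, '
          f'board-visible={escalated}')

# Every filed score reflects a defensible judgment call. The register is accurate
# at the level of each row and misleading at the level of what the board ever
# sees: the documented posture and the raw judgment have quietly diverged.
```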
I have been part of this accumulation. I have signed risk acceptances that I knew were optimistic, scoped penetration tests to avoid systems I suspected were vulnerable, and presented dashboards that were accurate at the level of data but misleading at the level of implication. Not from malice. From the same structural pressures I am describing. The drift is easier to see from the outside than to resist from the inside, and the professional cost of resisting it is real.
This is the same mechanism that Essay One mapped at the individual level, operating at institutional scale. The narcissist produces empathy signals without affective resonance. The institution produces accountability signals without accountability substance. In both cases, the performance is convincing precisely because it is built from genuine components: real certifications, real audit firms, real compliance processes, assembled in a way that satisfies the observer's coherence check while the underlying reality has departed. The compliance framework tells you what signals to produce. It cannot tell you whether producing those signals corresponds to genuine security. That correspondence requires judgment, adversarial testing, and the organizational culture to act on uncomfortable findings. It requires precisely the capacities that the compliance-optimization dynamic tends to erode.
This is the institution in the uncanny valley. Almost accountable. The signals are there. The source has quietly left.
The Suppression Mechanism at Institutional Scale
In personal social engineering, the suppression mechanism is interpersonal: professional courtesy, hierarchy, the discomfort of naming unverifiable alarm. In the institutional context, the suppression mechanism operates at a larger scale, but the structure is identical.
The CISO who notices the drift, who sees that the risk register is being managed for optics rather than exposure, that the audit findings are being rolled forward rather than remediated, that the security posture claims being made to the board do not correspond to the actual attack surface, faces a version of the same social pressure that confronts anyone who notices that signal and source have separated.
What they know is difficult to articulate precisely. They have a feeling, compounded of professional experience, pattern recognition, and the particular unease of someone who understands both what the documentation says and what the systems actually do, that the accountability is not real. But the documentation is real. The certifications are genuine. The audit firm is reputable. The risk register was signed off by the right people. Every articulable piece of evidence points toward compliance. Only the inarticulate alarm points the other way.
And the organizational context generates powerful pressure to suppress that alarm. The board wants assurance, not uncertainty. The CEO wants to present a clean posture to investors and regulators. The audit committee wants findings to be closed, not perpetually open. The external auditor, whose continued engagement depends on maintaining a workable relationship with management, is not structurally incentivized to produce findings that the organization is not prepared to address.
The CISO who names the gap, who tells the board that the certified posture does not correspond to the actual risk, is making a claim that contradicts the apparatus of institutional assurance. They are being difficult. They are introducing uncertainty into a presentation designed to communicate confidence. They are, in the language of organizational management, not being a team player.
This is the institutional suppression mechanism. It operates through the same forces that suppress individual alarm: the social cost of naming wrongness that cannot be fully proven, the professional cost of contradicting a consensus that convenient documentation supports, the hierarchical pressure to defer to the process rather than the judgment.
The difference from the individual case is scale. When the CISO's alarm is suppressed, what is lost is not one person's judgment. It is the organization's only instrument for detecting the gap between its claimed and actual security posture. The suppression of the institutional alarm is the suppression of institutional reality-testing.
AI Governance as Contemporary Case Study
The institutional uncanny valley is being constructed in real time in the AI governance domain, and the construction is happening fast enough to watch.
Since 2016, there has been an extensive global production of AI governance artifacts: principles documents, ethical frameworks, voluntary commitments, model cards, responsible AI programs, algorithmic impact assessments. The OECD AI Principles. The EU AI Act. The US Executive Orders on AI. The major technology companies' responsible AI frameworks.
The lifecycle of these frameworks has been instructive. In 2019, Google formed its Advanced Technology External Advisory Council, an eight-member AI ethics board meant to guide the responsible development of AI. It lasted nine days before being dissolved. The members never met. In 2023, Microsoft laid off its entire Ethics and Society team, the group responsible for translating the company's stated AI principles into product design, during the same period it was investing over eleven billion dollars in OpenAI and racing to integrate generative AI across its product suite.
These are not aberrations. They are the expected output of organizations whose competitive incentives and governance commitments point in opposite directions. A former Microsoft team member described the gap to The Verge: people would look at the principles coming from the Office of Responsible AI and not know how they applied. The Ethics and Society team existed to close that gap. It was eliminated precisely when the gap was widest. The production of AI governance signals (principles, commitments, frameworks) is cheap relative to deployment. The production of AI governance substance, the actual constraint of deployment in response to identified risks, is expensive, because it means accepting competitive disadvantage. When the signal and the substance diverge, institutions optimize for the signal.
The European AI Act is the most serious attempt to create binding governance with actual enforcement consequences. Its implementation has been revealing. The Act's GPAI obligations entered into force in August 2025, but the Commission's enforcement powers are delayed until August 2026: a year in which providers must comply but face no penalties for non-compliance. The rules for high-risk AI systems embedded in regulated products have an extended transition period until August 2027. Open-source models meeting certain criteria receive exemptions from several obligations. The Commission itself acknowledged that an informal enforcement grace period may be needed beyond the formal dates. The signal says: AI is now regulated. The infrastructure of enforcement says: not yet. And the deployment continues at pace.
I am not sure whether to call this cynicism or inevitability, and I think the uncertainty matters. Governance frameworks produced within institutional environments that have a primary interest in the activity being governed will tend, under competitive pressure, to drift toward the production of accountability signals rather than substance. The incentive structure produces the same drift that compliance optimization produces in enterprise security. The framework becomes the performance of governance, not its instrument.
The Board as Structural Accomplice
The board of directors occupies a particular position in this dynamic that deserves direct examination, because it is the board that is supposed to close the gap between institutional signals and institutional reality.
Board-level cybersecurity oversight has expanded dramatically in the past decade. SEC rules require disclosure of material cybersecurity incidents and of board expertise in cybersecurity risk. Audit committees now routinely receive security briefings. Many boards have added CISO presentations to their regular agenda. The signal says: boards are taking cybersecurity seriously.
The substance is more complicated. A board receiving a security briefing from a CISO is receiving a presentation designed by the very function it is supposed to oversee. The information is filtered through the organizational hierarchy that has its own incentives to present a reassuring picture. Board members, even those with cybersecurity backgrounds, are working from information that the management layer has curated. They are reading the documentation that the institution has produced about itself.
The structural problem is not that boards are negligent. It is that effective oversight of institutional security posture requires precisely the kind of adversarial, independent, reality-testing capacity that board governance is not structurally designed to provide. Boards receive information; they do not generate it. They evaluate representations; they do not independently verify them. They assess the quality of management's judgment; they cannot, in any practical sense, substitute their own.
The three suppression norms that Essay Two identified operate here with particular force. The hierarchy norm: the CISO presents upward to a board that has authority but not expertise to challenge technical claims, and the board defers to management's framing because the alternative requires independent investigation that governance structures do not support. The efficiency norm: board time is scarce, agendas compressed, and the presentation format itself favors assurance over uncertainty; a clean risk dashboard is a thirty-second read, while a qualified assessment of actual posture requires an uncomfortable conversation that may not resolve within the allocated time. The social grace norm: naming the gap between documented posture and actual posture, in a boardroom setting, is an implicit accusation that management has been misrepresenting its own security. No CISO who wants to maintain a functional relationship with the C-suite will make that claim without extraordinary evidence, and the gap, by its nature, produces inarticulate unease rather than extraordinary evidence.
The result is a board oversight function that operates primarily at the signal level, evaluating the quality and coherence of the accountability documentation, rather than at the source level, evaluating the actual correspondence between that documentation and organizational reality. The board becomes the most senior level of the suppression mechanism, not because its members are captured or dishonest, but because the institutional architecture of oversight does not give them the instruments to do otherwise.
This is the closing of the loop, and it is worth tracing carefully. The CISO who might name the gap is suppressed by organizational culture. The internal audit function that might surface it is constrained by scope limitations and client relationships. The external auditor is not structurally incentivized to produce findings the organization is not prepared to address. The regulator evaluates the signal because the signal is what has been submitted. And the board, sitting at the top of this chain, receives the output of each prior suppression and processes it as assurance.
The alarm fires at each level, in each function, in each mind that encounters the gap between what the documentation says and what the systems do. And at each level, the institutional suppression mechanism engages. Not through conspiracy. Through structure.
The Distinction That Changes Everything
There is a distinction that the institutional uncanny valley makes available, and it is the most important practical implication of this entire analysis. It is also, I think, the point at which the argument stops being diagnostic and becomes actionable.
The distinction is between frameworks that are adversarially designed and frameworks that are not.
A compliance framework that is not adversarially designed asks, implicitly: does this institution produce the signals of accountability? It tests documentation, process, and the coherence of stated practice. It takes the institution's representation of itself as the primary data source. It evaluates the signal.
A framework that is adversarially designed asks something different: does this institution actually do what it claims to do when the verification is inconvenient, when the pressure is high, when doing what it claims to do has real operational cost? It assumes that the gap between claimed and actual posture is a predictable feature of institutional behavior, not an exceptional failure.
Adversarial design does not require bad faith toward the institution being evaluated. It requires honest acknowledgment of the structural tendency toward accountability theater, and the deployment of verification approaches calibrated to that tendency rather than to the assumption of good faith compliance. The objection to adversarial design is usually framed as an objection to distrust, as if designing verification for the gap implies an accusation that the institution is dishonest. It does not. It implies that the institution is subject to the same structural pressures that produce the gap in every large organization, and that verification should be designed for the world as it is rather than the world the documentation describes.
Red team exercises are the clearest existing example at the technical level: rather than asking whether the security controls exist and are documented, they ask whether the security controls work when an actual adversary is trying to defeat them. The difference between what they find and what conventional compliance audits find is frequently stark. Organizations that are compliant by every conventional measure are penetrated by red teams in hours.
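The distinction can be put almost mechanically. The sketch below is an illustrative toy rather than a real tool: the control names and probe results are invented, and an actual red team exercises live systems, not a dictionary. It exists only to show that "is the control documented" and "does the control hold" are different questions answered from different data.

```python
# Minimal sketch of the distinction between a documentation check and an
# operational check. Control names, policy structure, and probe results are
# hypothetical; a real red-team exercise tests live systems, not a dict.

policy = {
    "mfa_required": True,          # what the documented control says
    "admin_ports_blocked": True,
}

def documentation_check(policy: dict) -> bool:
    """Compliance-style question: is every control documented as in place?"""
    return all(policy.values())

def operational_check(probe_results: dict) -> bool:
    """Adversarial-style question: did every control hold when exercised?"""
    return all(probe_results.values())

# Hypothetical results from attempting to defeat each control in practice.
probe_results = {
    "mfa_required": False,          # e.g. a legacy VPN endpoint accepts password-only logins
    "admin_ports_blocked": True,
}

print("documented posture:", documentation_check(policy))        # True
print("operational posture:", operational_check(probe_results))  # False

# The gap between the two booleans is the gap the essay describes:
# the signal is coherent, the source is not.
```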
At the board level, adversarial design would mean that at least some of the information the board receives about cybersecurity posture is generated independently of the management layer: findings produced by a function that reports to the board directly, with scope and budget the management layer does not control. Internal audit is supposed to serve this function, and in some organizations it does. But in most, the independence is formal rather than operational: internal audit's scope is negotiated with management, its resources are allocated through the management budget process, and its findings are discussed with management before reaching the board. Genuine adversarial independence would require that the board's information about actual posture be produced by a function whose incentives are structurally aligned with finding the gap, not with managing it.
At the regulatory level, adversarial design would mean moving from documentation review to operational testing. Rather than evaluating whether an institution has submitted the correct filings, the regulator would test whether actual operations correspond to what the filings describe, under conditions that include surprise and scenarios the institution has not been briefed on. Financial regulators have partially implemented this through stress testing. The same principle applied to cybersecurity and AI governance would mean regulators who test actual resilience rather than documented resilience. This is more expensive than documentation review. It is also more useful by exactly the margin that separates the signal from the source.
At the AI governance level, adversarial design would mean evaluating not whether the institution has produced the required governance artifacts but whether those artifacts have produced any observable constraint on deployment decisions. Has any deployment been delayed or cancelled as a result of the governance framework? Has any revenue opportunity been declined because the risk assessment indicated unacceptable harm? If the answer to these questions is consistently no, the governance framework is producing signals without substance. The test of AI governance is the cost it has imposed on the institution that maintains it, not the quality of its artifacts.
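One way to make that test concrete, purely as an illustration, is to treat it as a query over a decision log. The log structure and entries below are hypothetical, invented for the sketch; the only point is that the question being asked is about imposed cost, not about the quality of the artifacts.

```python
# Illustrative only: a hypothetical deployment decision log and a "substance test"
# that asks whether the governance framework has ever imposed an observable cost.

decision_log = [
    {"deployment": "chat-assistant-v3",  "risk_flagged": True,
     "delayed": False, "cancelled": False, "revenue_declined": False},
    {"deployment": "targeting-model-v7", "risk_flagged": True,
     "delayed": False, "cancelled": False, "revenue_declined": False},
]

def governance_has_substance(log: list[dict]) -> bool:
    """True only if the framework has ever constrained a deployment decision."""
    return any(d["delayed"] or d["cancelled"] or d["revenue_declined"] for d in log)

print("constraint observed:", governance_has_substance(decision_log))  # False

# A framework that flags risk but never delays, cancels, or declines anything
# is producing signals without substance, whatever the quality of its artifacts.
```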
None of these are easy to implement. All of them create friction, expense, and organizational discomfort. The question is whether that friction is more expensive than what the institutional uncanny valley costs when the adversary, or the crisis, arrives. The carriers that had frameworks and were owned for eighteen months provide one answer. The federal systems that had documented controls and lost them under political pressure provide another.
The Hardest Admission
There is a version of this argument that is politically comfortable: compliance frameworks are imperfect, and we should improve them. That version is not wrong. But it is not the argument I am making.
The argument I am making is harder. The institutional tendency toward accountability theater is a structural feature of how large organizations under regulatory pressure respond to the incentives that compliance creates, not a correctable defect in the compliance architecture. Better frameworks will produce better theater. More rigorous standards will produce more rigorous performance of compliance with those standards. The gap between signal and source will move, will narrow at the edges, will be more expensive to maintain; but it will persist, because the forces that generate it are structural, not accidental.
This does not mean governance frameworks are useless. It means that their function needs to be honestly understood. They raise the floor. They make casual non-compliance costly and visible. They create accountability for the largest and most obvious gaps. They produce, at minimum, a record against which failures can be evaluated in retrospect. These are real contributions.
What they cannot do, by design, is close the gap between institutional performance of accountability and institutional reality of it. That gap is closed only by the things that are hardest to systematize: genuine adversarial testing, organizational cultures that make naming the gap safe rather than costly, leadership that treats uncomfortable findings as intelligence rather than threat.
And, in the turn that brings this essay back to the alarm we have been following since the first essay in this series, it is closed by the people who notice the wrongness, who feel the incoherence between what the documentation says and what the systems do, and who have both the personal capacity and the organizational permission to say, plainly, what they see. Not the frameworks. Not the auditors. The people who sit in the room where the gap is visible and choose to name it, knowing the professional cost.
The institutional uncanny valley is a condition to be managed continuously, not a problem to be solved once: through the design of verification that assumes the gap will be present, through the protection of the people who detect it, and through the honest acknowledgment that no framework, however rigorous, eliminates the structural incentive to produce signals without substance. The compliance framework raises the floor. It does not close the gap. And the gap is where the adversary lives, whether that adversary is a nation-state actor with eighteen months of patience, a political force with operational speed, or simply the accumulated weight of institutional self-deception.
Those people, the ones who carry the alarm despite the institutional pressure to suppress it, are the subject of Essay Five.
Next: Essay Five — The Unsuppressed. On the structural question of what happens when the alarm cannot be overridden, and what it would mean to design institutions that protect the alarm rather than silence it.
Sources
Case Studies from Prior Series
Brondani, M. The Compound Vulnerability (essay series). Published at marcobrondani.com. (Salt Typhoon intrusion analysis; federal access control failures accompanying the Department of Government Efficiency deployment in early 2025.)
Salt Typhoon
The Salt Typhoon intrusion into U.S. telecommunications networks was documented across multiple government and industry sources in late 2024 and early 2025, including advisories from CISA and the FBI. The carriers maintained compliance frameworks and regulatory filings throughout the period of compromise, which lasted approximately eighteen months before detection. The essay references these facts as analyzed in the author's earlier Compound Vulnerability series.
Federal Access Controls (DOGE)
The Department of Government Efficiency's access to Treasury payment systems, OPM personnel databases, and Social Security Administration records in early 2025 was documented by multiple news organizations and in congressional testimony. The essay treats these events as structural case studies rather than political commentary, focusing on the gap between documented access controls and their operational resilience under political pressure.
AI Governance
Google. (2019). "An external advisory council to help advance the responsible development of AI." Google Blog, March 26, 2019. The Advanced Technology External Advisory Council (ATEAC) was dissolved on April 4, 2019, nine days after its announcement. Reported by Vox, MIT Technology Review, VentureBeat, and others.
Newton, C. (2023). "Microsoft just laid off one of its responsible AI teams." Platformer, March 13, 2023. Microsoft's Ethics and Society team, once approximately thirty employees, was eliminated during layoffs affecting 10,000 employees, during the same period the company was investing over $11 billion in OpenAI.
Schiffer, Z. (2023). "Microsoft lays off entire ethics and society team within its AI organization." The Verge, March 13, 2023. Former employee quote: "People would look at the principles coming out of the Office of Responsible AI and say, 'I don't know how this applies.'"
European AI Act
European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 (the AI Act). GPAI obligations entered force August 2025; Commission enforcement powers delayed until August 2026; high-risk AI system rules for regulated products with extended transition period until August 2027. Implementation timeline and enforcement grace period details from European Commission official communications.
Compliance and Governance Frameworks Referenced
SOC 2 (System and Organization Controls 2). Developed by the American Institute of Certified Public Accountants (AICPA).
ISO/IEC 27001. International standard for information security management systems. International Organization for Standardization.
NIST Cybersecurity Framework (CSF). National Institute of Standards and Technology, U.S. Department of Commerce.
OECD. (2019). Recommendation of the Council on Artificial Intelligence (OECD AI Principles). Organisation for Economic Co-operation and Development.
SEC Cybersecurity Disclosure Rules
U.S. Securities and Exchange Commission. (2023). "Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure." Final rule, effective December 2023. Requires disclosure of material cybersecurity incidents and of board expertise in cybersecurity risk oversight.
Cross-Series References
Brondani, M. Essay One: "The Alarm" and Essay Two: "Cold Empathy at Scale." The Valley of False Signals. Published at marcobrondani.com. (Suppression mechanism, three norms of hierarchy/efficiency/social grace, signal/source split formulation.)