The Unsuppressed
Essay Five of The Valley of False Signals series
There is a person in your organization, possibly several, who has been telling you something is wrong.
Not loudly. Not with a polished deck and a clear remediation roadmap. In the register that organizations find most difficult to process: the persistent, imprecise, and professionally inconvenient insistence that something in the system does not cohere. The analyst who keeps escalating a vendor concern that everyone else considers resolved. The auditor who writes the same finding three engagements in a row because the remediation never quite closes. The engineer who flags an architectural decision as a future exposure and is told, repeatedly, that the business has accepted the risk. The CISO who frames the board presentation in terms that are accurate rather than reassuring and finds, over time, that the invitations become less frequent.
These people are not difficult. They are not lacking in social intelligence or professional judgment. They are, in many cases, the most technically capable people in their organizations. What they share is a specific resistance: a failure, or a refusal, to perform the social operation that the organization's culture requires, the suppression of alarm that cannot be fully articulated.
This essay is about them. What they share, structurally. Why organizations systematically marginalize them. And what it would mean to build institutional architecture that protects their function rather than eroding it.
The answer to that last question requires a detour through developmental neuroscience and the philosophy of institutional design that may feel, initially, distant from the cybersecurity governance problems this series has been examining. The distance is not as great as it appears.
The Research That Changes the Frame
In 2018, a team of researchers at Peking University published a study with a finding that has received far less attention than it deserves. They were examining whether the uncanny valley effect Mori described in adult responses to humanoid robots was present in children, comparing typically developing children with children with autism spectrum disorder, varying the realism of facial appearance and inducing perceptual mismatch in ways shown to trigger the uncanny valley response in adults.
Their control group, typically developing children, showed the expected effect. As facial realism increased and approached but did not reach full human likeness, preferences declined. The alarm fired. The uncanny valley was present and robust.
The children with autism spectrum disorder showed no such effect. Their preference curve did not display the characteristic valley. None of the features that produced strong negative responses in typically developing children triggered the same alarm. The uncanny valley, for this population, was absent.
This finding has been replicated in multiple subsequent studies. If the uncanny valley effect is, as the first essay in this series argued, a trust detection mechanism rather than an aesthetic response, then its absence in ASD represents a structurally different relationship to the detection mechanism itself. And that structural difference has consequences that extend well beyond robot therapy.
What the Absence Means
The uncanny valley alarm, as established in Essay One, fires when the brain detects incongruence between what an entity signals and what it is. The suppression of that alarm is a separate operation: the professional norm, the hierarchical deference, the discomfort of accusing someone of deception without articulable proof. Detection is perceptual. Suppression is social.
The critical question: in the ASD population, which operation is different? The research does not resolve this cleanly, and intellectual honesty requires saying so. What it does establish, across multiple studies and in both child and adult ASD populations, is that the behavioral output is different: the avoidance behavior, the expressed preference decline, and the reported eeriness are attenuated or absent. The proposed mechanisms vary (differences in how prior social experience calibrates the detection model, differences in social motivation, differences in how social norming converts alarm into suppression) and the research has not settled which account is most accurate.
What matters for our purposes is the structural implication that all three mechanisms share: the relationship between the detection system and the social suppression operation is different. Whether the alarm calibrates differently, or fires differently, or reaches expression differently, the output is a detection profile that is less shaped by the social forces that, in the typical case, convert alarm into suppression and suppression into compliance.
The Inversion
Here is where the argument takes a turn that requires careful handling.
The absence of the uncanny valley effect in ASD has been framed, in the research literature, primarily as a deficit: something is missing from the social alarm system. This framing makes sense within the therapeutic context.
But the framing inverts when the context is adversarial. In an environment where sophisticated actors are systematically producing signals of authenticity disconnected from their actual intentions, the alarm that typically developing individuals possess is an asset with a critical vulnerability: it is susceptible to the suppression mechanism. The alarm is only as reliable as the resistance to its suppression, and people who do not perform the suppression are difficult. Organizations prefer people who perform it.
This preference is the source of institutional vulnerability. Not because it is irrational (it is rational for the ninety-nine percent of interactions that are not adversarial) but because in the specific subset that are adversarial, the suppression preference produces exactly the exposure that sophisticated attackers and institutional drift rely on.
Resistance to the suppression mechanism is not strongly selected for in typical professional development. The social costs of unsuppressed alarm expression (the professional friction, the accusation of paranoia, the disruption of cooperative relationships) are real, and the social environment reliably punishes it. This is not a design flaw. It is a design feature whose costs have changed.
The question the ASD research raises, indirectly, is whether the suppression operation is separable from the detection operation in ways that could be structurally exploited for defensive purposes. Not "can we make people more like autistic individuals," which is both clinically wrong and ethically untenable. But: what can the existence of a different detection-suppression profile teach us about how to design institutional architectures that protect detection outputs from social override?
This is the inversion. The research that was conducted to understand a population that lacks a typical alarm response turns out to illuminate something about the alarm response itself, specifically about the social operation that converts alarm into silence, and about what happens when that operation is attenuated or differently regulated.
A Necessary Pause
Before proceeding, something needs to be stated directly and without qualification.
Autism spectrum disorder is not a superpower, not a security asset, and the people who have it are not instruments for organizational detection architectures. The research findings summarized above do not establish that autistic individuals are better at security; they establish something much more specific and limited: that a particular behavioral output of the uncanny valley alarm is attenuated in this population, and that this attenuation involves the relationship between detection and social suppression.
The lived experience of autism includes challenges in social navigation, sensory processing, executive function, and communication that are real and often severe. The absence of the uncanny valley effect is not, for the people who live with ASD, primarily experienced as an advantage. It exists within a broader profile that the neurotypical world has not been designed to accommodate.
What the research offers is a structural insight, not a personnel recommendation. The suppression mechanism is a social operation applied to detection outputs, not an inevitable feature of the detection process itself, and it can, in principle, be differently regulated. The detour through ASD research is a lens, not a template. It shows us something about the structure of the problem that neurotypical cognition, precisely because it takes the suppression operation for granted, cannot easily see from the inside.
The People Who Do Not Suppress
Return to the person at the beginning of this essay. The analyst who keeps escalating. The auditor who writes the same finding three engagements in a row. The engineer who will not accept "business has accepted the risk" as a final answer.
These people are not, generally, autistic; or at least, that is not what defines their functional profile in the organizational context. What defines it is a particular relationship to the organizational suppression pressure that most professionals navigate as automatic. They feel the pressure. They understand it. In many cases, they have paid professional costs for not complying with it. And they do not comply anyway.
The reasons are various. Some have an unusually high tolerance for professional friction. Some have a professional identity built around a specific obligation: the auditor who understands their role as a fiduciary function compromised by social deference, the security researcher who has internalized a specific ethical commitment to disclosure. Some have experienced, personally and concretely, the consequences of suppression, and the memory of it makes the social cost of speaking feel small by comparison. And some have a cognitive style that processes the social suppression pressure differently, that perceives the organizational norm of performing the override as distinct from the professional obligation to report accurately, and declines to conflate the two. This cognitive style exists on a spectrum, is distributed across the population, and is not reducible to any single neurological profile. But it shares, structurally, the feature that the ASD research illuminates: the suppression operation is not automatic. It is perceived as a separate choice, subject to a separate judgment. And the judgment, in these people, consistently comes back the same way: the alarm is more important than the comfort. The suppression is refused.
These are the people organizations most consistently fail to protect, and most consistently fail to use.
In 2022, Peiter "Mudge" Zatko, one of the most respected figures in the cybersecurity community, a former member of the L0pht hacking collective who had testified before Congress on network security in 1998, filed a whistleblower complaint against Twitter, where he had served as head of security. Zatko alleged that Twitter's executive team had instructed him to present cherry-picked data to the board to create a false impression of progress on security issues, had a consulting firm's report scrubbed to minimize its findings, and had the CEO discourage him from being fully transparent with the board about the company's actual security posture. He documented servers running outdated software lacking basic security features, thousands of employees with broad and poorly monitored access to core systems, and approximately one security incident per week serious enough to require government reporting.
The company's response was to characterize Zatko as having been fired for "ineffective leadership and poor performance," a classic instance of credibility erosion. His alarm, which had been raised internally and documented, was reframed as evidence of his inadequacy rather than evidence of the gap he was describing.
The Zatko case matters because it demonstrates every mechanism of institutional suppression operating in sequence against a single person. But Zatko had resources most alarm-carriers do not: a national reputation, legal representation from a nonprofit whistleblower firm, and a public moment (the concurrent Musk acquisition dispute) that gave his allegations an audience. Most people who carry the alarm have none of these. They have only their observation and the organizational culture that surrounds it.
How Organizations Suppress the Unsuppressed
The mechanisms are numerous, varied, and rarely explicit. They operate through ordinary professional culture rather than through direct censorship.
Credibility erosion is the most common: the gradual reframing of persistent alarm as evidence of poor judgment rather than accurate detection. The professional consequence is not dismissal; it is the progressive withdrawal of institutional trust, which operates through smaller signals: the meeting invitation that stops arriving, the project that goes to someone else, the promotion that is indefinitely deferred. Scope limitation is subtler: moving the unsuppressed person from functions with broad organizational visibility to functions with narrow technical scope, where their observations become invisible to the people who might act on them.
The most sophisticated mechanism is process capture, which converts the unsuppressed person's output into the compliance apparatus itself. Their findings are acknowledged, logged, assigned to remediation owners, tracked in the risk register, and reviewed in the quarterly governance meeting. Every alarm is formally received. None of it changes the posture. The organizational machinery for receiving the alarm and the organizational machinery for acting on it are decoupled. The finding goes in the register. The register goes to the committee. The committee notes the finding. The finding ages.
And perhaps the most damaging is social isolation: the informal cost of being the person who names the wrongness. The difficult colleague. The one who makes meetings tense. The one who, when they walk into the room, produces a subtle shift in the atmosphere because everyone knows they may say something uncomfortable. The social isolation is rarely deliberate. It is the aggregate output of individual decisions to prefer comfortable company, which is to say, it is the suppression mechanism operating at the social level.
Red Teams and Whistleblower Systems
Before asking what institutional design could protect the alarm, it is worth examining the two mechanisms that have explicitly tried to do so: the red team and the whistleblower system.
The red team is, at its best, a structural attempt to create an organizational function whose purpose is to not suppress the alarm. Its mandate is adversarial: to find the gaps between what the institution claims and what it is, to produce findings that are uncomfortable rather than reassuring. Its value depends on its independence from the organizational culture that would otherwise convert its findings into the compliance register.
When red teams work, they work because they externalize the permission to alarm. They do not rely on individual resistance to suppression pressure; they create an institutional role that makes suppression impermissible, or at least much more costly. The red team analyst who finds a critical exposure has a mandate, a role, an institutional permission structure that converts the alarm into a deliverable rather than a career risk. The gap between what a red team finds and what the compliance apparatus documents is a direct measure of how much the suppression mechanism has cost the organization.
But red teams have their own failure modes, and understanding them matters. Their findings get converted into the compliance register. Their scope is limited by the same management that controls the systems being tested. Their independence is conditional on the continued support of the hierarchy they are supposed to challenge. In organizations where the institutional uncanny valley has deepened, where the gap between claimed and actual posture is large and acknowledged at the level of senior leadership, the red team's findings are received as a threat to management rather than intelligence for it, and the team's scope and independence are progressively curtailed. The red team is a structural workaround for the suppression problem, and a valuable one. It is not a solution, because it is still embedded in the organizational culture that generates the suppression pressure.
Whistleblower systems, the formal mechanism for protecting alarm against suppression, perform similarly. Academic research consistently finds that formal protection mechanisms fail to prevent the informal costs of whistleblowing: the credibility erosion, the scope limitation, the social isolation. Legal protection from termination does not protect against being moved to a role with no visibility. Anonymous reporting channels do not protect against the informal attribution of reports to the small number of people with access to the relevant information. Regulatory protection for safety reporting does not prevent the organization from making the whistleblower's professional life sufficiently unpleasant that resignation becomes the rational choice.
The failure mode is the same as the compliance framework failure mode: they produce the signal of protection without its substance. The gap between the documented protection and the experienced protection is the institutional uncanny valley of whistleblower systems.
Toward Adversarially Resistant Detection Architecture
What would institutional design look like if it were built to protect detection from suppression rather than to produce documentation of assurance?
This is not a question the security governance literature has directly addressed, and the reason is the same reason that security awareness training has not addressed the suppression layer: the field has been focused on the detection capacity, not on the social architecture that determines whether detection outputs reach action. Several principles suggest themselves, drawn from the analysis above and from the places where suppression-resistant detection has been attempted and partially achieved.
The most powerful protection is structural independence of alarm functions: genuine separation between the function that generates alarm and the function that manages the operations the alarm is about. The independence must be real, not just documented; reporting lines, budget authority, and scope definition that cannot be controlled by the management layer being evaluated. Closely related is output that bypasses hierarchy: architecture that routes alarm directly to board-level or external oversight without requiring management endorsement or framing. The suppression mechanism operates primarily through hierarchy; findings that pass through management layers get filtered before they reach decision-makers. Reducing the number of points at which suppression can be applied is structurally uncomfortable for management, which is precisely why it is rarely implemented in its strong form, and precisely why the strong form is where the protection lives.
Formal protection for alarm-carriers must have teeth. Whistleblower protections that address only formal retaliation leave the informal suppression machinery intact. Protection that genuinely prevents the informal costs (the scope narrowing, the credibility erosion, the social isolation) requires monitoring and enforcement mechanisms at least as sophisticated as the informal machinery they are trying to counteract. This is expensive, intrusive, and organizationally uncomfortable. It is also the difference between a protection signal and actual protection.
And the deepest change is cultural rather than structural: the normalization of inarticulate alarm. The professional norm that requires articulable justification before alarm can be expressed is the engine of the suppression mechanism. Changing it requires organizations to explicitly value the expression of inarticulate unease, to create contexts in which "something seems off and I can't say exactly what" is a legitimate input rather than evidence of poor judgment. This is the hardest change because it runs against how professional cultures define rigor and rationality. It requires accepting that the alarm system is sometimes more accurate than the documentation, and that acting on the alarm before the documentation catches up is not paranoia but intelligence.
What the Cassandra Problem Teaches
The Cassandra myth is old enough that it has become a cliché, but its precise structure deserves attention.
Cassandra was given the gift of true prophecy and the curse that no one would believe her. The standard reading emphasizes the social reception of accurate alarm. That reading is correct and important. But there is another element that gets less attention: the cost to Cassandra herself. The experience of being the person who sees accurately and is systematically disbelieved, who watches the consequences of suppressed alarm unfold in slow motion, produces its own pathologies: the escalating alarm that loses credibility by virtue of its persistence, the psychological toll of sustained professional isolation, the progressive narrowing of the space from which accurate signals can be transmitted.
The institutional suppression machinery does not just silence individual alarms. It degrades the people who carry them. The analyst who has been told repeatedly that their concern is unfounded eventually faces a choice: absorb the professional cost of continued escalation, or absorb the psychological cost of self-suppression. Many capable people make the second choice, not because they stop seeing accurately, but because the cost of seeing accurately, in a context that will not receive what they see, becomes unsustainable.
The organizations that lose these people do not lose them all at once. They lose them gradually, as the space for accurate alarm narrows, and the professional costs of occupying that space accumulate, and the people who occupy it calculate that there is no longer a path from accurate detection to any useful response. This is the final mechanism of institutional suppression: not silence, but exhaustion.
The suppression mechanism is a feature of how human social life manages the tension between cooperative trust and adversarial vigilance, not a corporate pathology or a security industry problem. The norms that generate suppression pressure are the norms that make large-scale cooperative life possible. They are functional, in the environments they were developed for.
The question is whether those environments still describe the world we are operating in. The base rate of sophisticated deception, at the individual level, the organizational level, and the institutional level, has increased. The cost of producing convincing simulations of authenticity has collapsed. The scale at which deception operates has expanded from the interpersonal to the civilizational.
The suppression mechanism is running on obsolete parameters, suppressing alarms at a rate calibrated for a world with far fewer genuine threats in an environment with far more of them. The recalibration required is specific: a higher assumed base rate of sophisticated deception at every scale, a lower cost threshold for acting on alarm before articulable evidence is available, and institutional structures that treat the cost of occasional false-positive caution as categorically lower than the cost of the false-negative compliance it prevents. This is the urgency inversion that Essay Two identified in the social engineering context, extended to institutional design: alarm should trigger more scrutiny, not less, and the organizational cost of acting on alarm should be lower than the organizational cost of suppressing it. And the people who, for whatever combination of cognitive style, professional commitment, and accumulated experience, are less subject to the suppression pressure (the people who keep naming the wrongness despite the cost) are the people whose function has become more valuable than it has ever been.
Protecting them is an epistemic infrastructure question, not a human resources question. They are nodes in the detection architecture. What happens to them, whether they are protected or eroded, whether their outputs reach decision-makers or disappear into the compliance register, determines whether the institutions they inhabit maintain any capacity to perceive the gap between their signals and their reality.
The institutional uncanny valley persists as long as the alarm is suppressed. The alarm is suppressed as long as the people who carry it are not protected. Protecting them is not comfortable. It is, in the environment this series has been describing, necessary.
Next: Essay Six — After the Valley. On what trust looks like when signals can no longer be taken at face value, and what it would mean to build the infrastructure of trust again, from different foundations.
Sources
ASD and the Uncanny Valley
Feng, S., Wang, X., Wang, Q., Fang, J., Wu, Y., Yi, L., & Wei, K. (2018). The uncanny valley effect in typically developing children and its absence in children with autism spectrum disorders. PLOS ONE, 13(11), e0206343. (Primary study: Peking University. Typically developing children showed the uncanny valley effect; children with ASD did not. Varied facial realism through morphing and induced perceptual mismatch through eye-size modification.)
Kumazaki, H., Warren, Z., Muramatsu, T., Yoshikawa, Y., Matsumoto, Y., Miyao, M., Nakano, M., Mizushima, S., Wakita, Y., Ishiguro, H., Mimura, M., Minabe, Y., & Kikuchi, M. (2017). A pilot study for robot appearance preferences among high-functioning individuals with autism spectrum disorder. PLOS ONE, 12(10), e0186581. (Replication context: ASD individuals showed different responses to humanoid robot appearance compared to typically developing individuals.)
Li, L., Imaizumi, T., Nishikawa, N., Kumazaki, H., & Ueda, K. (2025). Do individuals with autism spectrum disorder not experience the uncanny valley? A psychological experiment and feature analysis using human and robot faces. Cognitive Development, 73, 101519. (Replication with robot and human facial images: typically developing individuals exhibited the uncanny valley effect; individuals with ASD showed a less distinct effect, with analysis suggesting emphasis on local rather than global facial information.)
Kumazaki, H., Muramatsu, T., Yoshikawa, Y., Matsumoto, Y., Ishiguro, H., Mimura, M., & Kikuchi, M. (2015). A Bayesian model of the uncanny valley effect for explaining the effects of therapeutic robots in autism spectrum disorder. PLOS ONE, 10(9), e0138642. (Computational modeling: proposed that ASD produces an "uncanny cliff" rather than an "uncanny valley," with implications for robot-assisted therapy design.)
Whistleblower Case Study
Zatko, P. ("Mudge"). (2022). Whistleblower disclosure to the U.S. Securities and Exchange Commission, the Federal Trade Commission, and the Department of Justice, filed July 6, 2022. Allegations concerning Twitter, Inc.'s security practices, including misrepresentation of security posture to the board, scrubbing of third-party consulting findings, and systemic access control deficiencies.
Twitter's response characterizing Zatko as having been fired for "ineffective leadership and poor performance" was reported by multiple outlets including The Washington Post, CNN, and The New York Times, August 2022. The complaint became public during the concurrent Musk acquisition dispute.
For background on Zatko's career: Zatko testified before the U.S. Senate Committee on Governmental Affairs on network security vulnerabilities as a member of the L0pht hacking collective in 1998.
Whistleblower Research
The essay references academic research on the failure modes of formal whistleblower protection systems. Key works in this literature include:
Miceli, M.P., Near, J.P., & Dworkin, T.M. (2008). Whistle-blowing in Organizations. Routledge/Psychology Press.
Moberly, R. (2012). Sarbanes-Oxley's whistleblower provisions: Ten years later. South Carolina Law Review, 64, 1.
Kenny, K. (2019). Whistleblowing: Toward a New Theory. Harvard University Press. (Documents the informal suppression mechanisms — credibility erosion, scope limitation, social isolation — that operate below the threshold of legal actionability.)
Organizational Suppression and Red Team Literature
The essay's analysis of organizational suppression mechanisms (credibility erosion, scope limitation, process capture, social isolation) draws on established organizational psychology and security governance literature. The red team analysis draws on practitioner experience and the broader adversarial design principles developed in Essay Four.
Cassandra Problem
The Cassandra myth is referenced as a structural analogy for the cost of carrying suppressed alarm. The essay's analysis of the cost to the alarm-carrier (escalating persistence losing credibility, psychological toll, progressive narrowing of transmission space) draws on the whistleblower research cited above and on practitioner literature in organizational psychology.
Cross-Series References
Brondani, M. Essay One: "The Alarm" (uncanny valley as trust detection mechanism, prediction error, suppression mechanism). Essay Two: "Cold Empathy at Scale" (urgency inversion, three suppression norms). Essay Four: "The Narcissistic Institution" (adversarial design principles, compliance framework failure modes). The Valley of False Signals. Published at marcobrondani.com.