After the Valley
We are crossing the valley not by addressing what the alarm detects but by suppressing it. The question is no longer whether we are crossing but what we are crossing into.
Essay Six of The Valley of False Signals series
Masahiro Mori never asked what lay on the other side.
His 1970 essay mapped the uncanny valley as a region to be understood and, for practical purposes, avoided. The design recommendation was clear: keep your robots clearly robotic, or make them indistinguishable from human, but do not leave them in the liminal region where the alarm fires. The valley was a problem of proximity, of getting too close to human without closing the remaining gap, and the solution was either distance or completion.
What Mori did not address, because in 1970 there was no reason to, was the condition that obtains when you have actually crossed. When the valley is behind you. When the simulation has achieved the fidelity that was always theoretical and is now, in specific domains and with increasing generality, practical. What the world looks like when the alarm no longer has a reliable object to fire at, not because the alarm is broken, but because the incongruence it detects has been engineered away.
This series has been, in one sense, a map of the crossing. The alarm that fires and is suppressed by social convention. The alarm that fires and is suppressed by organizational culture. The alarm that fires and is suppressed by the apparatus of institutional assurance. The alarm that is approaching a condition where it may stop firing altogether because the simulations have become too precise. The alarm carried by people who refuse to suppress it, who pay professional costs for that refusal, and whose protection is a structural condition for any institution maintaining the capacity to perceive its own reality.
We are, to be precise, not yet fully past the valley. The crossing is in progress, in different domains, at different rates, with the synthetic signal capabilities advancing faster than the institutional adaptation to absorb them. But the direction is clear, and it has been clear long enough that the more urgent question is no longer whether we are crossing but what we are crossing into.
What does trust look like after the valley?
What Trust Was Built On
Trust, in the sense relevant to everything this series has examined, is an inference rather than a feeling: a conclusion drawn from evidence, about whether an entity is what it presents itself to be, whether the signals it is producing are causally connected to the reality they purport to represent.
For most of human history, that inference rested on direct observation: behavior watched over time, across varied circumstances, until coherence could be assessed. This model was slow, labor-intensive, and calibrated for small social environments. It began to break down as soon as human cooperation scaled beyond the face-to-face, and every expansion in scale since has required the development of new trust infrastructure to proxy for the direct observation that was no longer feasible. Credentials, contracts, certifications, reputational systems, legal liability, regulatory oversight: all mechanisms designed to make trustworthiness legible at scale.
What this series has mapped is the systematic failure of that trust infrastructure, not in all respects and not all at once, but in ways that are structural and accelerating. The failure has three distinct sources that the preceding essays examined separately and that now need to be understood together.
The collapse of signal fidelity: the practical impossibility of forging certain signals (voice, face, behavioral pattern, writing style) has ended, and the trust infrastructure built on that impossibility is being rendered obsolete faster than it can be replaced.
The optimization of signal production without substance: the learning, at both the individual and institutional level, that producing the right signals is sufficient to satisfy verification mechanisms, without the production requiring the underlying reality those signals are supposed to represent.
The systematic suppression of the most reliable detection instrument available: the coherence check, the prediction error mechanism, the alarm, which fires when it registers the split between signal and source, and which is suppressed, at every scale of social organization, by the norms of cooperative life that mistake the suppression of alarm for the exercise of good judgment.
These three failures form a system. Signal synthesis undermines the evidentiary value of signals. Signal production optimization is accelerated by the knowledge that signals rather than substance are what verification measures. And both are protected from detection by the suppression mechanism, which prevents the alarm that might otherwise surface the split from reaching action.
The trust infrastructure that was built for a world in which signals were hard to fake and institutions were assumed to produce the substance they claimed is not adequate for a world in which neither assumption holds. The question of what comes after is the question this essay is trying to answer.
The Three Errors to Avoid
Before naming what adequate trust infrastructure might look like, it is worth being precise about three errors that responses to this situation most commonly make. Each has real advocates, and each is wrong in a way that the analysis of this series makes visible.
Technical solutionism is the most common: the search for a new technical signal that cannot be faked. A biometric so complex, a cryptographic proof so robust, a behavioral marker so deeply embedded in neurological reality that it cannot be synthesized. These investments raise the cost of forgery, which has real value: it narrows the adversary population, increases attack resource requirements, and buys time. But as a foundation for trust infrastructure, technical solutionism fails because the adversarial parity dynamic described in Essay Three is real. Detection and generation co-evolve. No technical signal achieves permanent unforgeability in an environment where adversaries have access to the same foundational techniques as defenders. The deeper error is the implicit assumption that trust is a property of signals. Trust is a property of systems, of the institutional architectures, incentive structures, verification processes, and accountability mechanisms that determine whether the entities operating within them are what they claim to be. Rebuilding trust infrastructure on a new signal without rebuilding the system is building on a foundation that will, again, be undermined.
Cynical withdrawal is the second error: having recognized the collapse of signal fidelity and the systematic production of accountability theater, it concludes that trust is simply no longer possible. Every institution is performing. Every signal is suspect. This response has a kind of intellectual tidiness; it is consistent with the evidence and requires no difficult work. It is also indistinguishable from surrender. Trust, even imperfect and provisional, is a precondition for collective action. The cynical withdrawal does not protect against the failures this series has documented; it merely removes the possibility of institutional development that might address them. It also makes a subtle epistemic error: it treats the collapse of particular trust mechanisms as evidence that trust itself is impossible, rather than as evidence that particular mechanisms were built on inadequate foundations.
Nostalgic restoration is the third, perhaps most common in policy circles: the attempt to restore the conditions under which the previous trust infrastructure worked. To regulate synthetic media out of existence, to mandate signal authenticity through legal requirements, to impose on the current environment the assumptions under which the old mechanisms were adequate. The conditions under which signals were practically unforgeable were not policy choices. They were technological constraints that have been removed by capabilities that are not reversible. Deepfake generation cannot be uninvented. The regulatory impulse to require watermarking, provenance tracking, and synthetic media disclosure raises the floor, but it does not restore the underlying condition. Nostalgic restoration is particularly dangerous in the governance domain, because it produces exactly the institutional uncanny valley that Essay Four examined: frameworks that signal the restoration of trust infrastructure without actually rebuilding it.
What Structural Trust Requires
Trust infrastructure adequate for the post-valley condition is built on the assumption that signals are not reliable, compensating for that unreliability through structural design rather than signal improvement.
This requires a shift in the foundational question. The question that previous trust infrastructure was built to answer was: does this signal indicate trustworthiness? The question that adequate trust infrastructure asks is: is this system structured so that trustworthy behavior is produced by the incentives operating within it, regardless of whether signals are reliable?
The shift is from evidentiary trust (trust based on the interpretation of signals) to structural trust (trust based on the design of systems that make trustworthy behavior the rational choice for actors operating within them). The idea is not new; it is the foundational insight of institutional economics, of mechanism design, of the branch of political philosophy concerned with how constitutions should be designed to produce good governance even from self-interested actors. What is new is the urgency: the recognition that the evidentiary trust model has been undermined more thoroughly than in any previous transition, and that structural trust is now a practical necessity rather than a theoretical refinement.
The distinction is worth pausing on, because it reframes everything this series has examined. Evidentiary trust asks: does this entity produce the right signals? Structural trust asks: is this entity operating within constraints that make producing the right substance more rational than producing the right signals? The first question can be answered by inspection, by evaluating what the entity presents. The second can only be answered by understanding the incentive architecture within which the entity operates. A compliance certificate answers the first question. The question of whether the compliance framework is adversarially designed, whether it tests the substance or merely the documentation, answers the second.
Most of our trust infrastructure is still designed to answer the first question. The shift to the second requires a fundamentally different relationship between the verifier and the verified, one in which the verifier assumes that signal optimization is the default behavior and designs for it, rather than assuming good faith and being surprised when the gap appears. This is the shift from cooperative verification (we trust you; show us your documentation) to adversarial verification (we assume the gap; show us your reality under conditions you haven't prepared for). It is, in essence, the shift from the world before the valley to the world after it.
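The difference between the two regimes can be made concrete with a toy model. The sketch below is illustrative, not empirical: every number in it is an assumption, chosen only to show how the rational choice flips when verification becomes adversarial.

```python
# Toy incentive model for the evidentiary-vs-structural trust distinction.
# All parameters are illustrative assumptions, not empirical estimates.

def expected_payoff(strategy_cost, detection_prob, penalty, reward):
    """Expected payoff for an actor who pays strategy_cost to obtain reward,
    and is caught (losing reward, paying penalty) with detection_prob."""
    return (1 - detection_prob) * reward - strategy_cost - detection_prob * penalty

REWARD = 100.0          # value of being treated as trustworthy (e.g., a contract)
SUBSTANCE_COST = 60.0   # cost of actually building the claimed capability
SIGNAL_COST = 10.0      # cost of producing the signals alone (documentation, theater)
PENALTY = 50.0          # sanction if the signal/substance gap is detected

for label, detection_prob in [("cooperative verification", 0.02),
                              ("adversarial verification", 0.60)]:
    # A substantive actor has no gap to detect, so its detection risk is zero.
    substance = expected_payoff(SUBSTANCE_COST, 0.0, PENALTY, REWARD)
    signals = expected_payoff(SIGNAL_COST, detection_prob, PENALTY, REWARD)
    rational = "substance" if substance > signals else "signal production"
    print(f"{label}: substance={substance:.0f}, signals-only={signals:.0f} "
          f"-> rational choice: {rational}")
```

Under cooperative verification, signal production dominates, because detection is rare and forgery is cheap. Under adversarial verification, the same actor finds substance the rational choice. Nothing about the actor changed; the structure did.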
The preceding essays developed the specific principles this shift requires: accountability that carries real costs, so that producing accountability signals is never cheaper than producing accountability itself; verification that is adversarial by design, testing for the gap under conditions the institution cannot prepare for; detection systems structurally independent of the functions they evaluate; and organizational cultures that treat the alarm as an institutional asset rather than a mark of poor judgment.
These principles are not new individually. What is new is the recognition that they are structural prerequisites for trust infrastructure in an environment where signal production has been decoupled from substance at every scale, from the synthetic voice on the phone to the compliance framework on the shelf.
But there is something these principles cannot address, and it would be dishonest to conclude this series without naming it.
The Epistemological Problem at the Center
The institutional responses this series has been advocating are adequate at the organizational and sectoral level. They can close the institutional uncanny valley in specific domains. They cannot solve the civilizational problem that underlies them.
The signal/source split that this series has been mapping is an epistemological problem: a problem about how collective knowledge is constituted, and about whether the conditions for collective knowledge still obtain.
Collective knowledge, the shared understanding that allows large groups of people to coordinate, assign trust, and recognize when the things they depend on have failed, is produced by the interaction of signals and verification. It requires that some signals be reliably connected to the realities they represent, and that the mechanisms for distinguishing reliable from unreliable signals be trusted enough to be actionable.
When the signals of human presence, institutional accountability, and individual authenticity are all simultaneously under systematic attack, when deepfakes produce voice and face, when compliance frameworks produce documentation without substance, when the mechanisms designed to verify these signals have themselves been optimized for signal production rather than source verification, the infrastructure of collective knowledge is under pressure in a way that no institutional design can fully address. The consequences are already visible outside the security domain: the erosion of shared factual frameworks across democratic societies, the inability to agree on what constitutes evidence, the progressive delegitimation of the institutions (media, regulatory bodies, scientific consensus) that were trusted to verify the verifiers. These are the same mechanism, signal/source split, suppression of detection, exhausted credulity, operating at civilizational scale.
This is an honest description of a real structural condition, not the counsel of despair warned against above as the second error. The institutional responses this series has been advocating are necessary. They are not sufficient. The civilizational problem requires something more: not a better institution but a different relationship to the question of how trust is constituted when the old answers no longer hold.
That different relationship is a cultural achievement rather than a design solution. It cannot be mandated, regulated, or installed. It has to be recovered, which means it has to be understood as something that can be lost. The capacity of a population to make collective judgments about what is trustworthy and what is not, to distinguish between institutions that are producing accountability and institutions that are performing it, to attend to the alarm rather than suppress it: this capacity is developed through practice, maintained through exercise, and eroded through disuse. A society that has spent decades building institutional architectures designed to suppress the alarm has been training itself not to use the instrument it most needs. The recovery is the decision to stop suppressing an old capacity, not the development of a new one.
What Can Be Recovered
What has been lost is not trust itself. Trust, as a human capacity, is not something that can be taken away. What has been lost, or is in the process of being lost, is the shared epistemic infrastructure that made trust legible: the common frameworks for evaluating signals, the shared conventions about what kinds of evidence were sufficient for what kinds of claims, the institutional mechanisms that were trusted to verify the verifiers.
This loss is not evenly distributed. It is concentrated in the domains where the signal/source split has advanced furthest: digital communication, institutional accountability, the credentialing systems that proxy for direct observation of capability and character. It has not reached the domains of direct physical experience, extended personal relationship, and small-group cooperation, where the original detection mechanisms still operate with something approaching their original fidelity. The observation is uncomfortable but important: the recovery of adequate trust infrastructure will look more like the trust model of the environments the detection system was calibrated for than like the large-scale institutional trust that post-valley signal synthesis has undermined. The principle is not smallness (the scale of modern cooperative endeavor is not reversible, and the effort to reverse it would cost more than the problem it was solving) but proximity.
I started to write a paragraph here about what trust looks like at the individual level in this environment, what it means for a person rather than an institution, and realized I don't have an answer that isn't either nostalgic or naive. I kept writing it and kept deleting it. The honest version is that individual trust, in the post-valley condition, requires something that no essay can provide: the slow accumulation of direct observation, the willingness to attend to the alarm when it fires, and the acceptance that the signals we used to rely on are no longer sufficient. That is not a program. It is a disposition. And dispositions cannot be mandated; they can only be cultivated, or eroded.
What can be described more precisely is what proximity means at the institutional level.
Proximity between decision-makers and the reality they are deciding about. Governance mechanisms designed so that the people making consequential trust decisions can observe, over time and variety, the entities they are trusting, rather than relying on documentation that travels through layers of institutional translation before reaching them.
Proximity between accountability claims and the mechanisms that test them. Compliance frameworks in which the distance between the claim and the test is short enough that the claim cannot be optimized independently of the substance.
Proximity between the expression of alarm and the people who have the authority and the obligation to respond to it. Organizations in which the alarm does not pass through five layers of management filtering before reaching someone who can act, because at each layer, the suppression machinery has another opportunity to engage. This is what protecting the unsuppressed looks like in practice: not a whistleblower hotline, but an architecture in which the alarm's signal path is short enough that suppression cannot accumulate.
The point is structural, not romantic: the detection mechanism still functions reliably in environments of proximity, and the question is what it would mean to design institutions that allow its functioning rather than impeding it. The alarm was calibrated for a world of proximity, where the distance between signal and source was short enough that the coherence check could operate. The institutional project of the post-valley condition is to recover that proximity, not by making organizations smaller, but by making the critical channels shorter.
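The arithmetic of channel length is worth making explicit. In the minimal sketch below, each management layer is assumed to suppress the alarm independently with some fixed probability; the specific figure is an illustrative assumption, not a measurement.

```python
# Minimal sketch of alarm attenuation through management layers.
# The per-layer suppression probability is an illustrative assumption.

def survival_probability(layers, suppression_per_layer):
    """Probability an alarm survives every filtering layer,
    assuming each layer independently softens or drops it."""
    return (1 - suppression_per_layer) ** layers

SUPPRESS = 0.30  # assumed chance any single layer suppresses the alarm

for layers in (1, 2, 5, 8):
    p = survival_probability(layers, SUPPRESS)
    print(f"{layers} layer(s): alarm reaches a decision-maker "
          f"with probability {p:.2f}")
```

At five layers, most alarms never arrive. The point is not the particular numbers but the shape: suppression compounds multiplicatively, which is why shortening the channel does more than improving any single layer.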
A Final Observation About the Alarm
There is something worth saying at the end of this series about the alarm itself, about the coherence check, the prediction error mechanism, the felt wrongness that has appeared in every essay as the most direct and most suppressed instrument of trust detection.
The alarm is not infallible. It fires for reasons that are sometimes wrong: for unfamiliarity masquerading as incongruence, for difference mistaken for deception, for the cognitive dissonance produced by encountering a genuine person or institution that does not conform to the expected pattern. The history of the alarm's failure modes is not short, and some of those failures have caused real harm.
The argument of this series has not been that the alarm is always right. It has been that the alarm is more often right than the suppression mechanisms credit, that its failure mode in the current environment is predominantly false negative rather than false positive, and that the social and institutional architecture that converts alarm into silence is doing more damage than the alarm's imperfections.
This is a calibration argument, not an infallibility argument. The alarm needs to be calibrated: its outputs need to be taken seriously as inputs to an investigative process, not acted on blindly as commands. What it does not need is to be suppressed by default, because suppression by default is the mechanism that all of the threats this series has examined depend on.
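The calibration argument can be stated as a simple expected-cost comparison. The sketch below is a hedged illustration: the alarm's true-positive rate and the costs assigned to investigation and to a missed split are assumptions, not data, but the asymmetry they exhibit is the one the preceding essays documented.

```python
# Hedged sketch of the calibration argument: compare "suppress by default"
# with "investigate by default" as policies for handling a fired alarm.
# All rates and costs are illustrative assumptions.

P_GENUINE = 0.10        # assumed fraction of alarms marking a real signal/source split
COST_INVESTIGATE = 1.0  # cost of checking one alarm
COST_MISS = 200.0       # cost of a genuine split that goes unexamined (fraud, breach)

def expected_cost(investigate_fraction):
    """Expected cost per alarm, assuming uninvestigated genuine
    alarms always incur the full cost of a miss."""
    checked = investigate_fraction * COST_INVESTIGATE
    missed = (1 - investigate_fraction) * P_GENUINE * COST_MISS
    return checked + missed

for label, fraction in [("suppress by default", 0.05),
                        ("investigate by default", 0.95)]:
    print(f"{label}: expected cost per alarm = {expected_cost(fraction):.2f}")
```

Even when nine out of ten alarms are false, suppressing them by default is the expensive policy, because the rare genuine split costs far more than the investigations that would have caught it.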
We have, over a long period of social development, professional culture-building, and institutional design, become very good at suppressing the alarm. We have built that suppression into our professional norms, our organizational hierarchies, our verification mechanisms, our compliance frameworks.
We have confused the suppression with wisdom and the unsuppressed with naivety.
The post-valley condition is the condition in which the cost of that confusion has become visible. The alarm that was overridden by the performance of normalcy at Orion, where sixty million dollars followed the signals out the door. The alarm that could not fire at all during the Arup deepfake call, because the simulation had crossed the valley. The alarm that went undetected for eighteen months in the carriers' networks before Salt Typhoon surfaced it. The alarm that dissolved under political pressure when documented federal access controls proved decorative. The alarm that Peiter Zatko carried and was fired for. The alarm that North Korean operatives rendered invisible by inhabiting trusted organizational space as synthetic colleagues for months at a time. These are not isolated failures. They are the predictable output of a system that has been optimized, at every level, to suppress the instrument it most needs.
The valley that Masahiro Mori mapped in 1970 was a region of alarm. We have spent fifty years learning to cross it by suppressing the alarm rather than addressing what the alarm was detecting. The crossing we find ourselves in now is the consequence of that choice.
What lies on the other side is not determined. It is not inevitable that the infrastructure of trust continues to degrade, that the institutional uncanny valley deepens, that the alarm is progressively rendered inoperative by the combination of signal synthesis and social suppression. These are tendencies, not destinies. They can be reversed, not quickly, not easily, not through any single institutional reform or technical solution, but through the accumulated effect of design choices that are made with clear eyes about what the problem actually is.
And here, at the end of the series, I find myself thinking not about the civilizational abstraction but about the finance worker at Orion who transferred sixty million dollars because the signals were right and the alarm did not survive the context. I think about that person because they are the scale at which this problem is actually experienced: one person, one decision, one moment in which everything this series has described, the signal/source split, the suppression mechanism, the organizational culture that makes acting on alarm costly, converges on a single human being who has to decide whether to trust what they are seeing. The civilizational problem is real. But it is composed of moments like that one.
The problem is not, at its root, a security problem, though security is where the consequences are most measurable. It is not a technology problem, though technology is what has changed the conditions. It is not even, primarily, a governance problem, though governance is where the institutional responses must be built.
It is the problem of a species that built its cooperative infrastructure on the assumption that the signals of authenticity could be trusted, discovering, in real time, at civilizational scale, that they cannot. And choosing, in the face of that discovery, what to build next.
That choice is still available. It is the choice this series has been, in its way, arguing for: to stop suppressing the alarm, to build institutions that protect its function, to design verification that tests the source and not just the signal, to recover the proximity between decision and reality that the alarm was calibrated for.
The alarm is still working.
The question is whether we will finally stop turning it off.
This is the final essay in The Valley of False Signals, a six-part series on trust, mimicry, and the collapse of authentication. The series begins with Essay One — The Alarm.
Sources
Foundational Reference
Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7(4), 33–35. (In Japanese.) English translation: Mori, M., MacDorman, K.F., & Kageki, N. (2012). The uncanny valley [From the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100.
Structural Trust and Institutional Economics
The essay's distinction between evidentiary trust and structural trust draws on the foundational literature of mechanism design and institutional economics:
Hurwicz, L. (1972). On informationally decentralized systems. In R. Radner & C.B. McGuire (Eds.), Decision and Organization: A Volume in Honor of Jacob Marschak (pp. 297–336). North-Holland. (Foundational framework for analyzing institutions as mechanisms that structure incentives and information.)
Hurwicz, L., & Reiter, S. (2006). Designing Economic Mechanisms. Cambridge University Press.
Maskin, E. (1999). Nash equilibrium and welfare optimality. Review of Economic Studies, 66, 23–38. (Originally circulated 1977. Implementation theory and the design of institutions that produce desired outcomes from self-interested actors.)
Myerson, R.B. (1981). Optimal auction design. Mathematics of Operations Research, 6(1), 58–73.
The 2007 Nobel Prize in Economics was awarded to Hurwicz, Maskin, and Myerson for their foundational contributions to mechanism design theory.
Institutional Design and Governance
Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press. (Institutional design principles for systems that produce cooperative behavior through structural incentives rather than signal-based trust.)
The essay's reference to "the branch of political philosophy concerned with how constitutions should be designed to produce good governance even from self-interested actors" draws on a tradition extending from James Madison's Federalist Papers (particularly Nos. 10 and 51, on designing institutions that channel self-interest toward collective good) through modern constitutional design theory.
The Three Errors
The essay identifies three common errors in responding to the collapse of signal-based trust:
Technical solutionism references include the liveness detection arms race documented in Essay Three (see Essay Three sources: iProov, Sumsub, MITRE ATLAS), zero-knowledge proofs for identity (the broader decentralized identity literature), continuous behavioral biometrics, and hardware-bound authentication tokens.
Cynical withdrawal is described as a structural tendency rather than attributed to a specific source. The essay's analysis of this error draws on the broader literature on institutional trust and social capital erosion.
Nostalgic restoration references include regulatory approaches to synthetic media: watermarking requirements, provenance tracking (e.g., the Coalition for Content Provenance and Authenticity, C2PA), and synthetic media disclosure mandates under various national and proposed international frameworks.
Case Catalogue (Closing Section)
The closing section references six cases documented in detail across the preceding essays:
Orion S.A. BEC fraud, 2024 ($60 million). See Essay Two sources.
Arup deepfake video conference fraud, 2024 ($25 million). See Essay Three sources.
Salt Typhoon telecommunications intrusion (eighteen months undetected). See Essay Four sources, originally analyzed in the author's Compound Vulnerability series.
Department of Government Efficiency federal access control failures, early 2025. See Essay Four sources, originally analyzed in the author's Compound Vulnerability series.
Zatko, P. ("Mudge"). Twitter whistleblower complaint, 2022. See Essay Five sources.
North Korean synthetic worker campaign (Famous Chollima), 2022–present (320+ organizations infiltrated). See Essay Three sources.
Cross-Series References
Brondani, M. Essays One through Five of The Valley of False Signals. Published at marcobrondani.com.
Brondani, M. Reality Hunger and The Compound Vulnerability (essay series). Published at marcobrondani.com.