The Death of the Signal

Essay Three of The Valley of False Signals series


There is a line that was crossed, and we did not notice when we crossed it.

For most of the history of electronic communication, the signals of human presence were practically unforgeable. A voice was a voice, not a statistical reconstruction of a voice, not a synthesis trained on hours of recordings, but the acoustic output of a specific larynx, shaped by a specific mouth, carrying the micro-variations of breath and hesitation that no recording technology of the time could plausibly reproduce in real time. A face was a face. A signature was a signature. Even as forgery existed (it always has), the cost of producing a convincing forgery was high enough that the attempt itself was rare, and the imperfections were usually detectable by someone paying attention.

These practical constraints were not just inconveniences for fraudsters. They were the load-bearing architecture of trust. Every authentication system ever built was constructed on the implicit assumption that certain signals of genuine human presence were costly enough to simulate that their presence could serve as evidence of legitimacy.

That assumption is no longer valid.


What the Valley Was, and What We Have Left Behind

Mori's uncanny valley described a specific failure mode of simulation: the point at which a simulacrum becomes close enough to human that its imperfections become visible and disturbing. The valley was a region of maximum alarm, where the simulation was advanced enough to trigger the coherence check but imperfect enough to fail it. The alarm fired precisely because the simulation was almost good enough.

The implicit structure of that problem assumed that the alarm was a useful instrument. The simulation was detectable. The valley existed as a warning region precisely because there was something to warn against, a gap between signal and source that was large enough, with sufficient attention, to be felt.

Essay Two described what happens when that alarm is suppressed by social and organizational mechanics. This essay addresses something different: what happens when the gap closes. When the simulation becomes precise enough that the alarm has no incongruence to detect. When we have not suppressed the alarm but passed beyond the conditions that cause it to fire.

We are, I want to argue, at or near that crossing in several domains simultaneously. Not fully past it in every context; the alarm still fires at poorly constructed deepfakes, at synthetic text that carries the particular flatness of large language model outputs, at voice clones with subtle artifacts. But the trajectory is clear, the rate of improvement is accelerating, and the frontier of undetectable simulation is advancing faster than the frontier of detection.

Mori described the uncanny valley as a region to be avoided or crossed. We are crossing it, not by making simulations less humanlike (which was Mori's practical recommendation for robot designers) but by making them more humanlike. By making them precise enough that the prediction error mechanism has nothing to register. The question of what lies on the other side of the valley, what the world looks like when simulation achieves parity with reality, is not a question Mori asked, because in 1970 it was not a question that needed answering. It needs answering now.


The Three Collapses

The death of the signal is occurring in three overlapping domains, at three different rates, and they need to be understood together before we can grasp what they mean in combination.

Voice identity collapsed first and fastest. In 2019, a UK-based energy company lost approximately €220,000 after a finance director received a phone call from someone who sounded exactly like the company's CEO. The voice, in tone, cadence, and accent, was sufficiently precise that the director executed the transfer without hesitation. That case, at the time, represented the frontier. Six years later, voice cloning has moved from a research capability requiring hours of sample audio to a commercial service available for subscription fees measured in tens of dollars per month. Some implementations need as little as three seconds of clear audio to produce a clone with what researchers describe as an eighty-five percent voice match. The output is not a recording; it is a synthesis engine that can produce, in real time, that person saying anything. CrowdStrike's 2025 threat analysis documented a 442 percent increase in voice cloning usage between the first and second halves of 2024 alone.

The voice was the oldest authentication signal. Before written records, before seals, before cryptographic keys, the recognition of a familiar voice was the primary mechanism for verifying identity. The brain is extraordinarily sensitive to vocal identity; we recognize people we know from a single word, often before they have finished their first sentence. That sensitivity, which was an asset in an environment where voice synthesis was impossible, becomes a liability in an environment where it is cheap. The very precision of our voice recognition now works against us: the more faithfully we trust a recognized voice, the more completely we are deceived when that voice has been synthesized.

Visual identity is close behind. In February 2024, the engineering firm Arup suffered the largest documented deepfake fraud to date: a finance worker, participating in what appeared to be a routine video conference with the company's CFO and other senior executives, authorized fifteen transactions totaling twenty-five million dollars. Every face on the call was generated in real time, with synchronized facial movements, realistic voices matched to each executive's known speech patterns, and natural body language. The simulation was precise enough that the alarm did not fire, not because it was suppressed, but because there was nothing for it to detect. A year later, a finance director in Singapore fell victim to an almost identical scheme. The attackers had absorbed the lesson of prior coverage: they proactively suggested a video call, using the apparent willingness to verify as a mechanism for producing false confidence. What these cases demonstrate is not just the quality of the simulation but its integration with social engineering. The deepfake is not the attack; it is the resolution of the final friction point in a fundamentally psychological attack. The technology removes the last signal that would allow the alarm to fire.

Meanwhile, synthetic identity fraud, the construction of entirely fictitious people with generated faces, fabricated histories, and synthetic documentation, has reached industrial scale. Experian's 2024 fraud data documented a sixty percent increase in false identity cases over the prior year. The Federal Trade Commission estimates that synthetic identity fraud accounts for eighty to eighty-five percent of all identity fraud cases in the United States, with costs to the financial industry exceeding thirty billion dollars.

The collapse of textual identity may be the most pervasive and least discussed, because it operates in the medium that most professional communication uses. A 2025 study in Expert Systems with Applications tested fully automated AI spear-phishing campaigns against human expert campaigns: the AI-generated emails achieved a click-through rate of fifty-four percent, identical to experienced human social engineers, at a cost reduction of up to fifty times for large-scale campaigns. The spear-phishing email that references the correct operational context, mirrors the target's communication style, and sounds exactly like the person it claims to be from was once the product of hours of human research. It is now the output of an automated pipeline that costs fractions of a cent per target.

The implications extend beyond phishing. Large language models can produce text that is not just grammatically correct and contextually coherent but stylistically matched to a specific individual. Given a corpus of a person's writing (emails, reports, social media posts), a sufficiently capable model can produce new text that carries the statistical fingerprint of that person's style. We authenticate email, to a large degree, by feel: by the quality of the writing, the characteristic phrasings, the particular way a colleague structures a request. When those markers can be synthesized from a training corpus, the informal authentication layer collapses.


The Operation That Combines All Three

Since at least 2022, North Korean state-sponsored operatives have been infiltrating technology companies worldwide by posing as remote IT workers. Call it what it is: an identity synthesis operation conducted at national scale, integrating all three collapses into a single sustained effort.

GitHub's 2025 analysis documented a development team that created at least 135 synthetic identities using scraped photographs, AI image generators, and face-swapping tools, then used those images to create fraudulent passports that verified successfully in over forty percent of attempts. The scale is significant: the DOJ's June 2025 enforcement actions revealed that a single facilitator network had generated over seventeen million dollars in revenue across 309 jobs at US companies, including Fortune 500 firms. CrowdStrike found the number of infiltrated companies grew 220 percent over twelve months, with operatives penetrating more than 320 organizations.

During live video interviews, operatives use real-time face-swapping technology, allowing a single operator to interview for the same position multiple times under different synthetic personas. Palo Alto Networks' Unit 42 demonstrated that a researcher with no prior deepfake experience could create a synthetic identity convincing enough for job interviews in seventy minutes using consumer hardware. The textual layer completes the simulation: AI fabricates the resumes, coaches operatives through interview questions in real time, mimics cultural fluency in English, and maintains ongoing workplace communications once a position is secured.

This campaign matters because it represents something qualitatively different from the spectacular deepfake fraud. It is a sustained inhabitation of trusted space. Synthetic humans, complete with professional histories and ongoing behavioral patterns, operating inside organizations as trusted colleagues for months. The alarm does not fire because there is nothing for it to detect. The persona is complete. The signals of genuine presence are all present. They are all synthetic. And the gap between signal and source has been closed so completely that colleagues, managers, and HR departments process these personas as real people for months at a time.

This is what the post-valley condition looks like in practice: not a single spectacular fraud but a quiet occupation of the spaces where trust is assumed.

I find this the most unsettling case in the entire series, and I think the reason is that it inverts the emotional register of deception. The Arup deepfake was spectacular; this is quiet. It is the difference between a smash-and-grab and a neighbor who was never who you thought they were.


The Authentication Assumption

Every framework for verifying identity is built on what we might call the authentication assumption: that genuine presence leaves signals that are either inherently unforgeable or sufficiently costly to forge that their presence constitutes reasonable evidence of legitimacy.

The history of authentication is the history of this assumption being challenged and adapted to. Signatures became forgeable, so we added notarization. Identity documents became falsifiable, so we added biometrics. Passwords became vulnerable to brute force, so we added multi-factor authentication. Each adaptation assumed that the new signal was costly enough to forge that it retained evidentiary value. Voice, face, and writing style were in the "inherently unforgeable" category, not because forging them was technically impossible, but because doing so in real time, at scale, was practically infeasible.

That practical infeasibility is gone. What remains is a set of authentication systems built on assumptions that no longer hold, protecting infrastructures that have not yet absorbed what that means. NIST's digital identity guidelines are under revision precisely because the threat model they were built for has been rendered obsolete. The revision process is ongoing. The threat is not waiting for it to conclude.

The governance crisis here runs deeper than the technical problem, and I'm not sure the security industry has fully reckoned with it. Boards and executives are making risk decisions based on assurance frameworks that have not been updated to reflect the collapse of their foundational assumptions. CISOs are defending perimeters with tools calibrated for threat models that no longer accurately describe the actual attack surface. Regulators are enforcing compliance with standards that were written before the authentication assumption broke. The crisis is not that we lack better signals. The crisis is that the entire intellectual architecture within which security decisions are made was built for a world in which certain kinds of signals could be trusted, and that world is the one this essay has been describing the end of.


The Post-Valley Condition

What does the world look like on the other side of the uncanny valley?

Mori's graph suggested that the recovery came when the simulation became indistinguishable from the real. The practical implication was a kind of epistemic normalcy: you could not tell the difference, therefore you would not feel the alarm. But that picture assumes you do not know you are past the valley. It assumes that the improvement in simulation quality is matched by a corresponding reduction in your awareness that simulation is occurring.

That is not the situation we are in. We are approaching the crossing with full awareness that we are approaching it. The sophistication of synthetic voice, face, and text is a public fact, discussed in security conferences, documented in incident reports. We know that voices can be cloned, faces synthesized, and writing style matched. We know that the signals that used to tell us we were communicating with a genuine person may no longer be reliable.

The post-valley condition, for us, is therefore not Mori's theoretical comfort. It is something more destabilizing: the knowledge that the signals exist, combined with the loss of confidence that they mean what they used to mean. I am aware that this formulation risks sounding alarmist, and I want to be precise about why I think it is not. Two responses are possible, and both are problematic. Undifferentiated suspicion, treating every communication as potentially synthetic, is operationally unsustainable; organizations cannot function if every communication requires the verification level appropriate to a high-risk financial transaction. Exhausted credulity is the more likely outcome, and the more dangerous one. The population slowly absorbs the knowledge that signals can be faked, and slowly accommodates by deciding, implicitly, to mostly act as if they can't. Not from naivety. From the pragmatic judgment that life cannot be conducted at the alert level the threat technically requires. The alarm becomes background noise. Suppression becomes default.

A 2025 study by iProov found that only 0.1 percent of participants correctly identified all fake and real media shown to them. Seventy percent reported that they were not confident they could distinguish a real voice from a cloned one. These are figures describing a population that has been overwhelmed by the threat, not one that has adapted to it.

This is the new attack surface. Not the alarm that can be manipulated into suppression, as Essay Two described. The alarm that has been ground down into irrelevance by the sheer volume of the threat. The post-valley condition does not require defeating the alarm. It requires exhausting it.


The Adversarial Parity Problem

There is a dynamic in the synthetic media arms race that deserves direct attention, because it has no clean resolution and the security industry has been reluctant to say so plainly.

Detection and generation are structurally linked. The most effective approaches to detecting synthetic media use machine learning models trained to identify artifacts of synthetic generation. But the generation models improve in response to detection signals, in many cases using detection feedback directly as a training signal. The result is a co-evolutionary dynamic in which each improvement in detection produces a corresponding improvement in generation.
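
The shape of that loop is exactly the one adversarial training formalizes. What follows is a minimal sketch in Python with PyTorch; the toy feature vectors, architectures, and dimensions are illustrative assumptions, not a description of any deployed detector or generator.

```python
import torch
import torch.nn as nn

# Toy stand-ins: both models are small MLPs over a 64-dim "media feature"
# vector. The point is the training structure, not the architecture.
detector = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64))
loss_fn = nn.BCEWithLogitsLoss()
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

real = torch.randn(128, 64)  # stand-in for features of genuine media

for step in range(1000):
    # Detector step: learn to separate real from synthetic samples.
    fake = generator(torch.randn(128, 16)).detach()
    d_loss = loss_fn(detector(real), torch.ones(128, 1)) \
           + loss_fn(detector(fake), torch.zeros(128, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: the detector's own output is the training signal,
    # pushing the generator toward samples the detector labels as real.
    fake = generator(torch.randn(128, 16))
    g_loss = loss_fn(detector(fake), torch.ones(128, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The structural point is in the second step: the generator's only objective is the detector's output, which is why every improvement in detection is, simultaneously, a gradient for generation.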

The liveness detection domain makes this concrete. When presentation attack detection improved, attackers moved to injection attacks that bypass the camera entirely. When vendors developed injection detection, attackers moved to compromising device integrity through emulators and hardware tampering. In December 2025, iProov's Red Team published, through MITRE's ATLAS framework, a demonstration that a commercially available face-swapping tool could evade liveness detection on financial and banking mobile applications. The vulnerability was rated critical. The technique required no specialized AI expertise. And injection attacks surged nine-fold in 2024, fueled by a twenty-eight-fold spike in virtual camera exploits.

The pattern that liveness detection reveals defines the post-valley condition: every detection method that relies on the costliness of forgery eventually fails as that cost decreases. The defense was never the detection method itself; it was the economic barrier that made defeating it impractical. When the barrier collapsed, the detection method became a ritual. The signal retained its form while losing its substance. The email domain still displays. The SMS code still arrives. The liveness check still runs. The signal/source split that Essay One identified has extended to the authentication infrastructure itself.

Detection and generation share fundamental access to the same underlying techniques, and generation has a structural advantage: it only needs to produce one example convincing enough to defeat a specific detection system, while detection needs to identify all synthetic examples across all methods.

The malware arms race provides the historical analogy. Malware and antivirus have been co-evolving for four decades. Antivirus technology is far more sophisticated than it was in 1990. Malware is also far more sophisticated, and the fundamental dynamic has not been resolved in favor of defenders. The endpoint detection and response industry exists precisely because the co-evolution produces a sustained market for defense tools that are never definitively sufficient. The synthetic media arms race will produce the same dynamic. Detection at the signal level will be a useful supplementary tool, producing actionable signals in a subset of cases. It will not be a foundation.


What Authentication Looks Like After the Signal Dies

The honest answer is that the field has not yet fully confronted this question. The working assumption in most authentication frameworks is still that signal degradation is a problem to be solved at the signal level: better detection models, better liveness checks, better artifact analysis. These are real investments made in good faith. They are also, structurally, fighting the last war.

The more durable approaches are not signal-based. They are context-based, process-based, and cost-based.

Context-based authentication shifts the question from "is this signal genuine?" to "is this request coherent with the established context of this relationship?" A request for a large financial transfer is authenticated not by the voice on the phone but by whether it fits the established pattern of how this counterparty communicates and transacts. Anomaly detection over the request and its contextual fit is more robust to signal synthesis than verification of the signal itself.
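
A minimal sketch of what that shift looks like in code, assuming a hypothetical counterparty profile accumulated from transaction history; every field name and threshold here is an illustrative assumption, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class CounterpartyProfile:
    # Baseline derived from the relationship's history; fields are illustrative.
    max_historical_amount: float
    known_beneficiaries: frozenset[str]
    usual_channels: frozenset[str]

def contextual_flags(profile: CounterpartyProfile,
                     amount: float, beneficiary: str, channel: str) -> list[str]:
    """Score the request against the established relationship,
    ignoring how genuine the requesting voice or face appears."""
    flags = []
    if amount > profile.max_historical_amount:
        flags.append("amount exceeds historical range")
    if beneficiary not in profile.known_beneficiaries:
        flags.append("first transfer to this beneficiary")
    if channel not in profile.usual_channels:
        flags.append("request arrived over an unusual channel")
    return flags

# Any flag escalates to out-of-band verification rather than blocking outright.
profile = CounterpartyProfile(50_000.0, frozenset({"ACME-GMBH"}),
                              frozenset({"erp", "email"}))
print(contextual_flags(profile, 220_000.0, "NEW-VENDOR-LTD", "phone"))
# -> all three flags fire, regardless of how convincing the caller sounded
```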

Process-based authentication embeds resistance to synthetic signals in process design rather than detection technology. Out-of-band verification through pre-established channels, time delays that prevent urgency-driven compliance, dual-authorization requirements that cannot be satisfied by a single compromised communication channel: these are process designs that remain effective even when individual signals are untrustworthy.
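
As a sketch of the same idea expressed as process design, assuming a pre-registered callback channel and a mandatory hold; the names and the 24-hour figure are illustrative choices, not prescriptions.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    # A hypothetical high-risk request; field names are illustrative.
    amount: float
    requested_at: float = field(default_factory=time.time)
    callback_verified: bool = False     # out-of-band call on a pre-registered number
    approvers: set[str] = field(default_factory=set)

HOLD_SECONDS = 24 * 3600  # mandatory cooling-off delay defeats urgency pressure

def may_execute(req: TransferRequest, requester: str) -> bool:
    """The transfer executes only when the process completes, regardless of
    how authentic any single voice, face, or email in the chain appeared."""
    delay_elapsed = time.time() - req.requested_at >= HOLD_SECONDS
    dual_approval = len(req.approvers - {requester}) >= 2  # no self-approval
    return req.callback_verified and delay_elapsed and dual_approval
```

Every conjunct in that final check is a property of the process, not a property of any signal an attacker can synthesize.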

Cost-based authentication shifts the problem to the economics of the attack. If every authorization attempt requires actions with real-world costs (physical presence, multi-party coordination, time delays that increase operational risk of discovery), the cheapness of signal synthesis is offset by the costs embedded in the authorization process.
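
A toy worked example makes the economics visible. Every number below is an assumption chosen for illustration, not sourced data; the point is the structure of the comparison, not the figures.

```python
# Toy attacker economics: signal-based versus cost-based authorization.
attempts = 1_000
clone_cost = 1.0            # assumed marginal cost of one synthetic-voice attempt
payout = 100_000.0          # assumed value of one successful fraudulent transfer

# Signal-based check: one convincing clone is enough.
p_signal = 0.02
ev_signal = attempts * (p_signal * payout - clone_cost)

# Cost-based process: two independent channels must both be defeated, and a
# mandatory delay gives the defender a discovery window that halves success.
p_process = (0.02 ** 2) * 0.5
per_attempt_cost = clone_cost * 2 + 50.0   # second channel plus coordination

ev_process = attempts * (p_process * payout - per_attempt_cost)

print(f"signal-based expected value:  {ev_signal:>12,.0f}")   # ~ 1,999,000
print(f"process-based expected value: {ev_process:>12,.0f}")  # ~ -32,000
```

Under these assumed numbers the attack flips from wildly profitable to uneconomic, not because any single signal became harder to fake, but because the process multiplied the attacker's costs and compounded their failure probabilities.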

None of these are complete solutions. All of them introduce friction, and friction has costs. The calibration of security friction against operational efficiency is one of the defining problems of enterprise security governance, and it is never cleanly resolved. But the direction is clear: authentication frameworks built on the assumption of detectable genuine presence need to be rebuilt on the assumption of detectable genuine process, structures that are adversarially resistant not because the signal cannot be faked but because the process cannot be completed without costs that forgery cannot absorb.


The Civilizational Dimension

The collapse of signal authenticity extends well beyond security into something epistemic, and the epistemic dimension is larger than any organizational response can address.

Trust, at every scale, runs on signals: the signal that a voice is genuine, that a document is authentic, that an institution is doing what it says it is doing. These signals are not the trust itself; they are evidence that trust is warranted, the observable outputs of processes that, when functioning correctly, are causally connected to the trustworthiness they indicate. When signals can be produced without that causal connection, when the voice can be synthesized without the person, the document fabricated without the process, the evidentiary value of signals collapses. Not gradually. Structurally.

We are beginning to live in that collapse. And the psychological response to it, the exhausted credulity, the suspended judgment, the gradual accommodation to a world in which signals cannot be taken at face value, is not neutral. It reshapes the conditions under which collective action, institutional authority, and social cooperation are possible. A population that has learned, at a deep level, that the signals of authenticity are not reliable will respond to that knowledge in ways that extend far beyond cybersecurity. The institutions that have relied on the apparent authenticity of their signals to maintain legitimacy (governments, corporations, regulatory bodies, the media) will find that legitimacy increasingly difficult to sustain.

The North Korean synthetic worker campaign illustrates this at a precise scale. When a company discovers that a colleague they have worked alongside for a year was a state-sponsored synthetic identity, the damage extends beyond the data exfiltrated or the salary paid. It reaches the trust infrastructure itself: every subsequent hire, every video call, every new colleague's face is now shadowed by the knowledge that the signals of presence were once completely, convincingly false.

This is the deeper cost of the authentication crisis: not the individual fraud that succeeds, but the aggregate erosion of the signal infrastructure on which all collective trust depends. The Arup deepfake cost one company twenty-five million dollars. The erosion of the epistemic foundation of organizational communication costs something much harder to quantify and much harder to restore.


Essays One and Two described a world in which the alarm works but is suppressed. This essay describes a world in which the conditions for the alarm to fire are eroding.

Essay Four examines a dimension of this problem that is neither technical nor psychological but institutional: organizations and governance bodies that produce accountability signals systematically disconnected from the accountability they purport to represent. The signal/source split applied not to the voice on the phone or the face on the screen, but to the entire apparatus of institutional trust.

The deepfake CFO exploits a synthesized signal. The narcissistic institution exploits a structural one. Both rely on the same underlying condition: the possibility of producing outputs that signal trustworthiness without the processes that would causally generate it. The alarm has the same structure in both cases. What suppresses it is different. And that difference is what Essay Four is about.


Next: Essay Four — The Narcissistic Institution. On governance theater, compliance as performance, and the organizations that have learned to produce the signals of accountability without its substance.


Sources

Voice Cloning

Stupp, C. (2019). "Fraudsters Used AI to Mimic CEO's Voice in Unusual Cybercrime Case." The Wall Street Journal, August 30, 2019. (UK energy company, €220,000 voice clone fraud. Insurance firm Euler Hermes, subsidiary of Allianz SE, provided case details.)

CrowdStrike. (2025). 2025 Global Threat Report. CrowdStrike Holdings, Inc. (442% increase in voice cloning usage between H1 and H2 2024; deepfake-enabled fraud losses; North Korean synthetic worker campaign data.)

Deepfake Fraud

Arup deepfake fraud (2024). Finance worker authorized $25 million across fifteen transactions during a video conference in which all participants were real-time deepfakes. Reported by Hong Kong police and multiple sources including CNN, February 2024.

Singapore deepfake fraud (2025). Finance director at multinational firm targeted via deepfake video call with multiple synthetic executives. Reported by The Straits Times and cybersecurity press, March 2025.

Synthetic Identity Fraud

Experian. (2024). 2024 Identity and Fraud Report. Experian Information Solutions, Inc. (Sixty percent increase in false identity cases year over year.)

Federal Trade Commission. Synthetic identity fraud estimates: eighty to eighty-five percent of all identity fraud cases in the United States. Referenced in multiple FTC publications and testimony.

AI-Automated Spear Phishing

Heiding, F., Schneier, B., Vishwanath, A., & Laszka, A. (2025). "Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models." Expert Systems with Applications. (AI spear-phishing achieved fifty-four percent click-through rate, identical to human experts, at up to fifty times cost reduction.)

North Korean Synthetic Worker Campaign

GitHub Security Lab. (2025). Analysis of North Korean development team creating 135+ synthetic identities for infiltration operations.

U.S. Department of Justice. (2025). Enforcement actions, June 2025. North Korean operatives employed at 100+ US companies; single facilitator network generating $17 million across 309 jobs.

CrowdStrike. (2025). 2025 Threat Hunting Report. (Famous Chollima campaign; 220% growth in infiltrated companies; 320+ organizations penetrated.)

Palo Alto Networks, Unit 42. (2025). Demonstration that synthetic identity convincing enough for job interviews could be created in seventy minutes using consumer hardware.

Pindrop. (2025). Screening data: one in four DPRK-linked job applicants used deepfake technology during live interviews.

Liveness Detection and Authentication

iProov. (2025). 2025 Biometric Threat Intelligence Report. (0.1% of participants correctly identified all fake and real media; Red Team demonstration via MITRE ATLAS framework of liveness evasion on financial applications.)

Sumsub. (2025). 2025 Identity Fraud Report. (AI fraud agents combining generative AI, automation, and reinforcement learning; nine-fold surge in injection attacks; twenty-eight-fold spike in virtual camera exploits.)

Authentication Frameworks

National Institute of Standards and Technology (NIST). Digital Identity Guidelines (SP 800-63 series), revision in progress. The authoritative US government framework for identity assurance, under revision to address generative AI threats to biometric and signal-based authentication.

Cross-Series References

Brondani, M. Essay One: "The Alarm" and Essay Two: "Cold Empathy at Scale." The Valley of False Signals. Published at marcobrondani.com.

Mori, M. (1970). Bukimi no tani [The uncanny valley]. Energy, 7(4), 33–35.