Cold Empathy at Scale

Essay Two of The Valley of False Signals series


Social engineering has been solving the wrong problem for thirty years. The vulnerability is not detection. It is the culture that suppresses it.


In 2024, a senior finance employee at Orion S.A., a global specialty chemicals company headquartered in Luxembourg, received a series of emails requesting wire transfers. The emails appeared to come from company executives, referenced legitimate business contexts, and followed the communication patterns the employee was accustomed to. Over multiple transactions, approximately sixty million dollars was transferred to accounts controlled by the attackers.

No deepfakes were involved. No voice cloning. No synthetic video. The attack used nothing more than email, the right names, the right context, the right organizational knowledge, and a sophisticated understanding of how a specific person in a specific role at a specific company would respond to a request from apparent authority under time pressure.

The coverage focused on the amount lost and the procedural failures. That framing treats the incident as a problem of insufficient controls: if the verification procedures had been followed, the attack would have failed. But the verification procedures existed. They were known. They were bypassed, not because the employee was unaware of them, but because the social engineering was sophisticated enough to make following them feel unnecessary. The signals of legitimacy were sufficient to engage the suppression mechanism.

The finance worker's alarm did not fire. Or it fired, and did not survive the context.


The Attack That Was Always Psychological

Social engineering, the manipulation of people rather than systems, is the dominant attack vector in enterprise security. Not because technical vulnerabilities don't exist, but because attacking people is, for a sophisticated adversary, almost always the path of least resistance. A zero-day exploit requires finding an unpatched vulnerability, developing specialized code, and deploying it without triggering detection. A well-constructed pretexting call requires only understanding the target's organizational context, constructing a plausible narrative, and exploiting the psychological mechanisms that govern trust.

The Verizon 2025 Data Breach Investigations Report found that the human element (errors, social engineering, and credential misuse) was a factor in approximately sixty percent of all confirmed breaches, a figure that has remained stubbornly consistent year over year despite billions spent on awareness programs. That figure should stop every CISO cold. We have spent three decades building technical defenses (firewalls, endpoint detection, SIEM platforms, zero-trust architectures), and the dominant attack vector is still the human. As the technical perimeter has hardened, the human perimeter has been exposed as the softer target. The attacker simply went around it.

But even this framing, the human as the weakest link, misses something. It treats the human factor as a problem of insufficient training, insufficient alertness, insufficient procedural compliance. If people would just follow the protocols, the attack would fail.

This is approximately what security awareness training teaches. And security awareness training has failed, by every meaningful metric, to reduce the incidence of successful social engineering attacks. The reason is that it addresses the wrong problem.


What Security Awareness Training Gets Wrong

The standard curriculum teaches people to recognize the signals of deception: do not click links in unsolicited emails, verify requests for wire transfers through a separate channel, be suspicious of urgency, check the sender's domain.

These are reasonable heuristics. They address the detection layer, the capacity to recognize that something is off.

But the actual vulnerability is not detection. As Essay One established, the alarm is generally working. People often have a sense, even during a successful attack, that something is not quite right. Post-incident interviews with victims regularly surface versions of this: "I had a feeling but I didn't say anything." "Something seemed off but it was hard to say what." "I didn't want to make trouble."

The alarm fired. Then it was suppressed.

Security awareness training teaches people to recognize attack signals. It does not address, and in many respects actively undermines, the capacity to act on a feeling that cannot be fully articulated. It teaches people to demand articulable evidence before they trust their unease. In doing so, it reinforces exactly the mechanism that sophisticated social engineers exploit.

People detect the attack, correctly, and then override the detection. The failure is structural, not educational. The suppression of unverifiable alarm is a feature of professional culture, organizational hierarchy, and the social norms that govern how uncertainty is permitted to be expressed in institutional settings. I keep coming back to this point because it reframes the entire defense problem: these norms did not emerge by accident, and they cannot be addressed by a forty-minute annual training module.


The Anatomy of Cold Empathy in Operation

In Essay One, I introduced cold empathy: the cognitive modeling that narcissists and psychopaths deploy without genuine affective resonance, documented from Cleckley's "mask of sanity" through Hare's psychopathy research, with Sam Vaknin providing the formulation that connects it to the uncanny valley. The cognitive element of empathy is present; its emotional correlate is not.

The skilled social engineer operates in this mode. Not because they are necessarily narcissists or psychopaths (though the profession does select for certain personality traits) but because the operational requirements are structurally identical to what cold empathy produces. The social engineer does not need to feel their target's experience. They need to model it, accurately, for the duration of the attack: what the target wants to believe, what narrative will be most readily accepted, which authority figures carry the most weight, what urgency framing will suppress verification instincts.

Watch the anatomy of a successful vishing call and the cold empathy structure becomes visible.

It begins with research. The attacker knows the target's name, role, approximate tenure, and details about recent organizational events that provide context for the pretext. LinkedIn, company websites, press releases, and earlier-stage phishing provide most of this. Then the call opens with specific and accurate claims: the target's name, their manager's name, the details of their role, recent events referenced in terms that signal insider knowledge. The target's brain runs its coherence check. Does this person know things that only insiders know? Yes. The alarm does not fire.

Having established apparent legitimacy, the attacker introduces urgency, compressing the time available for reflection and verification. The verification behaviors that security training teaches require time. Urgency, applied correctly, makes those behaviors feel like a threat to the urgency itself. And here is where the most sophisticated social engineers distinguish themselves: they exploit professional identity. The attack narrative places the target in the role of the competent professional who takes the right action quickly. Refusing to cooperate is implicitly framed as the behavior of an obstructionist. The social engineer is not just asking for compliance; they are offering the target a flattering self-concept in exchange for it.

If the target expresses hesitation, and good targets often do, the attacker has a response ready. The hesitation is anticipated, treated as a misunderstanding rather than a threat. "I completely understand the concern; that's exactly what we'd expect from someone careful. Let me just explain a bit more about why this is urgent." The alarm was suppressed, politely, by framing alertness as an obstacle to legitimate authority.

This is cold empathy in operation. The attacker does not need to feel the target's experience. They need to model it well enough to anticipate its movements and manage them. They know, before the target does, that the alarm will fire at approximately this point, and they have a scripted response ready.


The Attacker as Organizational Expert

There is a feature of sophisticated social engineering that awareness training almost never addresses, because it implicates something organizations do not want to examine about themselves.

The most dangerous social engineers attack specific people in specific organizational contexts, and their attack is calibrated to the culture, hierarchy, and behavioral norms of the target organization. A successful BEC attack against a manufacturing company exploits different vulnerabilities than one against a financial services firm: in manufacturing, the culture of operational urgency, where delays have direct production consequences; in financial services, the culture of regulatory compliance, where requests from apparently authoritative sources carry the implicit weight of a regulatory obligation.

The attacker's model of the organization is, in some ways, more accurate than the organization's model of itself. The organization believes it has security procedures. The attacker knows which procedures exist on paper and which ones are actually followed under pressure. The organization believes its employees are well-trained. The attacker knows, from the patterns of past attacks, which emotional levers produce compliance in what percentage of cases and under what organizational circumstances.

I started to write here that this represents a failure of organizational self-knowledge, but that framing is too gentle. It is more precise to say that the organization is structurally prevented from knowing itself accurately, because the same professional culture that makes cooperation possible also makes the honest assessment of one's own vulnerabilities socially impermissible. The attacker is looking specifically for the gaps. The organization is looking to confirm that the gaps don't exist. This asymmetry is not a correctable oversight. It is a structural feature of how hierarchical organizations process uncomfortable information about themselves.


The Professionalization of Deception

Social engineering has undergone a professional transformation in the past decade that most security discourse has not fully absorbed.

Business email compromise generated reported losses of $2.77 billion in the United States alone in 2024, according to the FBI's Internet Crime Complaint Center. That figure reflects only reported losses; BEC is famously underreported. Those losses represent an industry. On the criminal underground, toolkits for BEC attacks (pre-researched target lists, email templates, playbooks for different organizational contexts, even customer support for operators who encounter unusual resistance) are available for subscription fees measured in hundreds of dollars monthly.

The industrialization of social engineering means the attacker does not need to be exceptional. They need to be systematic. They run enough attempts against enough targets that the statistical properties of human psychology, the percentage who will comply with an urgent request from apparent authority, the percentage whose alarm will fire but whose professional culture suppresses acting on it, generate a reliable return. This is cold empathy at scale in its most literal sense: not one skilled manipulator modeling one target, but a systematic operation applying a statistical model of human vulnerability across thousands of targets. The cognitive modeling is aggregated. The outputs are actuarial.
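
The arithmetic behind that actuarial claim is worth making concrete. Here is a back-of-the-envelope sketch in Python; every number below is a hypothetical illustration chosen to show the shape of the return, not a measured rate:

```python
# Back-of-the-envelope economics of an industrialized BEC campaign.
# Every number is a hypothetical illustration, not a measured rate.

targets = 10_000           # pretext emails sent in one campaign
compliance_rate = 0.001    # fraction of targets who ultimately wire funds
avg_transfer = 50_000      # average fraudulent transfer, USD
toolkit_cost = 500 * 12    # subscription toolkit, USD per year
operator_cost = 60_000     # staffing and infrastructure, USD per year

expected_revenue = targets * compliance_rate * avg_transfer
expected_profit = expected_revenue - toolkit_cost - operator_cost

print(f"expected revenue: ${expected_revenue:,.0f}")  # $500,000
print(f"expected profit:  ${expected_profit:,.0f}")   # $434,000
```

At these assumptions, ten victims out of ten thousand attempts clear the operation's costs several times over. The operator's skill matters less than the campaign's volume.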

Phishing-as-a-service platforms have extended this industrialization further, providing the complete infrastructure: email delivery, landing page templates, a credential-harvesting backend, and analytics dashboards showing which lures produced the most clicks in which industries. The operator provides only the targeting. The psychological model is baked into the platform. And agentic AI is beginning to extend it further still. Documented attacks in 2025 involved AI agents operating autonomously over extended periods, building synthetic professional profiles and cultivating relationships through legitimate channels for weeks before making a request. Essay Three will examine the most developed instance of this pattern: the North Korean state-sponsored campaign that used synthetic identities to infiltrate over three hundred companies.


Why Training Cannot Fix a Structural Problem

The security industry's response to social engineering has been, for thirty years, predominantly educational. Train the user. Teach them the signals. Run simulated phishing campaigns. Measure click rates. The model has a seductive internal logic: if people are being deceived, they need to recognize deception; if they need to recognize deception, they need training.

The problem is empirical: it hasn't worked. Phishing click rates have remained stubbornly consistent. BEC fraud losses have grown year over year. The people falling for these attacks are not naive or untrained. Many of them have completed security awareness training in the previous twelve months. Some of them are themselves security professionals.

I have watched this cycle from the inside for long enough to feel the weight of it. The response from the training industry has been to add more training, make it more frequent, gamify the compliance, personalize the curriculum. More of the same. Because the model says the problem is insufficient awareness, the solution must be more awareness.

But if the problem is the suppression of awareness that already exists, then more training may be actively counterproductive. Consider what happens when a training module teaches someone to "verify unexpected requests through a separate channel." Good advice. But it teaches something implicit: that the appropriate response to an uncomfortable feeling is not to trust the feeling, but to run a verification procedure. If the procedure checks out, the uncomfortable feeling is supposed to be dismissed. A sophisticated attacker can defeat that procedure. They can spoof callback numbers. They can compromise the manager's email. They can set up a look-alike domain that passes a casual check. When verification appears to succeed but the attack is real, the training has actively suppressed the alarm by telling the target: you verified, so the feeling was wrong.
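
To see how thin the "casual check" of a sender's domain actually is, consider a minimal sketch; the domains below are invented for illustration, not drawn from any real incident:

```python
# Three ways an attacker's address can survive a casual glance.
# All domains here are invented for illustration.

legitimate = "finance@orion-sa.com"

# 1. Unicode homograph: Cyrillic 'о' (U+043E) in place of Latin 'o'
#    (U+006F). Many fonts render the two identically.
homograph = "finance@\u043erion-sa.com"

# 2. Plausible variant: an extra word a hurried reader accepts as normal.
variant = "finance@orion-sa-group.com"

# 3. Subdomain trick: the registered domain is the attacker's
#    invoice-portal.net, not orion-sa.com.
subdomain = "finance@orion-sa.com.invoice-portal.net"

print(legitimate == homograph)  # False, though the two render alike
for address in (homograph, variant, subdomain):
    print(address)
```

Some mail clients now render homograph domains as punycode, but the variant and subdomain forms pass any check that relies on the reader's eye rather than on an allowlist.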

I should qualify this, because the argument I'm making risks sounding like an argument against all training, which it is not. Training has value at the margin. It raises the baseline. It catches the unsophisticated attacks, the spray-and-pray phishing that relies on volume rather than precision. What it cannot do is address the sophisticated attack that has already mapped the verification procedures and built its pretext to survive them. And it is the sophisticated attack, the one that models the target's psychology and manages the alarm, that produces the catastrophic losses. The awareness training model treats the alarm as an insufficient instrument that needs to be replaced by procedure. The correct model treats the alarm as a valuable instrument that needs to be protected from suppression.


The Suppression Mechanism in Professional Culture

Where does the suppression pressure come from? Not primarily from the attacker, though skilled attackers manage it deliberately. The primary source is organizational culture, and it is generated by three forces that operate in combination.

Professional organizations are hierarchical, and hierarchy generates its own compliance pressure. A request from a superior carries authority independent of its content. Questioning a directive from the apparent CFO, even when the alarm is firing, requires overcoming a deeply ingrained professional reflex to defer upward. The social engineer exploits this by impersonating authority, or by referencing authority in ways that import this compliance pressure into the interaction. The hierarchy norm is not a pathology; it is functional for the ninety-nine percent of interactions in which the authority is legitimate. It becomes a vulnerability only when legitimate and illegitimate authority signals become indistinguishable.

Professional environments also select for efficiency: people who resolve requests quickly, who don't create unnecessary friction, who are responsive and decisive. The person who pauses every ambiguous request for extended verification is regarded as difficult, overcautious, a bottleneck. The social engineer's urgency framing exploits this by making the cost of verification feel like the cost of inefficiency. The target who stops to verify is, in the narrative the attacker has constructed, failing at their professional role.

And expressing distrust of someone presenting convincingly as a colleague violates basic professional courtesy. Saying "I'm not sure I believe you are who you say you are" to someone who has supplied the correct contextual details is, in most professional contexts, deeply awkward. It implies suspicion, which implies accusation. The social engineer's performance of normalcy makes the expression of the alarm feel like rudeness.

These three norms (hierarchy, efficiency, and social grace) combine to create a professional culture that is structurally hostile to the expression of unverifiable alarm. They do not represent individual failures. They represent the predictable operation of organizational culture in an adversarial environment it was not designed for.


The Insider Threat as Confirmation

The social engineering problem has a darker inner layer: the insider threat. The insider has legitimate access, legitimate authority signals, and detailed knowledge of exactly where the gaps between stated and actual security posture are located. They don't need to research the organization; they live in it.

The 2024 Insider Threat Report found that eighty-three percent of organizations reported at least one insider attack, with the number experiencing eleven to twenty attacks in a year increasing fivefold from 2023. The Tesla breach of 2023, in which two former employees leaked the personal data of over seventy-five thousand individuals to a foreign media outlet, was not a technical exploitation. It was a decision by insiders who had legitimate access and used it for purposes the organization had not anticipated. Colleagues may have noticed something. The insider threat literature suggests they usually do. But the organizational culture that would need to convert that noticing into action is the same culture that suppresses the alarm in external social engineering: you do not speculate about a colleague's motives. You do not report a feeling you cannot justify.

The insider threat is the external social engineering problem inverted: instead of an outsider exploiting organizational suppression norms to prevent detection, an insider benefits from those same norms, which prevent colleagues from acting on accurate alarms.

And so insiders are identified, when they are identified, by exactly the same mechanism that fails in external social engineering: a feeling, imprecise and hard to articulate, that something is off about this person. That their interest in certain systems is slightly too focused. That their questions about access are slightly too specific. That something in the texture of their professional behavior is not quite coherent with everything else. The alarm fires. The suppression mechanism engages. The insider continues.


What Would Actually Work

If the problem is the suppression mechanism rather than detection capacity, the solution space looks different. And if the suppression mechanism operates through three specific organizational norms (hierarchy, efficiency, and social grace) then effective defense must address those norms directly, not the detection layer they suppress.

Addressing the hierarchy norm requires more than written policy stating that employees may verify requests from superiors. It requires organizational cultures in which questioning a request from apparent authority is normal, expected, and cost-free, which means addressing the informal signals through which professional culture actually operates. Who gets promoted? Who gets praised? Whose caution is celebrated, and whose is criticized as obstruction? The policy is the documentation. The culture is the posture. And the gap between them is where the social engineer operates. Anyone who has run a security program in a hierarchical organization knows this gap intimately, and knows how difficult it is to close from below.

Addressing the efficiency norm requires inverting the organizational response to urgency. Any request that arrives with urgency should automatically trigger more scrutiny, not less. The finance worker who refuses to execute a large transfer because something about the request felt off, even though they couldn't say what, needs to live in an organization where that decision is celebrated rather than criticized.
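
What that inversion could look like in an authorization workflow, sketched in Python; the tiers, thresholds, and field names are hypothetical, a shape rather than a reference design:

```python
# Minimal sketch of an authorization rule in which urgency raises,
# rather than lowers, the verification tier. All thresholds and tier
# names are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class TransferRequest:
    amount_usd: float
    marked_urgent: bool       # the request asks for immediate action
    claims_executive: bool    # the request invokes executive authority
    channel_verified: bool    # confirmed through an independent channel

def verification_tier(req: TransferRequest) -> str:
    tier = 1
    if req.amount_usd > 100_000:
        tier += 1
    # The inversion: urgency and authority claims are treated as risk
    # signals that raise the tier, never as reasons to expedite.
    if req.marked_urgent:
        tier += 1
    if req.claims_executive:
        tier += 1
    if not req.channel_verified:
        tier += 1
    return {
        1: "single approver",
        2: "second approver",
        3: "out-of-band callback to a known number",
    }.get(tier, "hold for in-person confirmation")

request = TransferRequest(amount_usd=250_000, marked_urgent=True,
                          claims_executive=True, channel_verified=False)
print(verification_tier(request))  # hold for in-person confirmation
```

The design point is the sign of the urgency term: in most real approval flows urgency lowers friction, and this rule makes it raise friction instead.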

Addressing the social grace norm is the deepest challenge. It requires explicitly validating the alarm, teaching people that their sense of wrongness, even when it cannot be articulated, is worth pausing for. Not that it is always correct, but that it is always worth acknowledging as data rather than dismissing as irrationality.

None of these are training problems. They are organizational design problems. What they require is not the transmission of information but the restructuring of permission: the creation of organizational contexts in which the alarm's outputs are treated as intelligence rather than noise. Human risk management, the emerging field that frames security behavior as a function of organizational culture rather than individual training, is moving in this direction. But the industry's response to social engineering is still, predominantly, more training.


The Suppression Window Is Not a Bug

There is a harder thing to say, and it needs to be said clearly.

The suppression window that social engineers exploit is a necessary feature of cooperative social life, not a design flaw in human psychology. The norms that tell us to give people the benefit of the doubt, to treat unexpected requests with charity rather than suspicion, to avoid accusing colleagues of deception without strong evidence: these norms exist because cooperative life requires them. A world in which every organizational interaction was treated as potentially adversarial would be paralyzed. I have worked in organizations that tried to operate at that alert level, and the result was not security. It was dysfunction.

The social engineer's genius (and it is a kind of genius, however malignant) is to operate inside the norms of cooperative life while not participating in its substance. The norms of benefit of the doubt were designed for environments where most actors are operating in good faith. They fail in the presence of actors who are operating in bad faith while producing all the signals of good faith.

This is, again, the structure of cold empathy: the production of cooperative signals without the cooperative substance. The signal and the source have split.


Everything described in this essay exists at a scale that makes individual defense insufficient as a strategy. Individual training has diminishing returns past a certain point, and we passed that point years ago. The remaining returns are structural: organizational permission structures, and the design of communication and authorization processes that are adversarially resistant by default rather than dependent on individual vigilance for their security.

The vulnerability is the organizational culture that makes acting on alarm socially costly and procedurally difficult. Not the individual. And until that culture is addressed, not through training but through design, the suppression mechanism will continue to do the attacker's work for them.

This is perhaps the most uncomfortable implication of the entire analysis: the attacker's epistemic advantage is cultural rather than technical. The attacker understands what the organization cannot afford to acknowledge about itself, because the organization's professional culture has made that acknowledgment socially impermissible. The cold empathy of the social engineer is directed precisely at the gap between what the organization claims about its own security behavior and what that behavior actually looks like under pressure. The organization cannot see the gap because it has been socialized not to look. The attacker can see nothing else.

The structural problem of social engineering has been fundamentally altered by a technological development that the next essay examines: the synthetic reproduction of the signals that the alarm monitors. When the alarm can be defeated not just by skilled psychological manipulation but by sufficiently perfect simulation of trusted individuals, their voice, their face, their writing, the defense problem changes again. The alarm still works in the world this essay has described. In Essay Three, the conditions for it to fire begin to erode.


Next: Essay Three — The Death of the Signal. On deepfakes, synthetic identity, and what happens when the uncanny valley has been crossed.


Sources

Case Study

Orion S.A. (2024). Form 8-K filing with the U.S. Securities and Exchange Commission, August 12, 2024. Disclosure of approximately $60 million in losses from fraudulently induced wire transfers targeting a non-executive employee.

Breach and Threat Statistics

Verizon. (2025). 2025 Data Breach Investigations Report. Verizon Business.

Federal Bureau of Investigation, Internet Crime Complaint Center. (2024). 2024 Internet Crime Report. FBI IC3. (BEC losses of $2.77 billion in 2024.)

Cybersecurity Insiders / Gurucul. (2024). 2024 Insider Threat Report.

Cold Empathy and Psychopathy

Cleckley, H. (1941). The Mask of Sanity: An Attempt to Clarify Some Issues About the So-Called Psychopathic Personality. C.V. Mosby.

Hare, R.D. (1993). Without Conscience: The Disturbing World of the Psychopaths Among Us. Pocket Books/Simon & Schuster.

Vaknin, S. (2003). Malignant Self-Love: Narcissism Revisited. Narcissus Publishing. See also Vaknin's published lectures and writings on cold empathy and the uncanny valley.

Insider Threat Case Study

Tesla data breach (2023). Two former Tesla employees leaked personal data of over 75,000 individuals, including names, addresses, Social Security numbers, and employment histories, to the German newspaper Handelsblatt. Reported by multiple sources including Reuters, August 2023.

Human Risk Management

The essay references the emerging field of human risk management as an alternative framework to security awareness training. Key contributors to this discourse include the work of organizations such as the SANS Institute, Gartner's human risk management framework, and practitioner literature on security culture design.

Cross-Series References

Brondani, M. Essay One: "The Alarm." The Valley of False Signals. Published at marcobrondani.com.