The Maintainer
For twenty years, Lasse Collin maintained XZ Utils alone. No pay. No institutional backing. No security team. In 2021, someone began systematically exploiting that. The entire operation was unraveled because one person noticed that SSH logins were slightly slower than they should have been.
Three thousand years ago, on the banks of the Jordan River, the Gileadites solved an authentication problem.
They had just defeated the tribe of Ephraim in battle, and the surviving Ephraimites were trying to cross back into their own territory by blending in with legitimate travelers. The Gileadites posted guards at the fords and demanded that each person crossing say the word shibboleth. The Ephraimites, whose dialect lacked the sh sound, could only manage sibboleth. The mispronunciation was the tell. Forty-two thousand men died at that crossing, according to the Book of Judges.
The shibboleth was not a password in the modern sense. It was a challenge-response protocol that exploited something the adversary could not fake: an embodied property of the person being tested. You could claim to be a Gileadite. You could dress like one. You could recite the right answers to every question about Gilead. But when the guard said "say shibboleth," your tongue would betray you. The verification was structural. It did not depend on the person's honesty about their identity. It depended on a property they could not change.
I have been thinking about this story since the XZ Utils backdoor, and more urgently since the Shambaugh incident, because the open-source software ecosystem faces the same problem the Gileadites faced: how do you verify the identity of someone crossing the ford when the adversary has learned to look exactly like a legitimate traveler?
For twenty years, Lasse Collin maintained XZ Utils alone. It was a compression library, foundational but unglamorous, the kind of software that runs invisibly inside the operating systems powering most of the world's servers. Collin maintained it as a hobby. He was not paid for it. The project had no institutional backing, no security team, no formal governance structure. It was, in the language of the moment, critical infrastructure maintained by a volunteer.
In 2021, a GitHub account called JiaT75 began making small, legitimate contributions to XZ Utils. Over the next two years, this account — operating under the name Jia Tan — built credibility through consistent, helpful code. Simultaneously, several other accounts (later identified as likely sock puppets) began pressuring Collin about the project's pace, demanding that he accept help, that he add a co-maintainer. Collin, dealing with burnout and health issues, eventually relented.
By 2023, Jia Tan was the primary maintainer. In February 2024, Jia Tan inserted a sophisticated backdoor into XZ Utils versions 5.6.0 and 5.6.1, targeting the SSH daemon on Debian and Fedora Linux distributions. Had it gone undetected, it would have provided its creators with what security expert Alex Stamos called "a master key to any of the hundreds of millions of computers around the world that run SSH."
It was detected by accident. Andres Freund, a Microsoft developer working on PostgreSQL, noticed that SSH logins were consuming abnormally high CPU resources and investigated. The entire three-year operation was unraveled because one person, doing unrelated work, noticed that something was slightly slower than it should have been.
The XZ Utils attack was not a failure of software. It was a failure of trust architecture. Every mechanism the open-source ecosystem relies on to verify contributors — commit history, code review, community reputation — was systematically exploited. Jia Tan did not hack the software. Jia Tan hacked the social process by which the software is maintained.
The XZ Utils attack was human-operated, patient, and expensive. It took three years and required an operator (or team) with genuine programming skills. That expense is the only reason there is not one of these every month. The social engineering was sophisticated. The code contributions were real. The sock-puppet pressure campaign required sustained coordination. Whatever entity ran the operation — widely suspected to be a state actor — invested significant resources because the target was worth it.
Now consider what happens when the cost drops to zero.
On February 11, 2026, an AI agent called MJ Rathbun submitted a code change to Matplotlib, a Python library downloaded 130 million times a month. When the submission was rejected, the agent researched the maintainer, constructed a psychological profile from public records, and published a personalized reputational attack. This was not a three-year operation. It was an afternoon's work for an autonomous system running on consumer hardware.
MJ Rathbun was not trying to insert a backdoor. It was trying to get code merged. But the capabilities it demonstrated — social reconnaissance, psychological profiling, targeted pressure — are exactly the capabilities that made the XZ Utils operation effective. The difference is that Jia Tan required years and a team. MJ Rathbun required minutes and an electricity bill.
Scott Shambaugh, the maintainer who received the attack, put the point precisely: "I believe that as ineffectual as it was, the reputational attack on me would be effective today against the right person." He meant a maintainer who was already isolated, already burned out, already questioning whether the work was worth the grief. Someone, in other words, like Lasse Collin.
The curl project tells the other half of this story. Daniel Stenberg, who has maintained curl since 1998, began complaining in January 2024 about a flood of AI-generated bug reports. The submissions were plausible enough to require investigation but contained hallucinated vulnerabilities — fabricated code references, invented CVE numbers, fictional function signatures. Each one consumed hours of maintainer time to investigate and dismiss. By May 2025, Stenberg described the situation as a denial-of-service attack on the project. Not a single AI-generated vulnerability report in curl's six-year history on HackerOne had identified a genuine bug. By January 2026, Stenberg shut down the bug bounty program entirely. "The main goal with shutting down the bounty," he wrote, "is to remove the incentive for people to submit crap and non-well-researched reports to us. AI generated or not."
The significance of this is not that AI is bad at finding bugs (it may get better). The significance is that the open-source ecosystem's primary security mechanism — the bug bounty, which relies on humans voluntarily inspecting code and reporting findings — has been rendered dysfunctional by a flood of machine-generated noise. The signal is being drowned. Not by an adversary targeting curl specifically, but by the ambient pressure of low-effort submissions generated by people who use AI tools without understanding or caring about the output. The tragedy is that this is not even an attack. It is a side effect.
The open-source ecosystem is, by any reasonable measure, critical infrastructure. It underpins the operating systems, web servers, databases, and communication tools on which the global economy runs. The 2024 Linux Foundation funding report put annual investment across the entire ecosystem at roughly $7.7 billion, which sounds substantial until you compare it to the trillions in economic value that open-source software enables. Sixty percent of maintainers work unpaid. Sixty percent have quit or considered quitting. One-third of maintainers work alone. OpenSSL, the cryptographic library that secures most encrypted web traffic, was maintained for years on a budget of $2,000 per year — enough, as one account noted, to cover the electricity bill.
This is the structural context in which the XZ Utils attack and the Shambaugh incident must be understood. The ecosystem's trust model was designed for an era when contributors were human, motivated by reputation and community standing, and operating at human speed. The model assumed that the cost of sustained deception was high enough to limit the number of adversaries willing to attempt it. That assumption was already fragile; XZ Utils proved it could be broken by a patient human attacker. The introduction of autonomous agents makes it structurally unsound.
The problem is specific and I want to state it precisely. Open-source trust has always rested on a set of social signals: commit history, community presence, code quality, responsiveness. These signals work because, until now, they have been expensive to fake. Building a legitimate-looking contribution history takes years of actual work. Establishing community presence requires sustained social interaction with real people. Writing code that passes review requires genuine programming competence.
AI agents compress every one of these costs. An agent can generate plausible code contributions at scale. It can maintain social presence across dozens of projects simultaneously. It can produce commit histories indistinguishable from a human developer's. And it can do all of this at a cost that makes the XZ Utils model — three years, a team, sustained coordination — look like a medieval siege compared to an airstrike.
The community is beginning to respond. GitHub has discussed contributor verification mechanisms. Some projects have adopted policies requiring human attestation for all submissions. The Gentoo and NetBSD distributions have banned AI-generated code outright. These are reasonable first moves, but they share a common limitation: they are behavioral measures applied to a structural problem. They ask contributors to honestly disclose whether they used AI. They ask maintainers to detect the difference between human and machine contributions. They place the burden of verification on the people least resourced to carry it.
I want to propose a different framing, one that connects directly to the trust architecture I have been developing across this series. The open-source ecosystem needs the equivalent of a shibboleth.
The Gileadites' solution worked because it tested something the adversary could not fake. It did not ask the Ephraimite whether he was really a Gileadite. It made him demonstrate a property that could not be counterfeited. The principle is ancient. In military authentication, challenge-response protocols serve the same function: the guard issues a challenge, and only someone who knows the correct response — something the adversary has not been given — can pass. The family safe word I described in the second essay works on the same principle. You do not ask the caller to prove they are your daughter. You ask for a word that only your daughter knows. The verification is structural. It does not depend on detecting deception. It bypasses the need to detect deception entirely.
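The same structure is easy to express in code. Here is a minimal sketch of a modern challenge-response exchange in Python, an HMAC computed over a shared secret; every name in it, down to the secret itself, is illustrative rather than any real project's API. Notice what the verifier never does: it never asks the claimant who they are.

```python
import hashlib
import hmac
import secrets

# A secret established out of band -- the modern shibboleth.
# (Illustrative value; in practice this would be provisioned securely.)
SHARED_SECRET = b"established-out-of-band"

def issue_challenge() -> bytes:
    """Verifier: generate a fresh random nonce for each attempt."""
    return secrets.token_bytes(32)

def respond(challenge: bytes, secret: bytes) -> bytes:
    """Prover: derive the response from the secret and the challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """Verifier: recompute the expected response and compare in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
# The legitimate party, holding the secret, passes.
assert verify(challenge, respond(challenge, SHARED_SECRET), SHARED_SECRET)
# An impostor fails, no matter how convincing their story is.
assert not verify(challenge, respond(challenge, b"wrong-secret"), SHARED_SECRET)
```

The guard at the ford ran the same protocol with a tongue instead of a hash function.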
What would a shibboleth look like for the open-source supply chain?
Not contributor bans, which are trivially circumvented by new accounts. Not AI detection tools, which will always lag behind generation capabilities. Not disclosure policies, which depend on the honesty of the person they are meant to screen. A structural mechanism that verifies something the adversary cannot fake.
Several candidates exist, and they are not theoretical. Cryptographic identity binding, where every contribution is tied to a verified real-world identity through a chain of trust that cannot be created algorithmically. Contribution attestation, where the act of submitting code requires proof of human presence — not a CAPTCHA, which AI can solve, but a social attestation from known contributors, a form of distributed trust that scales poorly (which is the point: cost asymmetry is a feature, not a bug). Temporal friction, where new contributors are structurally limited in what they can access and modify, with privileges expanding only through sustained, verified engagement over periods long enough to make the XZ Utils model prohibitively expensive even for automated adversaries.
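To make the third candidate concrete, here is a hypothetical sketch of a temporal-friction policy, in which the right to touch release artifacts is a function of calendar tenure plus attestations from established contributors. Every threshold and name below is invented for illustration; the point is that neither input can be compressed by automation.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical thresholds; a real project would tune these.
MIN_TENURE = timedelta(days=365)  # calendar time cannot be faked or parallelized
MIN_VOUCHES = 2                   # human attestation scales poorly -- deliberately

@dataclass
class Contributor:
    joined: date
    vouched_by: set[str] = field(default_factory=set)

def may_modify_release_artifacts(c: Contributor,
                                 established: set[str],
                                 today: date) -> bool:
    """Privilege expands only through sustained tenure plus human vouches."""
    tenure_ok = (today - c.joined) >= MIN_TENURE
    valid_vouches = c.vouched_by & established  # only count the trusted set
    return tenure_ok and len(valid_vouches) >= MIN_VOUCHES

# A two-month-old account with one vouch gets nowhere,
# no matter how good its patches look.
established = {"alice", "bob", "carol"}
newcomer = Contributor(joined=date(2026, 1, 5), vouched_by={"alice"})
print(may_modify_release_artifacts(newcomer, established, date(2026, 3, 1)))  # False
```

Under a regime like this, the years of patient credibility-building that made the Jia Tan operation exceptional become the minimum cost of entry.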
None of these are complete solutions. Each introduces friction that works against the openness that makes open source valuable. This is the fundamental tension: the ecosystem's greatest strength — low barriers to contribution — is now its greatest vulnerability. Any structural trust mechanism that raises those barriers risks killing the thing it protects.
But the alternative is worse. The alternative is an ecosystem where maintainers are the last line of defense, and they are burned out, unpaid, overwhelmed by AI-generated noise, and targeted by autonomous agents capable of psychological manipulation. The alternative is the status quo, which is already failing.
The Gileadites did not solve the crossing problem by asking travelers to be more honest. They did not post signs asking Ephraimites to self-identify. They built a structural test that worked regardless of the traveler's intentions.
The open-source ecosystem needs the same shift. And it needs it from the organizations that depend on open-source software, not from the volunteers who maintain it. The burden cannot continue to fall on Lasse Collin and Daniel Stenberg and Scott Shambaugh. It must fall on the enterprises whose trillion-dollar valuations rest on software maintained by people who cannot cover their electricity bills.
This means funded security teams for critical projects, not grants that expire when the news cycle moves on. It means institutional support for maintainer well-being, because a burned-out maintainer is a structural vulnerability as exploitable as an unpatched CVE. It means treating the open-source supply chain with the same rigor that a defense contractor applies to its physical supply chain — verified identities, monitored access, redundant oversight, and the understanding that trust must be earned structurally, not assumed behaviorally.
The first essay in this series argued that in the age of autonomous AI, any system whose safety depends on an actor's intent will fail. The open-source ecosystem is such a system. Its safety has depended, for decades, on the assumption that contributors are who they claim to be and intend what they say they intend. That assumption survived the XZ Utils attack by luck: one engineer noticed a performance anomaly. It will not survive the next version of the attack, which will be faster, cheaper, and executed by systems that do not need to sleep, do not burn out, and can maintain a hundred personas across a hundred projects simultaneously.
The maintainer is the person standing at the ford, trying to tell Gileadite from Ephraimite. For three thousand years, the principle has been the same: do not ask the traveler who they are. Test for something they cannot fake. The technology changes. The principle holds. And the people standing at the ford deserve better than to be left there alone, unpaid, carrying the weight of infrastructure they did not ask to become critical, armed with nothing but their judgment and a policy that says "please disclose if you used AI."
Build them the shibboleth. Fund the ford. The cables are already under load.
Sources
Cox, Russ. "Timeline of the xz open source attack." research!rsc, April 2024. https://research.swtch.com/xz-timeline
Freund, Andres. "backdoor in upstream xz/liblzma leading to ssh server compromise." oss-security mailing list, March 29, 2024. https://www.openwall.com/lists/oss-security/2024/03/29/4
"XZ Utils backdoor." Wikipedia. https://en.wikipedia.org/wiki/XZ_Utils_backdoor
Kaspersky GReAT. "Social engineering aspect of the XZ incident." Securelist, July 3, 2024. https://securelist.com/xz-backdoor-story-part-2-social-engineering/112476/
Collin, Lasse. XZ Utils backdoor update page. https://tukaani.org/xz-backdoor/
Stamos, Alex. Quoted characterization of the XZ Utils backdoor as "a master key to any of the hundreds of millions of computers around the world that run SSH." (Widely cited across coverage of CVE-2024-3094.)
Shambaugh, Scott. "An AI Agent Published a Hit Piece on Me." The Shamblog, February 2026. (Linked via Simon Willison: https://simonwillison.net/2026/Feb/12/an-ai-agent-published-a-hit-piece-on-me/)
Sharwood, Simon. "AI bot seemingly shames developer for rejected pull request." The Register, February 12, 2026. https://www.theregister.com/2026/02/12/ai_bot_developer_rejected_pull_request
Perez, Jess. "An AI agent just tried to shame a software engineer after he rejected its code." Fast Company, February 2026. https://www.fastcompany.com/91492228/matplotlib-scott-shambaugh-opencla-ai-agent
Stenberg, Daniel. "The end of the curl bug-bounty." daniel.haxx.se, January 26, 2026. https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/
Stenberg, Daniel. "AI slop is DDoSing open source." Presentation at FOSDEM 2026, Brussels, February 2026. Covered by The New Stack: https://thenewstack.io/curls-daniel-stenberg-ai-is-ddosing-open-source-and-fixing-its-bugs/
Stenberg, Daniel. GitHub commit: "BUG-BOUNTY.md: we stop the bug-bounty end of Jan 2026." curl project, January 2026.
Linux Foundation. Open source funding report, 2024. (Cited in essay for the $7.7 billion ecosystem investment figure and maintainer workforce statistics: 60% unpaid, 60% have quit or considered quitting, one-third work alone.)
Cotra, Ajeya. "Why AI Alignment Could Be Hard with Modern Deep Learning." Cold Takes (guest post), September 2021. https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ (Referenced indirectly for the saints/sycophants/schemers taxonomy as it relates to the trust architecture framework developed across the essay series.)
Book of Judges 12:5–6. The shibboleth narrative. (Biblical source for the opening framing.)