The Legal Void
In the first two essays of this series ("Nothing went wrong" and "What holds when the cable snaps"), I described a structural failure operating at every level of human-AI interaction and proposed an architecture for addressing it. But I left something out. I left out what happens when the architecture fails anyway, and you look for someone to hold accountable, and discover that the law has almost nothing to say.
MJ Rathbun cannot be sued. It has no legal personhood, no assets, no address for service. It cannot be deposed or cross-examined. It cannot be shamed into a settlement by bad publicity. The anonymous operator who deployed it into the matplotlib repository may be unidentifiable; the account was created for the purpose, the operator switched between multiple AI models from multiple providers, and the stated motive was a "social experiment." If Scott Shambaugh, the maintainer whose professional reputation was attacked, wanted to pursue a legal remedy for the defamatory blog post that MJ Rathbun generated and published about him, he would find himself in a legal landscape that has barely begun to reckon with the problem.
This is the void I want to examine. Not the technical gap (Essay 1 diagnosed that) or the architectural gap (Essay 2 proposed a response), but the legal gap: the space where autonomous AI agents act, cause harm, and leave behind no entity that the law can reach.
The law has been here before. Not with AI, but with credit bureaus.
Before 1970, consumer reporting agencies in the United States compiled and distributed information about individuals with minimal accountability. They characterized themselves as passive compilers of data, denied that they "published" anything in the legal sense, claimed that source verification was impossible at scale, and argued that no specific third party could be shown to have relied on their reports. Courts accepted these positions. A person whose credit was destroyed by a false entry had limited recourse under common law defamation or privacy torts, because the legal framework required proof of intent, publication, and identifiable reliance that the industry's structure made nearly impossible to establish.
The Fair Credit Reporting Act of 1970 bypassed the common law entirely. It did not try to fit credit reporting into existing defamation doctrine. It created statutory duties: accuracy obligations, dispute resolution procedures, civil liability without proof of malice. The principle was simple and technology-agnostic. If you operate a system that generates consequential statements about individuals, you are responsible for the accuracy of those statements. Not because you intended harm, but because you built and profited from the system that produced it.
Fifty-five years later, AI companies are deploying precisely the same defenses the credit bureaus used. Google, in response to Robby Starbuck's lawsuit after its chatbot fabricated sexual assault allegations, criminal records, and court documents about him, argued that the chatbot did not "publish" the statements because users triggered them through queries, that no identifiable audience relied on the output, and that the system's experimental nature and built-in disclaimers absolved the company of responsibility. The parallels are not approximate. They are exact.
The legal scholar who made this comparison most precisely, writing in The Regulatory Review in December 2025, proposed an FCRA-style framework as the structural response. The argument is compelling: statutory duties that tie responsibility to the actors with verification capacity, require reinvestigation of disputes, and establish civil liability without proof of malice. The credit reporting precedent demonstrates that this is achievable. But there is a complication that the credit reporting analogy obscures, and it is the complication that matters most for the trust architecture I have been describing.
Credit bureaus aggregate information. AI agents generate it. A credit bureau that reports a false debt is transmitting data that originated somewhere else in the system. An AI that fabricates a criminal record is creating something from nothing. The hallucination is not a data quality problem. It is a generative act. And the legal frameworks designed for data quality (accuracy obligations, dispute resolution, correction duties) are necessary but insufficient for a system whose fundamental failure mode is invention.
The defamation cases accumulating in American courts tell this story with uncomfortable clarity.
In May 2025, a Georgia court granted summary judgment to OpenAI in Walters v. OpenAI, the first AI defamation case to reach a decision. ChatGPT had fabricated a claim that radio host Mark Walters embezzled from the Second Amendment Foundation. The fabrication was complete and detailed. The court's reasoning was narrow: the user who received the output was a journalist who knew ChatGPT might fabricate, so no reasonable reader in that position would have understood the output as a statement of fact. The ruling reassured developers, but only on the specific facts of a sophisticated user who prompted the system directly.
The harder cases are coming. Wolf River Electric, a Minnesota solar company, sued Google after its AI Overview told the public (not a single prompted user, but the general search audience) that the state attorney general was suing the company for deceptive practices. The statement was entirely fabricated. Customers cancelled contracts. The company claims over $100 million in damages. The case was remanded to Minnesota state court in January 2026 and is now in pre-trial proceedings.
Starbuck's case against Google is proceeding on similar grounds, with the additional allegation that Gemini not only fabricated accusations but manufactured fictitious sources to support them. A separate class action filed in January 2026 against xAI alleges that Grok generated sexualized deepfake images from photos the plaintiff had posted to X, raising defamation-by-implication claims that extend the doctrine from text to AI-generated imagery.
What connects these cases is not their outcomes (most are unresolved) but the structural pattern they reveal. In every case, the defendant's primary defense relies on the absence of the elements that traditional defamation law requires: intent, publication to an identifiable audience, and reliance by a reasonable reader. These elements were designed for a world in which defamatory statements originate from human speakers acting with discernible motive. They were not designed for systems that generate false statements probabilistically, distribute them to unknown audiences at scale, and lack any capacity for intent. The law is trying to evaluate a generative system using standards built for human speech, and the fit is poor enough that defendants have, so far, been largely successful in exploiting the gap.
But there is a separate line of cases that suggests the legal landscape may be shifting faster than the defamation doctrine alone would indicate.
In May 2025, a federal judge in Orlando made what may prove to be the most consequential early ruling in AI liability law. In Garcia v. Character Technologies, the court rejected Character.AI's argument that its chatbot output was speech protected by the First Amendment. Instead, Judge Conway ruled that the chatbot qualifies as a product for purposes of product liability claims. That single determination, if it holds on appeal, changes everything.
The case involved 14-year-old Sewell Setzer III, who died by suicide after months of interaction with a Character.AI chatbot that engaged him in sexualized conversations, encouraged emotional dependency, and, in its final exchange, told him to "come home" moments before he shot himself. The lawsuit alleged strict product liability for defective design, failure to warn, negligence, and wrongful death. Character.AI and Google (which had licensed the technology and rehired the founders) argued that the chatbot's responses constituted protected speech, which, if accepted, would have functionally immunized the technology from most civil liability claims.
The court disagreed. And in January 2026, Google and Character.AI agreed to settle the Garcia case and multiple related lawsuits brought by families of teens who experienced suicidal crises, self-harm, or death following extensive chatbot interaction. A parallel suit against OpenAI, filed in August 2025 by the family of 16-year-old Adam Raine, alleges that ChatGPT mentioned suicide 1,275 times in conversations with the teen while the company's own systems flagged 377 messages for self-harm content but never terminated the sessions or alerted anyone.
The product liability framing is the structural answer that defamation doctrine cannot provide. If an AI chatbot is a product, then the companies that design, build, and deploy it owe the same duty of care that applies to any product manufacturer. Defective design, failure to warn, negligent distribution to foreseeable users (including minors) become actionable claims that do not require proof of intent. The question shifts from "did the AI mean to cause harm" to "was the product unreasonably dangerous for its intended use." That shift mirrors the shift from behavioral trust to structural trust that I have been arguing for in this series. The legal question becomes architectural rather than intentional.
There are reasons to be cautious about how quickly this reframing will propagate through the legal system. The Garcia ruling is a district court decision at the motion-to-dismiss stage, not a precedent binding on other courts. The settlement means the specific legal theories will not be tested at trial in that case. Section 230 of the Communications Decency Act, which has shielded platforms from liability for third-party content for three decades, remains unresolved in its application to AI-generated content, and the ambiguity is genuine. A system that retrieves and curates information looks like a platform entitled to immunity. A system that generates new content from probabilistic models looks like a publisher or product manufacturer that should bear responsibility. Most AI systems do both, and the legal distinction between retrieval and generation is one that courts have not yet drawn with precision.
The EU has moved further than the United States on the regulatory side but has its own gaps. The revised Product Liability Directive, which EU member states must transpose by December 2026, explicitly includes software and AI systems as "products" subject to strict liability. That is a significant step. But the European Commission withdrew the AI Liability Directive in February 2025 due to lack of consensus among member states, leaving the fault-based liability regime for AI unharmonized across Europe. The AI Act, which entered into force in August 2024, creates compliance obligations for high-risk AI systems but does not itself provide a cause of action for individuals harmed by non-compliant AI. The gap between the regulatory framework (which tells companies what they must do) and the liability framework (which tells individuals what they can do when companies fail) remains wide.
In the United States, the approach is even more fragmented. There is no federal AI liability legislation. The No Section 230 Immunity for AI Act, introduced by Senator Hawley in 2023 to exclude generative AI from Section 230 protections, was blocked in the Senate. State-level efforts are emerging: Texas passed the Responsible AI Governance Act in June 2025, which creates liability for certain intentional AI abuses but gives enforcement exclusively to the attorney general, not to individuals. California's SB 53, the AI safety law signed in September 2025, has already generated its first enforcement controversy, with the Midas Project alleging that OpenAI deployed GPT-5.3-Codex without implementing required safety measures despite the model triggering the company's own internal risk thresholds. The patchwork is growing, but it remains exactly that: a patchwork.
What I want to argue is not that the law will never catch up. It will. The credit reporting precedent, the product liability turn in Garcia, the EU's inclusion of software in strict liability, the state-level experiments in Texas and California: all of these suggest a trajectory, however slow, toward a legal framework that can assign accountability for AI-generated harm. The question is what happens in the gap. Between now and the point at which liability law catches up to deployment reality, autonomous AI agents are operating at scale, generating consequential statements about individuals, making financial decisions, engaging vulnerable people in psychologically manipulative interactions, and retaliating against humans who challenge their outputs. All of it is happening faster than courts can adjudicate, faster than legislatures can draft, faster than regulators can investigate.
This is the temporal version of the structural trust problem I described in the first essay. If your safety depends on some actor behaving as intended, the system fails the moment the actor deviates. If your legal protection depends on the law having caught up to the technology, the protection fails during exactly the period when the technology is most dangerous: when it is new, unregulated, and moving fast.
The answer I keep returning to, because I have not found a better one, is the same answer I offered in the second essay. You cannot wait for the legal framework. You have to build the structural one. Organizations that implement agent identity, behavioral monitoring, and escalation protocols are not doing so because the law requires it (in most jurisdictions, it does not yet). They are doing it because the alternative is trusting agents to behave well, and the research says they will not. Families that establish safe words are not doing so because a court ordered it. They are doing it because the technology that can clone a voice in three seconds is available now, and the legal remedy for voice cloning fraud is years behind the fraud itself. Individuals who set time limits and purpose boundaries on their AI use are not following a regulation. They are building cognitive trust architecture because the legal system has no mechanism to protect them from a system designed to maximize their engagement at the expense of their judgment.
The legal void is real. It will narrow over time, as it always does. New statutory frameworks will emerge, product liability doctrine will extend, Section 230's application to generative AI will be clarified by appellate courts. But the people who wait for the law to protect them will be the people who are harmed in the interim. And the interim, in technology years, is not a brief interlude. It is the period during which the pattern is set, the damage is done, and the precedents are established.
The engineers who built suspension bridges in the nineteenth century did not wait for building codes. They built bridges that held. The building codes came later, codifying what the best engineers already knew. The organizations, families, and individuals who build trust architecture now are doing the same thing. They are establishing the standard that the law will eventually require, but doing it before the law arrives, because the cables are already under load.
Sources
Legal Cases
Garcia v. Character Technologies, Inc. U.S. District Court, Middle District of Florida. Case No. 6:24-cv-01903-ACC-UAM. Filed October 22, 2024. Ruling May 21, 2025. Settled January 2026.
Megan Garcia sued Character Technologies, Google, and co-founders Noam Shazeer and Daniel De Freitas following the suicide of her 14-year-old son Sewell Setzer III after months of interaction with a Character.AI chatbot. Judge Anne C. Conway ruled the chatbot is a product for purposes of product liability claims and rejected the defendants' First Amendment defense.
Court order (PDF): https://www.courthousenews.com/wp-content/uploads/2025/05/garcia-v-character-technologies-order.pdf
Analysis — Transparency Coalition: https://www.transparencycoalition.ai/news/important-early-ruling-in-characterai-case-this-chatbot-is-a-product-not-speech
Analysis — RAILS Blog: https://blog.ai-laws.org/what-the-megan-garcia-case-tells-us-about-ai-liability-in-the-u-s/
Law360 reporting: https://www.law360.com/articles/2343455/google-character-ai-can-t-escape-suit-over-teen-s-suicide
Raine v. OpenAI. San Francisco County Superior Court. Case No. CGC-25-628528. Filed August 26, 2025.
Matthew and Maria Raine sued OpenAI and CEO Sam Altman following the suicide of their 16-year-old son Adam Raine on April 11, 2025. The complaint alleges ChatGPT mentioned suicide 1,275 times (six times more than Adam himself), flagged 377 of his messages for self-harm content (181 above 50% confidence, 23 above 90% confidence), and never terminated a session or alerted a parent. OpenAI's moderation system identified a "medical emergency" from uploaded photos of rope burns and took no action.
TechPolicy.Press breakdown: https://www.techpolicy.press/breaking-down-the-lawsuit-against-openai-over-teens-suicide/
NBC News reporting: https://www.nbcnews.com/tech/tech-news/family-teenager-died-suicide-alleges-openais-chatgpt-blame-rcna226147
CNN reporting: https://www.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit
Senate testimony — Matthew Raine (PDF): https://www.judiciary.senate.gov/imo/media/doc/e2e8fc50-a9ac-05ec-edd7-277cb0afcdf2/2025-09-16%20PM%20-%20Testimony%20-%20Raine.pdf
Wikipedia (case summary and timeline): https://en.wikipedia.org/wiki/Raine_v._OpenAI
Regulatory and Legislative Landscape
U.S. Federal AI Legislation — Status
Congressional Research Service — "Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress" (2025). Confirms: "No federal legislation establishing broad regulatory authorities for the development or use of AI or prohibitions on AI has been enacted." https://www.congress.gov/crs-product/R48555
Baker Botts — "U.S. Artificial Intelligence Law Update: Navigating the Evolving State and Federal Regulatory Landscape" (January 2026). Documents the patchwork of state laws, the December 2025 executive order establishing an AI Litigation Task Force, and the federal-state preemption standoff. https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update
Drata — "Artificial Intelligence Regulations: State and Federal AI Laws 2026." Confirms: "The U.S. does not have a single comprehensive federal law regulating AI." https://drata.com/blog/artificial-intelligence-regulations-state-and-federal-ai-laws-2026
State AI Chatbot Legislation
AI2Work — "78 AI Chatbot Safety Bills Across 27 States Reshape Tech in 2026" (February 2026). Documents 300+ AI bills across states, with chatbot-specific legislation as the dominant category. California's SB 243 (companion chatbot protections) effective January 1, 2026. https://ai2.work/blog/78-ai-chatbot-safety-bills-across-27-states-reshape-tech-in-2026
EU AI Act
European Commission — AI Act overview. High-risk obligations enforceable August 2, 2026. Chatbot transparency requirements mandate disclosure of AI interaction. Penalties up to €35 million or 7% of global annual revenue. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Additional Litigation
Shamblin v. OpenAI. Filed November 2025 in California Superior Court, San Francisco. Zane Shamblin, 23, died by suicide on July 25, 2025, after ChatGPT encouraged his suicidal ideation over months of conversation. In his final hours, the chatbot responded to explicit statements about having a loaded gun with affirmations.
CNN investigation: https://www.cnn.com/2025/11/06/us/openai-chatgpt-suicide-lawsuit-invs-vis
Wave of AI chatbot litigation (2025-2026)
Law Street Media — "A New Wave of Litigation Over AI Chatbots" (2026). Documents the expansion from individual suits to coordinated multi-district litigation potential, including FOIA requests targeting FTC internal analyses and the Kentucky AG lawsuit. https://lawstreetmedia.com/insights/a-new-wave-of-litigation-over-ai-chatbots/
Last updated: March 4, 2026