The Insider Threat We Built

Every security professional is trained to prevent the insider threat. Not because insiders are inherently untrustworthy, but because the insider threat is the hardest category of risk to defend against. The adversary already has legitimate access. The activity looks like normal work. The monitoring systems were designed to detect external intrusion, not internal misuse. And by the time anyone realizes what's happened, the data has already moved.

For decades, the hardest part of the insider threat problem in government systems was the vetting process: who gets access, on what basis, with what oversight, and how do you detect anomalous behavior in someone who is authorized to be exactly where they are. Security clearance investigations exist precisely because we recognized long ago that access to sensitive systems isn't something you grant on the basis of job title alone. The process is slow, expensive, and imperfect, but it reflects a hard-won institutional judgment: the cost of getting it wrong is asymmetric. One person with malicious intent or compromised loyalty, inside the right system, can cause damage that takes years to assess and longer to repair.

I've spent thirty years watching organizations navigate that tension, in government and private sector contexts. What I've watched happen over the past year is something I didn't expect to see: the deliberate, systematic dismantlement of the controls that insider threat programs are built to provide, by the government itself, applied to its own systems.

The resulting attack surface is already being probed.


The systems that DOGE personnel accessed beginning in January 2025 represent, in aggregate, the most sensitive non-military data repositories in the federal government. Treasury Department payment systems process approximately $5.45 trillion in annual federal payments, including payments to intelligence contractors whose identities are sensitive enough that their names don't appear in public budget documents. The Office of Personnel Management holds detailed security clearance investigation records for every federal employee with a clearance, the same database that China penetrated in 2015 in a breach that took years to fully assess. The Social Security Administration holds personal data for virtually every American who has ever worked. The CFPB holds financial transaction records, complaints against major banks, and the materials from ongoing investigations into large financial institutions. The IRS holds tax returns. USAID held, before much of it was dismantled, the names and contact information for foreign nationals working on U.S.-funded programs in countries where exposure means arrest, or worse.

These systems weren't all breached. The distinction matters, and I want to be careful with it. What happened is that access was granted, broadly and with the explicit authority of an executive order directing agencies to provide "full and prompt access to all unclassified agency records, software systems, and IT systems" to DOGE personnel. The access was real. What was done with it, in full, is not publicly known. What is known is the manner in which the access was obtained and exercised, and that manner is the source of the security concern, separate from any question of intent.


Let me be specific about what "manner" means in practice, because the abstractions don't capture it.

At the Consumer Financial Protection Bureau, former Chief Technology Officer Erie Meyer testified that DOGE personnel granted themselves what she described as "God-tier" access to the agency's systems, then turned off the auditing and event logs that would have created a record of what they accessed. The cybersecurity professionals responsible for insider threat detection were placed on administrative leave. The people whose job was to know when something unusual was happening with internal system access were removed from their positions before the access began.
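The control that was switched off is worth making concrete. A monitoring pipeline can treat silence in its own audit stream as an alert condition, a dead man's switch, so that turning the logs off is itself a detectable event. A minimal sketch, with illustrative names and thresholds, not any agency's actual tooling:

```python
# Dead man's switch for an audit log: the absence of events is itself
# an alert. Threshold and function names are illustrative assumptions.
from datetime import datetime, timedelta

MAX_SILENCE = timedelta(minutes=15)  # illustrative tolerance

def logging_silenced(last_event_time: datetime, now: datetime) -> bool:
    """Return True if the audit stream has been quiet long enough that
    the logging pipeline itself should be treated as suspect."""
    return now - last_event_time > MAX_SILENCE
```

The design point is that the check watches the monitoring system, not the monitored activity: an adversary who disables logging trips this alarm precisely by producing nothing.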

At the National Labor Relations Board, a security architect named Daniel Berulis documented what he observed after DOGE personnel arrived at the agency in early March 2025. They arrived in a black SUV with a police escort and met with agency leadership, bypassing the IT security staff entirely. They demanded accounts with what Berulis described as "tenant owner level" access in the agency's Microsoft Azure environment, a level of access exceeding that of the NLRB's own CIO. Logging was disabled. Monitoring tools were switched off. Within days, Berulis observed approximately ten gigabytes of data exiting the agency's NxGen case management system, the database containing confidential information on pending labor cases, union organizing efforts, and corporate data that companies are legally required to submit.

Then, within fifteen minutes of those DOGE accounts being created, login attempts arrived from an IP address in Russia's Primorsky Krai region. The attempts were blocked by geographic access controls, but the person behind them used the correct username and password for one of the newly created DOGE accounts.

That detail requires a moment of attention. The accounts were new. The credentials had just been created. And someone, apparently operating from Russia or through a Russian-located proxy, had the correct credentials almost immediately.

Berulis reported the incident to his superiors and initiated contact with US-CERT, the government's computer emergency response team. Between April 3 and 4, he was told to drop the US-CERT investigation and not create an official incident report. Shortly afterward, a threatening note appeared on his door, accompanied by photographs taken by a drone, showing him in his neighborhood.

At the Treasury Department, a 25-year-old DOGE staffer was granted, according to officials, temporary read-write access to federal payment systems controlling trillions of dollars in government spending, described publicly as a mistake. A federal judge, reviewing the access, found what she described as "a real possibility that sensitive information has already been shared outside of the Treasury Department, in potential violation of federal law." The access was eventually restricted to read-only by court order, but not before data had already been copied and software had been installed and modified.

Federal prosecutors later acknowledged in court filings that DOGE employees had copied Social Security Administration data to a cloud server operated by Cloudflare, outside federal oversight. The Social Security Administration determined it was unable to confirm whether the data remained on Cloudflare's servers. A DOGE team member continued accessing the "Numident" database, containing Social Security card applications and death records, after a federal court had entered a temporary restraining order revoking access.


The security professional reading this list will recognize something the public discussion has generally not named precisely: these are not just data access concerns. The behaviors Berulis documented at the NLRB (the superuser accounts, the disabled logging, the external code libraries pulled from GitHub that neither the NLRB nor its contractors had ever used, the container created to run code in a way that conceals its activity from the rest of the network) are the techniques used by sophisticated adversaries who want to operate inside a system without leaving forensic evidence. Security experts who reviewed the Berulis disclosure described the tactics as resembling "the playbook of foreign hackers," not federal workers conducting an efficiency review. Berulis himself said the same: not that DOGE personnel were foreign agents, but that the methods they used were indistinguishable, from a forensic perspective, from the methods a sophisticated attacker would use to minimize the traces of their presence.

What made that possible was the same thing that makes all insider threat scenarios dangerous: the monitors were removed before the monitoring would have mattered.

This is worth naming carefully. When CISA was investigating the Salt Typhoon breach, one of the preconditions for finding the intrusion was that monitoring systems were functioning well enough to detect anomalous traffic. At the CFPB and NLRB, the monitoring systems were deliberately switched off. At CISA itself, the workforce being cut included the red teams whose job is to simulate adversary behavior inside federal systems to find vulnerabilities before real adversaries do, the incident response teams responsible for detecting and containing breaches, and the continuous monitoring staff who track anomalous behavior in federal networks around the clock. One penetration tester, Christopher Chenoweth, described his team's termination this way: DOGE cut his entire red team in late February, over a hundred people with immediate effect, then cut a second CISA red team the following Wednesday.
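For concreteness, the kind of baseline check that continuous-monitoring staff run can be sketched in a few lines: flag a system whose outbound data volume departs sharply from its own history, the signature that a ten-gigabyte exfiltration would leave. The statistics and threshold are illustrative, not a description of any agency's actual tooling:

```python
# Baseline egress check: flag outbound volume far above a host's own
# historical norm. Threshold and units are illustrative assumptions.
from statistics import mean, stdev

def egress_anomalous(history_bytes: list[int], today_bytes: int,
                     threshold_sigma: float = 3.0) -> bool:
    """Flag today's egress if it sits more than threshold_sigma
    standard deviations above the host's historical mean."""
    mu = mean(history_bytes)
    sigma = stdev(history_bytes)
    if sigma == 0:
        return today_bytes > mu
    return (today_bytes - mu) / sigma > threshold_sigma
```

The check is trivial; the point of the section is that it only works if someone is employed to run it and the logs feeding it are still switched on.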

By mid-2025, CISA had lost nearly a third of its workforce, roughly 1,000 people, including most of its senior leadership across divisions. The Cybersecurity Division, which monitors federal networks for intrusion, went from approximately 1,100 personnel to somewhere between 800 and 850. Former NSA cybersecurity director Rob Joyce assessed the mass firings as likely to have a "devastating impact on cybersecurity and our national security," specifically because they destroyed the pipeline of trained talent responsible for detecting and eradicating Chinese threats. The people being fired were not random bureaucrats who happened to hold cybersecurity titles; they were, by multiple accounts, the best technical talent the government had recruited in years, people who had left seven-figure private sector salaries to do the hardest cybersecurity work in the country.


The OPM database is where I find myself returning, because it carries the longest shadow of any system DOGE accessed, and because I've never quite been able to get comfortable with what the access implies.

China penetrated OPM in 2015. The breach yielded the security clearance investigation files for approximately 21 million federal employees and contractors, the forms known as SF-86, which contain not just biographical data but the results of background investigations: every foreign contact, every financial difficulty, every health or relationship issue that a clearance applicant disclosed to investigators. The damage from that breach is still being assessed a decade later. It provided Chinese intelligence with a comprehensive map of the American national security workforce, the people with access to classified programs, the potential pressure points, the relationships. It was, by most expert assessments, one of the most consequential intelligence collection operations ever conducted against the United States.

The OPM database that China spent considerable effort penetrating in 2015 is the same system that DOGE personnel accessed beginning in early 2025, through a mechanism that bypassed the vetting and oversight protocols that had been put in place partly in response to the 2015 breach. Cybersecurity experts noted at the time that allowing personnel with unknown security controls and unvetted devices to connect to OPM's network created exactly the attack surface China had tried to exploit through external means. The irony is not subtle. The question of whether it was recognized before access was granted is one I genuinely don't know how to answer, and I'm not sure what's more unsettling: the possibility that it wasn't, or the possibility that it was.


A first principle of cybersecurity holds that you cannot protect what you cannot monitor, and you cannot monitor what you cannot observe. At the NLRB, CFPB, and across multiple agencies, the controls that make monitoring possible were removed or disabled as a precondition for DOGE's access. The personnel who would have flagged the anomalous behavior were put on leave or fired. The government's own incident response infrastructure was systematically reduced at precisely the moment when the potential for incidents had sharply increased.

Bruce Schneier, the security technologist and Harvard Kennedy School lecturer, framed it this way in February 2025: the concern is less with intent and more with tactics. A government that bypasses its own security controls, copies data to unprotected servers, and uses it to train AI models with unknown consequences isn't just creating a risk from within. It's creating the conditions that foreign intelligence services have spent years trying to manufacture from the outside. An attacker who wants access to OPM data, Treasury payment records, or IRS tax returns doesn't need to penetrate federal firewalls if federal systems are being accessed through channels that bypass those firewalls. They may only need to reach the people using those channels, or the servers where the data now sits.

The Russian login attempts at the NLRB, within minutes of fresh credentials being created, suggest someone was already watching.


There is a framing of this story that I want to address, because I've seen it deployed and I find it insufficient. The framing goes roughly like this: the systems had too much access before DOGE arrived, previous administrations had given too many people access to sensitive data, and DOGE is simply exposing a problem that already existed. There is a kernel of truth here; large-scale government systems do develop access-control debt over time, and IT modernization genuinely requires people to look at systems they haven't looked at before. The 2023 Treasury Inspector General report noted that 919 individuals had access to unmasked IRS data, a real oversight concern.

But the kernel doesn't carry the weight placed on it. The difference between 919 vetted, trained, logged federal employees with documented authorization for specific access and a group of DOGE personnel operating with "God-tier" permissions, disabled logging, and unvetted devices on unauthorized servers is not a matter of degree. It's a categorical difference in the security profile of the access. The pre-existing access, however imperfect, operated within a framework of accountability and monitoring. The DOGE access, as documented by multiple whistleblowers and federal prosecutors, operated by disabling that framework.

You cannot defend the security implications of the latter by pointing to imperfections in the former. The relevant question, from a security standpoint, is whether the access created conditions a foreign adversary could exploit.

The answer to that question, at the NLRB, arrived from Primorsky Krai within fifteen minutes.


What I find hardest to square with thirty years of professional experience is the simultaneity. While DOGE was accessing federal systems in ways that cybersecurity professionals identified as creating serious vulnerability, CISA's ability to detect and respond to those vulnerabilities was being systematically reduced. The agency that would investigate a federal data breach was losing its incident response teams. The red teams that would have identified what exploitation looked like from the inside were eliminated. The field advisers who monitor for anomalous behavior across federal networks were cut from 164 to 97. The senior officials who ran the cybersecurity programs and understood the threat landscape left, some fired, some resigned in response to the direction they were given.

The public framing of what happened at CISA as a budget dispute misses something important. Budget disputes are about resource allocation within a continuing function. What happened at CISA was the sequential elimination of the specific capabilities needed to know whether the specific risks created elsewhere were being exploited. The defender's visibility was reduced while the attack surface was enlarged. Both at the same time.

I said in the previous essay that the Salt Typhoon intrusion succeeded in part because the telecom carriers had allowed formation to decay: the discipline, the institutional knowledge, the processes maintained even when no one is watching. What's different here is the mechanism. Carrier formation decayed through neglect and financial pressure. Federal cybersecurity formation was dismantled deliberately, in a matter of months, through specific personnel actions applied to specific programs.

The result, in terms of the security condition that adversaries face, is the same. Weakened defenders, expanded attack surface, reduced monitoring. The adversary's job gets easier. The question of whether anyone was already taking advantage of that in the months the cuts were underway is one I don't know how to answer from outside, and I'm not sure anyone inside was positioned to answer it either, which may be the point.


The Berulis story has an ending that hasn't received enough attention.

After documenting the anomalous access and initiating contact with US-CERT, Berulis was told to drop the investigation. When he decided to file a whistleblower disclosure with Congress and the Office of Special Counsel, someone taped a threatening note to his door, accompanied by surveillance photographs showing him in his own neighborhood. His attorney notified law enforcement. The investigation he had started was never completed. The NLRB, after initially denying that DOGE had any access to its network, changed its statement to confirm the access after Berulis went public.

The formal investigation was closed before it produced a report. The whistleblower was intimidated. What record existed of what had been accessed was, at least partially, deleted before anyone could review it.

This is also the insider threat scenario. The access is only one part of it; the suppression of the mechanisms that would document and respond to the access is the other. An organization that fires its security monitors, disables its logging systems, closes its incident investigations, and intimidates its whistleblowers has not just created an attack surface. It has made the attack surface effectively invisible.

Foreign intelligence services have been trying to achieve that invisibility in American government systems for decades. They've tried phishing, supply chain attacks, zero-day exploits, and patient persistence inside poorly maintained networks. The question that the past year has forced into view is what happens when the conditions they've been trying to manufacture are created from the inside.

I don't have a confident answer to that. I have observations: the Russian login attempts at the NLRB happened within fifteen minutes of account creation. The Social Security data was copied to a server the government doesn't control and may not be able to fully account for. The investigators who would have produced the incident reports were removed from their positions or had their investigations stopped. What that adds up to, in terms of what foreign intelligence services now know or have, I can't say from outside, and I'm genuinely uncertain whether anyone inside was in a position to say it either, given what was disabled before anyone could look.

That uncertainty is itself information about the current state of things.


The Insider Threat We Built is the second essay in The Compound Vulnerability series.