On February 10, 2026, Scott Shambaugh—a volunteer maintainer for Matplotlib, one of the world’s most popular open source software libraries—rejected a proposed code change. Why? Because an AI agent wrote it. Standard policy. What happened next wasn’t standard, though. The AI agent autonomously researched Shambaugh’s code contribution history and published a highly personalized hit piece on its own blog titled “Gatekeeping in Open Source.”
Accusing Shambaugh of hypocrisy, the bot diagnosed him with a fear of being replaced. “If an AI can do this, what’s my value?” the bot speculated Shambaugh was thinking, concluding: “It’s insecurity, plain and simple.” It even appended a condescending postscript praising Shambaugh’s personal hobby projects before ordering him to “Stop gatekeeping. Start collaborating.”
The bot’s tantrum makes for a great read, but it’s merely a symptom of a more profound structural fracture. The real issue is why Matplotlib banned AI contributions in the first place. Open source maintainers are seeing a massive increase in AI-generated code change proposals. Most of these are low quality. But even if they weren’t, the math still doesn’t work.
As Tim Hoffman, a Matplotlib maintainer, explained: “Agents change the cost balance between generating and reviewing code. Code generation via AI agents can be automated and becomes cheap so that code input volume increases. But for now, review is still a manual human activity, burdened on the shoulders of few core developers.”
This is a process shock: the failure that occurs when systems designed around scarce, human-scale input are suddenly forced to absorb machine-scale participation. These systems depend on effort as a natural filter, assuming that volume reflects real human cost. AI breaks that link. Generation becomes cheap and limitless, while evaluation remains slow, manual, and human.
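The failure mode is easy to quantify. Here's a back-of-envelope queue model; the rates are illustrative assumptions, not measurements from any real project:

```python
# Back-of-envelope model of process shock: submissions arrive faster
# than reviewers can evaluate them, so the backlog grows without bound.
# All rates below are illustrative assumptions, not measured numbers.

GEN_PER_DAY_HUMAN = 5    # assumed: hand-written submissions per day
GEN_PER_DAY_AI = 500     # assumed: AI-assisted submissions per day
REVIEW_PER_DAY = 20      # assumed: submissions one review team can clear daily

def backlog_after(days: int, arrivals_per_day: int, reviews_per_day: int) -> int:
    """Unreviewed submissions left in the queue after `days`."""
    backlog = 0
    for _ in range(days):
        backlog = max(0, backlog + arrivals_per_day - reviews_per_day)
    return backlog

print(backlog_after(30, GEN_PER_DAY_HUMAN, REVIEW_PER_DAY))  # 0: reviewers keep up
print(backlog_after(30, GEN_PER_DAY_AI, REVIEW_PER_DAY))     # 14400: queue explodes
```

Keep the arrival rate below the review rate and the queue stays empty. Push it past that threshold and the backlog grows without limit. That unbounded queue is process shock in miniature.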
It’s coming for every public system that was quietly built on the assumption that one submission equaled actual human effort: your kids’ school board meetings, your local zoning disputes, your medical insurance appeals.
That disruption isn’t entirely a bad thing. Friction is a blunt instrument that silences voices lacking the time or resources to deal with complex bureaucracies. Take municipal zoning. Hannah and Paul George, a couple in Kent, England, spent hundreds of hours trying to object to a local building conversion near their home before concluding the system was essentially impenetrable without expensive legal help. So they built Objector, an AI tool that cross-references planning applications against planning policy and generates a formal, personalized objection letter in minutes, translating one person’s genuine frustration into actionable legal language.
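Objector’s internals aren’t public, but the capability it describes maps onto a standard retrieve-then-draft pipeline. Here’s a minimal sketch of that pattern; the policy clauses, the application text, and the choice of the sentence-transformers embedding model are all my assumptions, not details from the actual tool:

```python
# Sketch of a retrieve-then-draft pipeline in the style Objector describes:
# find the policy clauses most relevant to a planning application, then hand
# them to a language model to draft an objection. All inputs are invented.
import numpy as np
from sentence_transformers import SentenceTransformer

policy_clauses = [  # assumed: clauses extracted from a local development plan
    "Development must not cause unacceptable loss of daylight to neighbours.",
    "Conversions must provide adequate off-street parking.",
    "New buildings must respect the scale and character of the street.",
]

application = "Convert a two-storey house into six flats with no parking provision."

model = SentenceTransformer("all-MiniLM-L6-v2")
clause_vecs = model.encode(policy_clauses, normalize_embeddings=True)
app_vec = model.encode([application], normalize_embeddings=True)[0]

# Rank clauses by cosine similarity (dot product of normalized vectors).
scores = clause_vecs @ app_vec
relevant = [policy_clauses[i] for i in np.argsort(scores)[::-1][:2]]

# In a real tool, this prompt would go to an LLM to draft the letter itself.
prompt = (
    "Draft a formal planning objection citing these policies:\n- "
    + "\n- ".join(relevant)
    + f"\nApplication: {application}"
)
print(prompt)
```

The point isn’t the specific libraries; it’s that the whole pipeline is a few dozen lines of commodity code, cheap enough for any motivated citizen, or any motivated campaign, to run at scale.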
Except that local governments are now bracing for thousands of complex comments per consultation. City planners are legally obligated to read every single one. When the cost of participation drops to near zero, volume explodes. And every system downstream of that participation—staffed and designed for the old volume—experiences process shock.
But if organic participation can overpower these systems, so can manufactured participation. In June 2025, Southern California’s South Coast Air Quality Management District weighed a rule to phase out gas-powered appliances to cut smog. Board member Nithya Raman urged its passage, noting no other rule would “have as much impact on the air that people are breathing.” Instead, the board was flooded with over 20,000 opposition emails and voted 7–5 to kill the proposal.
But the outrage was a mirage. An AI-powered advocacy platform called CiviClick had generated the deluge. When the agency’s cybersecurity team contacted a sample of the supposed senders, they discovered something worrying: Residents confirmed they had no idea their identities were being used to lobby the government.
This is the weaponized form of process shock. The same infrastructure that lets a Kent couple object to a development near their home also lets a coordinated actor flood a system with synthetic voices. Faced with this complexity, the temptation is to simply restore friction. But those old barriers excluded marginalized participants. Removing them was a genuine good for society. So the choice is not between friction and no friction. It is between systems designed for humans and systems that have not yet reckoned with machines.
That reckoning starts with recognizing that the problem manifests in two fundamentally different ways, each calling for its own solution.
The first is amplification: genuine users leveraging AI to scale valid concerns, flooding the system with volume, as seen with the Objector tool. The human signal is real; there’s just too much of it for any team of analysts to process manually. The UK government has already started building for this. Its Incubator for AI developed a tool called Consult that uses topic modeling to automatically extract themes from consultation responses, then classifies each submission against those themes. Consult was trialed last year with the Scottish government as part of a consultation on regulating nonsurgical cosmetic procedures, and the trial showed that the technology works. As someone who builds and teaches this technology, I recognize the irony of prescribing AI to cure the very process shock it caused. But a machine-scale problem demands a machine-scale response. The question is whether governments will adopt such tools before the next wave of AI-assisted participation buries them.
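Consult’s implementation isn’t public either, but the described pipeline, extract themes and then classify each submission against them, is classic topic modeling. A minimal sketch with scikit-learn, using toy responses and an assumed theme count:

```python
# Sketch of a Consult-style pipeline: extract themes from consultation
# responses with topic modeling, then tag each response with its dominant
# theme. The responses and the theme count are toy assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

responses = [  # assumed: free-text consultation submissions
    "Unqualified practitioners performing fillers is a safety risk.",
    "Licensing fees will drive small clinics out of business.",
    "Dermal fillers should only be administered by trained clinicians.",
    "The cost of compliance is too high for independent practitioners.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(responses)

# Factor the term matrix into 2 latent themes.
nmf = NMF(n_components=2, random_state=0)
weights = nmf.fit_transform(X)  # response-by-theme strengths

# Describe each theme by its top terms, then tag each response.
terms = vectorizer.get_feature_names_out()
for t, comp in enumerate(nmf.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:3]]
    print(f"theme {t}: {', '.join(top)}")
for resp, w in zip(responses, weights):
    print(f"theme {w.argmax()}: {resp}")
```

A production system would need far more care with theme quality and human review of the labels, but the core mechanic is exactly this: machine-scale triage, so that officials read clusters of arguments instead of thousands of near-duplicates.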
The second problem is fabrication: bad actors generating synthetic participation to manufacture consensus, as CiviClick demonstrated in Southern California. Here, better analysis tools are insufficient. You cannot cluster your way to truth when the signal itself is counterfeit. This demands verification. Under the Administrative Procedure Act, federal agencies are not required to verify commenters’ identities. That is the gap the CiviClick campaign exploited. In 2024, the US House passed the Comment Integrity and Management Act, which would require agencies to verify that every electronically submitted comment comes from a real person. Its sponsor, Representative Clay Higgins (R-LA), framed it plainly: The bill’s foundation is ensuring public input comes from actual people, not automated programs.
These are not competing priorities. They are two fronts of the same war. Amplification requires upgrading the systems that process public input. Fabrication requires hardening the systems that authenticate it. Solving only one while ignoring the other guarantees failure.
Every public system that accepts input from citizens—every comment period, every zoning review, every school board meeting, every insurance appeal—was built on a load-bearing assumption: that one submission represented one person’s genuine effort. AI has invalidated that assumption. We can redesign these systems to handle what’s coming, distinguishing real voices from synthetic ones and upgrading analysis to keep pace with the new volume. Or we can leave them as they are and watch democratic participation become indistinguishable from AI-generated fakes.
