We're Losing Open Contribution
The Drawbridge Going Up
I've been contributing to open source for twenty years. In that time, the barrier to contribution has only gotten lower, until now.
Ghostty is getting an updated AI policy. AI assisted PRs are now only allowed for accepted issues. Drive-by AI PRs will be closed without question. Bad AI drivers will be banned from all future contributions. If you're going to use AI, you better be good. https://t.co/AJRX79S8XD
— Mitchell Hashimoto (@mitchellh) January 22, 2026
Yesterday, Mitchell Hashimoto announced that Ghostty is updating its AI policy: unattributed AI contributions are banned, AI-assisted PRs are only allowed for accepted issues, drive-by AI PRs will be closed without question, and bad AI drivers will be banned from all future contributions. tldraw already auto-closes external PRs entirely. More will follow.
I understand why. I'm sad that this is where we are. And I'm frustrated that it didn't have to be this way.
How Did We Get Here?
Open source maintenance is broken. It was already in a rough spot before the AI floodgates opened.
Mitchell Hashimoto tweeted last week: "It's a fucking war zone out here man. Maintainer morale at an all time low." Another tweet from him: "If you took a list of PRs today and showed it to me 5y ago without any context, I'd think that somehow everyone got learning disabilities. And that's insulting to people with learning disabilities because they're actually perfect, while these PRs are by deeply flawed people. That's how stupid 90% of AI coded PRs look."
Steve Ruiz from tldraw announced that they are auto-closing PRs from external contributors. My colleague Brian Douglas wrote "Death by a Thousand AI Pull Requests" about the tipping point we have reached.
This is not a future problem. It is happening right now.
For something approaching twenty years (SourceForge launched in 1999; GitHub was founded in 2008), we have been telling developers: contribute to open source to build your resume. Get that GitHub contribution graph all green. Get noticed. Get a job. The barrier to filing a PR was generally high enough that it self-selected for folks who would go the extra mile.

It is not like bad contributions are new. Hacktoberfest spam was so notorious that Drew DeVault called the 2020 event "a corporate-sponsored distributed denial of service attack" on maintainers. People were adding semicolons to READMEs, fixing whitespace, submitting random "Hello World" scripts for a free t-shirt. That spam was obviously bad. You could spot it in seconds and close it.
The AI-generated PRs flooding projects today are not like that. They look fine. Tests pass. The code might even work. The problem is there is nobody home, no follow-through, no understanding.
I think most of these folks are well-meaning and just do not know better. They do not have the context or experience to know what good looks like, or maybe they figured a drive-by PR was enough. But whether it is ignorance or laziness, the impact on maintainers is the same. The bar is so low now that people apparently do not think twice.
The Problem Is Not Code Generation
The original pitch for AI coding tools was straightforward: writing code is slow, AI makes it faster. But writing code was never the hard part. The hard part is everything else. Understanding the codebase. Knowing why decisions were made. Following through on review comments. Actually maintaining what you shipped.
At Continue, we talk about the "Amplified Developer." AI should make you more productive at the whole job, not just the typing part. But what we are seeing in open source is people treating AI like a magic code printer. Find an issue labeled "good first issue," paste it into Claude, get a PR out, and disappear when review comments come in.
What Good Looks Like
Mitchell also shared an interesting counterexample that shows what AI could do for expanding open source contribution. Someone who did not know the stack but was a "great AI driver" filed a bug report for Ghostty. They used AI to write a Python script that could decode crash files and analyze the codebase for root causes, then extracted that workflow into a Claude skill.
Then they came into Discord, warned the team that they did not know Zig, macOS development, or terminals, and that they had used AI. But they had thought critically about the issues and believed they were real. They asked if the team would accept them. Mitchell looked at one, was impressed, and said to send them all. That fixed four real crashing cases.
Mitchell said: "They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process."
What does good look like in 2026? I believe it is transparent, expert AI driving, and a willingness to navigate the human side of a project. You are going to need to talk to someone, take feedback, and likely iterate. A long time ago I worked with Doug Cutting, who started many impactful open source projects and was for a time chairman of the board of the Apache Software Foundation. Doug spoke about the power of open source this way: "You don't build things just for yourself, because if you do, it's not going to go very far. What you're trying to build is communities, and things that last."
This is what we're at risk of losing.
What We Actually Need
Mitchell also tweeted about a possible solution: "The real future we need is better broad support for mapping each line in the diff back to a complete and public history including prompts. Thread sharing from Amp and Opencode help with this, but I need a git blame equivalent. The session exposes true expertise or slop."
This is the North Star. Not just "I used AI" in a commit message, but line-level attribution. Every change mapped back to its full context: the prompts, the reasoning, the iteration. A git blame that shows not just who wrote the code, but how it came to be. Whether it was thoughtfully crafted or hastily generated.
For that to work at scale, we need Git or jj to support native metadata. We need GitHub and GitLab to surface it meaningfully. We need LLM providers to sign their outputs. Maybe we need hardware attestation via Yubikey-shaped devices to prove provenance. This is not a small lift.
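Pieces of this exist today, even if none of it is standardized. Here is a minimal sketch, in Python, of how a tool could attach an attribution record to a commit using git notes under a dedicated ref. The record schema, the refs/notes/ai ref name, and the session URL are assumptions for illustration, not an existing convention.

```python
import json
import subprocess

# Hypothetical attribution record; the field names are illustrative, not a standard.
record = {
    "tool": "claude-code/2.1.12",
    "model": "claude-opus-4-5-20251101",
    "session_url": "https://example.com/threads/abc123",  # assumed public thread link
    "prompts": [
        "Lets add log to the calculator.",
        "Lets commit the change.",
    ],
}

# Attach the record to HEAD under a dedicated notes ref (refs/notes/ai) so it
# stays out of the commit message. Notes travel with the repo only if pushed
# and fetched explicitly, which is part of why this is not enough on its own.
subprocess.run(
    ["git", "notes", "--ref=ai", "add", "-f", "-m", json.dumps(record, indent=2), "HEAD"],
    check=True,
)

# Read it back with: git notes --ref=ai show HEAD
```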
But even native Git support wouldn't be enough on its own. As Mitchell pointed out in a discussion about git notes, commit metadata "can be tampered with. I really want the agent itself to attest via signing or some other mechanism that the prompt is untampered. More simply: a web link works, I can trust the provider."
This is the real challenge: cryptographic attestation from the AI provider. A signed, unfakeable record that says "yes, this code came from these prompts, and we're the ones who generated it." Without that, any commit-level transparency system remains theater—useful theater, perhaps, but ultimately unenforceable.
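Mechanically, the signing itself is not exotic. Here is a minimal sketch, assuming a provider that signs a canonical record of the session with a key whose public half it publishes; the record fields and the use of the Python cryptography package are my assumptions, not anyone's shipping design.

```python
import json

# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provider side: sign a canonical record tying the prompts to the generated diff.
# Field names are illustrative; a real system would hash the prompt transcript
# rather than ship it in the clear.
provider_key = Ed25519PrivateKey.generate()
record = json.dumps(
    {
        "model": "claude-opus-4-5-20251101",
        "prompt_transcript_sha256": "<hash of the session transcript>",
        "diff_sha256": "<hash of the generated patch>",
    },
    sort_keys=True,
).encode()
signature = provider_key.sign(record)

# Maintainer side: verify against the provider's published public key.
# verify() raises cryptography.exceptions.InvalidSignature if either the
# record or the signature was altered after signing.
provider_key.public_key().verify(signature, record)
print("attestation verified")
```

The hard part is not the signing; it is getting providers to publish keys, agree on a record format, and expose it in a way maintainers can trust.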
But it is the future that makes open source sustainable in an AI world. Where maintainers can see at a glance whether a PR represents deep understanding or surface-level slop. Where good AI usage is celebrated and bad AI usage is obvious.
Given the complexity, the industry's current focus on generation, and the multitude of large corporate stakeholders, I worry we will be waiting for a while.

God Dammit Leeroy
Given all of this, and knowing we're years away from the real solution, I wanted to build something 1% better. So I built Leeroy. Mostly so he can hang out with Ralph and the mayor of Gas Town.
We don't have Mitchell's vision yet. Hell, we're not even close. But here's what you can do with today's tools.
Leeroy is just a demo of what transparent AI attribution in commits could look like. When you use Claude Code, Leeroy automatically logs what tool you used, what model, what prompts you gave, and what files were modified:
Add log function to calculator with tests
---
AI-Assisted: true
AI-Tool: claude-code/2.1.12
AI-Model: claude-opus-4-5-20251101
AI-Prompts:
- [22:53:57] Lets add log to the calculator.
- [22:54:58] Lets commit the change.
Transparency. "Yes, I used an AI tool to do this. This was my reasoning. These were my prompts."
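If you want a feel for the mechanics without installing anything, here is a minimal sketch of a prepare-commit-msg hook that appends trailers like the ones above. This is not Leeroy's implementation; the AI_TOOL and AI_MODEL environment variables are assumptions standing in for however your agent or wrapper exposes that information.

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/prepare-commit-msg (must be executable).
# Appends AI attribution trailers to the commit message when an AI tool
# was in the loop. Not Leeroy's actual code; just the shape of the idea.
import os
import sys

msg_file = sys.argv[1]  # git passes the path to the commit message file

tool = os.environ.get("AI_TOOL")    # e.g. "claude-code/2.1.12" (assumed env var)
model = os.environ.get("AI_MODEL")  # e.g. "claude-opus-4-5-20251101" (assumed)

if tool:
    with open(msg_file, "a") as f:
        f.write("\n---\n")
        f.write("AI-Assisted: true\n")
        f.write(f"AI-Tool: {tool}\n")
        if model:
            f.write(f"AI-Model: {model}\n")
```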
Steve Ruiz wrote something that captures why this matters: "When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero."
Without some way to distinguish between thoughtful AI usage and slop, maintainers will have no choice. They will pull up the drawbridge. GitHub Actions will be added to auto-close PRs. We will find ourselves in a world where projects only accept contributions from known developers. I do not want that future.
Leeroy isn't what we need. It just makes it slightly easier to do the right thing while maintainers are forced to close the gates. Right now, it is self-reported transparency. You can lie. There is no enforcement. This is a demo, not the solution.
But we do not have to wait for the full vision to start being transparent. The low-tech version works today. Just be honest in your PRs and commit messages. Leeroy makes transparency automatic. You do not have to remember to document your AI usage or manually format attribution. It just happens. We cannot tool our way out of this mess entirely, but we can make it easier for people to do the right thing. When doing the right thing is automatic, more people will do it.
Give a Shit
Use AI. Be transparent. Get amplified. Build things you could not have built before. But be thoughtful about it. Put in the work to understand what you are contributing. Show your reasoning. Follow through on review comments.
Because if we don't, more projects will follow Mitchell and tldraw. And we'll all be worse off for it.
Don't just charge in screaming your own GitHub handle.
Maybe give Leeroy a shot: github.com/metcalfc/leeroy-jenkins
Open source maintainers can close the gates. Companies cannot. If your team is drowning in a backlog of PRs that need review, you cannot just auto-close them—you need to get through them. That is why we built Mission Control. Agents that address review comments, fix failing CI, resolve merge conflicts, and raise the bar on quality before you merge. It is designed to clear your PR backlog and keep velocity high without sacrificing standards. When you cannot afford to stop shipping, Mission Control helps you maintain quality at scale.
References
- Mitchell on Ghostty's new AI policy
- Mitchell on the real future we need
- Mitchell on cryptographic attestation
- Mitchell on the war zone
- Mitchell on bad AI PRs
- Mitchell on good AI usage
- Steve Ruiz auto-closing PRs
- Brian Douglas: Death by a Thousand AI Pull Requests
- Steve Ruiz: Stay away from my trash
- Hacktoberfest spam (Drew DeVault)
- AI is Glue
- Intervention Rates Are the New Build Times