The following article originally appeared on Angie Jones's website and is being republished here with the author's permission.
I’ve been seeing more and more open source maintainers throwing up their hands over AI-generated pull requests, going as far as to stop accepting PRs from external contributors.
If you’re an open source maintainer, you’ve felt this pain. We all have. It’s frustrating reviewing PRs that not only ignore the project’s coding conventions but are also riddled with AI slop.
But yo, what are we doing?! Closing the door on contributors isn’t the answer. Open source maintainers don’t want to hear this, but this is the way people code now, and you need to do your part to prepare your repo for AI coding assistants.
I’m a maintainer on goose, which has more than 300 external contributors. We felt this frustration early on, but instead of pushing well-meaning contributors away, we did the work to help them contribute with AI responsibly.
1. Tell humans how to use AI in your project
We created a HOWTOAI.md file as an easy guide for contributors on how to use AI tools responsibly when working on our codebase. It covers things like:
- What AI is good for (boilerplate, tests, docs, refactoring) and what it’s not (security-critical code, architectural changes, code you don’t understand)
- The expectation that you’re responsible for every line you submit, AI-generated or not
- How to validate AI output before opening a PR: build it, test it, lint it, understand it
- Being transparent about AI usage in your PRs
This welcomes AI PRs but also sets clear expectations. Most contributors want to do the right thing; they just need to know what the right thing is.
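To make that concrete, here is a hypothetical excerpt of what such a file could look like (this is an invented sketch, not goose’s actual HOWTOAI.md):

```markdown
## Before you open a PR

- [ ] The project builds locally
- [ ] All tests pass, including any new ones you added
- [ ] Lint checks pass
- [ ] You can explain every line you're submitting, AI-generated or not
- [ ] Your PR description notes where AI tools were used
```

A checklist like this gives contributors (and their agents) something mechanical to verify before they ever hit "Create pull request."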
And while you’re at it, take a fresh look at your CONTRIBUTING.md too. A lot of the problems people blame on AI are actually problems that always existed; AI just amplified them. Be specific. Don’t just say “follow the code style”; say what the code style is. Don’t just say “add tests”; show what a good test looks like in your project. The better your docs are, the better both humans and AI agents will perform.
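For instance, a CONTRIBUTING.md could pair the “add tests” rule with a short, concrete sample. The helper and test below are hypothetical, purely to show the shape: a descriptive test name, one behavior, one assertion.

```python
def slugify(title: str) -> str:
    """Turn a page title into a URL slug (illustrative helper)."""
    return "-".join(title.lower().split())


def test_slugify_collapses_whitespace():
    # A good test states the expected behavior in its name
    # and asserts on behavior, not implementation details.
    assert slugify("  Hello   World  ") == "hello-world"


test_slugify_collapses_whitespace()
```

A sample like this does more for contributors than a paragraph of abstract style rules, because it shows the bar instead of describing it.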
2. Tell the agents how to work in your project
Contributors aren’t the only ones who need instructions. The AI agents do too.
We have an AGENTS.md file that AI coding agents can read to understand our project conventions. It includes the project structure, build commands, test commands, linting steps, coding rules, and explicit “never do this” guardrails.
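An abridged, hypothetical AGENTS.md might look like this (the paths and commands are invented for illustration, not goose’s actual file):

```markdown
# AGENTS.md

## Project structure
- `src/` - core library code
- `tests/` - integration tests

## Commands
- Build: `make build`
- Test: `make test`
- Lint: `make lint`

## Rules
- Follow existing module boundaries; don't add new top-level directories.
- Never edit generated files or commit secrets.
- Run the full test suite before proposing changes.
```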
When someone points their AI agent at our repo, the agent picks up these conventions automatically. It knows what to do and how to do it, what not to touch, how the project is structured, and how to run the tests to check its work.
You can’t complain that AI-generated PRs don’t follow your conventions if you never told the AI what your conventions are.
3. Use AI to review AI
Investing in an AI code reviewer as the first touchpoint for incoming PRs has been a game changer.
I already know what you’re thinking… They suck too. LOL, fair. But again, you have to guide the AI. We added custom instructions so the AI code reviewer knows what we care about.
We told it our priority areas: security, correctness, architecture patterns. We told it what to skip: style and formatting issues that CI already catches. We told it to only comment when it has high confidence there’s a real issue, not just nitpick for the sake of it.
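The custom instructions don’t have to be fancy. A hypothetical instructions file along these lines captures the same priorities (the wording here is invented, not our actual configuration):

```markdown
## Code review priorities

Focus on:
- Security issues (injection, unsafe deserialization, leaked secrets)
- Correctness bugs and unhandled edge cases
- Violations of our established architecture patterns

Skip:
- Style and formatting issues (CI already enforces these)

Only leave a comment when you are highly confident the issue is real.
```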
Now, contributors get feedback before a maintainer ever looks at the PR. They can clean things up on their own. By the time it reaches us, the obvious stuff is already handled.
4. Have good tests
No, seriously. I’ve been telling y’all this for YEARS. Anyone who follows my work knows I’ve been on the test automation soapbox for a long time. And I need everyone to hear me when I say the importance of having a solid test suite has never been greater than it is right now.
Tests are your safety net against bad AI-generated code. Your test suite can catch breaking changes from contributors, human or AI.
Without good test coverage, you’re doing manual review on every PR, trying to reason about correctness in your head. That’s not sustainable with 5 contributors, let alone 50 of them, half of whom are using AI.
5. Automate the boring gatekeeping with CI
Your CI pipeline should also be doing the heavy lifting on quality checks so you don’t have to. Linting, formatting, and type checking should all run automatically on every PR.
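As a sketch, a GitHub Actions workflow along these lines enforces that bar on every PR (the `make` targets are placeholders; swap in your project’s actual tools):

```yaml
# Run the objective quality checks on every pull request.
name: quality-checks
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint
        run: make lint
      - name: Check formatting
        run: make format-check
      - name: Type check
        run: make typecheck
      - name: Run tests
        run: make test
```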
This isn’t new advice, but it matters more now. When you have clear, automated checks that run on every PR, you create an objective quality bar. The PR either passes or it doesn’t. Doesn’t matter if a human wrote it or an AI wrote it.
For example, in goose, we run a GitHub Action on any PR that involves reusable prompts or AI instructions to make sure they don’t contain prompt injections or anything else that’s sketchy.
Think about what’s unique to your project and see if you can throw some CI checks at it to keep quality high.
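A prompt-injection check like the one described above could start as simply as a deny-list scan over the repo’s prompt files. This is a minimal sketch under assumed conventions (Markdown prompt files, invented patterns), not goose’s actual implementation:

```python
import re
from pathlib import Path

# Naive deny-list of phrases that often signal prompt injection.
# A real check would be far more thorough; this is only a sketch.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the|your) system prompt",
    r"do not tell the user",
]


def scan_text(text: str) -> list[str]:
    """Return the suspicious patterns that match the given prompt text."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.IGNORECASE)]


def scan_prompt_files(root: str) -> dict[str, list[str]]:
    """Scan every Markdown file under `root`, mapping file path -> matches."""
    findings = {}
    for path in Path(root).rglob("*.md"):
        matches = scan_text(path.read_text(encoding="utf-8"))
        if matches:
            findings[str(path)] = matches
    return findings
```

Wired into a CI step that fails when `scan_prompt_files` returns anything, even a crude filter like this catches the laziest injection attempts before a human ever reviews the PR.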
I understand the impulse to lock things down, but y’all, we can’t give up on the thing that makes open source special.
Don’t close the door on your projects. Raise the bar, then give people (and their AI tools) the information they need to clear it.
On March 26, join Addy Osmani and Tim O’Reilly at AI Codecon: Software Craftsmanship in the Age of AI, where an all-star lineup of experts will go deeper into orchestration, agent coordination, and the new skills developers need to build great software that creates value for users. Register for free here.

