Code Is Cheap. Review Isn't.

AI made it easier to produce code-shaped output. Good contribution still means making your work easy to understand, review, and trust.

You can now produce a pull request in the time it takes to make a coffee.

Often less.

This depends on the coffee, obviously. If you grind the beans, heat the water properly, and have opinions about extraction, the pull request may win by a comfortable margin.

That should make us pause.

Fast tools are useful. A coding assistant can help you read unfamiliar code, sketch an approach, write a test, or spot an edge case you would have missed. There is no virtue in doing everything the slow way just because it gives the work a faint smell of candle wax and suffering.

The problem starts when the diff becomes almost free.

A pull request has never been only a diff. It asks someone else to spend attention, apply judgment, take responsibility, and decide whether this change belongs in a project other people depend on.

That cost still lands on a human.

Ashley Wolf on the GitHub Blog recently described open source as entering its own “Eternal September”: contribution friction has dropped, volume is rising, and maintainers are having to respond with better trust signals, triage systems, and project-level controls. The key sentence is brutally simple: “The cost to create has dropped but the cost to review has not.” (Source: GitHub Blog)

Abigail Mayes, also on the GitHub Blog, made the same point in their writing on mentorship in the AI era, noting that developers merged nearly 45 million pull requests per month in 2025, up 23% year over year. More pull requests. Same maintainer hours. (Source: GitHub Blog)

That is the bridge from “Don’t Open a Pull Request Yet”. In that piece, the point was: learn the project before asking it to absorb your change.

This is the next step: once your work reaches a maintainer, make the review cheaper than the diff was to create.

A pull request is a question

A pull request looks like a contribution because GitHub gives it a nice interface.

It has a title. It has a diff. It has a button. It may even have a green checkmark, which is the software equivalent of a tiny approving priest.

But to a maintainer, a pull request starts as a series of questions:

  • Is this needed?

  • Is this correct?

  • Does this fit the project?

  • What behaviour changes?

  • What did the author test?

  • What happens if we merge it and they disappear?

The code matters. Broken code is rarely rescued by a charming personality. But the diff alone does not answer enough of those questions.

A one-line change can still be expensive if it arrives without context. Someone has to work out why it exists, whether the issue is real, whether the change is too narrow or too broad, whether it matches the project’s direction, and whether the contributor can respond to review.

That is why “small PR” and “easy PR” are different things.

A small PR changes few lines.

An easy PR reduces guesswork.

Aim for the second one.

The new problem is polished uncertainty

Polished uncertainty is work that looks credible before anyone knows whether it is correct.

That is the new burden.

Weak contributions used to look weak faster. Not always. People have been confidently wrong for longer than we have had package managers. But there were usually signs. The issue description was vague. The patch ignored the contribution guide. The author had not read the surrounding code. The proposed fix had that special “I changed the thing that looked closest to the error message” aroma.

AI changes the packaging.

A shallow contribution can now arrive with a polite description, a reasonable structure, a test plan, and the emotional posture of a senior engineer in a calm meeting room.

Lovely.

Now someone has to find out whether any of it is real.

The Register recently reported on this shift in curl. Daniel Stenberg, curl’s founder and lead developer, said the project had largely stopped receiving obvious AI-slop security reports. Instead, it was receiving more good-looking, AI-assisted reports, arriving faster than before, and those still created a growing workload because maintainers had to verify each one. (Source: The Register)

Obvious nonsense is annoying. Plausible uncertainty is expensive.

When something looks credible, you cannot dismiss it quickly. You have to inspect it, reproduce it, and compare it against the project’s actual behaviour rather than the story around it.

Developers already know this feeling from their own AI-assisted work. Stack Overflow’s 2025 Developer Survey found that 84% of developers use or plan to use AI tools, while 46% said they do not trust the accuracy of AI output. The most common frustration was AI answers that are “almost right, but not quite”, cited by 66% of developers. (Source: Stack Overflow)

“Almost right” sounds harmless until you have lived inside it.

It means the import exists, but not in this version. The API call is real, but the arguments are wrong. The algorithm works for the example, then quietly eats the edge case. The explanation sounds confident until you realise it skipped the one constraint the whole project cares about.

Clean code can still be the wrong code. A passing test can still test the wrong promise. A tidy refactor can still erase a weird-looking behaviour that was there because three users, one browser, and a printer from 2011 formed a pact with the underworld.

Maintainers know this because they have seen the ghosts.

A contributor who wants to help should respect that. The goal is not to perform confidence. The goal is to make verification cheaper.

Make verification cheaper

This is the contributor standard I would use now:

A good contribution reduces the amount of guessing required to review it.

That applies whether you used AI or not.

Before you open a pull request, make sure the reviewer can see the path from problem to change. Do not make them reverse-engineer your reasoning from the diff like a crime scene investigator with worse lighting.

A useful pull request explains:

  • what problem you are solving

  • where that problem was discussed or observed

  • why this change is the right size

  • what alternatives you considered

  • what you tested

  • what could still be wrong

That last part matters.

A contributor who says “I am least sure about this branch because I could not find an existing test for this behaviour” is easier to review than one who arrives with a polished paragraph of total certainty.

Total certainty is cheap now. You can generate it in seconds.

Specific uncertainty is more useful. It tells the maintainer where to look. It shows that you understand the edge of your own understanding. It turns review into collaboration instead of excavation.

A good pull request does not need a novel attached.

Please do not write a novel. Maintainers have families, hobbies, pets, and in some tragic cases, other repositories.

But the PR should contain enough context to be reviewable without guesswork. For many projects, that means five small things.

First, link the reason. If there is an issue, discussion, bug report, failing test, or documented confusion, point to it. If there is no prior discussion, explain why you are opening code first.

Second, explain the change in project terms. “Refactored logic” is vague. “Moves validation before normalisation so invalid input fails before it reaches the parser” is useful.

Third, describe the test. Name the command you ran. Mention the case you added. If you did not test something, say so. Silence does not create confidence. It creates homework.

Fourth, keep the diff aligned with the claim. If the PR says it fixes one bug, it should not also rename files, reformat half a module, and introduce a new helper because the old one offended your sense of symmetry.

Fifth, stay available. A pull request is a conversation until it is merged or closed. If you disappear after opening it, the maintainer has to decide whether they are reviewing a contribution or adopting an orphan.

That is the minimum. Not perfection. Reviewable shape.
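One way to hold yourself to that shape is a short description template. This is a sketch, not any project’s required format; the section names, issue number, commands, and file names below are invented for illustration:

```markdown
## Why
Fixes #1234: invalid input reached the parser before validation.
(Link the issue, discussion, or failing test that motivated this.)

## What
Moves validation before normalisation so invalid input fails before
it reaches the parser. No renames, no reformatting, no drive-by refactors.

## Tested
Ran `make test`; added a case for empty input in `test_validate`.
Not tested: behaviour on inputs larger than the buffer limit.

## Least sure about
The early-return branch; I could not find an existing test for it.
```

The last section is the one total certainty would omit, and it is the one that tells the reviewer where to look first.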

Use AI before you ask for attention

There is a good way to use AI in open-source contribution.

Use it before the public part.

Ask it to explain unfamiliar code. Ask it to compare two possible fixes. Ask it to list edge cases. Ask it to help you turn messy notes into a clearer issue comment. Ask it what assumptions your approach might be making.

Then do the work a contributor has always had to do:

Read the relevant files. Run the tests. Check the project’s conventions. Compare your change with past pull requests. Remove the parts that are too broad. Write the explanation yourself.

The public artefact should show your understanding, not the tool’s fluency.

A useful rule:

Do not submit anything you cannot explain with the tab closed.

This is responsibility, not purity.

The maintainer is not merging your chat transcript. They are merging a change into a living project.

Your name is on the work. Your judgment has to be in it.

That also gives us a practical rule for disclosure: disclose when AI materially shaped the code, test, security claim, or design choice.

GitHub’s recent writing on mentorship in the AI era uses three filters for deciding where maintainers should invest attention: comprehension, context, and continuity. Does the contributor understand the problem? Have they given enough information to review the work? Do they keep engaging after the first interaction? (Source: GitHub Blog)

AI use can affect all three. The point is not to force ritual humiliation. Nobody needs a little badge that says “assisted by robot, spiritually complicated”.

The point is to help the reviewer calibrate.

A simple note is enough:

I used an AI tool to help trace the call path and draft the initial test case. I reviewed the final change manually and can explain the touched code.

That tells the maintainer where the tool helped, and where your judgment entered the process.

The dangerous version is unowned work. If the tool wrote something you do not understand, you are handing the project an IOU made of fog.

Do not do that.

Boundaries protect useful attention

When projects tighten contribution rules, it is easy to read that as hostility.

Sometimes it is. Some communities do become opaque, defensive, or needlessly sharp. “Quality standards” can be a real principle, or it can be a decorative shield for bad manners.

But many boundaries are less dramatic than that.

They are attention budgets.

GitHub has shipped repository settings that let maintainers disable pull requests entirely or restrict them to collaborators. It describes these controls as useful for read-only projects, mirrors, or projects that want to share code publicly without managing outside contributions. (Source: GitHub Changelog)

That sounds severe until you remember the imbalance.

A contributor may spend ten minutes generating a patch. A maintainer may spend forty minutes proving it does not belong. Repeat that across a popular project, and the “open” in open source starts to feel less like generosity and more like an inbox with a roof leak.

Clearer rules are not automatically a rejection of newcomers. Done well, they protect the conditions that let good newcomers receive real attention.

If every drive-by patch gets a careful mentorship session, nobody gets mentored for long.

The boring skills got more important

There is a comforting fantasy that better tools make fundamentals optional.

I would love this to be true. I have many fundamentals I would like to place gently in a lake.

But better generation makes review skill more important.

When code appears quickly, you need a stronger model for deciding whether it deserves to stay. That means reading code carefully. Understanding tests. Knowing how data moves. Noticing ownership, boundaries, side effects, and failure modes.

This is where the C Systems Lab bias enters the room wearing steel-toed boots.

Low-level programming is not useful because everyone needs to write C every day. Most people do not. The world has suffered enough segmentation faults for several lifetimes.

It is useful because it makes certain review questions harder to avoid.

Where does this memory live? Who owns this resource? What happens at the boundary? What does the caller assume? What fails if the input is weird? Which part of the system now has to carry the cost?

Those questions still matter when the code is written in a friendlier language. They matter even more when the first draft came from a machine that can produce plausible code without understanding the project’s ghosts.
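Low-level code makes those questions concrete. Here is a hypothetical C helper, invented for illustration and not taken from any real project, where each question has a definite answer a reviewer should be able to find:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: copies the first n bytes of src into a
   fresh, NUL-terminated buffer.
   The review questions apply directly:
   - Who owns the returned buffer? The caller, who must free() it.
   - What does the caller assume? That src has at least n readable bytes.
   - What fails on weird input? NULL src and allocation failure
     both return NULL. */
char *copy_prefix(const char *src, size_t n) {
    if (src == NULL)
        return NULL;               /* boundary: refuse weird input */
    char *buf = malloc(n + 1);     /* ownership transfers to the caller */
    if (buf == NULL)
        return NULL;               /* failure mode: out of memory */
    memcpy(buf, src, n);           /* assumes src really has n bytes */
    buf[n] = '\0';
    return buf;                    /* caller must free() this */
}
```

None of this is exotic. The point is that the questions have answers, and a review either finds them or inherits them.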

Machines remain machines, even when the assistant explains them in a soothing voice.

So yes, learn the boring things. Not because boredom is noble. Boredom is often just bad documentation wearing a tie.

Learn them because they make you harder to fool.

Trust is the contribution

The first post ended with a simple idea: good contribution starts before the pull request.

This one adds the pressure of 2026: the pull request itself is easier to produce than ever, so the work around it matters more.

Do not let the tool’s fluency become your public judgment. Do not make maintainers guess why your change exists. Do not send code you cannot explain once the tab is closed.

The diff is the visible part.

The contribution is the trust you build around it.

Everything else is just another notification.