Code reviews: from nitpicking to cooperation

After we gave our talk, Doing things together, at DebConf 20, there were five minutes left for the live Q&A. Pollo asked a question that I think is interesting and deserves a longer answer: "How can we still have a good code review process without making it a 'you need to be perfect' scenario? I often find picky code reviews help me write better code."

I find it useful to first disentangle what code reviews are good for, how we do them, why we do them that way, and how we can potentially improve processes.

What are code reviews good for?

Code review and peer review are great methods for cooperation aiming at:

  • Ensuring that the code works as intended
  • Ensuring that the task was fully accomplished and no detail has been left out
  • Ensuring that no security issues have been introduced
  • Making sure the code fits the practices of the team and is understandable and maintainable by others
  • Sharing insights, transferring knowledge between code author and code reviewer
  • Helping the code author to learn to write better code

Looking at this list, the last point seems to be more like a nice side effect of all the other points. :)

How do code reviews happen in our communities?

It seems to be a common assumption that code reviews are—and have to be—picky and perfectionist. To me, this does not actually seem necessary to accomplish the goals mentioned above. We might want to work with precision—a quality which is different from perfection. Perfection can hardly be a goal: perfection does not exist.

Perfectionist dynamics can lead to failing to call something "good enough" or "done". Sometimes, a disproportionate amount of time is invested in writing (several) code reviews for minor issues. In some cases, the strong perfectionist dynamics of a reviewer can create a feeling of never being good enough, along with a loss of self-esteem, in otherwise skilled code authors.

When do we cross the line?

When going from cooperation, precision, and learning to write better code to nitpicking, we cross a line: nitpicking means pedantically searching for others' faults. For example, I once had one of my Git commits at work criticized merely for its commit message, which was called "ugly" because I "use[d] the same word twice" in it.

When we are nitpicking, we are no longer giving feedback in an appreciative, cooperative way; we become fault finders instead. From there it's a short way to operating on the level of blame.

Are you nitpicking to help or are you nitpicking to prove something? Motivations matter.

How can we improve code reviewing?

When we did something wrong, we can do better next time. But when we are told that we are wrong, the underlying assumption is that we cannot change (see Brené Brown, The difference between blame and shame). We can learn to go beyond blame.

Negative feedback rarely leads to improvement if the environment in which it happens lacks general appreciation and confirmation. We can learn to give helpful feedback. It might be harder to create an appreciative environment in which negative feedback is a possibility for growth. One can think of it like a relationship: in a healthy relationship we can tell each other when something does not work, and work it out—because we regularly experience that we respect, value, and support each other.

To be able to work precisely, we need guidelines, tools, and time. It's not possible to work with precision if we are in a hurry, burnt out, or working under a permanent state of exception. The same is true for receiving picky feedback.

On DebConf's IRC channel, after our talk, marvil07 said: "On picky code reviews, something that I find useful is automation on code reviews; i.e. when a bot is stating a list of indentation/style errors it feels less personal, and also saves time to humans to provide more insightful changes." Indeed, we can set up routines that do automatic fault checking (linting). We can set up coding guidelines. We can define what we call "done" or "good enough".
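To make marvil07's point concrete, here is a minimal sketch of what such automatic fault checking does. The `style_findings` helper and its checks are purely illustrative (not the API of any real linter); in practice one would wire an existing tool such as flake8 or clang-format into CI or a pre-commit hook so the "bot" delivers these remarks instead of a human:

```python
# Toy "bot reviewer": flags mechanical style issues so human reviewers
# can spend their attention on substance instead of indentation.
# Illustrative sketch only -- use a real linter in practice.

def style_findings(source: str, max_len: int = 79) -> list[str]:
    """Return a list of mechanical style remarks for the given source text."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        if line != line.rstrip():
            findings.append(f"line {n}: trailing whitespace")
        if line.startswith("\t"):
            findings.append(f"line {n}: tab indentation")
        if len(line) > max_len:
            findings.append(f"line {n}: longer than {max_len} chars")
    return findings

# The bot reports the faults; no reviewer has to type them out by hand.
snippet = "def f():\n\treturn 1   \n"
for finding in style_findings(snippet):
    print(finding)
```

Because remarks like these come from a script, they feel less personal—and the team's coding guideline, not an individual reviewer, is the authority behind them.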

We can negotiate with each other how we would like code to be reviewed. For example, one could agree that a particularly perfectionist reviewer should point out only functional faults. They can spare their time and refrain from writing lengthy reviews about minor esthetic issues that have never made it into a guideline. If necessary, author and reviewer can talk during a retrospective about what can be improved in the long term. Or, on the contrary, one could explicitly ask for a particularly detailed review, including all sorts of esthetic issues, to learn a team's best practices as applied to one's own code.

In summary: let's not lose sight of what code reviews are good for, let's have a clear definition of "done", let's not confuse precision with perfection, let's create appreciative work environments, and negotiate with each other how reviews are made.

I'm sure you will come up with more ideas. Please do not hesitate to share them!

Update 12/2020: I highly recommend this talk (in French) by Kim Lai Trinh: Autocritique de la revue de code (bienveillante). He makes the point that code reviews are a social interaction and, as such, carry privilege dynamics with them. He changes his expectations for code review from getting "quality code" to "making sure everyone in the team contributes, gets recognition (reconnaissance) for their contribution, and gains more autonomy".