Categorizing Comments for Better Code Reviews in Distributed Teams

With six developers split evenly between San Francisco and Belgium, the team had few overlapping working hours. We had to work well asynchronously and kept looking for opportunities to collaborate better.

One such opportunity was our Code Review process. We used it to maintain codebase quality and to familiarize the team with code they hadn’t worked on themselves.

Code Review process

Our Code Review process was typical. The contributor would open a Pull Request in Bitbucket and link it to the Jira ticket. There, reviewers left comments on the code. Once the contributor had addressed all comments and updated the code, the reviewers would approve the Pull Request. Only then could the contributor merge the code.

Our process was unusual in that we expected all team members to review, not only the most senior ones. Reviewing is as much a learning experience for the reviewer as it is for the contributor. As such, we tried to use it as a coaching tool in addition to a code quality tool.

Two issues

This process worked reasonably well for us, but two things stood out that we wished to improve.

First, most feedback focused exclusively on significant offenses: things that had to change. This lack of diversity in feedback could diminish the contributor’s motivation and keep developers from learning the finer nuances of coding.

Second, when reviewers did suggest minor improvements or alternative solutions, it was unclear to the contributor what had to change and what didn’t. Without explicit agreements, expectations diverge, and frustrations follow. The reviewer doesn’t want their critical remarks to be ignored. The contributor wants to quickly distinguish critical changes from mere suggestions. We spilled too many words on establishing what was what.

Improvement

To address these two issues, we settled on a system of categorizing feedback into six buckets. Reviewers would prefix each comment with its category. Each category specifies the action expected of the contributor, aligning expectations between contributor and reviewers.

We adopted these six categories:

++: Compliments quality code. Reinforces best practices and celebrates eloquent snippets. Not only does it motivate the contributor, but it also teaches other reviewers good coding patterns. No action is needed.

Q: Question. The reviewer wants more information, usually to ask why the contributor coded something in this particular way. Often leads to a small discussion. The contributor needs to provide an answer.

NTH: Nice To Have. A minor improvement the reviewer would like to see. Up to the contributor to make the change, depending on their time constraints.

R: Required. A change deemed necessary to meet the team’s quality standards. The contributor needs to update the code. They can push back on the change by replying with a strong motivation for doing so, which happened occasionally.

PP: Personal Preference. The reviewer offers an alternative, usually explaining the tradeoffs. Instrumental in coaching developers to extend their coding repertoire. Up to the contributor whether to adopt it.

CS: Code Style. An easy-to-make change to follow a pattern the team agreed upon. The comment links to the code style document for the contributor to review. The contributor needs to make the change.
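
To make this concrete, here is what a handful of prefixed comments might look like on a Pull Request (the comments themselves are invented for illustration):

++: Clean use of a guard clause here. Much easier to follow than the nested version.
Q: Why do we fetch the user inside the loop instead of once before it?
NTH: Consider extracting this block into a small helper for readability.
R: This query isn’t parameterized; please use a prepared statement.
CS: Function names should be camelCase. See the code style document.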

The team adopted this convention with ease. We saw more diverse and higher-quality feedback. The process became more positive, and junior developers felt more confident participating and expressing their opinions. Our Code Review process became a practical coaching tool in our distributed team. Give it a try and see whether it improves your Code Review process too.