Using Peer Reviews to Boost Code Reviews

All agile software developers are familiar with code reviews, but what about peer reviews? According to SmartBear’s 2020 State of Code Review, four out of five developers either agree or strongly agree that they learn from code reviews. There are two fundamental objectives with all code reviews – improve the code and help developers learn something in the process. Code reviews are good. Peer reviews can make them even better.

Source: SmartBear’s 2020 State of Code Review

Peer Code Reviews vs Other Code Reviews

Peer code reviews start before any code is written. That makes them different from other types of code reviews (formal/informal, technical, inspections, walkthroughs, etc.). The main objective of a peer code review is to communicate and collaborate with other team members to make sure you start off, and stay, on the right track. These kinds of peer reviews embody the saying, "An ounce of prevention is worth a pound of cure." Peer code reviews offer a more efficient alternative to pair programming and over-the-shoulder code reviews. They're lightweight, fast, and efficient – yet they reinforce everything involved in more formal/technical code reviews (and pull request reviews). Perhaps more importantly, peer code reviews reinforce teamwork and can make pull request reviews more efficient.

Pre-Code Review?

A good (maybe not the best) way to envision peer code reviews is moving into a new apartment with a spouse or roommate. It's a "good idea" to plan out where everything will go before you start moving the furniture.

“Honey, what do you think? The couch would fit here, we could put the television there, and the bookshelves over there. And we can hang all the pictures of Fido on the wall… after we paint it a sickening lemon color. Does that sound good?”

The big difference with peer code reviews is that you can discuss the details with confidence that you won't be forced to sleep on the couch. In real life at home with your significant other, it's usually just best to agree: "Yes, Dear. Whatever you say, Dear. I really like that layout!"

Besides, it doesn’t matter what you say, you’ll still have to move the furniture (iterate) four or five different times, anyway. But, in peer code reviews – you’ve got a really good chance of nailing it on the first attempt.

Beyond that, the substance and objectives of peer code reviews reinforce the rest of your development effort once the coding begins.

A Checklist for Writing and Reviewing Code

Our first objective is to do everything we can to make it easy for the individual reviewing our code. We’re asking for their time and expertise. We don’t want to waste their time on silly things like pointing out typos and style issues – or guessing about what they’re looking at.

The frequency of code reviews warrants setting a standard for what should be done before requesting one. This standard, or checklist, is best kept formally as part of your team documents and provided to every new member of the team.

The Code Review Checklist below provides the essentials every team should follow, but customize it for your team and work processes. In a test-driven development team, for example, test-first requirements would be emphasized.

The same criteria apply whether we’re writing or reviewing code:

  • Verify code follows style guidelines. It’s helpful to configure rules in a linter to automatically enforce them.
  • Make sure you've applied the team's naming conventions (branch, library, variables, etc.).
  • Review for grammar and typos.
  • Ideally, the code itself should explain how it works. It's helpful to add comments that concisely explain why, so everyone reading the code understands it faster (see the short sketch after this list).
  • Simplify your code wherever possible – strive to make it easy for reviewers to read.
  • Verify your code adheres to sound architectural principles (separation of concerns, encapsulation, Don't Repeat Yourself, etc.).
  • Test your code to verify it works and revise accordingly.
  • Verify the code meets any performance requirements.
  • Provide a concise but explicit title for your Pull Request.
  • Explain what the PR changes and/or highlight potential hot spots for extra scrutiny.
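
To make a couple of these points concrete, here's a small, hypothetical Python sketch (the function, names, and the 3-day grace period are invented for illustration). It shows a descriptive name, a concise "why" comment, and logic simple enough to review at a glance:

```python
from datetime import date, timedelta

# Why a grace period exists at all: billing allows a few days for payments
# to clear before an invoice is flagged, so "overdue" is not simply
# today > due_date. (The 3-day figure here is a made-up example.)
GRACE_PERIOD = timedelta(days=3)


def is_invoice_overdue(due_date: date, today: date) -> bool:
    """Return True when an invoice is past due, including the grace period."""
    return today > due_date + GRACE_PERIOD
```

A reviewer can tell what the function does from its name, and the comment answers the one question the code can't: why the cutoff isn't simply the due date.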

Including Everyone in Peer Code Reviews

It's important to include everyone in peer reviews. Equally important are guidelines for deciding who reviews whose code, and when.

Four peer review combinations:

  1. Senior Writer / Senior Reviewer: This is the ideal scenario for peer reviews. Defects and design issues are the least likely to get past this pairing. Both developers may learn something from the other in the process. But, this is a resource-intensive combo best reserved for the most complex code.

  2. Junior Writer / Senior Reviewer: The “typical” pairing for code reviews, it’s cost-efficient and effective. The senior developer’s likely to catch most defects and design issues. The junior developer has a good chance of learning some tips and techniques. However, having too high of a junior to senior developer ratio can be detrimental to the team and project.

  3. Senior Writer / Junior Reviewer: This pairing is good for training and mentoring, but not so effective for catching defects or design issues. It's still better than no review at all – even the most experienced developers make mistakes. And even if the reviewer doesn't find issues, it's a good sign if they're at least asking questions… or perhaps answering them. This pairing makes a good case for conducting "Reverse Walkthroughs," either one-on-one or as a team – a bit like high school, when teachers called on students to explain how things work.

  4. Junior Writer / Junior Reviewer: The least resource-intensive option for many teams, it is useful for catching coding issues, knowledge-sharing, and most importantly, reinforcing the code review process as a constant habit. The peer-review itself is what’s most important.

Mid-level developers fall somewhere in between these two poles, and who counts as "junior" or "senior" can vary by experience with a specific programming language, and sometimes by other factors. Pairing by language skill is a good practice for cross-training – helping your C++ developers learn Python, and vice versa.

Structured Peer Code Review Process

The process for structured peer reviews is nearly the same as, and can fit in line with, pull request reviews:

Peer Code Review
  Step 1. The author presents their idea/logic.
  Step 2. The reviewer evaluates it and can propose alternatives.
  Step 3. Steps 1 and 2 repeat until there's agreement; the author implements.
  Step 4. The author checks in frequently with the reviewer to verify they're on track, then submits a PR when complete – continuing on to the pull request review.

Pull Request Review
  Step 1. The author fixes/submits their code, addressing any questions.
  Step 2. The reviewer evaluates the code, noting any issues to fix and asking questions if something is unclear.
  Step 3. Steps 1 and 2 repeat until the reviewer recommends the code for merging.

The code is merged and proceeds to testing in a production environment. If it passes, all is good. If it fails, we go back to Step 1 of the Peer Code Review.

Code Reviews and Lines of Code

“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” – Bill Gates

On a stand-alone basis, LOC is a silly metric. Combined with other statistics, it becomes more useful – enough so that it can guide resource allocation. LOC will fluctuate quite radically day by day, hour by hour, and by project, team, and individual developer, due to a multitude of factors. But over several projects, it's possible to get a rough average of the LOC produced per developer or team, which can be helpful for allocating time for code reviews.

It’s generally expected for code reviews to cover 400-500 lines of code per hour and catch 70-90% of defects.

Time and Cost of Peer Code Reviews based on Lines of Code and Review Speed

Lines of Code   Case                      LoC Reviewed per Hour   Hours to Review   Base Cost*
100,000         Baseline for comparison   400                     250               $13.2k
100,000         Faster reviews            500                     200               $10.6k
80,000          Reduced code complexity   500                     160               $8.5k

*Based on $52.95/hour, the US Bureau of Labor Statistics' average wage for a US software developer.
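
The figures above follow from simple arithmetic: hours = LOC ÷ review speed, and cost = hours × hourly rate. Here's a minimal Python sketch that reproduces the table (the review speeds and the $52.95 rate are taken straight from it):

```python
HOURLY_RATE = 52.95  # BLS average US software developer wage used above


def review_cost(lines_of_code: int, loc_per_hour: int) -> tuple[float, float]:
    """Return (hours to review, base cost in dollars)."""
    hours = lines_of_code / loc_per_hour
    return hours, hours * HOURLY_RATE


for loc, speed in [(100_000, 400), (100_000, 500), (80_000, 500)]:
    hours, cost = review_cost(loc, speed)
    print(f"{loc:,} LoC at {speed} LoC/hour: {hours:.0f} hours, ~${cost / 1000:.1f}k")
```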

Peer code reviews should help developers start and stay on the right track. They can help avoid a lot of problems, like high code churn. Coupled with an effort to produce concise, well-structured code – the corresponding decrease in code complexity should make code reviews faster and easier to perform. That’s not to imply they should be rushed.

It’s always nice to try to put a dollar figure to things, though developer time is the true bottleneck.

Walkthroughs

Walkthroughs serve as an extension of mentoring and even as a hybrid "classroom" training session. Reverse walkthroughs, where junior developers try to explain the rationale behind a senior developer's code, are another practice to consider.

At the very least, teams with many junior and few senior developers can benefit from frequent walkthroughs. Frequency is best tied to the urgency of team performance issues – anywhere from once to four times per month.

For best results, the software engineering manager needs to identify what’s challenging their team the most:

  • Architectural principles
  • Simplifying logic
  • How developers can break up large story point tasks
  • Common types of defects
  • Coding tips and techniques
  • Writing effective tests
  • Configuring automated tools

Specific issues can be identified via performance analytics and defect tagging in Jira or other project management software. Code complexity is a killer, but it can be mitigated by showing examples that transform poor architecture/logic into good.
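
For example, if your team tags defects with labels in Jira, a short script against Jira's REST search API can surface which defect categories recur most – and therefore which walkthrough topics will pay off first. This is only a sketch: the site URL, credentials, project key, and JQL below are placeholders to adapt to your own setup.

```python
from collections import Counter

import requests

# Placeholders - substitute your own Jira site, credentials, and project key.
JIRA_URL = "https://your-company.atlassian.net"
AUTH = ("you@example.com", "your-api-token")
JQL = "project = YOURPROJ AND issuetype = Bug AND created >= -90d"


def count_defect_labels() -> Counter:
    """Tally labels across recent bugs to see which defect categories recur most."""
    counts: Counter = Counter()
    start_at = 0
    while True:
        resp = requests.get(
            f"{JIRA_URL}/rest/api/2/search",
            params={"jql": JQL, "fields": "labels", "startAt": start_at, "maxResults": 100},
            auth=AUTH,
        )
        resp.raise_for_status()
        issues = resp.json().get("issues", [])
        if not issues:
            break
        for issue in issues:
            counts.update(issue["fields"].get("labels", []))
        start_at += len(issues)
    return counts


if __name__ == "__main__":
    for label, count in count_defect_labels().most_common(10):
        print(f"{label}: {count}")
```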

Remember, the Pareto Principle is your friend – a minority of solutions will solve the majority of problems. Four dozen walkthroughs over the course of a year should bring a severely challenged team up to speed. For that matter, you might look to pair walkthroughs with your retrospectives.

Protect your investment – keep a recording of your online walkthroughs (OBS Studio, ScreenRec, BandiCam, etc.). These give your new hires something to watch while onboarding.
