Our Code Review Process: More than Quality Assurance

Engineering Insights

Pete Whiting
Published On: March 13, 2025
Updated On: March 24, 2025

Just like writers during the editing process, programmers review each other's work, provide feedback, and work together to create a more refined, higher-quality end product.

At a basic level, this process helps ensure code quality and brings potential defects to the surface. More than that, it helps developers transfer and grow knowledge, resulting in continuously better outputs, project after project.

At The Gnar, we aim to get the most out of our code review process, both for our clients and for our team.

Review Requests

The golden rule of code review might be an obvious one, but it's worth stating: the reviewer who provides final approval cannot be the author. In fact, in some cases here at The Gnar, the reviewer is not even working on the same project.

A fresh set of eyes enables reviewers to provide new ideas, identify problems the author hadn't considered, and contribute a perspective that boosts the value of the code overall. It also gives clients access to the whole team's collective knowledge and expertise.

That said, before even finding a reviewer, an author should carefully read their own work through a "reviewer's lens," which helps them catch small items they might otherwise overlook.

Once ready to share with the broader team, the author opens a "pull request" or "PR" on a platform such as GitHub, GitLab, or Bitbucket, where developers store, manage, track, and control code changes. The author then shares a link to the pull request in a dedicated company Slack channel so that the whole team has the opportunity to review it. The post also includes the topic, the urgency, and sometimes a request for a specific reviewer.
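To make those mechanics concrete, here is a minimal sketch in Python of what that hand-off can look like, using GitHub's REST API to open the pull request and a Slack incoming webhook to share the link. This is an illustration rather than our actual tooling; the repository, branch names, and environment variables are hypothetical.

```python
import os

import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]            # hypothetical: a token with repo access
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # hypothetical: webhook for the review channel


def open_pull_request(owner: str, repo: str, head: str, base: str,
                      title: str, body: str) -> str:
    """Create a pull request via GitHub's REST API and return its browser URL."""
    response = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"title": title, "body": body, "head": head, "base": base},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["html_url"]


def share_in_slack(pr_url: str, topic: str, urgency: str,
                   reviewer: str | None = None) -> None:
    """Post the PR link, topic, and urgency to the team's review channel."""
    text = f"PR up for review: {pr_url}\nTopic: {topic} | Urgency: {urgency}"
    if reviewer:
        text += f"\nRequested reviewer: {reviewer}"
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10).raise_for_status()


if __name__ == "__main__":
    url = open_pull_request(
        owner="example-org", repo="example-app",     # hypothetical repository
        head="feature/password-reset", base="main",  # hypothetical branches
        title="Add password reset flow",
        body="Commits are ordered to tell the story of the change; see each description for context.",
    )
    share_in_slack(url, topic="Password reset flow", urgency="Needs review by end of day")
```

Bundling the topic and urgency into the Slack post lets teammates triage requests at a glance, whether or not a specific reviewer is named.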

An Iterative Approach

Code review is a quality assurance practice programmers should never cut corners on, no matter the scale of work. That being said, it takes time, and speed is a factor we can't ignore.

To keep larger projects from stalling, we break them down into the smallest units possible, allowing us to review smaller pieces of code quickly and catch any mistakes or areas for improvement early in the project. This is aligned with our agile approach more generally, which focuses on the delivery of work in small, consumable, high-quality increments.

Before opening a pull request and asking others to review their code, authors save their work incrementally in "commits," each with a description of what that piece of the overall change is about. This is an opportunity for authors to review their own code once more, but it also creates a step-by-step narrative for reviewers of how and why the code is changing.

Our Communication 'Commit'ment

While the structure of how we share and review work is important, communication is what makes the whole system work. And since we keep most of our code review conversations within GitHub, commit descriptions and reviewer comments are the primary vehicles for that communication.

Here are a few of our internal guidelines, which help structure this process:

Reviewers: Avoid Dogmatic Statements

Unless a change is going to break something, every reviewer comment is meant as a suggestion; authors aren't required to accept it. Sometimes comments are simply food for thought. So instead of saying "do this" or "do that," we stick to a more open, collaborative, brainstorming-style approach (e.g., "what do you think of this?").

Authors & Reviewers: Use as Much Detail as Possible

Detail from authors is critical, especially if someone outside the project is going to be reviewing. Not only does it give reviewers full context (exactly what is changing and why), but it also allows them to learn from the author's work. When it comes to reviewer comments, detail helps the author understand the purpose of the suggested change. For instance, the comment "NIT" (as in nitpick) signals to the author that the suggestion is minor and not quite as critical as, say, "this approach will break X." In both cases, the reviewer should explain the "why" behind their suggestion.

Authors: Always Acknowledge Comments

While comments are suggestions and not rules, it's still important for the author to close the loop and let the reviewer know whether they have accepted a suggested change. This ensures all comments have been seen, considered, and addressed. It also encourages further conversation and knowledge transfer. Many acknowledgments don't need to be much (for minor things, an emoji, a polite sentence, or even a joke might suffice!), but more substantial implementation changes may warrant a longer comment or discussion.

Reviewers: Celebrate Positive Code

For us, code review is less about catching mistakes and more about constantly improving and finding areas for growth (if each of our projects isn't better than the last, aren't we doing something wrong?). Celebrating each other's moments of brilliance, no matter how big or small, is validating, motivating, and inspiring. Positive comments like "TIL" (today I learned…) or "This is sweet! How does it work?" reinforce positive behavior, bring smiles to our faces, and reiterate the purpose behind the review process.

Engineers, Speaking Human

Here at The Gnar, we like to refer to ourselves as "engineers, speaking human." It's a nod both to our collaborative nature and our commitment to defying the dreaded "working with developers" stereotype.

Our code review process is not an afterthought, nor a bonus task, but an integral part of our process and culture. For us, it's yet another way we flex our collaboration muscles, practice effective communication, and relate to each other on a human level.

As we review and celebrate each other's code, we grow our sense of mutual responsibility and collective ownership over the work we do. We also take pride in our continued dedication to a job well done.

Learn more about The Gnar and our team.

Written by Pete Whiting, Head of Growth and Client Service, The Gnar Company

Pete Whiting is the Head of Growth and Client Service at The Gnar Company, where he leads business development, marketing, and client service activities to help companies build high-quality custom software. With over a decade of experience at the firm, Pete specializes in driving revenue growth and ensuring high utilization of development teams through strategic go-to-market and product marketing initiatives.

Prior to joining The Gnar Company, Pete held executive roles in operations and marketing at firms such as Dispatch and MeYou Health. He also spent five years at Vistaprint, where he served as Director of Product Marketing and Strategy for the Asia Pacific region, accelerating annual revenue and gross profit growth through data-driven planning and multi-channel marketing. Pete’s career began in engineering and management consulting, including seven years at Deloitte Consulting leading growth strategy and post-merger integration for global industrial and high-tech clients. He holds an MBA with honors from UCLA Anderson and both a Master’s and Bachelor’s degree in Materials Science and Engineering from Brown University.
