My team at work has a system whereby, after a task is completed and committed to source control, it is assigned to a developer unrelated to the task for a code review. The idea is that fresh eyes on the problem may pick up something the original participant(s) didn’t consider.

But what happens during a “code review”? There’s no single answer to this question, and my experience is that it varies considerably based on a number of factors. After thinking a little about what I do when reviewing code, I came up with a list of stages in a code review and figured I’d share it with everyone. The list below outlines the process I (personally) go through when reviewing a change. I don’t actively pursue these topics one by one, but I’ve noticed that I consider them subconsciously.

  • What was the problem being solved?
  • Do I have a clear understanding of the problem?
  • Can I reproduce the problem?
  • What code was checked in?
  • Is the code change unit tested?
  • Does the code change fix the problem (functional test)?
  • Does the code change address the problem (code analysis)?
  • Does the change consider other possible related problems?
  • Does the code change introduce new bugs?
  • Are there any stylistic recommendations?

What was the problem being solved?
Coming in as a fresh set of eyes means that I don’t know the problem apart from the very high-level description I heard during our stand-up meeting. If the change is trivial, or if there is enough information in the issue system, then I won’t bug the developer; otherwise I’ll ask them what the change was about and get them to describe the problem being solved.

Do I have a clear understanding of the problem?
For issues I don’t immediately understand, I tend to ask questions until I get an “aha” moment. With a solid understanding of what the enhancement or fix was, I’m better armed to think both inside and outside the box for any potential problems.

NB: Thinking outside the box is great, but if you can’t even think within the box, then you’ll struggle – ask as many questions as you need to until you completely get what’s going on.

Can I reproduce the problem?
This step is optional depending on the situation. I say that very carefully, because I’m of the opinion that if you can reproduce the problem, you should – just to confirm it actually existed in the first place. A junior developer I once worked with spent a few hours trying to fix a problem, and during code review we discovered that it was an environment-specific setting on his machine, not a problem with the system. This is a good reason to reproduce the problem first. Depending on the seniority of the developer, I’ll make a judgement call as to whether to try to repro or not; more senior developers are more likely to be trusted to have identified the real cause of a problem.

One situation where you might not reproduce the problem at the start depends on your development workflow. If you perform reviews after the change has been committed, then the only way to reproduce the problem is to avoid syncing to the latest version (which contains the change), or to roll back the change. I don’t particularly like using this as an excuse though, and if I can comment out a few “key” lines in order to repro a problem, I will.

What code was checked in?
Source control is great. What was checked in for the fix? Generated code files? Hand-made changes? Modified related binaries?

If a file was committed because a modification was necessary, then (der) it had to be checked in. But what about files which are modified but don’t necessarily need to be checked in? One example in Delphi: when you modify a module, the IDE makes changes to both the code file (.PAS) and the visual form file (.DFM). The .PAS changes consist of your code, and unless you made specific UI changes, there is no reason why the designer should have changed the .DFM. It’s silly, it’s redundant, and worse yet, if it gets committed it adds unnecessary noise to your change history.

Another example, more pertinent to .NET, would be autogenerating code from a database or XML schema. If the schema hasn’t actually changed, then the only difference in your files will be an autogenerated comment with the date/time the file was regenerated. The interfaces haven’t changed, so who cares when the file was last generated?
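One way to cut that noise – sketched here in Python for brevity, and with a made-up timestamp marker rather than the output of any particular codegen tool – is to check, before committing a regenerated file, whether anything other than the timestamp comment actually changed:

```python
import re

# Hypothetical marker: many codegen tools stamp a header comment like
# "// Generated on 2008-06-14 10:32" into every regenerated file.
TIMESTAMP_RE = re.compile(r"Generated on .*")

def meaningful_change(old_text, new_text):
    """Return True only if the two versions of a generated file differ
    in something other than the autogenerated timestamp comment."""
    def normalise(text):
        # Replace the volatile timestamp with a fixed token before comparing.
        return [TIMESTAMP_RE.sub("Generated on <stripped>", line)
                for line in text.splitlines()]
    return normalise(old_text) != normalise(new_text)
```

Run against the pre- and post-regeneration contents before committing: if it returns False, revert the file and keep the change history clean.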

I hate making this mistake myself, so I’m sure to point it out in anyone’s code I review.

Is the code change unit tested?
I won’t get into the good vs evil of unit testing, other than to say that in most cases I would expect to see some form of automated test checked in with the code. Without a test, it’s much harder to demonstrate that the problem ever existed, or that the fix actually addresses it. I say “most cases” because on some projects (particularly the very brown/black-field ones), some testing is near impossible. I do expect new code which doesn’t interact much with old code to have a number of comprehensive tests.
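As a sketch of what I’d hope to see checked in alongside a fix – hypothetical names, and in Python rather than .NET for brevity – a regression test pinned to the reported bug, plus one guarding the existing behaviour:

```python
# Hypothetical code under review: the reported bug was that totalling
# an empty cart raised an error instead of returning zero.
def cart_total(prices):
    return sum(prices)  # the fix: sum() copes with an empty list

def test_empty_cart_totals_zero():
    # Pinned to the bug report: this would fail against the old code,
    # proving the problem existed, and passes against the fix.
    assert cart_total([]) == 0

def test_normal_cart_still_works():
    # Guards existing behaviour so the fix didn't break the normal case.
    assert cart_total([10, 20, 5]) == 35
```

The first test is the one that matters in a review: it documents the bug, and it keeps the bug from quietly coming back.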

Does the code change fix the problem (functional test)?
This is pretty straightforward. Usually you will have a list of steps to reproduce the problem, or a way of examining the new feature. Follow these steps carefully and ensure that the base scenario works as expected. Then start trying alternatives methodically, to determine that nothing immediately around the changed area has unintentionally been affected.

For instance, suppose you’re testing that when a button is clicked with option A turned on, the widget somersaults. Then check that when the button is pressed with only option B enabled, the widget does a pirouette. Once satisfied, check that with both options enabled, the widget still somersaults and also pirouettes at the end.

The number of permutations will obviously grow exponentially as you introduce more variables to the problem. You can keep this manageable by applying the theory behind pairwise testing.
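To make that concrete – a rough sketch in Python, not tied to any real testing tool – here’s the blow-out of combinations for four hypothetical on/off options, and a naive greedy selection in the spirit of pairwise testing, keeping only enough combinations that every pair of option settings appears together at least once:

```python
from itertools import product

OPTIONS = 4  # four hypothetical on/off flags on the widget form

# Exhaustive testing: every combination of the four flags.
exhaustive = list(product([False, True], repeat=OPTIONS))
print(len(exhaustive))  # 2^4 = 16 scenarios

def pairwise_subset(combos, n):
    """Greedily pick combinations until every pair of settings
    (option i = a, option j = b) has appeared together at least once."""
    def pairs(combo):
        return {(i, j, combo[i], combo[j])
                for i in range(n) for j in range(i + 1, n)}
    needed = set().union(*(pairs(c) for c in combos))
    chosen = []
    while needed:
        # Pick the combination covering the most still-needed pairs.
        best = max(combos, key=lambda c: len(pairs(c) & needed))
        chosen.append(best)
        needed -= pairs(best)
    return chosen

subset = pairwise_subset(exhaustive, OPTIONS)
print(len(subset))  # far fewer than the full 16
```

Real pairwise tools use cleverer constructions than this greedy pass, but the payoff is the same: coverage of every interaction between two options without paying for the full cartesian product.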

Does the code change address the problem (code analysis)?
Okay, so you’ve tested the change from the front-end and it appears to work. Now, what was changed in the code? There might be plenty of ways to solve a problem, and it’s entirely plausible that the worst one was taken.

For example, assume the following bug: when you click button A, an exception dialog is shown claiming that the widget was null and the process couldn’t continue. The worst fix in this case is to swallow the exception and do nothing about it, potentially leaving the system in an unstable state. A better solution would be to find out why the widget was null and prevent the situation from arising in the first place.
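A sketch of the two approaches (all names are hypothetical, and it’s in Python rather than .NET for brevity):

```python
class Widget:
    # Hypothetical stand-in for whatever button A drives.
    def somersault(self):
        return "somersault"

def worst_fix(widget):
    # Swallows the exception: the error dialog goes away, but the
    # click silently does nothing and the underlying cause remains.
    try:
        return widget.somersault()
    except AttributeError:  # widget was None
        return None

def better_fix(widget):
    # Addresses the cause: guarantee a widget exists before acting,
    # so the null case never reaches somersault() in the first place.
    if widget is None:
        widget = Widget()
    return widget.somersault()
```

Both versions stop the dialog from appearing, which is exactly why the functional test alone isn’t enough – only reading the code reveals which one you got.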

This is not a question about style, or design (anti)patterns or comments – this is about whether the implementation was appropriate for the problem.

Does the change consider other possible related areas?
Joe makes a fix to the CarpetWidget class and sends it to you for review. Has Joe considered that the same problem exists in the HardwoodWidget, or the TileWidget? Sure, you might have functionally tested all the relevant combinations, but tracing through the code in your head may expose other areas or conditions which you previously hadn’t considered, or had deemed irrelevant.

Does the code change introduce new bugs?
So the developer has fixed the problem and they’ve covered all possible conditions – great! Now, what’s wrong in THEIR code? New bugs can easily be introduced without realising, and can be quite insidious. One example might be using an unmanaged resource and not disposing of it after use. Or catching an exception and not re-throwing it correctly. Simple mistakes which may never show up, but if they did, they could be considerably problematic.
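Both of those mistakes have direct analogues in most languages; here’s a hypothetical sketch in Python (the .NET equivalents would be forgetting a using/finally block around an IDisposable, and “throw ex;” clobbering a stack trace where “throw;” would preserve it):

```python
def leaky_read(path):
    # Bug: if read() throws, the handle is never closed and the
    # resource leaks until the process exits.
    f = open(path)
    data = f.read()
    f.close()
    return data

def safe_read(path):
    # The context manager guarantees the file is disposed of, even
    # when an exception is raised mid-read.
    with open(path) as f:
        return f.read()

def bad_rethrow():
    try:
        return {}["missing"]
    except KeyError:
        # Bug: re-throwing with the cause explicitly discarded, so
        # the original exception and its context are lost.
        raise RuntimeError("lookup failed") from None

def good_rethrow():
    try:
        return {}["missing"]
    except KeyError as original:
        # Chains the original exception so nothing is lost.
        raise RuntimeError("lookup failed") from original
```

On a quick functional test all four behave identically in the happy path, which is precisely why this class of bug needs a reviewer reading the code rather than clicking the button.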

Are there any stylistic/design recommendations?
This is by far the stage most abused by people performing code reviews. In my experience, most developers fall into one of three camps. The first camp consists of developers who have suggestions to improve style and/or design but are too afraid or shy to come forward and express their opinion. These people are not helpful to a development team: they short-circuit the feedback cycle, which doesn’t promote knowledge sharing. Everyone has an opinion. Some people have better ones than others. If you don’t communicate, one (or both) parties will never improve.

The second camp consists of developers who have nothing but petty, small suggestions. Their approach to a code review is “My way would be different, so I’ll pick on all the small things which make me feel better”. For example, changing a “while” loop to a “do..until” loop because it might be marginally more semantically correct. Seriously, who cares? The while loop is a much more common looping paradigm than a do-until, and therefore more likely to be understood first time by anyone else reading the code. In the absence of any technical reason why the loop should be changed, it’s an unnecessary waste of time and not productive to the end goal (delivering the product).

The third camp consists of developers who acknowledge the flaws in the other two camps and consciously balance what is important against what isn’t. These are people who have a fair number of years under their belt and appreciate the value of time as a commodity. Sure, changing that while loop to a do-until would be cute, but the code is already written, it works, it’s tested, and just because it’s not mine doesn’t mean it’s bad!

A lot of developers think that a “code review” involves only this one stage, and fail to really consider many (if any) of the other points above (“Yeah, it looks good, that’ll do”). I once worked in a place where the code review process consisted of my manager scrolling up and down through the diff comparison and checking that there looked to be enough lines modified. Needless to say, some people are not cut out for the job.

And that’s how it works for me. Writing this post has helped me solidify the process a little more, even if only in my own head. I welcome your thoughts on the topic – maybe there’s something I’m missing?

I posted the following message on a developer list. I’m so frustrated by the lack of alternatives to NCover…

We’ve been using the free version of NCover in a number of our projects (v1.5.8), and for the most part, we’re happy with what it does.

However, we’ve recently come to realise that the free download of NCover doesn’t work on x64 machines (there’s an incompatibility when it tries to register the profiling DLL). This prompted me to have a look for another free/OSS product which works in the same or a similar manner to NCover… and I’m coming up empty-handed.

Am I the only person who has questioned why I should need to purchase the enterprise version of the product just for x64 support? And more importantly, how come there is a complete drought of alternatives to NCover? I’ve found PartCover, but I haven’t used it, and it doesn’t look to be as complete as NCover…

I’m shocked, given that the OSS community seems mad about creating alternatives and derivatives of toolsets (think NUnit vs MbUnit vs xUnit, or NMock vs Rhino Mocks vs Moq…). There seem to be a number of alternatives for other TDD tools, but as far as coverage goes, is NCover really the only thing out there?

If you know of any, I’d like to hear about it…

Also known as a code sniffer, this tool analyses your source code and reports where the code fails key standards which should be followed across your entire codebase.

    Source Analysis is similar in many ways to Microsoft Code Analysis (specifically FxCop), but there are some important distinctions. FxCop performs its analysis on compiled binaries, while Source Analysis analyzes the source code directly. For this reason, Code Analysis focuses more on the design of the code, while Source Analysis focuses on layout, readability and documentation. Most of that information is stripped away during the compilation process, and thus cannot be analyzed by FxCop.

Sweet. I’m going to try and push it into one of our local projects here 🙂

[UPDATE]
…bugger. It looks like it’s reliant on VS2005/.NET 2.0.

Why am I still stuck on this lousy 1.1 project? 🙁