Lessons Learned from Embedded Code Reviews (Including Some Surprises)

Jason Sachs, August 16, 2015

My software team recently finished a round of code reviews for some of our motor controller code. I learned a lot from the experience, most notably why you would want to have code reviews in the first place.

My background is originally from the medical device industry. In the United States, software in medical devices gets a lot of scrutiny from the Food and Drug Administration, and for good reason; it’s a place for complexity to hide latent bugs. (Can you say “Therac-25”?) Basically, the presence of software bumps up even the lowest-risk (Class I) devices to require design control. No software? No problem — with a few exceptions, only Class II and Class III devices require design controls. Software? Aha — you need design controls, even if it’s a Class I device. Requirements, reviews, documentation, plans, verification, validation, blah blah blah.

At the time I worked mostly on circuit design, and although I wasn’t a software engineer, because I knew how to program in C and C++, I did attend a number of software design and code reviews as an independent reviewer. Essentially the technical leads would go around looking for someone and say “Hey, you’re not on the XYZ team, right? We need you as an independent reviewer.” And my schedule would get busier. At the time — this was in the late ‘90s — what we’d do for software reviews is read the software requirements and design documents (requirements documents told what the software needed to do; design documents told you in high level terms how it was going to do those things), and then print out a copy of the code. You heard right. An actual printout. On paper. Ahead of time we’d read the code and scribble notes and questions on the printouts and then at the review we’d go over each file and raise any concerns with the team members, and hand in our written comments. Because we didn’t have any software tools to let you comment on source code. You could add comments to a Microsoft Word document, that was easy. But not source code. I suppose we could have made a copy of the source code files, and added comments directly in the source code itself, like this:

if (foo = 5)         // jms: shouldn't this be "foo == 5"?
   do_something();
   // jms: please enclose with {}

and then given the commented files back to the software developers… but instead we generally printed out C and C++ and assembly code. Assembly code was the worst, totally unreadable, and a lot longer.

Nowadays there is software that lets you add out-of-band comments as annotations separate from source code, specifically for review purposes. But we’ll get to that in a moment.

Anyway, here we are in 2015. My team is working on our motor control code base. And it’s a fast and furious effort. I’m working on state machines, someone else is doing fault detection, another guy is working on the hardware abstraction layer, all at the same time, and it’s long past time we should have given our source code a bit of scrutiny. So we agreed we should have code reviews. I had been through them before, from the medical device industry, but my manager and most of my teammates hadn’t. Well, of course the big question from the manager is: how long is this going to take? And of course I say, well, I don’t know, because the team hasn’t done them before. We started estimating… let’s see… 40 or 50 files, maybe 3 or 4 files at a time for a two-hour review… yikes, that’s a dozen reviews; if we do two a week, that’s six weeks of code reviews! And then we have the heated discussion of, well, we could do more than two reviews a week… but then we’re going to really interrupt the work that we’re trying to get done in addition to the code reviews.

So we started out with a test review to see how things went. We did this ASAP, so that if there were any kinks in the system, we could work them out. And I picked the most stable, innocuous file we had in our whole codebase… let’s call it simple.h. We were still doing a lot of work on much of the rest of the codebase, so I didn’t want us to spend time reviewing code that we knew we were going to rewrite anyway.

We had access to an Atlassian product called Stash, so I fiddled around with it and figured out how we could do a code review. Stash is essentially intended to work like GitHub, but on your company’s servers instead of in the cloud. It hosts Git repositories, lets you browse code history online, and manages pull requests. There’s no issue tracking in Stash, but we have JIRA and there’s good integration between JIRA and Stash. And there are commenting features, which is what I was interested in. Unfortunately these don’t amount to formal code reviews; those are in another product from Atlassian called Crucible. But Stash does let you comment on changesets in a few places:

  • in the diffs of the changeset itself
  • as a general comment on any changed file in the changeset itself
  • in the diffs of a pull request
  • as a general comment on any changed file in the pull request
  • as a general comment on the pull request

But we needed to review the entire simple.h file. Stash doesn’t let you do that; everything has to be in the context of a diff. I got around this by creating an empty dummy branch in the Git repo on Stash, then creating a pull request from the master branch to the dummy branch, so that everything counted as a diff. Problem solved! Or so I thought… more about this later.
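
In case it helps anyone trying the same trick, here is roughly what that setup looks like from the command line. The branch name review-base is made up for illustration; you could just as easily create the branch through the Stash web interface.

    # Create an orphan branch containing a single empty commit (branch name is
    # illustrative). It shares no history with master, so a pull request from
    # master into it shows every file in the repository as an added diff.
    git checkout --orphan review-base
    git rm -rf .                       # clear the index and working tree
    git commit --allow-empty -m "Empty base branch for whole-file code reviews"
    git push origin review-base
    git checkout master                # back to normal work

Then you open a pull request in Stash from master into review-base, and every file shows up as reviewable.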

Again, what Stash does is host Git repositories. But we’re still doing day-to-day work in SVN. Our team has experience working with SVN, but barely any with Git, so we’re going to be staying with SVN for a while. But that’s okay; I found a neat utility called subgit, which converts the entire history of any section of an SVN repository to a Git repository. It converts commit messages, handles svn:mergeinfo, and handles translation of SVN branches (whether the usual trunk/branches/tags scheme, or anything custom – you can tell it which SVN paths to translate into Git branches). And it’s fast! Much quicker than git svn, which crawled hopelessly over our department’s gargantuan 70,000+ revision repository. And subgit is free, at least for manual use. All I have to do is run it periodically to keep up with recent SVN commits and then push them to our Git mirror in Stash. There’s a premium version which will automatically handle the updates in real time, in both directions, from SVN to Git and from Git to SVN. Awesome! The only thing it doesn’t handle is SVN externals, which don’t have an exact equivalent in Git anyway. We have a few SVN externals, but nothing really critical. So the Git issue was all set: I just use subgit to mirror our SVN subtree into Stash.
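
For reference, the manual mirroring workflow looks roughly like this. The URLs and paths are placeholders, and I’m quoting the subgit commands from memory, so treat this as a sketch and check the subgit documentation for the exact options.

    # One-time conversion of our SVN subtree into a local bare Git repository
    # (exact subgit options may differ; see its documentation).
    subgit configure https://svn.example.com/repos/dept/motor-control motor-control.git
    # ...review the generated config and author mapping under motor-control.git/subgit/...
    subgit import motor-control.git

    # Periodic refresh: pick up new SVN revisions, then push them to the
    # Git mirror hosted in Stash.
    subgit import motor-control.git
    git -C motor-control.git push --mirror https://stash.example.com/scm/mcu/motor-control.git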

I had to give our team instructions. I said we’re going to be reviewing simple.h; please add comments on any issues you feel are relevant; just click on the left-hand side of a line of code (Stash has a little + symbol that pops up when you hover over a line) and type in a comment; please do NOT comment on any other files at this time. That last instruction initially got ignored; team members were exploring Stash and commenting on a few files in other changesets, but we corrected this. And the team is still used to plaintext; Stash supports Markdown in comments, but I’m the only one who frequently uses Markdown formatting.

Quality Assurance: The usual reasons to have code reviews

Anyway, this article is about reasons to have code reviews, so I’ll cover the straightforward ones first. These I’ll file under the category of Quality Assurance (QA):

  • Because our process says we have to. This is probably the most common. You may not have a choice in the matter; it’s part of your company’s standard operating procedures. A less formal variant of this reason is “Because our manager says so.”

  • To find bugs. Hopefully you don’t have any bugs (yeah, right), but sometimes programmers make them, and with 10 other people looking over code, you can pick up on things the programmer missed.

  • To identify concerns. Even if there aren’t bugs, sometimes there are special unit tests or integration tests that should be run, and this gives the team a good opportunity to propose those kinds of tests. Other concerns are “code smells”, which are technically correct but suspicious software practices, like using a variable named temp16 for several different purposes within one function:

    int32_t shimmer76(const Data *data, int16_t n)
    {
      int16_t temp16;
      int32_t result = 0;
      // temp16 is a floor wax!
      for (temp16 = 0; temp16 < n; ++temp16)
      {
         result += tweedledee(data, temp16);
      }
      // and a dessert topping!
      temp16 = tweedledum(data);
      result += ((int32_t)(result >> 16)) * temp16;
      return result;
    }
    

    That way we save memory, right? Because the compiler would have to put two separate variables into different memory locations, right? Think again: any decent compiler will notice that the loop counter and the later temporary have non-overlapping lifetimes and reuse the same register or stack slot for them anyway, so recycling temp16 saves nothing and only makes the code harder to follow.

  • To make sure we’re adhering to coding standards. We have a coding standard, so that our coding style is ideally uniform among the whole team. In practice everyone has their own stylistic quirks, but at least this makes things more uniform than they’d be if we each did our own thing.

  • To make sure code is readable and adequately commented. This is really crucial, especially for the code my team is working on, because it’s published on the company’s website, and customers use it as a starting point for their motor controller development. So if something in the source code is cryptic or confusing, it’s not acceptable and needs to be addressed. But let’s say you’re working on proprietary source code that no one else is going to see. It still needs to be readable, because even if you write it and understand it, someone else on the team may have to work on it later. Or you may no longer work there and your knowledge of the code may be lost. Or a few months go by, and even though you’re still the one working on it, you can no longer remember why you did what you did in that flash of clever coding fury.

  • To act as a formal checkpoint for code merges. Once a set of code changes completes review and all the relevant action items have been addressed, the changeset is deemed acceptable to merge into the main branch of source code.

So we had our first code review, and it took most of the two-hour slot I had reserved. And we found all sorts of little things! Mostly they were minor formatting issues, but the team gave “simple.h” a lot of scrutiny. In some ways, this was good for quality assurance, but we were going to be in trouble if each file took two hours. One reason I’ve heard for why people don’t find code reviews useful is that you tend to focus on formatting and style issues rather than the semantics of the source code. It’s kind of like inspecting a car and getting all upset because there are a few paint chips and scratches and rust spots on the outside and some dog hair on the inside — but ignoring that the transmission skips directly from 2nd to 4th gear and back, and that the brakes occasionally lose hydraulic pressure at random with no warning. You know this kind of thing — failing to see the forest for the trees. We fell into this trap ourselves in the first code review. So I talked with my manager and we decided to be a little bit stricter in the remaining code reviews: all of the code reviewed had to meet the following criteria, at least two business days prior to the review, so the team had time to read it:

  • source code compiles with no errors or warnings
  • a quick informal test of the program is successful (this was some motor control code, so we turn it on and see if it spins the way we expect)
  • source code is checked into SVN
  • identifiers in the source code meet our team’s coding standards
  • all declarations of exported functions (those used in other parts of the program) include a documentation comment
  • supplementary documentation is made available to the team
  • source code passes static-checking tests with vera++

This last point needs some elaboration. I wanted a way to make sure we addressed the bulk of coding style issues before the review. And the only way to do this and be nitpicky about it, without getting personally involved and going over every file with its author before the review, was to ensure we had an objective mechanism for testing code style that each team member could easily run in advance to determine if the source code was okay. I looked around for free software that could do this, and settled on a program called vera++, which has customizable rules to scan C/C++ code and check for rule violations. It wasn’t the best solution, but it’s what I could get working quickly, and I could include rules covering spacing around operators, brace positioning, or spacing between the comment delimiters and the comment text (e.g. /* test */ was OK, but /*test*/ was not). (Maybe I’ll write more about this in another article.)
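
As a rough illustration of how this fits into the process (the script and directory names are made up, and vera++’s command-line options vary a bit between versions, so treat this as a sketch rather than a recipe), the pre-review gate boils down to running vera++ over the sources and refusing to schedule the review if it reports anything:

    #!/bin/sh
    # check_style.sh -- hypothetical pre-review style gate.
    # vera++ applies its rule scripts to each file it is given and prints one
    # line per violation; an empty report means the files are ready for review.
    REPORT=$(find src -name '*.c' -o -name '*.h' | xargs vera++ 2>&1)
    if [ -n "$REPORT" ]; then
        echo "$REPORT"
        echo "Style violations found; fix these before scheduling the review."
        exit 1
    fi
    echo "No style violations found."

Each team member could run something like this locally before handing files over for review.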

This actually worked very well, once I got the team up to speed in running vera++. The rest of our code reviews spent very little time on formatting issues, and we focused on the content of our source code rather than style. We finished up reviewing the codebase in nine reviews taking an average of around 90-100 minutes each.

Some surprises

Okay, so I’ve listed six reasons to have code reviews under the QA umbrella. The other reasons were ones that I had not anticipated, and were a pleasant surprise to me. I picked up fairly quickly on them, since they brought a number of major advantages to our team’s software effort.

There are a few reasons that I would categorize as “threat of QA”:

  • Upcoming scrutiny reminds authors that their code is going to be looked at. In our case, the fact that we were now conducting code reviews encouraged team members to do their own review ahead of time to polish up their code, and in some cases we ended up rewriting portions of our software purely because team members felt that the source code wasn’t ready for review. I rescheduled some of the reviews to let the rewritten modules go last, so there was time to get them into good shape.

  • Readiness for a code review gives authors a milestone beyond functional requirements. It’s not enough that the code works. Our team now understands that cleaning up code and creating documentation is part of the software process, and we have a better idea of when our code is really “ready”.

And the last reasons for having code reviews are those that fall under the category of team training. This is something I’m really happy about. Our team consists of application engineers: none of us are software engineers in the traditional sense; instead, we have backgrounds in circuit design, embedded systems, microcontrollers, signal processing, control systems, and motor control, with enough experience in C and C++ to write firmware as necessary. We do plenty of other things besides software engineering. So some of our team members are still learning some of the practical skills of software design. Here are the other reasons I’d give for having code reviews:

  • To allow team members to learn how other parts of our codebase work. Each of us owns some of the software modules. When we are working on software design, we tend to spend most of our time focusing on the modules we own, and we don’t usually get involved in the other modules. The code review process allowed us to re-familiarize ourselves with these other modules.

  • To facilitate learning good/bad coding practices. This was an opportunity for the less experienced team members to learn what works well and what doesn’t work well, and improve the quality of the software they write.

  • To facilitate a convergence rather than divergence of styles. This is something I don’t completely understand, but I guess it’s a subtle form of peer pressure. Software, like written and spoken languages, involves idioms, which are stylistic flourishes that have acquired certain standardized meanings after repeated use. You can look at a piece of software and when you notice the idiom used correctly, it’s a cue for understanding what the code is trying to do, like, “Oh, he’s iterating over an array” or “Oh, she’s doing a 16 × 16 multiply.” Our team started copying each other’s idioms in our source code, not because we asked each other to do so, but because it just felt more natural. Even though I consider myself a fairly experienced C/C++ programmer, some of the styles other people used in our source code were better received by the team, and as a result I began to use those styles rather than my own. I’m curious about the psychology of this — why should peer pressure, which I’ve always associated with rebellious or silly teenage fads, like wearing certain clothes or daring each other to engage in risky behavior, apply to a rational aspect of engineering?

In any case I was very impressed by the level of professionalism our team showed during these reviews. We have sometimes had heated disagreements about which motor control algorithms are better for which circumstances, and I was afraid we might get into arguments about the way code should be written. But the comments in our reviews were all very civilized, and represented well-thought-out suggestions or questions for the author of the source code.

Code review software

Facilitating a series of code reviews can be a lot of work, especially when time is of the essence. I had to schedule all the reviews, estimating how much we could get done in two hours. I picked two hours, because in my experience, attendees of design reviews start to get noticeably tired after meetings go on longer than two hours, and productivity really drops. Besides, we had about a two-hour window anyway, since our team is international and that’s about the maximum overlap we can get before one of us has to get up insanely early, or leave work very late. I tried to pick enough related files in each review so that we’d hopefully be able to cover most, but not all of them — if I overestimated how much time they would take, we’d finish early, and the remainder of the time slot would be wasted. When we finished one review, I’d adjust the next set of files based on what was left and whether the source code was ready. Two days before each review I’d assess what was really ready, and I’d publish the agenda for the next review, with a list of which files we were going to review and pointers to any background information.

I suppose we could have done the reviews completely asynchronously, but there’s value in real-time discussion, which I’ve written about in another article.

I mentioned we used Stash’s pull request mechanism to conduct the reviews. About halfway through the reviews, we started running into a problem. I went back to check the comments we’d made in the first few reviews, to see if they’d been addressed. And I couldn’t find some of them. Then I realized that the pull request mechanism in Stash allows updates. If you modify a file that was part of the changes in the source branch of the pull request, Stash will show you the updated file, and if you had comments in lines that were modified, those comments will not be shown. Poof! They disappeared. They were still available in the “activity stream” of a pull request, but digging them out was very cumbersome: you had to keep scrolling down and searching for them, and we had over 100 different comments, so it took quite a while, and my confidence started to drop. I was no longer certain that I could go back and verify all of the comments.

So I talked to some of our server admins and they set up a trial of Crucible for our team.

Crucible, unlike Stash, is directly aimed at code reviews. So there are many different methods for selecting the material to be reviewed. You can select whole files or changesets, and it’s very easy to search for and identify what you want to include in a review. Now, instead of telling people “Use pull request #1, but don’t comment on anything except files simple.h, tweedledee.h, and tweedledee.c”, I could just set up a separate code review for those specific files.

I liked this part of Crucible – but its user interface is a bit “clunky”. Unlike Stash, which is modern and friendly and Markdown-oriented, Crucible uses the same quirky markup language as JIRA, and the user interface needs polishing. In Crucible you can select several lines on which to comment, and a window opens up for your comment. Unfortunately, if you want to cut-and-paste content from a file, it’s a pain. You have to remember to hold down the Alt key, and the selection includes line numbers, so you have to edit those line numbers out manually in the copied code. Like little garden rakes scattered all over the back yard, getting in your way. I’d like to be able to expect more from Atlassian, but I’ve gotten used to seeing stumbling blocks in Atlassian software that persist for years, even though lots of people have commented and voted on items in the Atlassian issue database. So although I’m sure Crucible’s interface will improve, I have no idea how long that will take, and I’m not particularly optimistic.

But we did perform the last three code reviews using Crucible. Both Stash and Crucible were very usable; the team members were quickly able to figure out how to comment using both software tools.

The other option I investigated is Upsource, from JetBrains. JetBrains is the Czech company with the funny name that is well-known for its IDEs like IntelliJ/IDEA and PyCharm, which have all sorts of clever interfaces for code completion and refactoring. I haven’t tried IntelliJ for Java yet, but I’ve used PyCharm for Python development, and I found it much easier to get things done than PyDev in Eclipse. Upsource seems to have a much better user interface than either of the Atlassian tools I tried. For example, your comments can be placed on any selected text range within a file (vs. Stash, which can only comment on one specific line, and Crucible, which can only comment on a consecutive range of lines) — so if you add a comment on one short identifier or expression in a long line, your colleagues can see exactly what you are commenting on. The code selection features aren’t as straightforward as in Crucible: Upsource is still changeset-oriented, and does not yet give you an option to select individual files (please help by voting for issue UP-4753), but you can add a comment on a part of a file that is outside the changeset, and that file is added to the review. (Example: suppose my latest changes in a review are only in foo.c; I can look in Upsource at the other files that were present at the same repository revision, and comment on any of them, so if I comment on bar.c and baz.c, those will now be visible in the review for anyone to comment on.)

So of the three, I liked Upsource the most. I looked at some other software tools for code review, but they were much pricier and I did not investigate them further… except for Phabricator, which is free and open source, but requires Linux + Apache + MySQL + PHP setup, and this posed a maintenance problem for us. (Crucible, Stash, and Upsource are all turnkey Java packages.) Upsource also is free for small teams (up to 10 people); Stash is free for up to 10 people and Crucible free up to a team of 5.

If you’ve got an open-source project that you can make public, GitHub is another option; it’s done a really nice job with its user interface, and is free for public projects. Private projects are a bit pricey, though, especially if you need to keep your information inside your company’s firewall rather than in the cloud.

Summary

Code reviews are not just drudgery tasks thrown at you by the QA team to slow you down. They have legitimate purposes in improving code quality, and when combined with automated tools to check code style, you can make sure they don’t get bogged down in arguments about spaces or braces. They also have some unexpected benefits, like helping beginners in your team learn from the coding techniques of more senior members, and helping the whole team to converge rather than diverge in its individual stylistic habits.

Software tools that facilitate code reviews are available. If you have a small team or are working on a public project, you can use these tools for free; for larger private projects they are available at various price points. I prefer Upsource to Stash or Crucible.

If you haven’t had a code review on your project, I definitely recommend doing it!


© 2015 Jason M. Sachs, all rights reserved.



Comments

Comment by Pj8, August 16, 2015:
You liked github but didn't look at any clones like gitbucket or gitlab or gitorious? Worth checking out, perhaps.

Also, read http://smartbear.com/SmartBear/media/pdfs/best-kept-secrets-of-peer-code-review.pdf if you haven't... All but the last chapter is generally applicable, and they list all your findings and more.

Disclaimer: I used to work for Smartbear, when the whole company could fit in a minivan to go to lunch.
Comment by AndreNeumann, November 18, 2019:

I'd like to add one more tool for code review that is also worth mentioning here:

Review Assistant
