Yet another Slashdot comment.

Where someone asks my thoughts about maintaining comments or code documentation:

Comments, like many things in software development, are largely a matter of personal style–though they do serve the purpose of making it clear what your code is supposed to do, so the person who maintains it later can understand what is supposed to be going on.

So I tend to lean towards a block comment per non-trivial (10 lines or more) function or method–though sometimes the block comment may be at most a sentence. (I don’t think the name of the method or function is sufficient commentary on its own, simply because cultural differences may make a non-native English speaker unable to translate your personal naming conventions.) I also tend to lean towards describing how complex sections of code work if they appear “non-trivial” to me–meaning I’m doing more than just calling a handful of other methods in sequence or looping through a bunch of stuff which was clearly documented elsewhere.

I also tend to write code like I write English: a single blank line separates two blocks of code, and functionality that belongs together is moved into its own block (where shifting functionality does not impair the app). I also tend to group chunks of methods in a class which are related to each other; the Apple C/C++/Objective-C compiler provides the #pragma mark directive, which supplies a one-line label identifying the functionality of those groups of methods.
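A quick sketch of how those groupings look in practice. The function names here are invented, and the pragma itself only labels sections in Xcode's jump bar; it has no effect on the compiled code (compilers that don't know it simply ignore it):

```c
#include <string.h>

#pragma mark - Validation helpers

/* Illustrative only: related functions grouped under a #pragma mark,
 * which Xcode's function menu displays as a labeled section. */
static int name_is_empty(const char *name)
{
    return name == NULL || name[0] == '\0';
}

#pragma mark - Formatting helpers

static size_t name_length(const char *name)
{
    return name_is_empty(name) ? 0 : strlen(name);
}
```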

If I’m doing something really non-trivial–for example, implementing a non-trivial algorithm out of a book (such as a triangulation algorithm for converting a polygon into triangles)–I will start the block comment with a proper bibliographic citation: the book, the edition, the chapter, the page number, and (if the book provides one) the number of the algorithm I’ve implemented. (I won’t copy the book; the reference should be sufficient.) If the algorithm is available on a web page I’ll cite the web page. And when I’ve developed my own algorithm, I will often write it up in a blog post detailing the algorithm and refer to that in my code. (When working as a contractor I’ll use their internal wiki or, if necessary, check a PDF into the source kit.)
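As an illustration, such a header comment might look like the following. The citation fields are placeholders rather than a real reference, and the stub relies on one well-known fact: a simple polygon with n vertices triangulates into exactly n - 2 triangles.

```c
/* Triangulation of a simple polygon into triangles.
 *
 * Reference: [Author], "[Book Title]", [edition],
 *            ch. [chapter], pp. [pages], algorithm [number].
 * (Placeholder fields above; cite the actual source in real code.)
 *
 * Any simple polygon with n vertices decomposes into exactly n - 2
 * triangles; the helper below reports that count.
 */
static int triangle_count(int n_vertices)
{
    return n_vertices - 2;  /* valid for n >= 3 */
}
```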


Back when I started doing this a long time ago, technical writers were sometimes used to help document APIs and a project’s requirements. The reason for having a separate technical writer was that a person embedded in a project who is not intimately familiar with it knows the right questions to ask to document the project for an outsider, being an outsider themselves. Developers cannot do this. Developers, being too close to the code they work on, will often run the code down the “happy path” without thinking of the edge cases to exercise, and they won’t describe things they think are obvious. (Like “of course you must call API endpoint 416 before calling API endpoint 397; the output of 416 must be transformed via the function blazsplat before calling 397”–and how do you get the input to blazsplat?) A really good technical writer could also help to distill the “gestalt” of the thing, helping to find the “why” which then makes code or project specifications incredibly clear. (You call endpoint 416 because it’s the thing that generates the authentication tokens.)

Back when I was required to train for CISSP certification (I never took the test), one section discussed the three states of knowledge–later used by Donald Rumsfeld in a rather famous quote he was lambasted for. There are things we know we know. For example, I know how to code in C. There are things we know we don’t know. For example, I know I don’t know how to code in SNOBOL. But then there are also things we don’t know we don’t know. This is why, for example, speakers often get no takers when they ask for questions: the audience doesn’t know what questions they should ask. They don’t know what they don’t know.

I think there is a fourth valid state here–and we see the computer industry absolutely riddled with them: the things we don’t know we know. These are the things we’ve learned, but then we think somehow “they’re obvious” or “everyone knows them.” They’re the things we don’t question; the things we don’t think need to be taught, because we don’t know they need to be taught. And the problem is, developers–having invented the code they need to describe–don’t know that they need to describe it to people who haven’t spent weeks or months or years studying the problem and working towards a solution.

I know I suffer from it, and I know others who suffer from it. It’s why I think we need people checking our work and helping us document our interfaces. And it’s why any answer I give for how to comment code will be insufficient: because of the knowledge problem, I don’t know what I should be telling you if you want to understand my code.

And it’s why I think we often don’t comment enough: because we don’t know what we know, so we don’t know what we need to tell other people.

7 thoughts on “Yet another Slashdot comment”

  1. Unpacking this post, this was my take-away: the practice of commenting involves determining what to say, how much to say, when to say it, and avoidance of blind-spots induced by familiarity. I take these ideas a few steps further.

    Generally speaking, I also tend to write code like I write text: in paragraphs. Deciding what constitutes a paragraph is a bit of an art and deciding what increments of program work need a description requires thought. For me, that thought goes along with deciding what to code. After a while one develops a feel for what somebody would need to know if they returned to the code in question a year later or if they had no idea what was happening and they were seeing the code for the first time. When in doubt, err in the direction of safety at the risk of over explaining.

    I’ve worked with lots of people over the years with several excuses for not writing very much in the way of commentary into their code. Ignoring the lamest of these (“I haven’t got the time, read the code”), one very common one is the (generally mistaken) belief that well written code should be self-explanatory. I’ll get to the flaw in that line of reasoning in a moment. Another excuse is that code often gets changed and the comments aren’t, causing them to mislead and obscure the purpose or meaning of the program text itself. This too is the result of something faulty: the process.

    The “well written code is sufficient” line of thinking suffers from the lack of an accessible big-picture. I mean this both ways: the holder of such an opinion does not realize the understanding of those that come after to extend and do maintenance would be greatly aided by commentary that provides a big-picture view—a roadmap—of the system in question. Program text can be written well and in such a way as to exhibit structure that is homomorphic (very similar or parallel) to that of the logical and technical architectures of the programmed solution. This may still not help the understanding of someone that had little to do with the development of the program because they were not around when all the important questions were asked and the answers encoded in the static architecture and expected dynamics of program execution. This is why structural commentary that aligns with the decomposition of the problem becomes critical. And differing levels of abstraction may require more or less dense commentary depending on the complexity and volatility of the material being described.

    The “comments don’t stay synchronized” excuse emerges from a process flaw: the failure to prioritize keeping the comments up to date as the program text is changed. Managers and individual contributors have a duty to be diligent in the upkeep of the code for which they are responsible, be they creators or maintainers. The notion that it is ok to let comments slide or delete them because somebody doesn’t think they matter is the product of just plain laziness. Obviously, when code becomes obsolete and needs to change, any extant commentary needs to be examined for usefulness and updated or extended as necessary, but vandalism emergent from sloth is something to guard against. This is where code reviews can be useful in a well formulated process.

    The problem of “the things we don’t know we know” can be overcome. Writing clear, well structured code requires discipline and practice. Understanding what is actually in the finished product and knowing what is significant to disclose involves being able to adjust one’s perspective and ask the question: what would one need to know if they had no or very little idea, perhaps because they had forgotten, what was going on? Here again program structure and commentary should align, and explication of the problem decomposition reflected in the code goes in comments that act as mileposts on the roadmap of the program.


    • Just to toss in another thought before I turn in:

      Part of the problem of deciding where and when to write comments in code is that not all code is the same. Some code is essentially plumbing: “Put this view in the upper left.” “When this button is pressed, fire the login dialog.” “When the user submits a password, check to see if the password is long enough and has punctuation.” A lot of plumbing code is pretty self-explanatory: if you’ve thought your naming conventions through even the tiniest bit, the following code really doesn’t need a lot of explanatory comments:

      - (void)doSignupClick:(id)sender
      {
          if ([self isPasswordValid:self.passwordField.text]) {
              [self submitForSignup];
          } else {
              [MyErrorAlert showError:@"Password needs to be at least 8 characters long and contain a punctuation mark."];
          }
      }

      Heck, you don’t even need to know Objective-C to guess what the code above does.

      Some code is a little more complicated; that sort of code requires perhaps a block comment at the top:

      // Verifies passwords are at least 8 characters long and contain valid punctuation
      - (BOOL)isPasswordValid:(NSString *)str
      {
          ... blah blah blah
      }

      That way if someone comes along and notices the function doesn’t test the length of the string, at some level cognitive dissonance should help alert him to the fact that maybe, just maybe, there’s a bug here.
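      A body matching that comment might look like the following, sketched in plain C rather than Objective-C; the name and the exact rules are my own illustration, not the original code:

```c
#include <ctype.h>
#include <string.h>

/* Returns 1 if the password is at least 8 characters long and contains
 * at least one punctuation character; 0 otherwise. Illustrative sketch,
 * not the original Objective-C method. */
static int is_password_valid(const char *pw)
{
    if (pw == NULL || strlen(pw) < 8)
        return 0;
    for (const char *p = pw; *p != '\0'; ++p) {
        if (ispunct((unsigned char)*p))
            return 1;
    }
    return 0;
}
```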

      Now I suspect the vast majority of the code most professional developers create is just like the stuff above: basic plumbing or dirt-simple algorithms. And for that sort of code, I suspect commenting is probably not worth the trouble.

      (Side note: Notice that this sort of code is also exactly the sort of code which is not worth testing under Test Driven Development–and for exactly the same reason: the cost to maintain the comments or the test framework is substantially higher than the cost to maintain the code in an environment where the UI is expected to change every time a Product Manager has a brainwave. And while comments getting out of sync is just sloppy, TDD with an out of sync framework means broken code and builds which don’t work.)

      However, there is another class of code which fewer and fewer of us write, which really deserves a test driven framework and well-maintained comments, and that is more detailed algorithmic code which does something fairly non-trivial. In my own career, for example, I’ve been asked to write code which removes red-eye from photographs–which led me to write a fairly complex image processing filter for that company. That work motivated a lengthy blog post describing a boundary-walking algorithm, and a more detailed PDF checked into the customer’s source kit which described the process for calculating the red areas which deserved desaturation treatment to remove red-eye.

      That sort of code, in my opinion, requires detailed commentary so people can maintain the code–or even have a fighting chance to figure out what’s going on. (I was converting RGB to HSV in order to calculate the range of red values to filter, for example.) This sort of code also requires a test framework so it can be tested in isolation–so if someone later comes along and breaks something, they can see it immediately. (Bugs in the image filter can be quite subtle, since it’s not as simple as “it works or it throws an exception.”)

      This sort of commented, TDD-developed code is fairly rare–though I love that sort of stuff when I encounter it. But when it does happen, it requires completely different treatment than “when this button is pressed close the alert” sort of stuff we typically write nowadays.

      I guess the long and the short of my comment is that not all code is the same, so we really cannot treat it with a “one-size fits all” sort of strategy.


      • Indeed, “one size fits all” does not work in practice. Some things are just more complex than others and deserve deep care that simpler things don’t. I refer to such objects as “engine components”. Some libraries that are used extensively and are required to have good performance and correctness characteristics fall into this class. A few good for-instances are allocators, generational scavengers, and global compactors for pure OO systems that support fully automatic (instantiate, use and forget) memory semantics. These are the parts of a runtime or virtual machine (among several others) that are complex and must be perfect and performant.

        A lot of plumbing code is pretty self-explanatory: if you’ve thought your naming conventions through even the tiniest bit….

        Well, yes, and naming is one of the most critical things we do as designers and knowledge workers. Giving things meaningful names and using those names consistently is a critical part of the process of design and implementation. In fact, the process of developing fluency in domain vocabularies is usually essential to communicating with the business and users of systems we produce. The importance of well conceived and properly used process, design, and domain vocabulary is often underappreciated.

        As you point out, there is a lot of code that just serves the purpose of tying together larger, more complex system components. That “plumbing” itself can be simple and easy to understand. The subsystems that plumbing connects may sit at articulation points in designs, and though the pipes may be simple, they may also represent a nexus where a number of complex subsystems come together and interact. These boundaries deserve contextualization, and a few remarks that do not describe the code per se but do describe what is being brought together might at times be helpful. As you say, one size does not fit all.

        I think what you describe in your red-eye example clarifies an important consideration that needs to be made more explicit in methodologies. Specifically, a complexity classification of components along some kind of continuum based on the level of scrutiny, care in design, testing, and external documentation they require. Of course, to be useful such an element of methodology must not become burdensome beyond its value to agility, product soundness, and maintainability. No matter what though, developing the necessary intuition for what should be done requires experience and a shop culture to support it. Furthermore, if complexity is evaluated after code is written, opportunities to do better design or more efficient, thorough, maintainable, and methodical testing may be missed.

        The external document you mention is indeed a rare thing these days. It’s been a long time since I have worked on a commercial system with extensive external documentation of system design and internal implementation details. Production of external documents should accompany reusable components like libraries and complex engine components. Design documentation really should be considered essential for even medium-sized systems. A collection of well written design and internals documents can easily pay back the cost of their production for larger systems which tend to persist for decades.


    • The problem of “the things we don’t know we know” can be overcome.

      The problem is that as we become indoctrinated, as we spend time working on a problem, as we dive deep into our own particular problem set, we forget the learning curve we went through–and it becomes easy to forget the effort we went through to learn what we know, and just how much effort it will take the next poor son-of-a-bitch who follows us down the rabbit hole to figure out what’s going on.

      And that has no real solution, outside of testing–which in the case of code means testing to see if your comments make sense by asking the maintenance developer who winds up with your code if he understands, and by having compassion if he doesn’t (rather than treating him with derision, as many of my colleagues are wont to do).


      • I understand the problem, and for the most part agree with the fundamentals of your observation.

        My contention is that it is possible to habituate oneself to comment during the learning process and doing so reduces the “I forgot what I didn’t know but figured out later and maybe should explain” problem. I think the key is to get the abstract ideas down during the process and sketch (in comment text) ahead of time what the code should be. I’m still not going to remember all the small insights I had that drove some of the individual code choices I made, but my design decisions are more durable, arrived at earlier in the process, and are more useful for building understanding in the present and future. It is those that should be captured along with their motivations and will be the most helpful along the timeline.
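        One way to picture that comment-first sketching; the function and its design notes below are invented for illustration:

```c
/* Design sketch, written as comments before any code existed:
 *   - values below the floor snap to the floor
 *   - values above the ceiling snap to the ceiling
 *   - everything in between passes through unchanged
 * The code was then written to satisfy the sketch, so the design
 * decisions survive even if small implementation details are forgotten. */
static int clamp_to_range(int value, int lo, int hi)
{
    if (value < lo) return lo;   /* snap to floor */
    if (value > hi) return hi;   /* snap to ceiling */
    return value;                /* pass through */
}
```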

        Testing how well one does this is a really good idea and it helps to refine one’s individual commentary and coding style. Definitely something I think more people should do and evaluate during code review.


  2. This is a quick test of tags. I don’t know what works here and determining that will help me more artfully craft my comments: Bold, underline, italics, deleted.

