Yet another Slashdot comment.

Where someone asks my thoughts about maintaining comments or code documentation:

Comments, like many things in software development, are really a matter of personal style. But they do serve a purpose: making it clear what your code is supposed to do, so that whoever maintains it later can understand what is supposed to be going on.

So I tend to lean towards a block comment per non-trivial (10 lines or more) function or method, though sometimes the block comment may be at most a sentence. (I don’t think the name of the method or function is sufficient commentary by itself, if only because a non-native English speaker may be unable to decode your personal naming conventions.) I also tend to describe how complex sections of code work if they appear “non-trivial” to me, meaning I’m doing more than just calling a handful of other methods in sequence or looping through a bunch of stuff which was clearly documented elsewhere.
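Something like this Swift sketch, for example; the invoice type and the function are invented purely for illustration:

```swift
import Foundation

struct Invoice {
    let dueDate: Date
    let isPaid: Bool
}

/// Returns the invoices that are overdue as of the given date.
///
/// An invoice is overdue if it is unpaid and its due date is more than
/// `gracePeriod` days in the past. Results are sorted oldest-first so
/// callers can show the most delinquent accounts at the top.
func overdueInvoices(_ invoices: [Invoice], asOf date: Date,
                     gracePeriod: Int = 30) -> [Invoice] {
    guard let cutoff = Calendar.current.date(byAdding: .day,
                                             value: -gracePeriod,
                                             to: date) else { return [] }
    return invoices
        .filter { !$0.isPaid && $0.dueDate < cutoff }
        .sorted { $0.dueDate < $1.dueDate }
}
```

Three sentences of comment, but they capture the one thing the signature alone cannot: what “overdue” actually means here.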

I also tend to write code like I write English: a single blank line separates two blocks of code, and functionality that belongs together gets moved into its own block (where shifting functionality around does not impair the app). I also tend to group related methods in a class together; the Apple C/C++/Objective-C compiler provides the #pragma mark directive, a one-line comment which helps identify the functionality of those groups of methods.
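(Swift’s equivalent is the // MARK: comment, which Xcode treats the same way as #pragma mark. A quick sketch, with class and method names invented for illustration:)

```swift
final class DocumentController {
    // MARK: - Lifecycle

    func open() { /* ... */ }
    func close() { /* ... */ }

    // MARK: - Printing support

    func pageSetup() { /* ... */ }
    func printDocument() { /* ... */ }
}
```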

If I’m doing something really non-trivial, for example, implementing a non-trivial algorithm out of a book (such as a triangulation algorithm for decomposing a polygon into triangles), I will start the block comment with a proper bibliographic citation: the book, the edition, the chapter, the page number, and (if the book provides one) the number of the algorithm I’ve implemented. (I won’t copy the book; the reference should be sufficient.) If the algorithm is available on a web page I’ll cite the web page. And when I’ve developed my own algorithm, I will often write it up in a blog post detailing the algorithm and refer to that in my code. (When working as a contractor I’ll use their internal wiki or, if necessary, check a PDF into the source kit.)
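The header comment for such a routine might look like the following sketch. (The book is real and the chapter reference is to its polygon triangulation material; the types and stub are invented for illustration.)

```swift
struct Point { var x, y: Double }
struct Triangle { var a, b, c: Point }

/// Decomposes a simple polygon into triangles by ear clipping.
///
/// Reference: Joseph O'Rourke, "Computational Geometry in C," 2nd ed.,
/// Cambridge University Press, 1998, Chapter 1 (polygon triangulation).
func triangulate(_ polygon: [Point]) -> [Triangle] {
    // Implementation elided; the citation above is the point here.
    return []
}
```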


Back when I started doing this a long time ago, technical writers were sometimes used to help document APIs and project requirements. The reason for having a separate technical writer was that a person embedded in a project who is not intimately familiar with it knows, being an outsider, the right questions to ask in order to document the project for other outsiders. Developers cannot do this. Developers, being too close to the code they work on, will often run the code down the “happy path” without thinking of the edge cases, and they won’t describe things they think are obvious (like “of course you must call API endpoint 416 before calling API endpoint 397; the output of 416 must be transformed via the function blazsplat before calling 397”; and how do you get the input to blazsplat?). A really good technical writer could also help distill the “gestalt” of the thing, helping to find the “why” which then makes code or project specifications incredibly clear. (You call endpoint 416 because it’s the thing that generates the authentication tokens.)

Back when I was required to train for CISSP certification (I never took the test), one section discussed the three states of knowledge, later used by Donald Rumsfeld in a rather famous quote he was lambasted for. There are things we know we know. For example, I know how to code in C. There are things we know we don’t know. For example, I know I don’t know how to code in SNOBOL. But then there are also things we don’t know we don’t know: this is why, for example, a speaker often gets no takers when asking for questions: the audience doesn’t know what questions they should ask. They don’t know what they don’t know.

I think there is a fourth valid state here, and we see the computer industry absolutely riddled with it: the things we don’t know we know. These are the things we’ve learned, but then somehow decided “they’re obvious” or “everyone knows them.” They’re the things we don’t question; the things we don’t think need to be taught, because we don’t know they need to be taught. And the problem is, developers, having invented the code they need to describe, don’t know that they need to describe it to people who haven’t spent weeks or months or years studying the problem and working towards a solution.

I know I suffer from it, and I know others who do. It’s why I think we need people checking our work and helping us to document our interfaces. And it’s why any answer I give for how to comment code will be insufficient: because of the knowledge problem, I don’t know what I should be telling you if you want to understand my code.

And it’s why I think we often don’t comment enough: because we don’t know what we know, so we don’t know what we need to tell other people.

Took, what, 20 years for someone else to say this?

58% of high-performance employees say they need more quiet work spaces.

Our open industrial spaces are frustrating our best people and likely impacting our end products.

I mean, this would be obvious to anyone who had read The Mythical Man-Month and considered the consequences of some of that book’s observations. But apparently we’re too smart to make such observations…

A comment left on Slashdot.

The problem is that our industry, unlike every other industry except acting and modeling (and note neither is known for “intelligence”), worships at the altar of youth. I’ve lost count of the people I’ve encountered who tell me that, being older, my experience is worthless since all the stuff I’ve learned has become obsolete.

This, despite the fact that the dominant operating systems in most of our systems are based on an operating system that is nearly 50 years old, the “new” features being added to many “modern” languages are really concepts from languages that are 50 to 60 years old or older, and most of the concepts we bandy about as cutting edge were developed 20 to 50 years ago.

It also doesn’t help that the youth whose accomplishments we worship usually get concepts wrong. I don’t know the number of times I’ve seen someone claim code was refactored along some new-fangled “improvement” over an “outdated” design pattern, yet who wrote objects that bear no resemblance to the pattern they claim to be following. (In the case above, the classes they used included “modules” and “models”, neither of which is part of the VIPER backronym.) And when I point out that the “massive view controller” problem often represents a misunderstanding as to what constitutes a model and what constitutes a view, I’m told that I have no idea what I’m talking about, despite having more years of experience than the critic has been alive, and despite graduating from Caltech (meaning I’m probably not a complete idiot).

Our industry is rife with arrogance, and often the arrogance of the young and inexperienced. Our industry seems to value “cowboys” despite doing everything it can (with the management technique “flavor of the month”) to stop “cowboys.” Our industry is ageist, sexist, one where the blind lead the blind, and one where seminal works attempting to understand the problem of development go ignored.

How many of you have seen code which seems to have been developed by “design pattern” roulette? Don’t know what you’re doing? Spin the wheel!

Ours is also one of the few industries grounded in scientific research which blatantly ignores that research, unless it is popularized in shallow books which rarely explore anything in depth. We have a constant churn of technologies which are often pointless, introducing new languages with extreme hype which is seldom warranted, as those languages rarely expand beyond a basic domain representing a subset of LISP. I can’t think of a single developer I’ve met professionally who belongs to the ACM or the IEEE, and when they run into an interesting problem they tend to search GitHub or Stack Overflow, even when it is a basic algorithm problem. (I’ve met programmers with years of experience who couldn’t write code to maintain a linked list.)
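(For the record, this is the sort of thing I mean; a minimal Swift sketch of a singly-linked list:)

```swift
/// A node in a singly-linked list.
final class ListNode<T> {
    var value: T
    var next: ListNode<T>?
    init(_ value: T) { self.value = value }
}

/// A minimal singly-linked list: the kind of basic data structure any
/// working programmer should be able to write from scratch.
struct LinkedList<T> {
    private(set) var head: ListNode<T>?

    /// Insert at the front: O(1).
    mutating func push(_ value: T) {
        let node = ListNode(value)
        node.next = head
        head = node
    }

    /// Remove from the front: O(1); returns nil when empty.
    mutating func pop() -> T? {
        guard let node = head else { return nil }
        head = node.next
        return node.value
    }
}
```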

So what do we do?

Beats the hell out of me. You cannot teach if your audience revels in its ignorance and doesn’t think all that “old junk” has any value. You cannot teach if your students have no respect for experience and knowledge. You cannot teach if your audience is both unaware of its ignorance and uninterested in learning anything not hyped in “The Churn.”

Sometimes there are a rare few out there who do want to learn; for those it is worth spending your time. It’s been my experience that most software developers who don’t bother to develop their skills, and who are not interested in learning from those with experience, often burn out after a few years. In today’s mobile development expansion there is still more demand for programmers than supply, but that will change, as it did with the dot-com bubble, and a lot of those who have no interest in honing their skills (either out of arrogance or ignorance) will find themselves in serious trouble.

Ultimately, I don’t know if there is very much those of us who have been around for a while can do as individuals, except offer our wisdom and experience to whomever may want to learn. It is also incumbent on us to continue to learn and hone our own skills; just in the past few years I picked up another couple of programming languages and have been playing around with a new operating system.

And personally I have little hope. Sure, there is a lot of cutting edge stuff taking place, but as an industry we’re also producing a lot of crap. We’ve created working environments that are hostile (and I see sexism as the canary in the coal mine of a much deeper cultural problem), and we are creating software which is increasingly hostile to its users, despite decades of research showing us alternatives. We are increasingly ignoring team structures that worked well in the 1980s and 1990s: back then we saw support staff (such as QA, QAE, and technical writers) who worked alongside software developers; today, in the teams I’ve worked for, I’m hard pressed to find more than one or two QA engineers alongside teams of a dozen or more developers. I haven’t seen a technical writer embedded in a development team (helping to document API interfaces and internal workings) for at least 20 years. And we are increasingly being seen as line workers in a factory rather than as technically creative workers.

I’m not bitter; I just believe we’ve fallen into a bunch of bad habits in our industry which need a good recession and some creative destruction to weed out what is limping along. And I hope that others will eventually realize what we’ve lost and where we’re failing.

But for me, I’ll just keep plugging along: a 50+ year old developer in an industry where 30 is considered “old”, writing software and quietly fixing flaws created by those who can’t be bothered to understand that “modules” are not part of VIPER, or that MVC can include database access methods in the “model”, and who believe all code is “self-documenting.”

And complaining about the problems when asked.

You’re doing it wrong.

If you violate a basic principle of software development in order to implement your design pattern, you’re doing it wrong.

Take, for example, Model-View-Controller or its variants, from MVVM to VIPER.

None of these patterns for assembling a user interface application requires that you violate the basic principles of object-oriented design, chief among them the construction of your code using reusable objects with well-defined public interfaces and privately maintained internal structures. Meaning if you create a view object where the “private” elements of that object are publicly manipulated or publicly instantiated, don’t use your design pattern as an excuse for violating the principles of encapsulation.
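A minimal Swift sketch of what I mean (the view and its names are invented): the subviews stay private, and callers go through one public method, no matter which pattern is driving the screen.

```swift
import UIKit

/// A view whose internals stay private. Whatever pattern drives it
/// (MVC, MVVM, VIPER), callers configure it through a narrow public
/// interface instead of manipulating its subviews directly.
final class PriceBadgeView: UIView {
    private let amountLabel = UILabel()    // private: never exposed
    private let currencyLabel = UILabel()  // private: never exposed

    /// The one sanctioned way to change what this view displays.
    func show(amount: String, currency: String) {
        amountLabel.text = amount
        currencyLabel.text = currency
    }
}
```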

You’re just a bad programmer who needs to brush up on the basics.

“Code Smells” and other bad ideas.

One of the worst ideas I have ever encountered in the literature is this notion that we should just “know” when we’ve written bad code. This idea of “code smell” certainly has its place with experienced developers, but using it when teaching inexperienced developers is like telling a first-year Spanish student who asks how to construct proper sentences: “just say what sounds right to you.”

Literate Programming and Server APIs

The more I work on mobile applications and on my own projects, the more I wish there were a language (based on the principles of Literate Programming) which would help automatically generate the client code, the server code, and the documentation for API endpoints.

Yes, I know existing techniques allow this sort of thing to be documented, but literate programming concentrates on producing documentation from which the code is generated, rather than on code with comments attached.
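To sketch the idea (a speculative sketch, all names hypothetical): imagine describing each endpoint exactly once, prose included, so that the documentation, the client stubs, and the server routing could all be generated from that single source.

```swift
/// One declarative description of an endpoint, from which docs, client
/// code, and server code could all be generated. Hypothetical sketch.
struct EndpointSpec {
    let method: String
    let path: String
    let why: String   // the prose explanation a technical writer would ask for
    let request: [(name: String, type: String, doc: String)]
    let response: [(name: String, type: String, doc: String)]
}

let createToken = EndpointSpec(
    method: "POST",
    path: "/auth/token",
    why: "Generates the authentication token every other endpoint requires.",
    request: [("clientId", "String", "Issued at registration time.")],
    response: [("token", "String", "Send in the Authorization header.")]
)
```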

In today’s era of “self-documenting code” (and the push towards having code somehow “self-documenting” and away from commenting code), I have no hope that we will ever get decent documentation for client-server interfaces.

It’s just one more step down the rabbit hole away from an era where technical writers were embedded with project teams to help them communicate within and with other teams.

Design for Manufacturability.

While playing with the mini-lathe, mini-mill, and 3D printer, I came across an interesting concept called design for manufacturability.

The essence of the problem is this: it’s not enough to design a thing: a computer, a hammer, a pencil, a power drill. You also need to design the object so that it can be manufactured: so that it can be molded, milled, and assembled in an efficient way.

Here’s the thing: with every process used to make a thing, there are limits on the shape, size, and materials used in that manufacturing process. For example, consider plastic injection molding. Typically two mold halves are pressed together, plastic is injected into the mold, the material cools, and the halves are separated and the molded part is pushed out.

Well, there are a couple of limits on what you can create using plastic injection molding. For example, you must use plastic. You must also design the part in such a way that it can slide out of the mold. This means the part must be, in a sense, pyramidal in shape: when the halves pull apart, no surface should slide along an edge. It also means the part has to push out: holes must be parallel to the direction the mold halves pull apart.

There are other design limits on plastic injection molding. For example, element sizes cannot be too small or too thin; otherwise the mold may not fully fill, or the part may pull apart as the mold halves are separated.

There are limits on other manufacturing processes as well. 3D printing cannot print elements suspended in space; each layer must be “self-supporting” as the part is “sliced.” CNC milling can only remove so much material in a single pass. Sand casting produces rough-surfaced components that must then be milled or polished.

And each manufacturing process takes time, effort and equipment. So minimizing the number of manufacturing steps and components that must be assembled helps reduce costs and reduce material required to manufacture a thing.


This is quite a complex field and I cannot pretend to begin to understand it. I do have some experience with embedded design, however, and part of embedded design is finding ways to build the software for your product which reduce the total component count on the circuit board: if you can use an embedded processor with built-in I/O ports and fit your application in that processor’s memory, your (essentially) one-chip design will be far cheaper to manufacture than a design which relies on a separate microprocessor, EPROM chip, RAM chip, and discrete decoders and A/D converters, each of which requires a separate chip.


And this brings me to something which irritates the crap out of me regarding user interface design and the current way we approach designing mobile user interfaces.

Have you seen a project where there seem to be a dozen custom views for every view controller or activity? Where every single button is independently coded, and then there’s a lot of work trying to unify the color scheme because every control is a slightly different shade of blue?

This is because the UI designers who worked on that project are thinking like artists rather than like designers. They are thinking in terms of how to make things look nice, rather than in terms of a unified design. They are adding new pages without considering the previous pages, and laying out elements without considering previous elements, other than in the artistic sense.

They are not designing for manufacturability.


There is far more to consider when designing the user interface of an application than simply making the application look pretty.

You must consider how people interact with machines: the limitations of how we interact with computers, such as the limits on our short-term memory and on our ability to recognize patterns.

You must consider the problem set that the application must resolve. If you’re building an accounting application, you must understand how accounting works–and that requires a much deeper level of understanding than a casual user may have balancing their checkbook. (Meaning if you’re building an accounting application, you may wish to talk to a CPA.)

You must consider how the application will be put together. As a UI designer you don’t need to know how to write code, but you do need to understand how objects get reused: if you intend to have a button which means “submit”, using the exact same button (same color, same frame, same label) means the developer can use the exact same code to create the submit button even though it shows up in several places. If you have a feature which requires the user to enter their location and that feature is used in a handful of places, you may consider having a separate screen which allows the user to select their location–and reuse that screen everywhere a location is called for.
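A minimal Swift/UIKit sketch of that reuse (the class name is invented): define the submit button once, and every screen gets the same color, shape, and label from the same code.

```swift
import UIKit

/// One submit button, written once and reused on every screen.
final class SubmitButton: UIButton {
    override init(frame: CGRect) {
        super.init(frame: frame)
        styleButton()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        styleButton()
    }

    private func styleButton() {
        setTitle("Submit", for: .normal)
        backgroundColor = .systemBlue   // the one shade of blue
        layer.cornerRadius = 8
    }
}
```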

And that means rethinking how you design applications beyond just creating a collection of screens.

That means considering a unified design for layout, for colors, fonts, buttons, and all the other major components of your application. That means thinking less like a page designer and more like a branding designer: creating unified rules covering everything from color to font choices to spacing of key elements which can then be reused as new screens are required. And it means sticking to these rules once developed, and communicating those rules to your developers so they can build the components out of which the application is built.

It means thinking like the designers of Google’s Material Design, except on a smaller and more limited scope: thinking of how your application looks as an assembly of various elements, each of which was separately designed and assembled according to the rules you create.

It means not thinking about an application as just a collection of screens, but as a collection of designed components.

Designed for manufacturability.