Design for Manufacturability.

While playing with the mini-lathe, mini-mill and 3D printer, I came across an interesting concept called design for manufacturability.

The essence of the problem is this: it’s not enough to design a thing: a computer, a hammer, a pencil, a power drill. You also need to design the object so that it can be manufactured: so that it can be molded, milled, and assembled in an efficient way.

Here’s the thing: every manufacturing process imposes limits on the shape, size, and materials of the parts it can produce. For example, consider plastic injection molding. Typically two mold halves are pressed together, molten plastic is injected into the mold, the material cools, and the halves are separated so the molded part can be pushed out.

Well, there are several limits on what you can create using injection molding. For example, you must use plastic. You also must design the part in such a way that it can slide out of the mold. This means the part must be, in a sense, pyramidal: the sides must be slightly tapered (what mold makers call “draft”) so that when the halves pull apart, no surface drags along an edge. It also means the part has to push out cleanly: holes must be parallel to the direction the mold halves separate.

There are other design limits on injection molding. For example, features cannot be too small and walls cannot be too thin; otherwise, the mold may not fill completely, or the part may tear as the mold halves are separated.

There are limits on other manufacturing processes as well. 3D printing cannot produce elements suspended in midair; each layer must be self-supporting (or held up by removable support material) as the part is “sliced” and printed. CNC milling can only remove so much material in a single pass. Sand casting produces rough surfaces that must be milled or polished afterward.

And each manufacturing process takes time, effort and equipment. So minimizing the number of manufacturing steps and the number of components that must be assembled helps reduce the cost and material required to manufacture a thing.

This is quite a complex field and I cannot pretend to begin to understand it. I do have some experience with embedded design, however, and part of embedded design is finding ways to structure your software so that it reduces the total component count on a circuit board. If you can use an embedded processor with built-in I/O ports and fit your application in that processor’s memory, your (essentially) one-chip design will be far cheaper to manufacture than a design which relies on a separate microprocessor, EPROM chip, RAM chip and discrete decoders and A/D converters, each of which requires a separate chip.

And this brings me to something which irritates the crap out of me regarding user interface design and the current way we approach designing mobile user interfaces.

Have you seen a project where there seems to be a dozen custom views for every view controller or activity? Where every single button is independently coded, and then there is a lot of work trying to unify the color scheme because every control is a slightly different shade of blue?

This is because the UI designers who worked on that project are thinking like artists rather than like designers. They are thinking in terms of how to make things look nice, rather than creating a unified design. They are adding new pages without considering the previous pages, and laying out elements without considering the elements that came before–other than in the artistic sense.

They are not designing for manufacturability.

There is far more to consider when designing the user interface of an application beyond simply making the application look pretty.

You must consider how people interact with machines: the limits of our short-term memory, our ability to recognize patterns, and the other constraints on how we interact with computers.

You must consider the problem set that the application must resolve. If you’re building an accounting application, you must understand how accounting works–and that requires a much deeper level of understanding than a casual user may have balancing their checkbook. (Meaning if you’re building an accounting application, you may wish to talk to a CPA.)

You must consider how the application will be put together. As a UI designer you don’t need to know how to write code, but you do need to understand how objects get reused: if you intend to have a button which means “submit”, using the exact same button (same color, same frame, same label) means the developer can use the exact same code to create the submit button even though it shows up in several places. If you have a feature which requires the user to enter their location and that feature is used in a handful of places, you may consider having a separate screen which allows the user to select their location–and reuse that screen everywhere a location is called for.
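As a sketch of that kind of reuse, imagine the “submit” button being produced by a single factory shared by every screen. The types and values below are invented for illustration; a real project would use its UI toolkit’s button and color classes instead.

```cpp
#include <string>

// Hypothetical types for illustration only; substitute your UI
// toolkit's button and color classes in a real project.
struct Color { int r, g, b; };

struct ButtonSpec {
    std::string label;
    Color       background;
    double      cornerRadius;
};

// One factory for the one "submit" button used everywhere in the
// application. Every screen that needs a submit button calls this,
// so the label, color, and shape can never drift out of sync--and
// the developer writes the button-construction code exactly once.
ButtonSpec MakeSubmitButton()
{
    return ButtonSpec{ "Submit", Color{ 0, 102, 204 }, 4.0 };
}
```

If the brand color changes later, it changes in one place, and every screen picks it up for free.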

And that means rethinking how you design applications beyond just creating a collection of screens.

That means considering a unified design for layout, for colors, fonts, buttons, and all the other major components of your application. That means thinking less like a page designer and more like a branding designer: creating unified rules covering everything from color to font choices to spacing of key elements which can then be reused as new screens are required. And it means sticking to these rules once developed, and communicating those rules to your developers so they can build the components out of which the application is built.

It means thinking like the designers of Google’s Material Design, except on a smaller and more limited scope: thinking of how your application looks as an assembly of various elements, each of which was separately designed and assembled according to the rules you create.

It means not thinking about an application as just a collection of screens, but as a collection of designed components.

Designed for manufacturability.

Twelve Principles of Good Software Design: An Introduction.


I started my series of posts about my 12 principles of good software design in part as a criticism of the various software practices I’ve seen while writing software, and in part as a prescription for a possible answer to help developers write better code.

I know this sounds arrogant. But after nearly 28 years of writing software, I’ve seen quite a bit. I’ve seen a software team where developers were encouraged to “refactor” their code, which featured what should have been three or four methods turned into a 20-class library, with layer upon layer of “refactoring” piled on top. (My refactoring? Spending a week understanding the code, then rewriting it as a single four-method class.)

I’ve seen code which looked like it was assembled by throwing darts at a design-pattern dart board, with patterns which made absolutely no sense for the context in which they were used. (As an example, I’ve seen publish-subscribe used to tie views to their view controller–an extra bit of machinery which makes it harder to figure out the relationship between the views and the view controller which manages them.)

I’ve seen over-engineered applications created by people who were testing their own pet theories, such as trying to “offload” functionality out of a view controller by using a “mediator” and an “interactor” class which only serve to obscure the functionality of a single view controller behind two additional classes which serve no real purpose.

I’ve seen people create applications using a misunderstanding of certain design patterns. For example, I’ve seen someone implement a user interface using “Model-View-Presenter” in conjunction with a fourth class that implemented a Mediator pattern which tied models directly to views. The “mediator” was nearly a no-op: once the model and the views were tied together it did little but pass data back and forth without interpretation. And when I pointed out that several hundred lines of code could be removed from his code by eliminating the useless “mediator” and tying the views directly to the model (effectively implementing Model-View-Controller), I was told that MVC had been discredited as a design pattern and MVP was a superior design pattern.

It’s not just limited to stuff seen “on the ground” by random developers. I’ve also seen people jump on the Model-View-ViewModel bandwagon without understanding what MVVM is, how best to implement it, or even why they’re implementing it, other than that it somehow makes your software better and stronger. Like Popeye eating spinach: shut up and eat it, because it’s good for you.

And even in the justifications for MVVM in various tutorials for MVVM, you find nonsense like this:

The definition of MVC – the one that Apple uses – states that all objects can be classified as either a model, a view, or a controller. All of ‘em. So where do you put network code? Where does the code to communicate with an API live?

But if you read the original papers covering MVC, you find a few things. First, you find the obvious answer to the above question: network code lives in the model. You also find there are no requirements on how the model must be structured. And while asynchronous calls can be tricky, hiding asynchronous calls behind key-value observing does not suddenly relieve you of the need to understand the asynchronous nature of your application; it simply moves the icky parts “off-stage” where you can’t easily find them.

(This is not to suggest MVVM does not make sense; certainly in Microsoft’s development environment, MVVM is a preferred design pattern given the way the development tools have been designed. But hoisting MVVM into a generic environment requires building a lot of extra machinery, which leads to confusion rather than clarity. And in many cases it attempts to “solve problems” which only exist because of a misunderstanding of the original design patterns you presume are broken.)

At some point, in all this nonsense, it’s hard not to snap, and perhaps try to impart whatever little scraps of wisdom you’ve picked up over the past three decades.

If this series of blog posts is completely worthless to you, then my apologies for wasting your time. But I believe my twelve principles, if allowed to inform your approach to writing and simplifying software, may help you write better code.

By “better”, of course, I specifically mean code that is easier to understand, easier to maintain, and which can be handed off to another software developer without undue consternation. Naturally, software handed to another developer is generally met with derision; a lack of understanding can make even the clearest code seem poorly written and badly put together. But at the very least you can try to make the task of the next developer who has to look at your code a little easier.

These principles, by the way, are not really design “patterns”, but rather guidelines you can use when deciding how to put your application together.

The tyranny of choice

When you think about the problem of software development in general, you come across a curious thing. Unlike most professions where how you do your job is rigidly defined by legal constraints, software developers have the unique problem of creating projects almost entirely out of whole cloth, without any real constraints whatsoever.

Take, for example, the problem of building a house.

In the United States, in order to build a house, you must first draw up plans which take into account the location of the property and its legal limits (setbacks, zoning limits, orientation, altitude, latitude, environmental surroundings), then submit those plans to a government building commission, which reviews them to verify they meet the legal requirements for constructing the house. While many of those legal limits derive from underlying engineering principles, from a practical perspective house plans are constrained by legal limits rather than engineering limits.

Then a number of subcontractors are hired, each of whom have legal authority (through a licensing board) to perform specific tasks, which are completed in a given order: the lot is cleared (and rubble disposed of according to a legal framework established by the local municipality, state and Federal government), the foundation framed (and inspected by a government inspector), poured (and inspected by a government inspector).

The attachment points on the foundation are then used to frame the house (constructed in strict accordance to the requirements on the plans, and verified by a government inspector). Then the plumbing and wiring is drawn through the frame (and inspected by a government inspector), the walls are filled with insulation and covered with sheetrock or with an outer covering (like stucco or siding, all engineered according to government regulations, and inspected by a government inspector). Roofing material is also added (and inspected), and the house is finished (and inspected), and if the job was done correctly according to rigidly defined limits established by law, a government official signs off on the house, stating that the house was constructed according to these strict government limits and is therefore suitable for habitation.

And this is true of most professions and most of the things in our lives. Cars are designed by law to have a hump or void in the front hood which is designed to absorb the impact of a pedestrian, to have crumple zones which cushion the passengers from a front-end impact with another car, and in the future to have backup cameras so drivers have better visibility when they put their car in reverse. Furniture makers are required to use approved non-flammable materials, to make sure their furniture fits through the opening of a front door (whose size is defined by building code), to make sure it contains no small bits which can break off and represent a choking hazard. Even designers laying out the cubicles for a floor plan must submit their plans for review to the fire department in many municipalities to make sure the building can be evacuated according to government regulation.

Software developers, however, have no such constraints.

Sure, we are often told what sort of application a project manager wants. We may be told the target platform (desktop, mobile, embedded), and that may define the development tools we can use and the programming language we are using. But in a world where embedded processors have more computational power than high performance workstations did 20 to 30 years ago, even embedded development generally is not constrained by processing power.

And within these rather loose constraints we are free to do whatever we wish.

The tyranny of choice.

From whole cloth how do we decide where to start? How do we decide how to start designing and building our application?

Design Patterns

A design pattern is defined as a general repeatable solution to a commonly occurring problem in software design. In general software design patterns describe a fixed and well established solution to a common problem which, in conjunction with other design patterns, can be used as fundamental building blocks to creating a computer program.

The concept of design patterns dates back to the early 1980s, but did not really gain popularity until the book “Design Patterns: Elements of Reusable Object-Oriented Software” was published in 1994. In it, the so-called “Gang of Four” outlined a number of notable “design patterns” commonly seen in software development, each solving a well-defined problem and providing a fixed solution to that problem.

Take the problem of one component in your application notifying another component in your software of a state change. For example, you may have a word processing program where a chunk of code responsible for saving your file needs to notify another chunk of code responsible for updating the window’s appearance that your file successfully saved—so the window’s title can be changed to show the file was written to disk.

A specific solution to this problem is the Publish-subscribe pattern. In this pattern, an object is created which holds a list of interested subscribers who wish to receive notifications about a given topic. Publishers can then send out notifications by handing them off to this object, which is then responsible for sending the notifications out to all interested subscribers.

This design pattern has the advantage that a publisher does not need to know about the subscribers; it simply sends a notice. And a subscriber does not need to know anything about the publishers; it can simply listen for notices.

So in our example, we can define a new notification, called “file saved”, which then indicates as part of the notice the file that was saved.

Our window UI can then subscribe to all “file saved” notices, and when it receives a notice that the file it is responsible for has been saved, it can update the window’s state.

Likewise, our file system code, when it is done saving a file, simply sends a “file saved” notice with an indication of which file was actually saved.
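The file-saved flow described above can be sketched as a minimal publish-subscribe hub. This is an illustrative sketch, not production code: the class and method names are my own, and a real system (such as NSNotificationCenter) would add unsubscription, threading, and typed payloads.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A minimal publish-subscribe hub. Publishers and subscribers never
// reference each other directly; both only know about this object.
class NotificationCenter {
public:
    using Handler = std::function<void(const std::string &)>;

    // A subscriber registers interest in a topic such as "file saved".
    void Subscribe(const std::string &topic, Handler handler)
    {
        handlers[topic].push_back(std::move(handler));
    }

    // A publisher posts a notice; the hub forwards the payload (here,
    // the name of the saved file) to every subscriber for that topic.
    void Publish(const std::string &topic, const std::string &payload)
    {
        for (auto &handler : handlers[topic]) {
            handler(payload);
        }
    }

private:
    std::map<std::string, std::vector<Handler>> handlers;
};
```

The window code would call `Subscribe("file saved", …)` with a handler that updates the title bar; the file-save code would call `Publish("file saved", filename)` when the write completes. Neither side ever knows the other exists.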

This design pattern helps solve a specific problem. It also has the nice property that, in the future, if our window code changes (such as, for example, instead of showing if the file is saved by updating the window’s name, we instead display a checkbox inside the window’s content area), the file save code never needs to change. We can, instead, create a view object with a checkbox that listens for notifications, and shows or hides the checkbox as appropriate.

This separation of concerns helps make our code easier to understand, by allowing us to build our code in reusable chunks, and by having those chunks wired up in well-defined ways. By separating out the part of the code which saves the file from the part of the code that shows if the file is saved, we can change how we show that our file is saved without having to dive deep into the file save code. We just hook up a new subscriber to our publish-subscribe system.

Yet somehow, even within all these patterns, within all of these methods which allow us to construct our applications out of whole cloth, somehow projects still fail. They are still hard to maintain. Our projects go over budget. Deadlines are missed. Even if our design patterns are properly understood and properly designed, somehow we still are writing code that is hard to maintain, and sometimes we’re writing code that simply does not work.

All this failure suggests to me that perhaps we’ve lost sight of the real problem. Perhaps we need a different point of view.

Programming Idioms

Because the term “design pattern” is so well known, I’d like to use a different term to capture something a little more general. The intent is to capture the idea of how we assemble programs—not as a rigid solution to a fixed problem, but more the “style” of the code and the ways we express ourselves in software.

For this, I’m using the phrase programming idiom. Like a language idiom (a way a language is spoken in a given area or a style that is characteristic of a particular subject), a programming idiom is intended to capture the idea of how we organize the code we write. This not only applies to rigid solutions to fixed problems (such as with design patterns), but also applies to how we think of grouping statements into a function, how we insert white spaces between groups of statements (to show functionality), and how we choose the names of variables or functions or classes.

But we’re not really talking about style; to me, programming style is the overall umbrella; it is how we may write a book or organize the chapters or the paragraphs. An idiom, on the other hand, implies something looser than a design pattern, but not as broad as a programming style.

“Design Patterns” as Programming Idioms

I contend that it is useful to separate out design patterns which represent perhaps a fixed way to solve a given problem, and programming idioms which represent a more general way we organize code.

And I further contend that not all design patterns are really design patterns, but are instead programming idioms.

For example, a design pattern (a fixed solution to a given problem) would be the singleton pattern. A singleton could be expressed in a variety of different ways (naturally), but the general pattern of the solution is the same: a single entry point returns our singleton object, creating it if necessary. In C++, for example, we could write:

MyThing *MyThing::Get()
{
    static MyThing *singleton;
    if (singleton == NULL) {
        singleton = new MyThing;
    }
    return singleton;
}
The particulars of the implementation could, of course, change: we could use the pthread library to make sure the method is safe to call from multiple threads at once. We could declare the method as a free function instead of a class method. We could use a variety of other names for the method. We could even construct our object in advance rather than construct it inside this method. But the essence of the pattern is exactly the same: we call something which returns the only object in our system.
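As one concrete variant of the above: since C++11, initialization of a function-local static object is guaranteed to happen exactly once, even with concurrent callers, so the same pattern can be written without explicit locking. A sketch, reusing the MyThing name from the example:

```cpp
class MyThing {
public:
    // C++11 guarantees the local static below is initialized exactly
    // once, even if several threads call Get() simultaneously, so no
    // explicit pthread locking is needed for this variant.
    static MyThing &Get()
    {
        static MyThing singleton;
        return singleton;
    }

private:
    MyThing() = default;                        // only Get() constructs it
    MyThing(const MyThing &) = delete;          // nobody may copy it
    MyThing &operator=(const MyThing &) = delete;
};
```

Different spelling, same essence: one call site returns the only instance in the system.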

The same can be said of the Publish-Subscribe pattern: we could have a single notification system (such as Apple MacOS X’s NSNotificationCenter object), or multiple notification objects. But the essence of the pattern remains the same: a subscriber subscribes for notifications, and a publisher delivers notifications through the object the subscribers subscribed to.

Some “design patterns”, however, are not as clean as this. For example, take the Decorator pattern. The motivation of this pattern comes from user interface design, whereby individual components can have additional “decorations” added to them without affecting the way the underlying component operates. For example, windows in your user interface may not have scroll bars by default; one way to allow your system to add scroll bars is by having a mechanism by which the interface of the window can be extended; a decorator then provides a way to add scroll bars to the window. The decorator mechanism can then allow multiple other decorations to be added, such as a border or a title bar. This keeps your basic window simple: it’s just a region of the screen. And for each new feature you want to add to your window, you add a decorator rather than make your window code more complex.

Unlike the singleton example, however, there are multiple ways to approach the problem of creating a window which allows decorators to be attached.
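One of those approaches might look like the following sketch, with class names invented for illustration: each decorator wraps a window and forwards to it, adding its own behavior, so the underlying window code never changes as decorations stack up.

```cpp
#include <memory>
#include <string>

// The basic window stays simple: it's just a region of the screen.
class Window {
public:
    virtual ~Window() = default;
    virtual std::string Describe() const { return "window"; }
};

// A decorator wraps another Window and forwards to it, adding its
// own behavior. Scroll bars are added without touching Window.
class ScrollBarDecorator : public Window {
public:
    explicit ScrollBarDecorator(std::unique_ptr<Window> inner)
        : inner(std::move(inner)) {}
    std::string Describe() const override
    {
        return inner->Describe() + " + scroll bars";
    }
private:
    std::unique_ptr<Window> inner;
};

// Decorators compose: a title bar can wrap the scroll-bar-decorated
// window, and neither class knows about the other.
class TitleBarDecorator : public Window {
public:
    explicit TitleBarDecorator(std::unique_ptr<Window> inner)
        : inner(std::move(inner)) {}
    std::string Describe() const override
    {
        return inner->Describe() + " + title bar";
    }
private:
    std::unique_ptr<Window> inner;
};
```

Here each new feature becomes a new wrapper class rather than another branch inside the window code; this is only one of several reasonable shapes for the idea.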

Consider also the model-view-controller pattern. There are as many ways to write views (the components in a user interface) as there are user interface libraries. Different libraries provide different degrees of support for controller code (from none at all, as was the case in Windows 95, to a very rich and expressive system such as on Apple’s iOS), and many libraries provide tools to ease creating the model code (the code which represents the data in your application), from basic objects (such as Apple’s MacOS X’s NSDocument class) to the more complex.

I would contend these are not “design patterns” at all, as a design pattern represents a more rigid solution to a fixed problem. These architectural “idioms” represent a more complex way to view code than the fixed solutions given in a design pattern.

Idioms may also concern themselves with smaller chunks of code than architectural design. For example, take the way we group chunks of code into reusable functions. There is no fixed solution to how we group statements into functions or methods. Some books attempt to give advice based on folklore; I’ve read one book which suggested functions should be no longer than 24 lines, representing the most code that can be displayed on a VT-100 terminal.

Thankfully it is not 1985 anymore. But that doesn’t alleviate the problem.

Programming Idioms as ways we express ourselves.

At the bottom of the stack, programming is essentially as much expressive art as it is engineering. While programming has at its core the requirement that code compile, execute and perform the tasks we desire in a correct and reliable fashion, that really is no different than technical writing. Our goal is to express an idea in a way which is correct, and which communicates our intent clearly. 

Programming also has other goals, of course. We should be able to hand our code off to other people so they can fix problems that may come up later. We should be able to cooperate with other people in writing more complex systems. We should be able to go back to code we wrote months or years ago and understand what we were doing. We should be able to find problems in the code we wrote during development.

But ultimately this is an art. It may be more akin to industrial design (such as designing a vacuum cleaner or a chair) than it is to more artistic endeavors (such as painting); the product of our work has a clear function it must satisfy. But that doesn’t change the fact that we are creating something against a blank slate using phrasing that is familiar to us, and which helps express to the next developer what it is we are doing.

And, I think, it is important to separate out “design patterns” from “programming idioms” so we can remember that not all problems are rigidly defined or have only one possible solution. We also need to keep from falling into the trap of believing that previous idioms are somehow fixed design patterns which we must discard because our problem doesn’t fit the rigid definition we learned from someone else. Sometimes that can cause us to write the very complicated spaghetti code we are striving to avoid.

With this idea of programming idioms in mind, and with the problem framed (what we really need is a new point of view if we are to write better software and escape ruts that have persisted over the past 30 years), we can move on to what makes for a good programming idiom.

What makes for good development style.

12 Principles of Good Software Design.

The 12 Principles of Good Software Design: Conclusions

This is a continuation of the blog post 12 Principles of Good Software Design.


Unlike most professions, software developers face an interesting and rather unique conundrum. Our job fundamentally boils down to realizing a vision of a product–often defined by someone else (such as a project manager or by business needs), entirely out of thin air, creating out of whole cloth something sophisticated and advanced with few constraints except for the desired results.

And because we are creating out of whole cloth with few constraints–at most we may be told our software has to run on an iPhone or be deployed to an existing server cluster, or run on an embedded processor with far more memory and computing power than an advanced workstation from 20 years ago–we are completely free to choose how we express ourselves in code. So long as the results work, and can be maintained by a future generation of software developers, we can do anything we want.

It’s heady stuff.

The problem, however, is that despite our freedom of choice, despite a lack of constraints on how we do our job, despite a lack of prescriptions on how to design or build our projects that other professions face, the failure rates in the software development industry are amazingly high. Some studies suggest as many as half of all software projects will fail–either by far exceeding the original budgets or being canceled outright. Even in the more limited field of Enterprise Resource Planning, where software packages and best practices are well defined and the project is simply adapting existing tools to an existing business, 51% of companies viewed their ERP implementation as unsuccessful. Worse, 31% of projects will be canceled before they are ever completed, while only 16% of projects (from the Chaos Report) are completed on-time and on-budget.

These are older reports and undoubtedly success rates have increased as we see a shift to more well established software practices. But not by much; my own personal experience is that the larger the project (as measured by programmers working on a project), the more likely it is that project will fail and code will never see the light of day.

“Failing is OK. Failing can even be desirable.” But we are not learning from our failures, and even practices put into place in order to “learn” from our failures (such as project postmortems and team introspection) offer us little insight, simply because the people making the mistakes are seldom the right people to ask about their mistakes. (It’s why so many postmortems of failed projects I’ve been on either turn into a lot of hand-wringing over “unforeseeable” errors, or turn into massive blame sessions and finger-pointing. Because human psychology being what it is, the most frightening thing any of us can face is the prospect of being wrong.)

I propose that the real problem is the fact that software developers face few constraints in how we build our code. So long as it works, we are theoretically free to build code any way we wish, testing our own pet theories or writing code we are comfortable with–or just banging out something that meets the requirements and makes our bosses happy (and gets us that promotion) without considering what it is we’re writing.

And I propose that our industry is slowly coming around to this view, and attempting to fix the lack of constraints with recommendations that attempt to impose constraints in the name of “clean code” or “best practices.”

But because as software developers we don’t like to face the prospect that our failures are being caused by too much freedom, people who propose best software practices face a delicate balancing act. Which is why so many discussions about software development describe “patterns” and treat them as if we are discovering fundamental truths buried in the aether, rather than building constraints which keep programmers from going off into the weeds.

Unfortunately, because most people describe constraints on software development practices as if they are uncovering truths rather than laying out constraints, most of us feel free to “discover” (read: “invent”) our own “truths” and strike out on our own, testing our own pet theories and pet methodologies on software projects. It’s as if we are the first to discover the Higgs Boson and are now using it to build our own anti-matter machines, not realizing that it’s a figment of our imagination and that to outsiders we sound like a crazed hermit banging out screeds against modernity on a mechanical typewriter.

In other words, because we haven’t addressed the real problem–too much freedom–we’ve created an industry of lecturers who go around beating their own drum about best practices, which often either exacerbate the problem, or at the very least do nothing to address the failures.

And our projects continue to fail; they continue to be harder to maintain, continue to run over budget, continue to be canceled outright after wasting billions in our economy.

Research shows, by the way, that people who feel their choices are constrained are in fact happier than those whose freedoms are not similarly constrained. We do know that religious conservatives in the United States are in fact happier (despite being seen as sticks-in-the-mud incapable of happiness), in large part because their moral framework reduces the viable choices they see themselves facing in life.

I personally suspect the reason why people with fewer choices are happier is for the same reason that Conservatives are happier: not because we don’t swim in an endless sea of choices, but because a guiding principle reduces those choices from an overwhelming array to a manageable framework. By that same logic one would expect other groups employing the same strategy, constraining their choices to an acceptable array by using some core consistent philosophy, would see greater happiness regardless of political orientation. And it would suggest that moral relativists–people who do not believe morality is an absolute but is a choice relative to circumstances–would have greater levels of unhappiness because they have no framework to winnow down choices to a manageable set.

Certainly we also see this constraining of choices being used in other areas, such as by certain restaurants. It’s why fast food restaurants have resorted to allowing people to order meals by the numbers: reducing the size of the menu makes ordering from the menu less stressful. Higher-end restaurants (especially those which feature “farm to table” or “locally grown” choices) also effectively reduce choice for their patrons, but in the process increase satisfaction. We also see this constraining of choice at certain high-end gourmet food stores. Limiting choice to “organic” or “non-GMO” or “locally sourced” reduces the bewildering array of decisions shoppers face to an acceptable subset–even as those stores introduce unique products or flavors not seen elsewhere.

This is not to suggest that outside forces should restrain choice for us, by the way. When our choices are constrained for us, we become increasingly unhappy as the choices we would like to make for ourselves are taken away and our own internal agency is denied. Certainly restaurants that reduce the size of their menus don’t constrain our overall choice–simply because we could eat elsewhere, or go home and make food for ourselves and our friends.

But by having a framework for choice–a framework whereby we winnow down the external pressures shoving a million offerings down our throats to a handful of decisions that we find acceptable–we make ourselves happy by reducing the stress that unlimited choice threatens to impose.

And I propose the same can be said about the unlimited choices faced by software developers on how we create our own applications.

For me, the goal of this series of blog posts was to propose a set of principles for code “idioms”–ways we write code–which make things easier on us by constraining our choices to things I have personally found work for me.

Those “idioms” are based on the idea of constraining our choices according to certain ideas that I suspect most would find true–that software should be kept simple, understandable, and written in such a way so that teammates and future maintenance developers can understand and maintain the code.

Of course, we can’t always do that. Time pressures make us sloppy, and sometimes the algorithms we need to implement are rather specialized or require specialized understanding. But we certainly can strive to make our code less sloppy, and to help future developers understand what is supposed to be hard (because of algorithmic complexity) and what should have been simple (but was over-engineered or improperly architected).

Certainly we can realize that these rules are not “discoveries” which express an underlying truth, such as we see in mathematics which seems like a blank slate but in fact describes the physical world. And because they’re not discoveries, perhaps we can realize that when we try to find our own “truths” and seek out our own “discoveries”–new design patterns, for example, or new ways to organize our code–we haven’t discovered anything.

We’ve simply gone off the rails.

Thus my “12 principles.” Because they’re less about design patterns and more about the goals we should have when we write code.

Keep it simple. Keep it concise. Keep it clear. Keep it easy to read. Don’t be “clever.” Don’t go off the rails.

Ultimately, it’s about recognizing our real problem: too much choice. And suggesting that by keeping to a set of simple principles we can reduce choice–and reduce job stress, reduce uncertainty and reduce decision complexity in a way which allows us to increase our chances of success.

If your job suddenly becomes so simple–because you stick to a set of simple, concise and clear expressions–that you develop an itch to explore the frontiers of your knowledge, there are plenty of open source projects out there looking for volunteers.

The Twelfth Principle: An idiom should allow a new developer to understand the code.

This is a continuation of the blog post 12 Principles of Good Software Design.

12. An idiom should allow a new developer to understand the code.

Of course, up until now all of the idioms–the ways we express ourselves in software–have focused on clarity and simplicity over complexity and over-engineered solutions which just make our lives complicated. We’ve focused on idioms which keep functionality together, which make a particular module obvious, which break functionality into well-defined components.

But it’s not enough to simply use patterns which emphasize simplicity. We also need to think about how we build our code in a way which makes what we’re attempting to do understandable to another developer who takes responsibility for our code, by using a uniform approach which provides consistency and clarity to the overall project.

My own approach is to consider the “big picture.” For most mobile applications which talk to a back-end API, I consider two things: how does data flow through the system, and how does the user interact with the system. For a mobile application with a back end service, the flow is rather simple:


We store our back-end data in a database. We use a database I/O collection of classes (such as JDBC) to obtain and store data in the database. Business logic verifies the data is stored in the back end correctly and maintains integrity of the data. Business logic also handles complex logic such as logging in, database access and the like. The servlet logic handles API requests made by the model code on the mobile app. Then a system of views and controllers help present the data.

Data flows up, from the database to the views. Changes in the data flow down, from the views to the database.

Most mobile applications we build follow the same flow of data: up from the database to the views, and down from the views to the database.
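That flow can be sketched in a few lines. Here is a minimal, language-neutral Python sketch (all class and method names are illustrative, not from any real framework) showing data moving down from the API layer through business logic to storage, and back up again:

```python
# A minimal sketch of the layered flow: views/API -> business logic -> database.
# All names here are illustrative.

class Database:
    """Stores raw records; knows nothing about business rules."""
    def __init__(self):
        self.rows = {}

    def get(self, key):
        return self.rows.get(key)

    def put(self, key, value):
        self.rows[key] = value

class BusinessLogic:
    """Validates data and maintains its integrity before it reaches storage."""
    def __init__(self, db):
        self.db = db

    def save_user(self, user_id, name):
        if not name:
            raise ValueError("a user must have a name")
        self.db.put(user_id, {"name": name})

    def load_user(self, user_id):
        return self.db.get(user_id)

class ApiLayer:
    """Handles requests made by the mobile app's model code."""
    def __init__(self, logic):
        self.logic = logic

    def handle_request(self, verb, user_id, payload=None):
        if verb == "PUT":
            self.logic.save_user(user_id, payload["name"])
            return {"ok": True}
        return {"ok": True, "user": self.logic.load_user(user_id)}

# Data flows down from the view to the database on a save,
# and back up from the database to the view on a load.
api = ApiLayer(BusinessLogic(Database()))
api.handle_request("PUT", 1, {"name": "Alice"})
print(api.handle_request("GET", 1))  # {'ok': True, 'user': {'name': 'Alice'}}
```

The point is not the specific classes but the direction of flow: each layer only talks to the layer directly below it.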

The flow of the individual screens within an application (either an iOS or Android application) tends to be determined by the specific domain of the application. But even there some generalities can be made: a login view controller flows to a “forgot password” screen and a “sign up” screen. (One of the advantages of modern iOS development is storyboards, which, when used properly, allow a graphical relationship between the screens of a section of the application to be maintained.)

There are other types of applications we could build which have a different overall structure. For example, a compiler/parser tool which takes an input file and produces an output file would generally consist of code which manages the input files being read, a lexical tokenizer which converts a character stream into parser tokens, a parser which converts the language into a parse tree, and optimization/conversion steps which manipulate the parse tree according to the various optimization or conversion steps required for our language. A file writer then handles converting our parse tree into our desired output file. Data flows from the top down, converted at each step according to the rules of our language.
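The steps above can be sketched as a toy pipeline. This is a minimal Python sketch (illustrative names, with a deliberately trivial “language” of additions) showing data flowing top down through tokenizer, parser and output stages:

```python
# A toy pipeline: characters -> tokens -> parse tree -> output.
# The "language" here is just sums like "1 + 2 + 3"; names are illustrative.
import re

def tokenize(text):
    """Lexical step: convert a character stream into tokens."""
    return re.findall(r"\d+|[+]", text)

def parse(tokens):
    """Parser step: convert tokens into a (very small) parse tree."""
    tree = int(tokens[0])
    for i in range(1, len(tokens), 2):
        tree = ("+", tree, int(tokens[i + 1]))
    return tree

def emit(tree):
    """Output step: walk the tree and produce a result."""
    if isinstance(tree, int):
        return tree
    _, left, right = tree
    return emit(left) + emit(right)

# Data flows from the top down, transformed at each step.
print(emit(parse(tokenize("1 + 2 + 3"))))  # 6
```

A real compiler adds optimization passes between `parse` and `emit`, but the shape of the flow stays the same.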


Not all applications have the same structure, because not all applications are mobile applications talking to a back end server architecture.

But I do personally consider how data flows: when it is created, when it is manipulated, when it is presented and when it is saved. I may consider the format of the data, how components of the data interact, what the business rules governing the data are, and what the data represents.

And that helps us with the structure of the application: how the model component works (if we’re writing an MVC style application), how the data is loaded, how the data moves–and that then drives the overall design of the rest of the application.

Of course how we structure our application also depends on a series of best practices–and it is those best practices which allow us to decide where to put the components of our application.

For example, one best practice is that business logic which constrains the integrity of data in a database needs to reside as close to the database as possible–that way, violations of business logic don’t leak in from bugs or security loopholes in other parts of the application. This is why I’m a huge believer in putting the business logic of a client/server application on the server side: a hacker who hacks the API of a client/server application then cannot do more damage to the back end than an ordinary user of the application.

Once you realize that the business logic belongs in a particular place, then it is easy to use the other idioms to create code another programmer can follow, using design patterns which are easy to understand. The logic lands as a chunk of business logic between the code which parses incoming HTTP requests and the database access code which gets and saves data to the database.
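The idea can be sketched as follows–a minimal Python sketch with illustrative names, showing the integrity rule living next to the data rather than in the client:

```python
# A sketch of keeping integrity rules on the server side, next to the data.
# A hacked client can send anything over the API; the rule still holds.
# All names are illustrative.

class InsufficientFunds(Exception):
    pass

class AccountStore:
    """Server-side store plus the business rule that guards it."""
    def __init__(self):
        self.balances = {"alice": 100}

    def withdraw(self, account, amount):
        # The rule lives here, not in the client UI, so no API caller
        # can bypass it.
        if amount <= 0 or amount > self.balances[account]:
            raise InsufficientFunds()
        self.balances[account] -= amount
        return self.balances[account]

def handle_api_request(store, request):
    """The API layer just parses the request and delegates."""
    try:
        return {"balance": store.withdraw(request["account"], request["amount"])}
    except InsufficientFunds:
        return {"error": "insufficient funds"}

store = AccountStore()
print(handle_api_request(store, {"account": "alice", "amount": 30}))   # {'balance': 70}
print(handle_api_request(store, {"account": "alice", "amount": 999}))  # {'error': 'insufficient funds'}
```

Even a forged request asking for more than the balance cannot corrupt the data, because the check sits next to the store.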

Besides considering the global structure of the application and being consistent in our use of that structure as we assemble our applications, we can also make use of notes–both in the form of comments and in separate documentation–which give the overall structural ‘gestalt’ of the application.

There are a number of books littering the scene which eschew documentation, both in the form of comments and in the form of separate documents. For example, in chapter 4 of Clean Code: A Handbook of Agile Software Craftsmanship, we see the remark:

Every time you express yourself in code, you should pat yourself on the back. Every time you write a comment, you should grimace and feel the failure of your ability of expression.

And certainly there is some value to this idea, at least when it comes to writing most code. After all,

if ((row.record == VALID_SYMBOL) && !(row.error)) {

would be better written:

if (IsRowValid(row)) {

with the logic in a self-contained function call. (This also has the advantage of concentrating the logic for validating a row in one location rather than allowing that logic to be scattered everywhere.)

However, there are times when it is necessary to use a comment and to include documentation to allow another developer to understand roughly where all the pieces are.

Comments are also necessary when writing complex code which implements a particular algorithm, so a new developer understands what is being implemented–even if the comment is simply a reference to the textbook or paper from which the algorithm was pulled, since you cannot count on every developer looking at your code having a Ph.D. in Computer Science.

For example, how self-evident is the following code?

struct node *grandparent(struct node *n)
{
	if ((n != NULL) && (n->parent != NULL))
		return n->parent->parent;
	else
		return NULL;
}

struct node *uncle(struct node *n)
{
	struct node *g = grandparent(n);
	if (g == NULL)
		return NULL;
	if (n->parent == g->left)
		return g->right;
	else
		return g->left;
}

void insert_case1(struct node *n)
{
	if (n->parent == NULL)
		n->color = BLACK;
	else
		insert_case2(n);
}

void insert_case2(struct node *n)
{
	if (n->parent->color == BLACK)
		return;
	else
		insert_case3(n);
}


But if we were to include the following comment at the top:

/*  insert.c
 *
 *      Red/Black Tree Insert Code.
 *      (From http://en.wikipedia.org/wiki/Red–black_tree#Insertion)
 */

now everything we’re doing is a lot more obvious.

The same can be said about the larger architecture used by your application. Just a simple one-page comment stating the rough layout of all the components can go a long way toward helping a developer new to the team understand where all the code lies.

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones. – U.S. Secretary of Defense Donald Rumsfeld

Despite being ridiculed for this statement, Rumsfeld does allude to a fundamental principle of security, one I first encountered when studying for the CISSP certification.

But I contend there is a fourth combination: unknown knowns–things we are not aware that we know. And one of the biggest problems new developers to a project have is that they don’t even know where to start–and the other senior developers on the team don’t realize the knowledge they haven’t passed forward.

And until that knowledge is passed on, a new developer, not wanting to disturb the structure of the existing application, tends to fall into the Lava Flow antipattern.

In many ways, if we are writing code for a team, we need to think about how that code can survive our own authorship. To do this, we need to consider how we structure our code so that our intention is clear. But more importantly, we need to give new developers clues as to how they are expected to extend and modify our code, where fixes should be made, and where new functionality should be added–both to prevent various antipatterns from forming and to reduce technical debt.

The Eleventh Principle: An idiom should only implement the functionality needed.

This is a continuation of the blog post 12 Principles of Good Software Design.

11. An idiom should only implement the functionality needed.

One of the coolest things about writing software is that we are creating something out of virtually nothing but our imagination and our ability to design something that works. And so it allows us to test our pet theories about architectural design, about the best way to solve problems with designs that are extensible and which can be generalized to solve even larger problems.

The temptation to do this exists on all projects. Even though we are warned constantly about premature optimization, we still do the equivalent when solving the immediate problems in front of us, extending them into more generalized solutions which we expect will help us in the future. But they almost never do: instead of providing an easily extensible architecture which can solve future problems, we instead create something too complicated to maintain. And the new problems we face don’t fit into the solution we previously built.

Years ago I wrote an essay on how not to write Factorial in Java.

Another example of solving problems we don’t have yet, only to make things confusing: suppose we have a table view which only shows a subset of the items in the table depending on a bit of business logic. This business logic essentially filters out the results of the table according to a function which takes an object from our database and determines if it should be visible or not.

In the process we decide that what we really need here is considerable flexibility in determining if rows are visible or not.

So we decide to use the Specification Pattern, which provides us nearly unlimited flexibility in assembling simple conditional statements into complex logic classes. We build our specification protocol:

@class ModelObject;

@protocol Specification <NSObject>
- (BOOL)isVisible:(ModelObject *)obj;
@end

and we build our classes which handle the various boolean operations:

#import "Specification.h"

@interface AndSpecification : NSObject <Specification>
- (id)initWithLeft:(id<Specification>)left right:(id<Specification>)right;

@interface OrSpecification : NSObject <Specification>
- (id)initWithLeft:(id<Specification>)left right:(id<Specification>)right;

@interface NotSpecification : NSObject <Specification>
- (id)initWithSpecification:(id<Specification>)spec;

And of course comes our test classes which test potential propositions:

@interface HasNameSpecification : NSObject<Specification>
- (BOOL)isVisible:(ModelObject *)obj;
@end

@interface HasAddressSpecification : NSObject<Specification>
- (BOOL)isVisible:(ModelObject *)obj;
@end

Our View Controller class then runs our filter as we refresh the contents:

@interface ViewController () <UITableViewDelegate, UITableViewDataSource>
@property (strong, nonatomic) id<Specification> filter;
@property (strong, nonatomic) NSMutableArray<ModelObject *> *allObjects;
@property (strong, nonatomic) NSMutableArray<ModelObject *> *filtered;
... (other fields) ...
@end


- (void)refreshContents
{
	[self.filtered removeAllObjects];
	for (ModelObject *obj in self.allObjects) {
		if ([self.filter isVisible:obj]) {
			[self.filtered addObject:obj];
		}
	}
	[self.tableView reloadData];
}

For now, of course, we only have one filter–so we construct that by hand in the viewDidLoad method:

- (void)viewDidLoad {
	[super viewDidLoad];
	self.filter = [[AndSpecification alloc] initWithLeft:[[HasNameSpecification alloc] init] right:[[HasAddressSpecification alloc] init]];
}

Nearly a half-dozen classes, a hundred lines of glue code or so–but we are now prepared for that future request we know is coming: a server which may send over an arbitrary filter specification to our client, altering which items we show on the table view.

This design model is conceptually relatively easy, too: if we have other predicates we may filter against, such as if the zip code is well formed or if the phone number is present, we simply create new classes for each test–then assemble them into a tree hierarchy for evaluation.
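The tree evaluation the paragraph describes can be sketched in Python (illustrative names mirroring the Objective-C classes above):

```python
# A sketch of the Specification pattern's composite evaluation,
# mirroring the Objective-C classes above. Names are illustrative.

class HasName:
    def is_visible(self, obj):
        return bool(obj.get("name"))

class HasAddress:
    def is_visible(self, obj):
        return bool(obj.get("address"))

class And:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def is_visible(self, obj):
        return self.left.is_visible(obj) and self.right.is_visible(obj)

class Not:
    def __init__(self, spec):
        self.spec = spec

    def is_visible(self, obj):
        return not self.spec.is_visible(obj)

# Assemble predicates into a tree, then evaluate it per object.
spec = And(HasName(), HasAddress())
print(spec.is_visible({"name": "Alice", "address": "1 Main St"}))  # True
print(spec.is_visible({"name": "Alice"}))  # False
```

Each new predicate is just another leaf class; the boolean classes glue them into an arbitrary tree.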

And when the back end sends us a promised XML file which contains the new filter specification, such as:

    <filter name="HasName"/>
    <filter name="HasAddress"/>

we can easily parse that using our specification classes and reload the filter used in our code.

A beautiful architectural generality: the present problem of filtering items which have a name and address, solved with a general-purpose structure that can form any filter we can imagine from an arbitrary specification.

But will we ever use this generalized solution?

Well, in order to support this filter method, the back end needs to send an XML specification for the new filter. The XML needs to be authored–which implies a back-end page which administrators can go to in order to create new filters. The page needs to be created and maintained; the database entries need to be built and tested, the end-user administrators need to be trained, and the new filters need to be validated before they are deployed.

That’s a lot of back-end work.

Instead of sending this XML, it would be far easier to simply send a single keyword or index: “filterNameAddress” for our present filter, or “filterNameOrAddress” for an Or operation. Or even better, that could be represented by a magic number.

Dozens of lines of code in our application, and in the end, this is how we’d wind up using our classes:

- (void)viewDidLoad {
	[super viewDidLoad];
	switch (self.filterIndex) {
		case FilterNameAddress:
			self.filter = [[AndSpecification alloc] initWithLeft:[[HasNameSpecification alloc] init] right:[[HasAddressSpecification alloc] init]];
			break;
		case FilterNameOrAddress:
			self.filter = [[OrSpecification alloc] initWithLeft:[[HasNameSpecification alloc] init] right:[[HasAddressSpecification alloc] init]];
			break;
	}
}

Meanwhile we could have eliminated the nearly half dozen classes with two methods and a switch statement instead:

- (void)refreshContents
{
	[self.filtered removeAllObjects];
	for (ModelObject *obj in self.allObjects) {
		BOOL addFlag = NO;
		switch (self.filterIndex) {
			case FilterNameAddress:
				addFlag = [self testNameAddress:obj];
				break;
			case FilterNameOrAddress:
				addFlag = [self testNameOrAddress:obj];
				break;
		}
		if (addFlag) {
			[self.filtered addObject:obj];
		}
	}
	[self.tableView reloadData];
}

Worse, this is how our code would evolve. In the present, we haven’t reached that point. Our present problem is to filter against one pre-existing filter, so why couldn’t we have written this instead?

- (void)refreshContents
{
	[self.filtered removeAllObjects];
	for (ModelObject *obj in self.allObjects) {
		if (obj.name && obj.address) {
			[self.filtered addObject:obj];
		}
	}
	[self.tableView reloadData];
}

Dozens of lines of specification code scattered across five separate classes behind a protocol interface which is hard for a programmer to understand–literally replaced by one line of code:

if (obj.name && obj.address) ...

And that’s nearly a half-dozen classes and dozens of lines of code we now have to maintain and explain–justifying why we did things the hard way–for a future that never came.

Life is hard enough as a software developer for us to solve problems we don’t currently have.

The Tenth Principle: An idiom should not require extraordinary steps to compile or break the existing development tool set.

This is a continuation of the blog post 12 Principles of Good Software Design.

10. An idiom should not require extraordinary steps to compile or break the existing development tool set.

I guarantee this will be my most controversial comment, given how common it is for developers to want to customize their build processes to supposedly make things easier for them. And when we work by ourselves, it’s enticing to do things our own ways, to customize our environments to add new and time saving processes.

But when we work with a team, or when we work on products for a larger organization, those custom tools can create more problems than they solve. One person’s time-saving plugin is another person’s confusing plugin; one person’s custom method of building a product is another person’s tool-breaking dependency.

As an example, suppose you wish to integrate Maven with Xcode to automatically handle checking out and building a product. According to this article on Stack Overflow, you would need to downgrade to an earlier version of Maven. Yet when we downgrade Maven on a computer, we potentially create other dependency issues if–for example–we’re building Java on the same computer.

We also see similar integration problems with CocoaPods, as features in Xcode represent a moving target that CocoaPods sometimes fails to keep up with.

This is not to suggest there is a problem with either of these projects. But because they are not part of the standard tool chain, sometimes the standard tool chain can break, creating problems for the development team. And those breaks must be weighed against the benefit these tools provide. For example, project requirements may force the use of CocoaPods for third-party library integration, and you just have to deal with the occasional problems that arise.

But if you are integrating just one library, the extra machinery introduced by CocoaPods may not be worth the trouble of simply downloading the library and linking directly to it.

Other problems can arise when a team doesn’t know or doesn’t use the tool chains for a particular language in a standard way.

For example, it is possible to build a Java application using make, editing the code with tools such as emacs. One could build an Android application this way, using the command-line tools in the Android SDK. One can–with the right command-line arguments–even ignore the standard hierarchical directory structure that usually accompanies a Java application.

For example, suppose we have a very simple Java program which prints “Hello World”:

package com.chaosinmotion.testbuild;

public class Main
{
	public static void main(String[] args)
	{
		System.out.println("Hello world.");
	}
}

and we wish to build a jar file which can be executed using java -jar test.jar.

So we create our manifest file:

Main-Class: com.chaosinmotion.testbuild.Main

And we build our Makefile:

com/chaosinmotion/testbuild/Main.class: Main.java
	javac $< -d .

test.jar: com/chaosinmotion/testbuild/Main.class
	jar cmf manifest test.jar com/chaosinmotion/testbuild/Main.class

Our file directory looks like:


And when we run make on the command line we get the jar file we want.

Did I just offend all the experienced Java programmers here? 😈

Yes, you can build Java applications this way. You can even build Android applications this way: all of the steps in the Android build process ultimately trigger command-line tools that can just as easily be fired from a Makefile.

But you wouldn’t want to, because it subverts the common Java development paradigms used by most Java development tools.

By not maintaining the source kit directory structure that most Java development kits expect, you cannot move your code to Android Studio or Eclipse. You can’t even easily move your code to using ant. (This was a simple one-source-file example. Suppose, however, you had hundreds of Java files scattered across multiple directories, directories which do not reflect the package structure declared in the source kit.)

And by not being able to use the standard tools, you miss out on a variety of features these tools bring to the table which can help accelerate your development process. For example, a Makefile that builds hundreds of sources means you don’t get the incremental build and debug process you’d get in Eclipse or Android Studio. It means you wind up waiting considerably longer for a build. It means you may not even be able to leverage a source-level debugger, leaving you to use “print” statements to debug your code.

The problem with abusing development tools, especially in a large group project, is that you create all sorts of obstacles to efficiency. Some plugins or additional tools we really do need to use: as I noted, the benefit of coordinating a dozen or more libraries provided and managed by CocoaPods may outweigh the occasional cost of a broken Xcode build process. But think carefully before wandering away from a development environment’s “happy path”, because you may create more problems than you solve.

The Ninth Principle: An idiom should be debuggable.

This is a continuation of the blog post 12 Principles of Good Software Design.

9. An idiom should be debuggable.

As software developers we sometimes become distracted by shiny baubles and design patterns that represent a theoretical ideal we think will help solve our problems. But in the process we fail to consider the fundamental process of writing software; we break our tools and we inadvertently create inefficiencies in the name of that ideal. Some developers even go so far as to think that if the tools break in order to fulfill their ideal, it’s the tools’ fault–not considering that, regardless of fault, they now have to write software with one hand tied behind their back.

The fundamental process of writing software is the Edit-Compile-Test cycle: we write code. We compile code. We test, find bugs and fix them by editing the code. And the faster we can perform this cycle, the more productive we can be.

Now a considerable amount of attention has been paid to the edit part of our process: IDEs, code refactoring, autocompletion; they all help us create and change code faster. We have also spent a considerable amount of time on the compile process, even going so far as to advocate computer languages (such as JavaScript or Ruby) which skip the compile step entirely. But we seem to take testing for granted, outside of unit testing–which is such a small subset of the entire “test” cycle above that it’s almost not worth mentioning here.

And by disregarding testing (except for “unit tests”), we make our lives considerably more difficult.

For example, let’s assume we have a model class which we wish to provide a mock of, for testing purposes. The model class itself contains a lot of sophisticated logic, but it’s not ready yet, so we wish to use a mock object to test our user interface. Our standard mechanism for constructing our model is a ModelFactory:

@class Model;

@interface ModelFactory : NSObject

+ (ModelFactory *)shared;
- (Model *)create;

@end


In order to allow us to create our mock object, we use the NSProxy class, which takes all method calls made to it and encapsulates them as an invocation which can then be forwarded to the appropriate object for execution.
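The forwarding idea behind NSProxy can be illustrated with a minimal Python sketch (illustrative names; Python’s `__getattr__` stands in for Objective-C’s invocation forwarding):

```python
# A sketch of NSProxy-style forwarding: the proxy intercepts every call
# and forwards it to a wrapped target object. Names are illustrative.

class ForwardingProxy:
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Every attribute lookup the proxy doesn't handle itself is
        # forwarded to the target, much as NSProxy packages each call
        # into an NSInvocation and hands it to forwardInvocation:.
        return getattr(self._target, name)

class MockModel:
    def get_item_count(self):
        return 3

model = ForwardingProxy(MockModel())
print(model.get_item_count())  # 3
```

The caller never knows it is talking to a proxy–which is exactly the property that confuses a debugger trying to step into the call.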

So we create our protocol to define the procedures used by our model:

@protocol ModelProtocol <NSObject>

- (NSInteger)getItemCount;
- (NSDictionary *)getItemForIndex:(NSInteger)index;

@end


And we then define our proxy with the protocol:

#import "ModelProtocol.h"

@interface Model : NSProxy <ModelProtocol>

- (id)initWithObject:(id)object;


For this example, our proxy simply wraps an object which provides the implementation, though in practice this could be considerably more complex:

@interface Model ()
@property (strong, nonatomic) NSObject <ModelProtocol> *forward;
@end

@implementation Model

- (id)initWithObject:(id)object
{
	self.forward = object;
	return self;
}

- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel
{
	return [self.forward methodSignatureForSelector:sel];
}

- (void)forwardInvocation:(NSInvocation *)invocation
{
	[invocation invokeWithTarget:self.forward];
}

@end


Our mock object, of course, is relatively simple:

#import "ModelMock.h"

@implementation ModelMock

- (NSInteger)getItemCount
	return 3;

- (NSDictionary *)getItemForIndex:(NSInteger)index
	return @{ @"Index": @(index) };


Now we have the machinery in place to allow us to create a Model object, and since we’re building our Model from a factory singleton, we have a single place where we can produce our model (or Mock model) quite easily.

At this point we can obtain an instance of our proxy through the model factory and execute methods on our proxy, switching between our mock object and our production object in our factory.

So, in our View Controller:

@interface ViewController ()
@property (strong, nonatomic) Model *model;
@end

@implementation ViewController

- (void)viewDidLoad {
	[super viewDidLoad];
	// Do any additional setup after loading the view, typically from a nib.

	self.model = [[ModelFactory shared] create];
	NSInteger index = [self.model getItemCount]; // Test our call

	// (other initialization here)
}

We create our model from the model factory, and we can invoke a method within our proxy which then is forwarded to our mock object.

So now we need to debug this. And so we set a breakpoint on the line where our method call is made to our proxy:


And we step into our method. Or rather, we try to step into our method. Result?


We don’t step into the method.

The current version of Xcode (Version 7.3.1 (7D1014) as of this writing) cannot step into a proxy. That’s because the runtime wraps the proxied call into an invocation and then calls the invocation elsewhere.

Now for our simple mock object, it may not matter–but if we need to step into the actual model code, we’re screwed.

We cannot debug our code anymore.

And it didn’t have to be this way, had we not fallen victim to the shiny baubles. We already have the pieces in place to allow an easier implementation–one which doesn’t have all the shiny coolness of NSProxy, but a more pedestrian implementation which we can actually debug.

Step 1: Get rid of the Model object entirely. We don’t need it.

Step 2: Change our ModelFactory to return the protocol representation of our object:

@protocol ModelProtocol;

@interface ModelFactory : NSObject

+ (ModelFactory *)shared;
- (id)create;

@end


Our ModelFactory then directly returns the raw mock model object or the regular model object.
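The factory switch can be sketched as follows–a minimal Python sketch with illustrative names, showing the one place where the mock or the real model is chosen:

```python
# A sketch of a factory that hands back either the real model or the mock,
# behind one shared switch point. Names are illustrative.

class RealModel:
    def get_item_count(self):
        raise NotImplementedError("not ready yet")

class MockModel:
    def get_item_count(self):
        return 3

class ModelFactory:
    use_mock = True  # flip this in one place to switch implementations

    @classmethod
    def create(cls):
        return MockModel() if cls.use_mock else RealModel()

model = ModelFactory.create()
print(model.get_item_count())  # 3
```

Because the factory returns the object directly–no proxy in between–stepping into `get_item_count` in a debugger lands in the mock (or the real model) exactly as expected.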

Our View Controller also needs to change:

@interface ViewController ()
@property (strong, nonatomic) id model;
@end

@implementation ViewController

- (void)viewDidLoad {
	[super viewDidLoad];
	// Do any additional setup after loading the view, typically from a nib.

	self.model = [[ModelFactory shared] create];
	NSInteger index = [self.model getItemCount];
}

But the changes are not significant; just the actual declaration of the model property.

Now we set our breakpoint as before:


And we step in, and amazingly enough, we actually manage to step into our mock object:


There are so many patterns out there that seem cool and interesting, but which make debugging our code significantly harder. We saw this in a previous post where we examined a method swizzling technique on a category of UITableView, where a bug in our endless scrolling code could potentially break every single UITableView and not just the ones that use endless scrolling–and without leaving us a clue as to how things went wrong.

Proxies and delayed invocation objects are also quite nice and can be used to solve very specific problems–but in general there are often other approaches which don’t require esoteric expressions that a modern debugger cannot handle.

An idiom should be debuggable, because if it is not, we’re left lost, forced to do our job with one arm tied behind our back.