Turning on a metal lathe

So I bought a bunch of brass and aluminum to go along with my metal working lathe, and today I tried to turn a plumb bob.

And here’s what I learned.


(1) Make sure you have all the parts in the box. I took delivery of a four-jawed chuck, only to find the screws were missing. (I went ahead and simply ordered replacement screws, my theory being that at some point I’ll lose the existing screws anyway.)

(2) You can achieve some amazing precision on these lathes. Even a cheap lathe can cut objects to a tolerance of perhaps 0.001 inches. However, you get this precision only with a lot of effort and thought about how you are going to cut the part. For example, it doesn’t matter that your tools are accurate to 0.0001 inches if the metal you’re cutting deflects 0.1 inches in the middle.

(3) It is impossible with a three-jawed chuck (or even a four-jawed chuck) to flip the part, turn the remaining length, and have it all line up. (See the line in the middle of my picture above? That’s where I tried to flip the part.) Instead, you should minimize the number of times you remove the part from the chuck–and if that means turning the part down the entire length and then cutting the piece off as a last step, there you go.

(4) You can, however, face the part after you cut it.

(5) I also learned I suck at cutting the part off. But I bought one of these metal-cutting band saws, so it’s moot anyway.

(6) The amount of effort to cut a thread is just absurd. I think I want to put together a tool which can be used to mount the threading tools and turn them using the threading feature of the lathe. The other option is to develop more upper body strength.

(7) Brass is damned pretty.

(8) Also, the auto-feed lever is damned cool. When you think the cut is smooth, reset the cutting tool to the same depth and take another pass–and you find more material gets cut off. Wash, rinse and repeat 3 times and you get a nice finish.

I made a whole bunch of mistakes this morning, but I’m starting to appreciate both the capabilities and limitations of the lathe. I have a bunch of aluminum stock on order from these guys so I can practice some more, and I also have a milling machine on order which I intend to practice with as well.

Ultimately my goal is to build a clock–but before I start building a clock I need to learn how to use the tools.

Evidence based data

I just saw an ad in my inbox which describes how your mouse is slowing you down, and you need to switch to using a text editor/IDE which can be completely driven by the keyboard.

And it bugged me.

It bugs me the same way all the other Shiny New Things bug me, the way they bug Uncle Bob.

Further, it bugs me because we have concrete evidence that this simply is not true. Meaning we have concrete evidence that (a) selecting a menu item is faster than typing a command, (b) finding a menu item is faster than using a keyboard shortcut if you haven’t committed that shortcut to muscle memory, and (c) it takes a nontrivial amount of time and effort to learn a keyboard shortcut, which means that while common commands will be committed to muscle memory quickly (i.e., cut, copy and paste), less common commands are found faster on a properly structured hierarchical menu.

But no-one gives a fuck about evidence-based data anymore.

Because it gets in the way of The Churn.™

And it gets in the way of a thousand developers trying to make a name for themselves by positioning themselves as the inventor of The Next New Shiny Thing.

So do us a favor: if you design user interfaces or are thinking of The Next New Shiny Thing, get Designing With The Mind In Mind and understand how and why our brains work the way they do.

And stop treating your users like “lusers” or like Stockholm syndrome sufferers, so badly tortured by your poor design that they engage in traumatic bonding through an “ongoing cycle of abuse in which the intermittent reinforcement of reward and punishment creates powerful emotional bonds that are resistant to change.”

So at some point I decided I needed to set up a machine shop…

I’ve been making more and more things with the Form 2 printer, and I find making things quite addictive. I’ve worked some more on some gear design software for the Macintosh, and now I’m finding that, to make the thing I ultimately want to make, I need some odd-shaped parts that don’t quite fit in the printer or would be better made from other materials. (Simply buying 2mm by 100mm pins isn’t working out as well as I’d like, because sometimes I need different lengths, or different shapes for the gears to properly rotate or be properly moved or set.)

I’m not giving up on the 3D printed parts, of course, but being able to build custom cut arbors using metal for the 3d printed wheels (gears) to rotate on would be extremely helpful. And eventually doing all of this in brass and steel using the 3d printed parts as prototype shapes would be very cool.

So I ordered my first shop tool.

And it’s not what you’d expect.

I didn’t order a mini-lathe, though that is first on my list of things I want to buy. (And to be honest I’m on the fence as to whether I buy a micro-lathe or something larger first. The advantage of the former: I’d be able to shape the small arbors and custom screws that I need to make my gadget with ease. The advantage of the latter: I’d be able to shape a full range of parts, including some larger components I’ll eventually want to build, including those brass gears. Eventually, of course, I’d want both.)

I didn’t order a mini-mill. (This would allow me to make many of the brackets and holders and supports my gadget will eventually need, though for the time being I can 3D print many of them.)

I didn’t even order a bandsaw, though I intend to order one when I order whatever lathe I intend to eventually get.

No, I bought one of these.

Now I’m sure you’re asking yourself “why the hell did you order a shop crane?”


Did you see the shipping weight on some of the other gadgets that will eventually need to be positioned on my workbenches?

The HiTorque 8.5 x 20 Bench Lathe for example, has a shipping weight of 281 pounds; the machine itself is 220 pounds. Mills are worse: the Grizzly G0463 Mill/Drill has a crated weight of 445 pounds.

So the question becomes “how do you swing a 500 pound object and place it and attach it to a workbench” (rated at 3,000 pounds so we’re good there)?


Of course I’m relying on Home Depot to have the straps and hydraulic fluid which are necessary to run this crane.

One thing I plan to use this for–while I’m not using it to swing machines around the garage, of course–is to help hold stock metal in place for cutting in the bandsaw. Sure, it may be overkill for this use, but remember: a 3 inch diameter brass rod that is 48 inches long weighs 105 pounds. And frankly I’d rather swing that in place with a crane than lift it by hand. I’m not a young man anymore.

Machine #2

My second attempt at building a gear system.

Individual gears are 1mm pitch and 4mm thick, and the whole mechanism steps down from 6 rpm to 1 revolution per day (with the 80-tooth gear I purchased from SparkFun). (Without that gear, the last gear in the chain revolves 4 times per day.) This means the last gear in the chain is actually a pinion with a 1mm pitch gear connected to a 32p (32 teeth per inch of pitch diameter) gear.
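
A quick sanity check on those rates: 6 rpm is 8,640 revolutions per day, so the full chain needs an overall reduction of 8,640:1. Here is a minimal sketch in Python (the individual stage ratios below are hypothetical round numbers chosen for illustration; only the 6 rpm input and the 4-revolutions-per-day and 1-revolution-per-day outputs come from the build):

```python
# Stage ratios in a gear train multiply. Stepping 6 rpm down to
# 1 revolution per day requires a total reduction of 6 * 60 * 24 = 8640:1.
# The stage ratios below are hypothetical; the final 4:1 stage stands in
# for the 80-tooth gear driven by an assumed 20-tooth pinion.
def output_revs_per_day(rpm_in, stage_ratios):
    total = 1.0
    for ratio in stage_ratios:
        total *= ratio
    return rpm_in * 60 * 24 / total

print(output_revs_per_day(6, [60, 36, 4]))  # 1.0 revolution per day
print(output_revs_per_day(6, [60, 36]))     # 4.0 without the 80-tooth gear
```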

Things I learned:

I think 3mm is probably the thinnest practical gear I can print. (I’m sure I can print thinner.) Also, my height calculations are not dead-on correct; I need to sort out why I have what appears to be about a 1mm gap. (Though I suspect it’s the washers I’m using between layers; the washers measure only 0.5mm thick, but were advertised as 0.7mm. Five of those puppies, each with a 0.2mm error, gives 1mm.)

Meaning “depthing” is something I’m just going to have to do, no matter how well I think everything is measured.

IMG 4458

I also need to sort out how I’m going to disengage the gear train for adjusting the time. (Small battery-powered clocks seem to do the trick by overpowering the motor. OTOH, watches appear to use a mechanism which engages or disengages a clutch which then allows the time to be set by rotating the watch stem.) I’m inclined to use something simple, such as a pin which pulls up a set of gears and allows them to rotate freely. (I did a prototype of this here, though it’s poorly executed.)

Machine #1

Purchased one of these synchronous motors, which rotates at 6 RPM. (A synchronous motor synchronizes against the line current, which is kept at an accurate 60Hz rate precisely so it can be used to drive timekeeping devices.)

Built my first gear machine using my gear generation software.

IMG 4456

This attaches to the synchronous motor using a specially designed 12-tooth, 1mm pitch gear, and uses single-tooth counting gears to step down the rotation rate so the last gear rotates at 1 revolution per day.

First attempt at designing a gear chain for a clock.

This helped me to resolve a number of bugs with the gear generation software, including bugs with the generation of certain gear shapes, and the generation of the STL files for the spacers that are used to hold the gears in place.

Observations on the FormLabs Form 2 printer.

Some observations on the FormLabs Form 2 printer:

First, the resolution of the objects that are printed is simply amazing. I remember putting together the original Makerbot; this thing prints objects that are absolutely amazing compared to the extruded-noodle objects the original Makerbot printed.

For example, I’m able to print 2 millimeter holes through a design and have them come out exactly 2 millimeters in size. I’m printing 32p gears–gears with roughly 10 teeth per inch of circumference–and having them come out perfectly. (I tried the same thing with the original Makerbot; there is no way 32p gears could possibly have worked. Of course, the current generation of Makerbots is light years ahead of the box of do-it-yourself wooden parts and gears of the original Makerbot printer.)

Here are some parts I printed; they look imperfect because I roughly sanded them with 320 grit sandpaper to see how well things would hold up to sanding. (You can get a much better result if you follow FormLabs’ instructions.)

IMG 2825

I’m currently writing a Macintosh app which can be used to design gear chain mechanisms using spur gears; the mechanism here is the top and bottom of a simple gear chain which performs a clock motion work, stacking two gears with a 12 to 1 ratio, so when one gear (attached to a minute hand) turns one full revolution, the other stacked gear (attached to an hour hand) turns 1/12th of a revolution.

This is a test mechanism; ultimately I’m looking to design a clock, and I may write more about it in later posts.

The pins are 2mm diameter by 25mm long; I was able to specify the holes for them to an amazing degree of accuracy. I was also able to make the holes in the corners of the frame just 1/10th of a mm larger than the diameter of a standard M3 screw. (Right now I’m waiting for some washers I ordered to insert between the gears.)

It’s stunning how good the quality of the printed parts is.

Of course this resolution and quality has a cost: time. The two frames that I printed to hold the gears (a test of the software that generates a Delaunay triangulation of the axle locations for my gears; the resulting frame was bigger than I wanted, so I need to tweak it) took 7 hours to print.

Yes, you read that right. Seven hours.

The four gears inside took around 4 hours to print.

But then, it makes sense. The Form 2 printer takes care to agitate and heat the resin to a consistent state; this allowed me to print some amazing items pretty much right out of the box. After each layer is printed (and each layer is 50 microns thick; about the thickness of a very thin piece of paper), the resin is agitated and the area cleared for the next layer. Needless to say for an object 3 inches tall, building up the part 50 microns at a time takes a while.
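
Some back-of-the-envelope arithmetic (mine, not FormLabs’ numbers) shows why the hours add up:

```python
# A 3-inch-tall part printed in 50-micron layers:
height_mm = 3 * 25.4            # 3 inches is 76.2 mm
layer_mm = 0.050                # 50 microns per layer
layers = round(height_mm / layer_mm)
print(layers)                   # 1524 layers

# Even at an assumed 15 seconds per layer for the peel/wipe/expose
# cycle, that is over six hours of printing.
print(layers * 15 / 3600)       # 6.35 hours
```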

The resulting parts come out tacky. Even after bathing the parts in isopropyl alcohol for 20 minutes, the resin is not set or cured; you must allow the parts to finish curing–either in a UV light (mine is on order), or let the part rest for about a day.

Another thing to note is that while the automatic scaffolding helps the parts print correctly, sometimes with heavy objects the scaffolding can fail, causing print errors. (I tried printing a model of a manatee for my wife, only to have the flippers and head separate from the rest of the body, as the body was too heavy to hold rigid during printing. I fixed the model by gluing the pieces back together.)

Further, the automatic scaffolding algorithm can be a pain; you need to review the parts and edit the scaffolding locations. For example, the above photographed gears have a problem meshing; bits of the scaffolding got stuck between the teeth. So if you have surfaces that must be perfectly shaped and clean, you may need to edit how the scaffolding gets applied.

At some point when I get the gear design software working, I’ll publish a note here.

Constructing a cyclic polygon given the edge lengths.

So I have a problem: given the lengths of the edges L = {l0, l1, … lN−1} of an N-sided irregular polygon with N > 3, I need to construct the values for the radius R and the angles A = {a0, a1, … aN−1} such that points placed on a circle of radius R, with each edge li subtending the central angle ai, form a closed polygon with the lengths given.

I searched through the internet and found all sorts of articles on the subject, but a day of searching and I was unable to find a way that I could construct the values R and A. I’m sure it’s out there, but I figured it’d be a good exercise to do it myself.

Based on my reading apparently there is no analytic solution to the problem–so I went ahead and built a simple algorithm to construct the values. The idea is outlined below.

Before we continue, we assume the longest length of our polygon is l0. We can rotate the edges of the polygon if this is not the case.

Observation 1: The vertices P = { p0, p1, … pN−1 } of our cyclic polygon may be described in polar coordinates by the values pi = ( R, θi ), where θi = a0 + a1 + … + ai is the running sum of the central angles subtended by the edges.

By observation, the sum of the angles A is 360°.

Hypothesis 1: For any set of edge lengths L, there is only one set of values for R and A which create a cyclic polygon.

Sketch of proof: By construction.

Hypothesis 2: In constructing a polygon with lengths L, we have two cases for the center of the circumscribed circle that encompasses the polygon. Either the center is inside the polygon:


Or the center is outside the polygon.


If the center of the circumscribed circle is outside the polygon, it lies on the far side of the edge with the largest length, and it is outside only one of the edges.

Sketch of proof: If the center of the circumscribed circle is outside one of the edges, that edge has the property that its associated angle ai is greater than 180°. If the center were outside two edges, the sum of the angles would exceed 360°, violating Observation 1 above.

Hypothesis 3: The value of R must be at least l0/2, where l0 is the length of the longest edge.

Sketch of proof: The lower bound of R is achieved when an edge passes directly through the center of the circle–that is, when the longest edge is a diameter. If no edge passes through the center, the diameter must be larger than the longest edge.

Observation 2: Given an edge of length li in a polygon inscribed in a circle of radius R, the central angle ai associated with the edge can be calculated as:

ai = acos( 1 − li² / (2R²) )

For an edge where the center of the circumscribed circle is outside the edge, the angle ai is between 180° and 360°.

Sketch of proof: Law of cosines.

With these observations we could construct our polygon by searching for the proper value of R.

The essence of our algorithm is to search for a value R such that the following function is zero:

f(R) = a0 + a1 + … + aN−1 − 360°

One issue we need to deal with is calculating the angle of the longest edge of our polygon, since the arc cosine in Observation 2 only yields angles up to 180°. We handle this by searching instead on the angle a0, which we know is in the range (0°, 360°). From the angle a0 we can calculate R using the formula:

R = ( l0 / 2 ) / sin( a0 / 2 )

Sketch of proof: Geometric observation; the perpendicular from the center of the circle bisects the chord l0, forming a right triangle with hypotenuse R, opposite side l0/2, and angle a0/2. (The formula also holds when a0 is greater than 180°, since sin(a0/2) = sin(180° − a0/2).)

We can now construct a function F(a) = Σ ai by calculating the angles of all the edges of our polygon for the radius R implied by using the value a as the angle a0. (If the angle is greater than 180°, this fits Hypothesis 2 above.)

Hypothesis 4: The function F(a) is monotonic in a.

Sketch of proof: A geometric argument: as R increases, each angle decreases, and as R decreases, each angle increases. The change in each angle is monotonic, so the sum is monotonic. The function relating R to a is also monotonic.

Hypothesis 5: The function F(a) – 360° has exactly one zero.

Sketch of proof: Follows from the monotonicity of F(a) above, together with the observation that F(a) crosses 360° somewhere in the range (0°, 360°).

We can now sketch the code which searches for the zero. We can use a number of computational methods to find the zero of F(a) – 360°; in the code below we use a simple binary subdivision search for a with 0° < a < 360°.

Preliminary functions

Given an object with the following declaration:

#define EPSILON		1e-6			/* A very small value. */

@interface KTCyclicPolygon ()
{
    NSInteger n;            // # of edges
    NSInteger *edges;       // lengths of the edges

    NSInteger ledge;        // index of maximum-length edge
}
@end

We can implement our algorithm by first, finding the edge that has the maximum length:

- (NSInteger)largestEdge
{
    NSInteger i = 0;
    NSInteger len = edges[0];

    for (NSInteger j = 1; j < n; ++j) {
        if (edges[j] > len) {
            len = edges[j];
            i = j;
        }
    }
    return i;
}

We need a way to calculate the angle of an edge given a radius:

- (double)angleAtIndex:(NSInteger)index radius:(double)r
{
    // Calculate a from Observation 2: a = acos(1 - l^2/(2*r^2))

    double l = edges[index];
    double c = 1 - (l*l)/(2*r*r);
    return acos(c);         // output from 0 to M_PI
}

Then we can calculate f(a) above:

- (double)f:(double)a
{
    /*
     *  Calculate r from a.
     */

    double l = edges[ledge];
    double m = l / 2;
    double r = m / sin(a/2);

    /*
     *  Calculate the angles and sum them, starting with the angle a
     *  of the longest edge.
     */

    double sum = a;

    /*
     *  Sum the angles of the remaining edges.
     */

    for (NSInteger i = 0; i < n; ++i) {
        if (i == ledge) continue;
        sum += [self angleAtIndex:i radius:r];
    }

    /*
     *  Now subtract from our zero.
     *      If the angle is small (meaning we selected too small a start angle)
     *  then this value will be negative. If we picked too large a start
     *  angle (meaning a was too big), then the sum will be large and the
     *  value will be positive.
     */

    return sum - 2*M_PI;                // Return value is angle gap to close.
}

At this point we can easily calculate a value a between 0 and 360°:

- (BOOL)calculateCyclicPolygon
{
    if (n <= 3) return NO;      // Need a 4-sided polygon or larger

    /*
     *  Step 1: Find the largest edge.
     */

    ledge = [self largestEdge];

    /*
     *  Step 2: given a min/max value of 0°, 360°, binary search for the a
     *  between those values which brings f(a) within our error.
     */

    double mina = 0;
    double maxa = M_PI*2;
    double mida;

    for (;;) {
        mida = (mina + maxa)/2;

        double f = [self f:mida];

        if (fabs(f) < EPSILON) break;       // a solves our equation
        if (f < 0) {
            // Angle too small; we need a larger angle
            mina = mida;
        } else {
            // Angle too large; we need a smaller angle
            maxa = mida;
        }
    }

    double a = mida;                        // to within epsilon.

At this point the angle a has been found to within the error specified in our constant EPSILON.

We can validate the correctness of our algorithm by calculating the radius, and the angles for the rest of the items. We can then calculate the polar coordinates of each of the edges and compare the lengths with the stored edge lengths:

    double l = edges[ledge];
    double m = l / 2;
    double r = m / sin(a/2);

    double x = r;
    double y = 0;
    double angle = 0;
    for (NSInteger i = 0; i < n; ++i) {
        if (i == ledge) {
            angle += a;
        } else {
            angle += [self angleAtIndex:i radius:r];
        }

        double xn = cos(angle) * r;
        double yn = sin(angle) * r;

        double dx = xn - x;
        double dy = yn - y;
        double d = sqrt(dx * dx + dy * dy);

        printf("Vertex: %g %g\n",x,y);
        printf("  Edge %d: %d == %g\n",(int)i,(int)edges[i],d);

        x = xn;
        y = yn;
    }
    printf("Vertex: %g %g\n",x,y);

    return YES;
}

When run, this finds the angle a to within our error tolerance. Running with input values 1, 2, 3 and 4, we get the output:

Vertex: 2.0026 0
  Edge 0: 1 == 1
Vertex: 1.75293 0.96833
  Edge 1: 2 == 2
Vertex: 0.0408695 2.00219
  Edge 2: 3 == 3
Vertex: -1.9922 -0.203859
  Edge 3: 4 == 4
Vertex: 2.0026 -3.40378e-07

This shows our vertices define a closed cyclic polygon with edges of lengths 1, 2, 3 and 4–to within our error tolerance.
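
If you want to experiment with the algorithm without the Objective-C scaffolding, here is a minimal Python translation of the same binary subdivision search (a sketch following the code above; the names are mine):

```python
import math

def cyclic_polygon(edges, epsilon=1e-9):
    """Find the radius and central angles of the cyclic polygon with the
    given edge lengths, by binary subdivision on the longest edge's angle."""
    n = len(edges)
    assert n > 3, "need a 4-sided polygon or larger"
    ledge = max(range(n), key=lambda i: edges[i])

    def angle(l, r):
        # Observation 2 (law of cosines): a = acos(1 - l^2 / (2 r^2))
        return math.acos(1 - (l * l) / (2 * r * r))

    lo, hi = 0.0, 2 * math.pi
    while True:
        a = (lo + hi) / 2
        r = (edges[ledge] / 2) / math.sin(a / 2)   # radius implied by a
        gap = a + sum(angle(edges[i], r)
                      for i in range(n) if i != ledge) - 2 * math.pi
        if abs(gap) < epsilon:
            break
        if gap < 0:
            lo = a      # angle too small; search upward
        else:
            hi = a      # angle too large; search downward

    return r, [a if i == ledge else angle(edges[i], r) for i in range(n)]

r, angles = cyclic_polygon([1, 2, 3, 4])
print(round(r, 4))      # 2.0026, matching the output above
```

Note that the search brackets only the angle a; the radius R falls out of the chord formula on each iteration.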

Twelve Principles of Good Software Design: An Introduction.


I started my series of posts about my 12 principles of good software design in part as a criticism of the various software practices I’ve seen while writing software, and in part as a prescription for a possible answer to help developers write better code.

I know this sounds arrogant. But after nearly 28 years of writing software, I’ve seen quite a bit. I’ve seen a software team where developers were encouraged to “refactor” their code, which turned what should have been three or four methods into a 20-class library, with layers upon layers of “refactoring” piled on top. (My refactoring? Spending a week understanding the code, then rewriting it as a single four-method class.)

I’ve seen code which looked like it was assembled by throwing darts at a design pattern dart board, with patterns which made absolutely no sense for the context in which they were used. (As an example, I’ve seen publish-subscribe used to tie views to their view controller–an extra bit of machinery which makes it harder to figure out the relationship between the views and the view controller which manages them.)

I’ve seen over-engineered applications created by people who were testing their own pet theories, such as trying to “offload” functionality out of a view controller by using a “mediator” and an “interactor” class which only serve to obscure the functionality of a single view controller behind two additional classes which serve no real purpose.

I’ve seen people create applications using a misunderstanding of certain design patterns. For example, I’ve seen someone implement a user interface using “Model-View-Presenter” in conjunction with a fourth class that implemented a Mediator pattern which tied models directly to views. The “mediator” was nearly a no-op: once the model and the views were tied together it did little but pass data back and forth without interpretation. And when I pointed out that several hundred lines of code could be removed from his code by eliminating the useless “mediator” and tying the views directly to the model (effectively implementing Model-View-Controller), I was told that MVC had been discredited as a design pattern and MVP was a superior design pattern.

It’s not just limited to stuff seen “on the ground” by random developers. I’ve also seen people jump on the Model-View-ViewModel bandwagon without understanding what MVVM is, how to best implement it, or even why they’re implementing it, other than that it somehow makes your software better and stronger. Like Popeye eating spinach: shut up and eat it, because it’s good for you.

And even in the justifications for MVVM in various tutorials for MVVM, you find nonsense like this:

The definition of MVC – the one that Apple uses – states that all objects can be classified as either a model, a view, or a controller. All of ‘em. So where do you put network code? Where does the code to communicate with an API live?

But if you read the original papers covering MVC, you find a few things. First, you find the obvious answer to the above question: network code lives in the model. You also find there are no requirements on how the model be structured. And while asynchronous calls can be tricky, hiding asynchronous calls through key-value observing does not suddenly alleviate you from the need to understand the asynchronous nature of your application; it simply moves the icky parts “off-stage” where you can’t easily find them.

(This is not to suggest MVVM does not make sense; certainly in Microsoft’s development environment, MVVM is a preferred design pattern given the way the development tools have been designed. But hoisting MVVM into a generic environment requires building a lot of extra machinery which leads to confusion, rather than clarity. And in many cases it attempts to “solve problems” which only exist because of a misunderstanding of the original design patterns you presume are broken.)

At some point, in all this nonsense, it’s hard not to snap, and perhaps try to impart whatever little scraps of wisdom you’ve picked up over the past three decades.

If this series of blog posts is completely worthless to you, then my apologies for wasting your time. But I believe my twelve principles, if allowed to inform your approach to writing and simplifying software, may help you write better code.

By “better”, of course, I specifically mean code that is easier to understand, easier to maintain, and which can be handed off to another software developer without undue consternation. Now, naturally all software handed to another developer is generally met with derision; a lack of understanding can make even the clearest code seem poorly written and badly put together. But at the very least you can try to make the task of the next developer who has to look at your code a little easier.

These principles, by the way, are not really design “patterns”, but guidelines you can use when deciding how to put together your application.

The tyranny of choice

When you think about the problem of software development in general, you come across a curious thing. Unlike most professions where how you do your job is rigidly defined by legal constraints, software developers have the unique problem of creating projects almost entirely out of whole cloth, without any real constraints whatsoever.

Take, for example, the problem of building a house.

In the United States, in order to build a house, you must first draw up plans which take into account the location of the property and its legal limits (setbacks, zoning limits, orientation, altitude, latitude, environmental surroundings), and submit those plans, describing how the house will be built, to a government building commission which reviews them to verify they meet the legal requirements for constructing the house. While many of those legal limits are established from underlying engineering principles, from a practical perspective house plans are constrained by legal limits rather than engineering limits.

Then a number of subcontractors are hired, each of whom have legal authority (through a licensing board) to perform specific tasks, which are completed in a given order: the lot is cleared (and rubble disposed of according to a legal framework established by the local municipality, state and Federal government), the foundation framed (and inspected by a government inspector), poured (and inspected by a government inspector).

The attachment points on the foundation are then used to frame the house (constructed in strict accordance to the requirements on the plans, and verified by a government inspector). Then the plumbing and wiring is drawn through the frame (and inspected by a government inspector), the walls are filled with insulation and covered with sheetrock or with an outer covering (like stucco or siding, all engineered according to government regulations, and inspected by a government inspector). Roofing material is also added (and inspected), and the house is finished (and inspected), and if the job was done correctly according to rigidly defined limits established by law, a government official signs off on the house, stating that the house was constructed according to these strict government limits and is therefore suitable for habitation.

And this is true of most professions and most of the things in our lives. Cars are designed by law to have a hump or void in the front hood which is designed to absorb the impact of a pedestrian, to have crumple zones which cushion the passengers from a front-end impact with another car, and in the future to have backup cameras so drivers have better visibility when they put their car in reverse. Furniture makers are required to use approved non-flammable materials, to make sure their furniture fits through the opening of a front door (whose size is defined by building code), and to make sure it contains no small bits which can break off and represent a choking hazard. Even designers laying out the cubicles for a floor plan must submit their plans for review to the fire department in many municipalities, to make sure the building can be evacuated according to government regulation.

Software developers, however, have no such constraints.

Sure, we are often told what sort of application a project manager wants. We may be told the target platform (desktop, mobile, embedded), and that may define the development tools we can use and the programming language we are using. But in a world where embedded processors have more computational power than high performance workstations did 20 to 30 years ago, even embedded development generally is not constrained by processing power.

And within these rather loose constraints we are free to do whatever we wish.

The tyranny of choice.

From whole cloth how do we decide where to start? How do we decide how to start designing and building our application?

Design Patterns

A design pattern is defined as a general repeatable solution to a commonly occurring problem in software design. In general software design patterns describe a fixed and well established solution to a common problem which, in conjunction with other design patterns, can be used as fundamental building blocks to creating a computer program.

The concept of design patterns dates back to the early 1980’s, but did not really gain popularity until the book “Design Patterns: Elements of Reusable Object-Oriented Software” was published in 1994. In it, the so-called “Gang of Four” outlined a number of notable design patterns commonly seen in software development, each solving a well-defined problem and providing a fixed solution to that problem.

Take the problem of one component in your application notifying another component in your software of a state change. For example, you may have a word processing program where a chunk of code responsible for saving your file needs to notify another chunk of code responsible for updating the window’s appearance that your file successfully saved—so the window’s title can be changed to show the file was written to disk.

A specific solution to this problem is the Publish-subscribe pattern. In this pattern, an object is created which holds a list of interested subscribers who wish to receive notifications about a given topic. Publishers can then send out notifications by handing them off to this object, which is then responsible for sending the notifications out to all interested subscribers.

This design pattern has the advantage that a publisher does not need to know about the subscribers; it simply sends a notice. And a subscriber does not need to know anything about the publishers; it can simply listen for notices.

So in our example, we can define a new notification, called “file saved”, which then indicates as part of the notice the file that was saved.

Our window UI can then subscribe for all “file saved” notices, and when it receives a notice that the file it is responsible for is saved, it can update the window’s state.

Likewise, our file system code, when it is done saving a file, simply sends a “file saved” notice with an indication of which file was actually saved.

This design pattern helps solve a specific problem. It also has the nice property that, in the future, if our window code changes (such as, for example, instead of showing if the file is saved by updating the window’s name, we instead display a checkbox inside the window’s content area), the file save code never needs to change. We can, instead, create a view object with a checkbox that listens for notifications, and shows or hides the checkbox as appropriate.

This separation of concerns helps make our code easier to understand, by allowing us to build our code in reusable chunks, and by having those chunks wired up in well-defined ways. By separating out the part of the code which saves the file from the part of the code that shows if the file is saved, we can change how we show how our file is saved without having to dive deep into the file save code. We just hook up a new subscriber to our publish-subscriber system.
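To make the shape of the pattern concrete, here is a minimal sketch of such a publish-subscribe object in C++. The class and method names are invented for this example, and a real implementation would also need to worry about unsubscribing and thread safety:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// A minimal notification hub: subscribers register callbacks for a topic,
// and publishers post a notice to a topic without knowing who listens.
class NotificationCenter {
public:
    typedef std::function<void(const std::string &)> Handler;

    // A subscriber registers interest in a topic.
    void Subscribe(const std::string &topic, Handler handler) {
        subscribers[topic].push_back(handler);
    }

    // A publisher hands a notice to the hub, which forwards it to every
    // subscriber registered for that topic.
    void Publish(const std::string &topic, const std::string &payload) {
        for (size_t i = 0; i < subscribers[topic].size(); ++i) {
            subscribers[topic][i](payload);
        }
    }

private:
    std::map<std::string, std::vector<Handler> > subscribers;
};
```

In our example, the file-save code would call Publish with a “file saved” topic when it finishes writing, and the window code would subscribe for that topic when a document is opened.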

Yet somehow, even within all these patterns, within all of these methods which allow us to construct our applications out of whole cloth, somehow projects still fail. They are still hard to maintain. Our projects go over budget. Deadlines are missed. Even if our design patterns are properly understood and properly designed, somehow we still are writing code that is hard to maintain, and sometimes we’re writing code that simply does not work.

All this failure suggests to me that perhaps we’ve lost sight of the real problem. Perhaps we need a different point of view.

Programming Idioms

Because the term “design pattern” is so well known, I’d like to use a different term to capture something a little more general. The intent is to capture the idea of how we assemble programs—not as a rigid solution to a fixed problem, but more the “style” of the code and the ways we express ourselves in software.

For this, I’m using the phrase programming idiom. Like a language idiom (a way a language is spoken in a given area or a style that is characteristic of a particular subject), a programming idiom is intended to capture the idea of how we organize the code we write. This not only applies to rigid solutions to fixed problems (such as with design patterns), but also applies to how we think of grouping statements into a function, how we insert white spaces between groups of statements (to show functionality), and how we choose the names of variables or functions or classes.

But we’re not really talking about style; to me, programming style is the overall umbrella; it is how we may write a book or organize the chapters or the paragraphs. An idiom, on the other hand, implies something looser than a design pattern, but not as broad as a programming style.

“Design Patterns” as Programming Idioms

I contend that it is useful to separate design patterns, which represent a fixed way to solve a given problem, from programming idioms, which represent a more general way we organize code.

And I further contend that not all design patterns are really design patterns, but are instead programming idioms.

For example, a design pattern (a fixed solution to a given problem) would be the singleton pattern. A singleton could be expressed in a variety of different ways (naturally), but the general pattern of the solution is the same: a single entry point returns our singleton object, creating it if necessary. In C++, for example, we could write:

MyThing *MyThing::Get()
{
    static MyThing *singleton;
    if (singleton == NULL) {
        singleton = new MyThing;
    }
    return singleton;
}

The particulars of the implementation could, of course, change: we could use the pthread library to make sure our method is safe when called from multiple threads at once. We could declare the method as a function instead of as a class function. We could use a variety of other names for the method. We can even construct our object in advance rather than construct the object inside this method. But the essence of the pattern is exactly the same: we call something which returns the only object in our system.
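As one example of changing the particulars without changing the pattern: since C++11, a function-local static object is guaranteed to be initialized exactly once, even when several threads call the method at the same time, so the same singleton can be sketched without pointers or explicit locking:

```cpp
// The same singleton pattern, relying on C++11's guarantee that a
// function-local static is constructed exactly once, thread-safely.
class MyThing {
public:
    static MyThing &Get() {
        static MyThing singleton;   // constructed on first call
        return singleton;
    }

private:
    MyThing() {}                    // only Get() can construct one
    MyThing(const MyThing &);      // not copyable (declared, never defined)
};
```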

The same can be said of the Publish-Subscribe pattern: we could have a single notification system (such as Apple MacOS X’s NSNotificationCenter object), or multiple notification objects. But the essence of the pattern remains the same: a subscriber registers for notifications, and a publisher delivers notifications through a shared object to every registered subscriber.

Some “design patterns”, however, are not as clean as this. For example, take the Decorator pattern. The motivation of this pattern comes from user interface design, whereby individual components can have additional “decorations” added to them without affecting the way the underlying component operates. For example, windows in your user interface may not have scroll bars by default; one way to allow your system to add scroll bars is by having a mechanism by which the interface of the window can be extended; a decorator then provides a way to add scroll bars to the window. The decorator mechanism can then allow multiple other decorations to be added, such as a border or a title bar. This keeps your basic window simple: it’s just a region of the screen. And for each new feature you want to add to your window, you add a decorator rather than make your window code more complex.

Unlike the singleton example, however, there are multiple ways to approach the problem of creating a window which allows decorators to be attached.
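One of those ways, sketched here in C++ with invented class names: each decorator wraps the window (or an already-decorated window) behind a common interface, so scroll bars and borders can be stacked in any order without touching the window code itself.

```cpp
#include <memory>
#include <string>

// The basic window: just a region of the screen.
class Window {
public:
    virtual ~Window() {}
    virtual std::string Draw() const { return "window"; }
};

// A decorator holds the component it wraps behind the same interface.
class Decorator : public Window {
public:
    explicit Decorator(std::unique_ptr<Window> inner) : inner(std::move(inner)) {}
protected:
    std::unique_ptr<Window> inner;
};

// Each decoration adds its feature and delegates the rest to what it wraps.
class ScrollBars : public Decorator {
public:
    using Decorator::Decorator;
    std::string Draw() const override { return inner->Draw() + "+scrollbars"; }
};

class Border : public Decorator {
public:
    using Decorator::Decorator;
    std::string Draw() const override { return inner->Draw() + "+border"; }
};
```

The window code never changes as decorations are added; each new feature is a new decorator class.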

Consider also the model-view-controller pattern. There are as many ways to write views (the components in a user interface) as there are user interface libraries. Different libraries provide different degrees of support for controller code (from none at all, as was the case in Windows 95, to a very rich and expressive system such as on Apple’s iOS), and many libraries attempt to provide tools to make creating the model code (the code which represents the data in your application) easier, from basic objects (such as Apple’s MacOS X’s NSDocument class) to the more complex.

I would contend these are not “design patterns” at all, as a design pattern represents a more rigid solution to a fixed problem. These architectural “idioms” represent a more complex way to view code than the fixed solutions given in a design pattern.

Idioms may also concern themselves with smaller chunks of code than architectural design. For example, take the way we group chunks of code into reusable functions. There is no fixed solution to how we group statements into functions or methods. Some books attempt to give advice based on folklore; I’ve read one book which suggested functions should be no longer than 24 lines, the most code that can be displayed on a VT-100 terminal.

Thankfully it is not 1985 anymore. But that doesn’t alleviate the problem.

Programming Idioms as ways we express ourselves.

At the bottom of the stack, programming is essentially as much expressive art as it is engineering. While programming has at its core the requirement that code compile, execute and perform the tasks we desire in a correct and reliable fashion, that really is no different than technical writing. Our goal is to express an idea in a way which is correct, and which communicates our intent clearly. 

Programming also has other goals, of course. We should be able to hand our code off to other people so they can fix problems that may come up later. We should be able to cooperate with other people in writing more complex systems. We should be able to go back to code we wrote months or years ago and understand what we were doing. We should be able to find problems in the code we wrote during development.

But ultimately this is an art. It may be more akin to industrial design (such as designing a vacuum cleaner or a chair) than it is to more artistic endeavors (such as painting); the product of our work has a clear function it must satisfy. But that doesn’t change the fact that we are creating something against a blank slate using phrasing that is familiar to us, and which helps express to the next developer what it is we are doing.

And, I think, it is important to separate out “design patterns” from “programming idioms” so we can remember that not all problems are rigidly defined or have only one possible solution. We also need to keep from falling into the trap of believing that previous idioms are somehow fixed design patterns which we must discard because our problem doesn’t fit the rigid definition we learned from someone else. Sometimes that can cause us to write the very complicated spaghetti code we are striving to avoid.

With this idea of programming idioms in mind, and with the problem framed–that what we really need is a new point of view if we are to write better software and escape the ruts we’ve been stuck in for the past 30 years–we can move on to what makes for a good programming idiom.

What makes for good development style.

12 Principles of Good Software Design.

The 12 Principles of Good Software Design: Conclusions

This is a continuation of the blog post 12 Principles of Good Software Design.


Unlike most professions, software developers face an interesting and rather unique conundrum. Our job fundamentally boils down to realizing a vision of a product–often defined by someone else, such as a project manager, or by business needs–entirely out of thin air, creating out of whole cloth something sophisticated and advanced, with few constraints except for the desired results.

And because we are creating out of whole cloth with few constraints–at most we may be told our software has to run on an iPhone or be deployed to an existing server cluster, or run on an embedded processor with far more memory and computing power than an advanced workstation from 20 years ago–we are completely free to choose how we express ourselves in code. So long as the results work, and can be maintained by a future generation of software developers, we can do anything we want.

It’s heady stuff.

The problem, however, is that despite our freedom of choice, despite a lack of constraints on how we do our job, despite a lack of prescriptions on how to design or build our projects that other professions face, the failure rates in the software development industry are amazingly high. Some studies suggest as many as half of all software projects will fail–either by far exceeding the original budgets or by being canceled outright. Even in the more limited field of Enterprise Resource Planning, where software packages and best practices are well defined and the project is simply adapting existing tools to an existing business, 51% of companies viewed their ERP implementation as unsuccessful. Worse, 31% of projects will be canceled before they are ever completed, while only 16% of projects (from the Chaos Report) are completed on-time and on-budget.

These are older reports and undoubtedly success rates have increased as we see a shift to more well established software practices. But not by much; my own personal experience is that the larger the project (as measured by programmers working on a project), the more likely it is that project will fail and code will never see the light of day.

“Failing is OK. Failing can even be desirable.” But we are not learning from our failures, and even practices put into place in order to “learn” from our failures (such as project postmortems and team introspection) offer us little insight, simply because the people making the mistakes are seldom the right people to ask about their mistakes. (It’s why so many postmortems of failed projects I’ve been on either turn into a lot of hand-wringing over “unforeseeable” errors, or turn into massive blame sessions and finger-pointing. Because human psychology being what it is, the most frightening thing any of us can face is the prospect of being wrong.)

I propose that the real problem is the fact that software developers face few constraints in how we build our code. So long as it works, we are theoretically free to build code any way we wish, testing our own pet theories or writing code we are comfortable with–or just banging out something that meets the requirements and makes our bosses happy (and gets us that promotion) without considering what it is we’re writing.

And I propose that our industry is slowly coming around to this view, and attempting to fix the lack of constraints with recommendations that attempt to impose constraints in the name of “clean code” or “best practices.”

But because as software developers we don’t like to face the prospect that our failures are being caused by too much freedom, people who propose best software practices face a delicate balancing act. Which is why so many discussions about software development describe “patterns” and treat them as if we are discovering fundamental truths buried in the aether, rather than building constraints which keep programmers from going off into the weeds.

Unfortunately because of this–because most people describe constraints on software development practices as if they are uncovering truths rather than laying out constraints–most of us feel free to “discover” (read: “invent”) our own “truths”, and strike out on our own, testing our own pet theories and pet methodologies on software projects. It’s as if we believe we are the first to discover the Higgs Boson and are now using it to build our own anti-matter machines, not realizing that it’s a figment of our imagination, and that to outsiders we sound like a crazed hermit banging out screeds against modernity on a mechanical typewriter.

In other words, because we haven’t addressed the real problem–too much freedom–we’ve created an industry of lecturers who go around beating their own drum about best practices, which often either exacerbate the problem, or at the very least do nothing to address the failures.

And our projects continue to fail; they continue to be harder to maintain, continue to run over budget, continue to be canceled outright after wasting billions in our economy.

Research shows, by the way, that people who feel their choices are constrained are in fact happier than those whose freedoms are not similarly constrained. We do know that religious conservatives in the United States are in fact happier (despite being seen as sticks-in-the-mud incapable of happiness), in large part because their moral framework reduces the viable choices they see themselves facing in life.

I personally suspect people with fewer choices are happier for the same reason conservatives are happier: not because they don’t swim in an endless sea of choices, but because a guiding principle reduces those choices from an overwhelming array to a manageable framework. By the same logic one would expect other groups employing the same strategy–constraining their choices to an acceptable array by using some core, consistent philosophy–to see greater happiness regardless of political orientation. And it would suggest that moral relativists–people who do not believe morality is absolute, but rather a choice relative to circumstances–would have greater levels of unhappiness, because they have no framework to winnow choices down to a manageable set.

Certainly we also see this constraining of choices being used in other areas, such as by certain restaurants. It’s why fast food restaurants have resorted to allowing people to order meals by the numbers: reducing the size of the menu makes ordering from the menu less stressful. Higher-end restaurants (especially those which feature “farm to table” or “locally grown” choices) also effectively reduce choice for their patrons, but in the process increase satisfaction. We also see this constraining of choice at certain high-end gourmet food stores. Limiting choice to “organic” or “non-GMO” or “locally sourced” reduces the bewildering array of decisions shoppers face to an acceptable subset–even as those stores introduce unique products or flavors not seen elsewhere.

This is not to suggest that outside forces should constrain choice for us, by the way. When our choices are constrained for us, we become increasingly unhappy as the choices we would like to make for ourselves are taken away and our own internal agency is denied. Certainly restaurants which reduce the size of their menus don’t constrain our overall choice–simply because we could eat elsewhere, or go home and make food for ourselves and our friends.

But by having a framework for choice–a framework whereby we winnow down the external pressures shoving a million offerings down our throats to a handful of decisions that we find acceptable–we make ourselves happy by reducing the stress that unlimited choice threatens to impose.

And I propose the same can be said about the unlimited choices faced by software developers on how we create our own applications.

For me, the goal of this series of blog posts was to propose a set of principles for code “idioms”–ways we write code–which make it easier on us by constraining our choices to things that I have personally found work for me.

Those “idioms” are based on the idea of constraining our choices according to certain ideas that I suspect most would find true–that software should be kept simple, understandable, and written in such a way so that teammates and future maintenance developers can understand and maintain the code.

Of course, we can’t always do that. Time pressures make us sloppy, and sometimes the algorithms we need to implement are rather specialized or require specialized understanding. But we certainly can strive to make our code less sloppy, and to help future developers understand what is supposed to be hard (because of algorithmic complexity) and what should have been simple (but was over-engineered or improperly architected).

Certainly we can realize that these rules are not “discoveries” which express an underlying truth, such as we see in mathematics which seems like a blank slate but in fact describes the physical world. And because they’re not discoveries, perhaps we can realize that when we try to find our own “truths” and seek out our own “discoveries”–new design patterns, for example, or new ways to organize our code–we haven’t discovered anything.

We’ve simply gone off the rails.

Thus my “12 principles.” Because they’re less about design patterns and more about the goals we should have when we write code.

Keep it simple. Keep it concise. Keep it clear. Keep it easy to read. Don’t be “clever.” Don’t go off the rails.

Ultimately, it’s about recognizing our real problem: too much choice. And suggesting by keeping to a set of simple principles we can reduce choice–and reduce job stress, reduce uncertainty and reduce decision complexity in a way which allows us to increase our chances of success.

If your job suddenly becomes so simple–because you stick to a set of simple, concise and clear expressions–that you develop an itch to explore the frontiers of your knowledge, there are plenty of open source projects out there looking for volunteers.

The Twelfth Principle: An idiom should allow a new developer to understand the code.

This is a continuation of the blog post 12 Principles of Good Software Design.

12. An idiom should allow a new developer to understand the code.

Of course, up until now all of the idioms–the ways we express ourselves in software–have focused on clarity and simplicity over complexity and over-engineered solutions which just make our lives complicated. We’ve focused on idioms which keep functionality together, which make a particular module obvious, and which break functionality into well-defined components.

But it’s not enough to simply use patterns which emphasize simplicity. We also need to think about how we build our code in a way which makes what we’re attempting to do understandable to another developer who takes responsibility for our code, by using a uniform approach which provides consistency and clarity to the overall project.

My own approach is to consider the “big picture.” For most mobile applications which talk to a back-end API, I consider two things: how data flows through the system, and how the user interacts with the system. For a mobile application with a back-end service, the flow is rather simple:


We store our back-end data in a database. We use a database I/O collection of classes (such as JDBC) to obtain and store data in the database. Business logic verifies the data is stored in the back end correctly and maintains integrity of the data. Business logic also handles complex logic such as logging in, database access and the like. The servlet logic handles API requests made by the model code on the mobile app. Then a system of views and controllers help present the data.

Data flows up, from the database to the views. Changes in the data flow down, from the views to the database.

Most mobile applications we build follow this same flow of data: up from the database to the views, and down from the views to the database.

The flow of the individual screens within an application (either an iOS or Android application) tends to be determined by the specific domain of the application. But even there some generalities can be made: a login view controller flows to a “forgot password” screen and a “sign up” screen. (One of the advantages of modern iOS development is storyboards which, when used properly, allow a graphical relationship between the screens of a section of the application to be maintained.)

There are other types of applications which we could build which have a different overall structure. For example, a compiler/parser tool which takes an input file and produces an output file would generally consist of code which manages the input files being read, a lexical tokenizer which converts a character stream into parser tokens, a parser which converts the language into a parse tree, and optimization/conversion steps which manipulate the parse tree according to the various optimization or conversion steps required for our language. A file writer then handles converting our parse tree into our desired output file. Data flows from the top down, converted at each step according to the rules of our language.
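As a toy illustration of the first stage of such a pipeline, here is a tokenizer sketch in C++. It recognizes only integers and single-character operators, and every name in it is invented for the example:

```cpp
#include <cctype>
#include <string>
#include <vector>

// One token in the stream: its kind and the characters it covers.
struct Token {
    enum Kind { Number, Operator } kind;
    std::string text;
};

// Convert a character stream into parser tokens: the lexer step of the
// pipeline described above.
std::vector<Token> Tokenize(const std::string &input) {
    std::vector<Token> tokens;
    size_t i = 0;
    while (i < input.size()) {
        if (std::isspace((unsigned char)input[i])) {
            ++i;                                // skip whitespace
        } else if (std::isdigit((unsigned char)input[i])) {
            size_t start = i;                   // scan a run of digits
            while (i < input.size() && std::isdigit((unsigned char)input[i]))
                ++i;
            tokens.push_back({Token::Number, input.substr(start, i - start)});
        } else {
            tokens.push_back({Token::Operator, std::string(1, input[i])});
            ++i;                                // single-character operator
        }
    }
    return tokens;
}
```

A parser would then consume this token stream to build the parse tree, and later stages would transform the tree and write out the result.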


Not all applications have the same structure, because not all applications are mobile applications talking to a back end server architecture.

But I do personally consider how data flows: when it is created, when it is manipulated, when it is presented and when it is saved. I may consider the format of the data, how components of the data interact, what the business rules governing the data are, and what the data represents.

And that helps us with the structure of the application: how the model component works (if we’re writing an MVC style application), how the data is loaded, how the data moves–and that then drives the overall design of the rest of the application.

Of course how we structure our application also depends on a series of best practices–and it is those best practices which allow us to decide where to put the components of our application.

For example, one best practice is that business logic which constrains the integrity of data in a database needs to reside as close to the database as possible–that way, violations of business logic don’t leak in from bugs or security loopholes in other places in the application. This is why I’m a huge believer in putting the business logic of a client/server application on the server side: a hacker who attacks the API directly cannot do any more damage to the back end than a legitimate user of the application could.

Once you realize that the business logic belongs in a particular place, it becomes easy to use the other idioms to create code that another programmer can follow. The logic lands as a chunk of business logic between the code which parses incoming HTTP requests and the database access code which gets and saves data to the database.

Besides considering the global structure of the application, and being consistent in our use of that structure as we assemble our applications, we can also make use of notes–both in the form of comments and in separate documentation–which give the overall structural ‘gestalt’ of the application.

There are a number of books on the scene which eschew documentation, either in the form of comments or in the form of separate documents. For example, in chapter 4 of Clean Code: A Handbook of Agile Software Craftsmanship, we see the remark:

Every time you express yourself in code, you should pat yourself on the back. Every time you write a comment, you should grimace and feel the failure of your ability of expression.

And certainly there is some value to this idea, at least when it comes to writing most code. After all,

if ((row.record == VALID_SYMBOL) && (row.name) && !(row.error)) {

would be better written:

if (IsRowValid(row)) {

with the logic in a self-contained function call. (This also has the advantage of concentrating the logic for validating a row in one location rather than allowing that logic to be scattered everywhere.)
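For illustration, such a helper might look like the following. The Row type and VALID_SYMBOL value here are stand-ins invented for this sketch, since the fragment above does not show their definitions:

```cpp
// Illustrative stand-ins for the application's real definitions.
enum RecordKind { VALID_SYMBOL, INVALID_SYMBOL };

struct Row {
    RecordKind record;
    const char *name;   // non-null when the row has a name
    bool error;
};

// The extracted predicate: all the row-validity logic lives in one place.
bool IsRowValid(const Row &row) {
    return (row.record == VALID_SYMBOL) && (row.name != nullptr) && !row.error;
}
```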

However, there are times when it is necessary to use a comment, and to include documentation which allows another developer to understand roughly where all the pieces are.

Comments are also necessary when writing complex code which implements a particular algorithm, so a new developer understands what is being implemented–even if the comment is simply a reference to the textbook or paper from which the algorithm was pulled, since you cannot count on every developer looking at your code having a Ph.D. in Computer Science.

For example, how self-evident is the following code?

struct node *grandparent(struct node *n)
{
 if ((n != NULL) && (n->parent != NULL))
  return n->parent->parent;
 else
  return NULL;
}

struct node *uncle(struct node *n)
{
 struct node *g = grandparent(n);
 if (g == NULL)
  return NULL;
 if (n->parent == g->left)
  return g->right;
 else
  return g->left;
}

void insert_case1(struct node *n)
{
 if (n->parent == NULL)
  n->color = BLACK;
 else
  insert_case2(n);
}

void insert_case2(struct node *n)
{
 if (n->parent->color == BLACK)
  return;
 else
  insert_case3(n);
}

But if we were to include the following comment at the top:

/*  insert.c
 *      Red/Black Tree Insert Code. (From https://en.wikipedia.org/wiki/Red–black_tree#Insertion)
 */

now everything we’re doing is a lot more obvious.

The same can be said about the larger architecture used by your application. Just a simple one-page comment stating the rough layout of all the components can go a long way toward helping a new developer on the team understand where all the code lies.

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones. – U.S. Secretary of Defense Donald Rumsfeld

Despite being ridiculed for his statement, he does allude to a fundamental principle of security, one I first encountered when studying for the CISSP certification.

But I contend there is a fourth combination: unknown knowns–things we are not aware that we know. And one of the biggest problems new developers to a project have is that they don’t even know where to start–and the other senior developers on the team don’t realize the knowledge they haven’t passed forward.

And until that knowledge is passed along, a new developer, not wanting to disturb the structure of the existing application, tends toward the Lava Flow Antipattern.

In many ways, if we are writing code for a team, we need to think about how that code can survive our own authorship. To do this, we need to consider how we structure our code so that our intention is clear. But more importantly, we need to give new developers clues as to how they are expected to extend and modify our code, where fixes should be made, and where new functionality should be added–to prevent various antipatterns from forming, and to reduce technical debt.