And now for something completely different.

When I started this blog, I wanted to have a place where I could put my technical rantings. If you knew me in person, you’d realize that I’m rather passionate (in a “meet my little friend” beat people to death with a baseball bat and bury the remains in the back yard under the flowers sort of way) about quality software development.

This is a completely different rant.

I have a friend, an elderly gentleman, who was in the process of having all of his savings siphoned away by an unscrupulous fellow who served as his “power of attorney” and who was going to ship my friend off to a hell-hole in some God-forsaken part of the country to die.

And so I stepped up to help.

Not to go into the details, but we’re slowly unraveling the problems he had in his life, and we’re in the process of putting his life back together. My wife is going to help clean up his apartment; we’ll get him a new television set and a new bed, get everything organized, just the way he wants it–so when he goes back home, everything is just perfect.

Bottom line–and I cannot possibly feel more strongly about this: If you are unwilling to be an advocate for someone who finds themselves in trouble, then you have absolutely no business asking for an advocate when you find yourself in trouble.

You mean people don’t do this?

Paul Graham has, as usual, an excellent article on the art of software development: Holding a Program in One’s Head

I have only one question. In his article he writes:

You can magnify the effect of a powerful language by using a style called bottom-up programming, where you write programs in multiple layers, the lower ones acting as programming languages for those above. If you do this right, you only have to keep the topmost layer in your head.

You mean people don’t do this?

In my mind this is the first Design Pattern: the most important one, from which all other Design Patterns derive and to which they pay homage. It was the most important thing I learned in college when studying networking models. It was the key to my understanding when I started writing software for the Macintosh under System 1. (Yes, I’m that old.) It helped me crack object-oriented design. This design pattern–that you develop software in layers, with each layer supporting the one on top of it–is so ingrained in my mind that I cannot think of writing software any other way.

You can see a beautiful example of this approach in the TCP/IP networking stack: each layer of that stack, from the hardware to the application layer, is designed to do a very simple and predictable job, and to support the layer on top. The magic of sending a packet of data reliably across the world in an error-prone environment works because of a stack of components, where each component is designed to do exactly one thing and do that one thing well–and each component relies upon the services provided by the layer below it.

My first parser was built this way: I wrote a layer of software that did nothing but tokenization, on top of a layer that did nothing but identify comments and strip them out of the input stream, on top of a layer that did nothing but read bytes and track the current line number. And those layers sit on top of an operating system call that is itself built in layers: from a layer that does nothing but read blocks from the disk and spool them out in chunks at the request of the application, to a layer that does nothing but communicate with the hard disk via a simple protocol, to a layer in the hard disk responsible for stepping the read-head motor and interpreting the magnetic pulses on the platter.

And on top of this tokenization layer was another layer which did nothing but read in and interpret the tokens representing the top level constructs of the language, which then called subroutines which did nothing but handle the language constructs for small pieces of the language.
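The layering described above can be sketched in Java. The class names here are mine–a toy, not the actual parser–but the shape is the same: each layer does exactly one job and relies only on the layer below it.

```java
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

// Bottom layer: reads characters and does nothing but track the line number.
class LineCountingReader {
    private final Reader in;
    int line = 1;
    LineCountingReader(Reader in) { this.in = in; }
    int read() throws IOException {
        int c = in.read();
        if (c == '\n') line++;
        return c;
    }
}

// Middle layer: strips '#'-to-end-of-line comments, nothing else.
class CommentStrippingReader {
    private final LineCountingReader in;
    CommentStrippingReader(LineCountingReader in) { this.in = in; }
    int read() throws IOException {
        int c = in.read();
        if (c == '#') {
            while (c != -1 && c != '\n') c = in.read();  // skip the comment
        }
        return c;
    }
}

// Top layer: groups characters into whitespace-separated tokens.
class Tokenizer {
    private final CommentStrippingReader in;
    Tokenizer(CommentStrippingReader in) { this.in = in; }
    List<String> tokens() throws IOException {
        List<String> result = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        int c;
        while ((c = in.read()) != -1) {
            if (Character.isWhitespace(c)) {
                if (cur.length() > 0) { result.add(cur.toString()); cur.setLength(0); }
            } else {
                cur.append((char) c);
            }
        }
        if (cur.length() > 0) result.add(cur.toString());
        return result;
    }
}
```

Each class can be written, understood and tested without looking inside the layer beneath it–which is exactly the point.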

I wrote (and have on my web site) an ASN.1 BER parser; it does nothing but handle reading and writing BER-encoded byte streams. A man (whose name escapes me now because I’m at work–mea culpa) has been in contact with me about writing an ASN.1 parser which uses my BER parser to handle tokenization; I’ve advised him to write code that handles the parsing, validation and interpretation of an ASN.1 specification on top of it. And in some cases I’ve said that the BER parser layer simply isn’t the right place to handle certain features of an ASN.1 parser, such as validation of the input data stream.

See, part of the art of programming is understanding what belongs in an existing layer and what belongs in a new one–just as much as it is understanding which pieces are part of the same functionality, and should therefore live in the same layer.

At its root, each layer should be simple, self-contained, and do exactly one thing and do that one thing well. So if the code you’re working on doesn’t do this, then get up from your computer, go for a walk, and think about how to break it up into the proper layers.

Why Observable/Observer Should DIE!

I’m not talking about the design model here, but the java.util class and interface. From a development standpoint they should die a painful death, and here’s why:

(1) It violates the rule of discovery–that a newbie to the code should be able to easily discover what is going on. If you’re maintaining some code and need to change the object you’re passing to notifyObservers, then–because the object being passed is a generic Object–you have no good way to discover which Observers would be affected. Two different Observers will have the exact same interface, but one may be affected while the other is not. There is no simple search that tells you which observers are observing which observables.
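A minimal sketch of the discovery problem, using the java.util.Observable machinery (now deprecated, which says something); the observer names are illustrative. Both observers implement the exact same update signature, but only one silently depends on the payload’s type–and nothing in the code ties a particular payload to a particular observer:

```java
import java.util.Observable;
import java.util.Observer;

// One observer cares about the payload's runtime type...
class DirtyFlagObserver implements Observer {
    boolean sawString = false;
    @Override public void update(Observable o, Object arg) {
        if (arg instanceof String) sawString = true;  // breaks invisibly if the payload type changes
    }
}

// ...and one ignores the payload entirely. Same interface, different dependencies.
class CountingObserver implements Observer {
    int updates = 0;
    @Override public void update(Observable o, Object arg) {
        updates++;
    }
}

class Document extends Observable {
    void touch() {
        setChanged();
        notifyObservers("dirty");  // just an Object; the compiler can't help you find who cares
    }
}
```

Change that `"dirty"` payload to some other type and the code still compiles–DirtyFlagObserver just quietly stops working, and a text search for “Observer” won’t tell you which one broke.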

(2) Like the dreaded ‘GOTO’, it makes it easy to write spaghetti code. One of the bad habits in Java is to create inline anonymous classes to handle events and state changes–and while this makes the original author’s life easy, it makes it hard to figure out which little ‘classlet’ or snippet of code functionally belongs with which chunk of code. You may have a toolbar with five different buttons which seem functionally related–but the code which handles them may be scattered throughout your program.

This ability to create little anonymous inline classlets of functionality is Java’s equivalent to the GOTO: easily abused, it creates spaghetti code real quick. And observers simply encourage this practice.

(3) It potentially introduces event-order dependencies. An observer should preferably be used to update an unrelated element when a state changes–such as highlighting the ‘window dirty’ icon in a document window. But in many cases I’ve encountered, the observer is instead used as a sort of “cross-cutting” functionality enabler–with the cross-cut functionality added in a way which creates hard-to-understand dependencies.

I’ve got one class here where the notification creates a state change which later code in the observable depends upon. In other words, the observable depends upon the state of the observer to change before the observable can continue: fail to register the observer, and the observable fails.
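Here is a contrived sketch of that failure mode–the class names are mine, not from the code in question. The observable’s own method cannot complete unless an observer has mutated shared state first; forget to register the observer and the observable blows up:

```java
import java.util.Observable;
import java.util.Observer;

class Settings {
    String connectionUrl;  // filled in by an observer, as a side effect
}

// The "cross-cutting" observer the observable secretly depends on.
class SettingsLoader implements Observer {
    private final Settings settings;
    SettingsLoader(Settings settings) { this.settings = settings; }
    @Override public void update(Observable o, Object arg) {
        settings.connectionUrl = "jdbc:example://host";  // the state change open() relies on
    }
}

class Session extends Observable {
    private final Settings settings = new Settings();
    Settings settings() { return settings; }

    String open() {
        setChanged();
        notifyObservers();  // hope somebody registered an observer that fills in settings...
        if (settings.connectionUrl == null)
            throw new IllegalStateException("no observer filled in the URL");
        return settings.connectionUrl;
    }
}
```

Nothing in Session’s code says it needs a SettingsLoader; the dependency only exists at runtime, in the registration order–a data-driven GOTO, exactly as described above.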

It’s the dreaded GOTO all over again, except worse: it’s data driven GOTOs…

Like perpetual motion machines and anti-gravity…

After reading this article on Slashdot: Hiring Programmers and The High Cost of Low Quality, while simultaneously attending an internal meeting where they discussed a reorganization and promised to “execute” on a new project using “agile development methodologies”, I had an epiphany.

Software development methodologies such as “Agile” and “XP” and “Scrum” are all attempts to build perpetual motion machines: they are attempts to reverse the laws of physics discovered forty years ago–especially the observation that the productivity difference between an excellent engineer and a poor engineer can be more than a factor of 25. Each of them promises management that it can overcome the difficulty of finding good programming talent by imposing some sort of order. And each of them ultimately either fails–or at best allows a company to limp along.

Anti-Patterns: Roll Your Own

Not quite the class inheritance error I had originally planned to write about, but something that is currently biting me in the ass.

The project I’m working on uses a Java Swing table. The way they use the table is, um, “unique” in that “Oh My God What Were They Thinking!” sort of a way. The fundamental problem here is that rather than using the existing Swing class and interface hierarchy in order to create a new table model to feed a JTable object, they instead rolled their own code that sort of kinda uses the JTable stuff, but in their own unique way.

They have their own ‘model’ object, for example, which doesn’t have anything to do with the TableModel interface in Swing, and which uses their own ColumnSpecification class–which again has nothing to do with the TableColumnModel interface in Swing. Perhaps once I wade through this pile of code I’ll find that their way is more streamlined and more efficient for the task at hand. But right now, none of it makes any sense. I even found one case where they appear to be using a table model object as a ‘flyweight’ table object–which makes no sense, as there are only two tables here.
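For contrast, the accepted Swing approach is small: implement TableModel–usually by extending AbstractTableModel–and hand it to a JTable. The columns and data here are made up for illustration:

```java
import javax.swing.table.AbstractTableModel;

// The Swing-intended shape: the model answers questions, JTable does the rest.
class AccountTableModel extends AbstractTableModel {
    private final String[] columns = {"Name", "Balance"};
    private final Object[][] rows = {
        {"Checking", 1200.50},
        {"Savings", 9800.00},
    };

    @Override public int getRowCount()              { return rows.length; }
    @Override public int getColumnCount()           { return columns.length; }
    @Override public String getColumnName(int col)  { return columns[col]; }
    @Override public Object getValueAt(int r, int c){ return rows[r][c]; }
}
```

A table built with `new JTable(new AccountTableModel())` picks up rendering, selection and editing through the standard interfaces–and the next maintainer can look all of it up in Sun’s tutorials instead of reverse-engineering a private class hierarchy.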

At home I’ve been tinkering with a piece of code which creates a database connection to a remote SQL server and runs a whole bunch of queries in parallel. I came up with what I thought was an extremely clever way to queue requests into a pooled connection manager, which would pull my requests off and run them by translating each into a SQL query on one of several pooled background threads.

I plan to rip it all apart.

Why? Because there is an accepted way to do J2EE cached connections and cached prepared statements–and while my design is probably fairly clever, it also doesn’t do things the way it’s normally done in J2EE world with a DataSource object obtaining connections from the database.
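A sketch of that accepted shape, assuming a DataSource configured elsewhere (via JNDI or a container-managed pool); the class, table and column names here are hypothetical. Each call borrows a connection from the pool and the try-with-resources block hands it back when done:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// The standard J2EE shape: ask the DataSource for a Connection, use it,
// and let close() return it to the pool rather than tearing it down.
class CustomerQueries {
    private final DataSource dataSource;

    CustomerQueries(DataSource dataSource) { this.dataSource = dataSource; }

    List<String> loadNames() throws SQLException {
        List<String> names = new ArrayList<>();
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement("SELECT name FROM customer");
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                names.add(rs.getString(1));
            }
        }
        return names;
    }
}
```

No clever thread-and-queue machinery in sight–the pooling lives behind the DataSource, where every other J2EE maintenance programmer will expect to find it.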

And that’s the anti-pattern: rather than sorting your way through a bunch of puzzling interfaces and using the accepted pattern the designers of a class expected you to use–when writing SQL queries, or when creating JTables in Swing–you decide to roll your own. Your creation may be quite solid and impressive, but because it’s not the standard way of doing things, the next maintenance developer to come along will bang his head against a brick wall trying to sort out what you did–while you move on to create new ‘self-rolled’ anti-patterns elsewhere.

For me, what should have been a 30 minute change to their source base–adding two new columns which can be dynamically turned on and off as needed–has turned into a three day nightmare of wading through foreign (and completely unnecessary) class hierarchies and odd-ball problems, such as the apparent abuse of the fireTableStructureChanged() method to indicate a request to change the structure rather than a notification that the structure has changed.
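For the record, the gestalt of fireTableStructureChanged() is notification after the fact: mutate the model first, then fire, and the JTable catches up. A minimal sketch–the dynamic-column handling here is invented for illustration, not taken from the code base in question:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.swing.table.AbstractTableModel;

// fireTableStructureChanged() as Swing intends it: a notification that the
// structure HAS changed, never a request to change it.
class DynamicColumnModel extends AbstractTableModel {
    private final List<String> columns = new ArrayList<>(Arrays.asList("Name"));

    void addColumn(String name) {
        columns.add(name);                // 1. change the structure first...
        fireTableStructureChanged();      // 2. ...then tell any attached JTable
    }

    @Override public int getRowCount()             { return 0; }
    @Override public int getColumnCount()          { return columns.size(); }
    @Override public String getColumnName(int col) { return columns.get(col); }
    @Override public Object getValueAt(int r, int c) { return null; }
}
```

Turning the columns on and off lives entirely in the model; the view just listens–which is what would have made this a 30-minute change.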

Many years ago Apple published a technical note describing this very problem. They called it a failure to understand not just a particular interface, but the “Gestalt” of that interface. You may be using the interface in a way which accomplishes what you want–but if you don’t use it in the way it was intended to be used (because you don’t understand its gestalt), you may be screwed heavily later on down the road.

Sun provides a number of Java tutorials on the different Swing components–including tutorials on the JTable class. What these tutorials are good for is not learning how to build a table–someone could, in theory, roll their own table class without ever touching JTable–but learning the gestalt of the existing table implementation.

And even though you may not run into dire problems by not understanding the gestalt of an interface (Apple’s tech note dealt more with outright interface abuse, such as reaching into a memory handle and fiddling with bits that were documented as private–but documented nonetheless), you will make the maintenance programmer’s life (or your own, six months or a year later) a living hell.