It’s not done until you document your code.

I remember the original “Inside Macintosh.” I actually still have the loose-leaf binder version of “Inside Macintosh” that shipped with System v1.

The original Inside Macintosh documented the “Macintosh Toolbox” (the acronym “API” wasn’t in common use then), and, aside from two introductory chapters–one which described the OS and one which documented a sample application–each chapter followed the same formula. The first part of the chapter, anywhere from one to a dozen pages, would provide an overview of that manager. For example, the “Resource Manager” overview describes what resources are, why resources are important, and how resources are stored on disk. The second part of the chapter would always be “Using the XXX Manager”–giving examples which generally followed the pattern of how you initialized that manager, how to create or manipulate its fundamental objects, how to dispose of those objects, and how to shut the manager down. This made up the bulk of the chapter. And the chapter would end with a summary–generally a summary of the header file for that manager.

It always struck me that such a model was a great way to handle documentation. Start with a one-to-three-page introduction to whatever fundamental module you are documenting–a “module” being a logical unit of functionality, which could span several classes. Then launch into a twenty-page document showing how you use that module: how you start it up, how you use the major features, how you configure the major features, how you shut it down. And give snippets of code showing how each of these is done.

And the summary should point to the generated JavaDoc or HeaderDoc documentation giving the specific calls and the specific parameters for each call.
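In Java terms the summary can even live next to the code: a package-info.java file holds the short overview, and the generated JavaDoc holds the per-call detail. A minimal sketch of what I mean (the package and class names here are hypothetical):

/**
 * The Resource Manager: loads, caches, and writes named resources.
 *
 * <p>Typical lifecycle: create a {@link ResourceManager}, fetch resources
 * by name, then call {@code close()} when you are done.
 *
 * <pre>
 *   ResourceManager mgr = new ResourceManager(new File("app.rsrc"));
 *   Icon icon = mgr.getIcon("trash");
 *   mgr.close();
 * </pre>
 */
package com.example.resources;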

What is interesting about such a model is that it should be fairly easy for the technical person creating the toolset to write: he knows how he wants his class set to be used, so he should be able to craft documentation describing how to use it. For a class which presents a table, for example, the developer has a mental model of how tables are displayed: a delegate or data source is created which responds to certain calls and is attached to the table.
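As a sketch of that contract (all names hypothetical; the shape loosely mirrors the delegate/data-source pattern Cocoa’s table views use), it practically writes itself once the developer has that mental model:

// A hypothetical data-source contract for a table component: the
// developer writing the table already knows this is how he wants
// callers to feed it rows.
interface TableDataSource {
    int getRowCount();                       // how many rows to show
    Object getValueAt(int row, int column);  // the datum for one cell
}

// The table pulls everything it displays from its data source.
class Table {
    private TableDataSource dataSource;

    void setDataSource(TableDataSource dataSource) {
        this.dataSource = dataSource;
    }

    // ...when redrawing, the table calls dataSource.getRowCount() and
    // dataSource.getValueAt(row, column) as needed.
}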

It has always been my opinion that writing code is simply one part of a multi-step process which ultimately results in your code being used. After all, isn’t the whole point of creating code to get people to use it? Developers rail against “lusers” who refuse to learn how to use their computers–but I suspect it’s because developers know their fellow developers are more likely to work through the source kit than the average person, and it allows them the excuse not to write the documentation they should write.

Your code isn’t finished until it is well documented. And complaining about people who are confused without good documentation is simply shifting the blame.

Come on, guys; spell check!

Just saw a résumé cross my desk today, with “JavaScript” spelled “JAVA Scripts.” (Yes, from the context of the sentence, the guy was clearly referring to JavaScript.)

Naturally I bounced it.

I’m a terrible speller. My English skills are at best “okay.” But come on: if you’re going to send off a résumé to impress a hiring manager in order to get a higher-paying job, at least spell the damned technology correctly!

Women in Computer Science

“Typical” computer science workspaces off-putting to women

This is something that has concerned me greatly. In part because my wife (who graduated from Caltech and who is smarter than I am at mathematics) would do quite well as a software developer–and she won’t touch the industry with a ten-foot pole. And in part because, underneath it all, there is a definite culture which I do not like, which I tolerate because it is the price I have to pay in order to do the work I love.

The study quoted in Ars only covers the outward displays of a culture: science fiction memorabilia, snack food. But I suspect the problem runs much deeper than the outward signs.

The bottom line is that I have yet to work for any computer software development firm which didn’t have a strong whiff of adolescent college-boy frat house hanging in the air–in terms of the interaction of team members, in terms of project planning (and putting out fires), and even in terms of the way projects are managed (like “scrum”, a term that comes out of Rugby).

No one thing, of course, is at fault: I suspect if it were just a matter of one too many models of the Enterprise sitting in the corner, or someone using baseball terms to discuss a project, most women would be just as happy to ignore the occasional infraction against good taste. But it’s the overall culture that creates the problem.

And I don’t know how you change it.

I will note, however, that it’s not a lack of intelligence or a difference in education or something intrinsic about women–outside of a lack of willingness to put up with an adolescent male frat-club culture: the majority of software developers writing the software for the Space Shuttle are women. I strongly suspect the three things that make working on the Shuttle appealing are (a) a lack of “cowboy” programmers (with their ‘dick length’ contests), (b) complete predictability in the workday (and no “burn the midnight oil” rush sessions, since rushing kills astronauts), and (c) a culture of review and oversight that treats failure as a fault in the process, rather than seeking to assign blame to individuals who fail to be “cowboy” enough.

My goal with my own development group is to lead through teaching and by establishing an example–and the example I want to set is one of predictability (through proper prior planning) and one where no one is expected to “rush.” Next week I’m putting together essentially “course material” on how to write code within our project, on the theory that anyone who is interested in technology and who can write Java and use Eclipse can “turn the crank” without having to burn the midnight oil or be a self-directed “cowboy.”

We’ll see how this theory works in practice.

But I do know I completely despise the frat-house atmosphere at most software development companies. (It’s why I hated college: I loved the classes, I hated the frat-house college culture.) And if I can play a small role in banishing part of it in my own little corner of the universe, I will be a happy person.

User Interface Design Anti-Pattern: Pecked to death by ducks.

Here’s a user interface design anti-pattern I just ran into.

So I’m trying to complete my company’s on-line sexual harassment training program. I guess they teach you how to be better at engaging in sexual harassment; I dunno. And the reason I don’t know is that when I logged in to the program using Safari on the Macintosh, I got the error message:

“We’re sorry but your browser is not supported. We support Firefox and Internet Explorer.”

So I called up my copy of Firefox on the Macintosh, and got:

“We’re sorry, but your operating system is not supported. We only run on Windows XP, Windows 2000 or Windows Vista.”

Okay, so I called up my trusty copy of Parallels, launched Windows XP, and got:

“We’re sorry, but the training program requires RealPlayer or Windows Media Player.”

Which I’m now installing.

None of the error landing pages mentioned the subsequent requirements. Instead, each was just a terse error message, undoubtedly arrived at via pseudocode like:

if (browser not in approved list) then
    print error "browser not in approved list"
    exit
else if (operating system not in approved list) then
    print error "operating system not in approved list"
    exit
else if (player plugin not installed) then
    print error "plugin not installed"
    exit
else ...

How stupid is this anti-pattern? I got pecked to death by ducks–making the whole on-line experience quite painful.

And it would have been so easy instead to write something like:

list missing = new list;
if (browser not in approved list) then
    add "browser not in approved list" to missing
end
if (operating system not in approved list) then
    add "operating system not in approved list" to missing
end
if (player plugin not installed) then
    add "plugin not installed" to missing
end
...
if (missing is not the empty list) then
    show missing on error page
    exit
end

That way you can display all of the missing problems to the user at once. It’s not as if each test couldn’t have been run in quick succession, with the user then warned about all of the missing requirements in one go.
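Rendered as a minimal Java sketch (the individual checks here are hypothetical stubs), the pattern is simply “accumulate, then report”:

import java.util.ArrayList;
import java.util.List;

class RequirementsCheck {
    // Run every check and return all of the failures, not just the first.
    List<String> findMissingRequirements() {
        List<String> missing = new ArrayList<String>();
        if (!isBrowserSupported()) missing.add("browser not in approved list");
        if (!isOsSupported()) missing.add("operating system not in approved list");
        if (!isPlayerInstalled()) missing.add("media player plugin not installed");
        return missing;  // an empty list means the user is good to go
    }

    // Hypothetical environment checks: the real versions would sniff the
    // user agent, the operating system, and the installed plugins.
    private boolean isBrowserSupported() { return true; }
    private boolean isOsSupported() { return true; }
    private boolean isPlayerInstalled() { return true; }
}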

Better still, land the person on a succinct “requirements” page: first list the stuff that is missing, then list, in simple bullet form, the specific requirements of your product.

The rules for the pattern that should be applied here are:

  • Make what the user is missing clear
  • Make the requirements obvious
  • Make the corrective actions the user should take immediately obvious

And don’t peck your users to death.

Two Views of Design

After watching Objectified again a few days ago, I started formulating an idea about the concept of “Design”.

It seems to me there are two separate concepts of “Design” that have become mashed together.

The first concept, which to me is interesting but somewhat useless, is the idea of design as art or artistic expression. Plenty of the designs that go by on Dornob are really art posing as design. Many of the “designs” featured there are either making statements (such as the Nuclear War Shelving Units) or demonstrating technique (such as the laser-cut custom wall shelves), and are ultimately, if not useless, at least impractical. Don’t get me wrong: I love these things. I love the floating bed concept, and the articles on vintage futurism are always very cool.

The second concept of design is a functional one. It is the idea of design as obsessively thinking a problem through–using various design patterns to decompose a problem (which inherently starts as a blank slate) and to construct and test solutions which work even on the edge cases.

This sort of design is illustrated in Objectified by the story of the Japanese-inspired toothpick, and by the description of various design approaches (the formal decomposition of a problem, the cultural or contextual symbolism of an item, and looking at the object within the larger contextual framework where it is placed or used). It’s described by Jonathan Ive when he discusses the design of the iPhone, and the making of jigs to help cut a single component out of a block of aluminum. It’s shown when Bill Moggridge describes his early laptop design’s “pencil kick” component, which kicks small objects out of the hinge.

It is this second concept of design which fascinates me–because to me, it’s about thinking the problem through.

Within my own realm, I’m fascinated by user interface design–which encompasses the “language” of human interaction (the gestures and symbols used to consistently communicate with the computer), the functional decomposition of a problem set into a hierarchy of components in that representation, and the subtle (and not-so-subtle) cues that we use to guide a user to the relevant bits of information.

The current project I’m working on has two pieces–and I find it quite exciting. The first piece is a consumer-facing interface–it gives ad statistics to our advertisers. It’s an interesting project because the primary thing we’re trying to communicate is the power of our system and its usefulness in bringing in potential customer leads.

The second, and to me far more interesting, component is our back-end administrative console. It is more interesting because, unlike the front end, whose purpose is in part artistic expression–making the data useful in a pretty way–it is a far more functional interface.

And functional interfaces are exciting to me, because they require bringing the full gamut of design tools to communicate the functional components of the interface in a way that is completely unobtrusive. It’s a thousand little decisions like the status light on the front of a MacBook Pro: when you need the information it’s there, almost as if it should be there, almost as if it was always there–but when it’s not needed, it quietly disappears: the calm, considered solution and not constant reminders of “the terrible struggles that we as designers and engineers had in trying to solve some of the problems”, as Jonathan Ive put it.

Nouns, Verbs and User Interface Design.

Excuse my ramblings.

Here’s a common mistake.

Say I’m building a client/server application. In attempting to understand the problem set, I first design the back-end database layout, the data flow between systems, and the transformations (procedures, program statements) which perform the tasks needed to achieve my system design.

The transformations are then mapped onto UI elements, so that the user can trigger them by pressing a button or selecting a menu item, with their inputs and outputs displayed in tables or custom views.

And we can call it a day, right?

Dear God, NO!!!

Of course you still need to design the back-end of the system, and how the data flows within the system. But users are not computer programs: most of us don’t think in terms of a sea of transformations that apply against different data sets and may or may not be grouped into larger transformation operations.

We think in terms of nouns, verbs and adjectives.

And user interfaces–at least when they’re done right–are not built in terms of transformation operations, but built in terms of the “parts of speech” that we understand, with the suitable “verbs” presenting themselves alongside the “nouns”.

I’m arguing for a deconstructionist perspective of design here: your interface will probably break down into a hierarchy of objects, within which you can discover the sub-objects and components which make up each object through “disclosure”: for example, double-clicking on a thing to open it, or pressing the disclosure button to reveal the sub-items in a list of items.

Actions that are appropriate to that object should be discoverable as close to that object as possible. The “verbs” (transformations) suitable for that object or its contained objects can be shown by updating a menu bar or by having a small set of buttons associated with that item. In a sense, it’s a matter of balancing the aesthetic of a minimal design with the need for discoverability: both work in favor of the user–with minimalism de-cluttering the screen of useless information, and discoverability showing the user (with subtle hints) how to perform an operation.

By associating the verbs with the nouns–the legal transformations with the objects they transform–we immediately move commands out of a global menu bar (where it’s unclear what may apply and why) and onto the objects that they operate on.
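As a minimal sketch of the idea in Java (all names hypothetical; Swing is used only because it’s handy), each noun carries its own verbs, and the UI asks the selected object what it can do:

import java.util.List;
import javax.swing.Action;
import javax.swing.JPopupMenu;

// Every "noun" in the interface reports the "verbs" that apply to it.
interface Noun {
    List<Action> getVerbs();  // e.g. Open, Rename, Delete
}

// Build the menu for a selected object from its own verbs, rather than
// hunting through a global menu bar full of disabled items.
class VerbMenuBuilder {
    JPopupMenu menuFor(Noun selected) {
        JPopupMenu menu = new JPopupMenu();
        for (Action verb : selected.getVerbs()) {
            menu.add(verb);  // only verbs that actually apply appear
        }
        return menu;
    }
}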

I like the [Edit] button on the iPhone and in the Address Book on the Macintosh: it helps balance the need for discoverability (how do I edit an entry? Hit the edit button) with minimalism (there aren’t a thousand edit boxes and icons on my main display confusing me). It adds a click to changing a record, of course–but the flip side is that this one click–this one moment in the user’s life–immediately makes it obvious what to do and how.

I’m also starting to resent menu bars in general. In a perfect world they hold the common verbs applicable to all (or almost all) nouns in the noun space of the application. But unfortunately most UI programmers use them as a dumping ground for noun-specific transformations, leaving the user to wonder why there are these twelve things that are always disabled. Most applications get away with a menu bar because most applications operate on a very small, uniform set of nouns. But with a more complex noun-space, things become very messy very quickly.

It’s also why I’m very picky about the wording of menu items and the descriptions of objects in the documentation. Menu items should always be verbs: “Cut”, “Copy”, “Paste”, “Clear”, “Open”, “Close”, “Quit” are all direct verbs: “I copy this icon”, “I quit this application.” There are places where this is violated, and it sort of bothers me: in Safari the menu “User Agent” should in my opinion be “Set User Agent” (dropping the implied “to”). Of course sometimes the implied verb may be dropped, but I’ve never been quite comfortable with that: I’d rather have a sub-menu of nouns buried under a menu item with the verb–using the submenu noun list as a modifier for the verb. (So: “Open >” pointing to a list of previous files, rather than a bare list of previous files in the menu itself.)

Of course, the idea is not to be strict about this–but to be clear: a new user should know that if he selects a file, it will open. Microsoft broke the contract of menu items as verbs with its quick-open list in the File menu, and we’ve adopted it even though it’s bad UI design. But now everyone knows that “Open” is an implied verb, as is “Set”. *meh*

Overall I think the design of an application should be done both from the bottom up (data structures, then transformations, then presentation) and from the top down (user interface, then nouns and verbs, then a mapping onto data structures and transformations). If we were to practice better design–and part of that is understanding the taxonomy of the nouns and verbs we are presenting to end-users–we could make users’ lives much easier.

And, as Apple has shown repeatedly with its Macintosh systems, iPhones and iPods, users will happily pay lots of serious money for systems which they perceive as easy to use.

Why good functional design now will save you a mint later.

Via BoingBoing: Gadget problems divide the sexes

The service found that 64% of its male callers and 24% of its female callers had not read the instruction manual before ringing up.

12% of male and 7% of female customers simply needed to plug in or turn on their appliance.

This is why good design–site design, software design, gadget design–is essential: because roughly 45% of your users (averaging the male and female figures above) won’t read the manual before calling your customer support line.

Minor point of departure.

If you’re a developer, you probably have already read it, or read about it: The Duct Tape Programmer. There are some really good points in this article–but it’s clear that the article itself was written by someone who is not a duct tape programmer.

What differentiates a duct tape programmer from an Architecture Astronaut is pragmatism and simplicity:

You see, everybody else is too afraid of looking stupid because they just can’t keep enough facts in their head at once to make multiple inheritance, or templates, or COM, or multithreading, or any of that stuff work. So they sheepishly go along with whatever faddish programming craziness has come down from the Architecture Astronauts who speak at conferences and write books and articles and are so much smarter than us that they don’t realize that the stuff that they’re promoting is too hard for us.

Architecture Astronauts proclaim with absolute certainty whatever it is they are promoting. Duct tape programmers say “shut the hell up and get this out the door.” Your toolkit is just that: a toolkit; there is no point in tossing it out the door for new tools simply because they are the new hotness.

So how does Joel Spolsky characterize duct tape programmer thinking? By being an Architecture Astronaut:

Duct tape programmers tend to avoid C++, templates, multiple inheritance, multithreading, COM, CORBA, and a host of other technologies that are all totally reasonable, when you think long and hard about them, but are, honestly, just a little bit too hard for the human brain.

“Stop it, just stop.”

Each of these has its place, if your first desire is to ship. Me, I think if you treat C++ as C with classes, avoid templates (unless it’s an STL object, ’cause using std::vector is easier than rolling your own), and for the most part avoid exceptions (because the rules for exceptions can get really convoluted), then you should be fine. C++ is a tool, like Java, like C, like Objective-C: C++ has more potential pitfalls, but you can simply steer around them. Multiple inheritance also has its place: it’s what you do when you accidentally code your class tree into a circle and don’t have time to renormalize things into a well-defined class hierarchy. Multi-threading is only useful when dealing with networking, or when you put the thread handling into a thread queue and can just create a well-defined process and toss it in–and if you want to ship on Windows, you have to deal with COM.

And if we’re going to complain about CORBA, can we also complain about SOAP–which is not simple, despite the name? And can we complain about all the additional crap (like introspection) being tossed into the base XML-RPC specification?

The two overriding rules are “just ship the damned thing” and “sometimes life is messy.” Towards the first–“just ship the damned thing”–it means don’t be afraid to use yesterday’s well-defined tools rather than today’s cutting-edge experimental crap. It means avoiding the well-known pitfalls in the tools: if you treat C++ as C with objects, it will be far better than using C and trying to build your own class hierarchy processing. It means not rebuilding what someone else has done for you for free: why use C with objects when you have C++? Why build your own complex-number type or dynamic-array processing or hash map, when there are std::complex and std::vector and java.util.HashMap? And it means sometimes you’ll code something because it’s what will ship, rather than architecting something that is beautiful–which means sometimes you’ll find yourself creating a class which multiply-inherits, or a Java class with half the methods written as one-line delegates. Because it’s ugly–but it will ship. And do you really have time to figure out how to reorganize your class hierarchy into a strict tree representation?

Sometimes it also means knowing where to draw the line when using a toolset. Me, I like C++ but hate templates with a passion, and think exceptions are good if used in a limited way. I like Java because it allows you to write a lot of code quickly, but hate Java because it has allowed lots of people to write lots of code quickly–and in the hands of an Architecture Astronaut it has allowed them to create all sorts of abominations that showcase their theoretical bullshit. (cf: Spring or Hibernate, which have struggled to make a simple problem impossible to solve. Cross cutting? Puhleeeease…) The last time I wrote a Java server, I wrote code directly to the Servlet layer, with a simple bit of parser code to parse XML-RPC. Took me an afternoon to get up and running against an iPhone client.
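For flavor, here is roughly the shape of that approach. This is not my actual code, just a minimal sketch (class and method names hypothetical) using nothing but the servlet API and the DOM parser that ships with the JDK:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class SimpleRpcServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try {
            // Pull the <methodName> out of the XML-RPC request body.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(req.getInputStream());
            String method = doc.getElementsByTagName("methodName")
                    .item(0).getTextContent();

            // Dispatch on the method name; a real server would also walk
            // the <params> element to extract the arguments.
            String result = "ping".equals(method) ? "pong"
                    : "unknown method: " + method;

            // Hand back a minimal <methodResponse>.
            resp.setContentType("text/xml");
            resp.getWriter().write(
                "<?xml version=\"1.0\"?><methodResponse><params><param>"
                + "<value><string>" + result + "</string></value>"
                + "</param></params></methodResponse>");
        } catch (Exception e) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, e.getMessage());
        }
    }
}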

It certainly also means waiting to jump on the next tool bandwagon until it stops being a theoretical playground for bored Architecture Astronaut wannabe weenies. Which is why I’m not on the Ruby on Rails bandwagon, or the Code in the Cloud concept. I like tools that are well-defined and that have mature development environments: one reason it’s easy to write code in Java is that there are many solid IDEs which help you write code (Eclipse, NetBeans, IntelliJ).

Just ship the damned thing already, okay? Remember: shipping is a feature far more important than any other feature in your product.

Why I hate custom protocols over HTTP.

One recent trend is to use HTTP to send data between a client and a server. Between protocols built on top of SOAP and XML-RPC (and yes, I’ve built code on top of XML-RPC, and have a Java XML-RPC library), it’s not all that uncommon to send text commands over HTTP.

And it makes sense: HTTP generally is not blocked by various internet providers while other ports are firewalled, and it is well supported across the ‘net.

As a rule, however, I’m generally opposed to overriding an existing protocol for private use. My instinct is that if it is possible to open a port from the client to the server that is not in use by an existing protocol, then I should use that port instead.

With HTTP, there are a number of downsides. HTTP is essentially a polling protocol: ask a question, wait for an answer, get an answer. A lot of plumbing has gone into working around HTTP’s resulting performance issues–but because it is essentially a polling protocol, there is little you can do to bypass a resource that takes a long time to download besides opening up a second connection. (Protocols like LDAP, by contrast, allow multiple logical connections over the same physical TCP socket.)

HTTP has also become somewhat more complicated over the years, with things like optional keep-alive settings and an array of possible return codes. All of this makes sense if you’re building a web browser (though some of it is a bit over-engineered: I don’t know if 418: “I’m a teapot” is a joke or a sarcastic response to things like 449: “Retry With”), but for a simple RPC protocol, we really don’t need more than “success”/”failure”/”exception.”

And today I learned another thing that just confirms my “don’t override someone else’s protocol; just build your own” instinct.

As designed, the client I’m working on initialized its connection by requesting information about static resources that may have changed. So I’d do an “init” call and wait for a response. As part of the request, the server team specified that I should send “If-Modified-Since” with the date of the last response, so I could tell if I should update the cached response. (This was modified from the original idea, which was to simply use an integer version number.) This client runs on Android, both over WiFi and over the cell network.

You can guess what happened next.

Yes, T-Mobile rolled out a new proxy server to reduce 3G network traffic by automatically detecting and caching server responses, and sending ‘304 Not Modified’ responses on the init call. Well, if you send ‘If-Modified-Since’, you’d better process 304 responses, right?

My client didn’t.

And so, for the 130,000 people running our client software, the application–died. Hard.

The first time you ran the application it would sync up just fine. But the next time you hooked up, T-Mobile would detect that your request hadn’t changed, and would send a 304 response–which the client did not understand; eventually it would shut down, claiming it could not connect to the server.

And we never tested this. Of course we never tested this. Our server never sent the 304 response, so we never had a way to test this. In retrospect, of course, “everyone knows” that if you send an If-Modified-Since header, you should handle the 304 response.

The fix was simple, as all such things tend to be once they are discovered and understood.
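For the curious, the shape of the fix looks something like this: a minimal sketch using java.net.HttpURLConnection, with the cache plumbing left as hypothetical stubs:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

class InitCall {
    void fetchInitData(String initUrl, long lastSyncTime) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(initUrl).openConnection();
        conn.setIfModifiedSince(lastSyncTime);  // sends If-Modified-Since

        int code = conn.getResponseCode();
        if (code == HttpURLConnection.HTTP_NOT_MODIFIED) {
            // 304: our cached copy (or some proxy's cached copy!) is
            // still current, so use it instead of dying.
            useCachedInitData();
        } else if (code == HttpURLConnection.HTTP_OK) {
            InputStream in = conn.getInputStream();
            cacheInitData(in, conn.getLastModified());
        } else {
            throw new Exception("init call failed with HTTP " + code);
        }
    }

    // Hypothetical cache plumbing.
    private void useCachedInitData() { /* ... */ }
    private void cacheInitData(InputStream in, long lastModified) { /* ... */ }
}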

But it would never have happened if we hadn’t overridden the HTTP protocol–a protocol with layers we don’t fully understand (until they break), running on a network which can insert any ol’ proxy (some with bugs we may never understand) between the client and the server.

Words of Scheduling Wisdom.

Maker’s Schedule, Manager’s Schedule

One reason programmers dislike meetings so much is that they’re on a different type of schedule from other people. Meetings cost them more.

There are two types of schedule, which I’ll call the manager’s schedule and the maker’s schedule. The manager’s schedule is for bosses. It’s embodied in the traditional appointment book, with each day cut into one hour intervals. You can block off several hours for a single task if you need to, but by default you change what you’re doing every hour.

When you use time that way, it’s merely a practical problem to meet with someone. Find an open slot in your schedule, book them, and you’re done.

Most powerful people are on the manager’s schedule. It’s the schedule of command. But there’s another way of using time that’s common among people who make things, like programmers and writers. They generally prefer to use time in units of half a day at least. You can’t write or program well in units of an hour. That’s barely enough time to get started.

When you’re operating on the maker’s schedule, meetings are a disaster. A single meeting can blow a whole afternoon, by breaking it into two pieces each too small to do anything hard in. Plus you have to remember to go to the meeting. That’s no problem for someone on the manager’s schedule. There’s always something coming on the next hour; the only question is what. But when someone on the maker’s schedule has a meeting, they have to think about it.

For someone on the maker’s schedule, having a meeting is like throwing an exception. It doesn’t merely cause you to switch from one task to another; it changes the mode in which you work.

If you are a manager, please read the whole thing.

And if you used to be a maker and are now a manager, and you forgot the lessons in this article, then I only have one pleading question: why???