It’s why God created tool tips.

Don’t hide or disable menu items.

This is what separates the User Interface men from the User Interface boys: the suggestion “Instead, leave the menu item enabled. If there’s some reason you can’t complete the action, the menu item can display a message telling the user why” is the most Gawd-awful suggestion I’ve heard in a long time.

The correct answer, of course, is to create a tool tip which pops up over the disabled item and explains why the item is disabled. That way we have instant feedback on what a user can do, and if the user is puzzled, he can easily discover why a particular item isn’t currently enabled.

In Java, you can do this by calling JMenuItem.setToolTipText(): set the tip text to an explanation of why the menu item has been disabled, and the explanation will pop up when the user hovers over the disabled item.
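A minimal sketch of the Java version, with a made-up menu item and message:

JMenu fileMenu = new JMenu("File");
JMenuItem export = new JMenuItem("Export...");
export.setEnabled(false);                        // the command isn't available right now
export.setToolTipText("Export is disabled until a document is open");  // explanation shown on hover
fileMenu.add(export);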

On the Macintosh with Cocoa, you can do this by setting the Tool Tip field for the menu item in the Interface Builder .xib file; to update the text for the disabled menu item at run time, call NSMenuItem’s setToolTip: method.

And even Windows has the ability to create tool tips for disabled menu items, though it takes a hair more work: by watching for WM_MENUSELECT notifications you can then pop up a CToolTipCtrl, or update a status bar text line, showing the appropriate text if the item is disabled.

So as much as I appreciate Joel’s comments on the software development industry, on this particular item, ummmmm… No. I agree more with John Gruber: you’re much better off assuming your users are clever. But if your menu command structure contains commands which are just odd or hard to puzzle out at first glance, tool tips are much better than an idiot modal dialog box. It’s just more elegant.

Java sucks and Objective-C is great? Puuuhhhllleeeeaaassseee…

I still remember the brouhaha raised over Java on the Macintosh, and pronouncements by many of the Macintosh Technorati that Java sucks. (I believe Gruber’s words were “cross-platform crippity-crap Java apps.”)

Judging by all of the various posts I’ve seen, I’d think that Java was a complete wasteland while Cocoa was the rolling green hills of programmer nerdvana.

Okay, that’s fine. I set out to build a new custom application for the retail market, and faced with the various choices for building a cross-platform system, I decided to build a simple data core in C++, with the Mac UI in Objective-C++ and Cocoa and the Windows UI in MFC/C++. (The common data core is there to ensure that data can be easily shared, but it is a very small kernel: right now perhaps about 10% of the overall code base. So I’m not trying the YAAF-lite solution of building a UI framework on top of Objective-C++ or MFC; rather, I’m building the UI twice, with a common set of data accessor routines at the bottom of the stack.)

Nerdvana? Hardly.

Today’s fun-filled afternoon was spent trying to figure out how to do an easy gradient fill for a custom control so it doesn’t have a flat, 2D appearance. You’d think convenience routines would come along with the sheer beauty of the Macintosh UI environment–but not really: the amount of work to build a gradient-filled background was about what I’d expect using the Graphics2D class in Java.
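For what it’s worth, here is roughly what the Java half of that comparison looks like: a panel whose background is a simple top-to-bottom gradient drawn with Graphics2D and GradientPaint. The colors and the class name are made up for illustration.

import java.awt.Color;
import java.awt.GradientPaint;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JPanel;

class GradientPanel extends JPanel {
    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        GradientPaint fill = new GradientPaint(
            0, 0, new Color(0xF0F0F0),              // light at the top
            0, getHeight(), new Color(0xB0B0B0));   // darker at the bottom
        g2.setPaint(fill);
        g2.fillRect(0, 0, getWidth(), getHeight());
    }
}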

And I’ve come to a couple of conclusions while engaging in this little exercise.

(1) The advantages outlined in Jens Alfke’s essay about making superior interfaces only get you halfway through the front door. To make it across the finish line requires a lot of nit-picky detail work that the Macintosh Cocoa libraries only sorta help you with. Sure, there is plenty of support for animation stuff which is really freakin’ cool. But to draw my simple gradient in a way which was portable back to Mac OS X 10.4 (and my business app needs to be supported by the current version and the previous version of Mac OS X) required about 120 lines of code–which, while it took a couple of hours to toss together and test, wasn’t the easy exercise that many Cocoa advocates seem to suggest awaits all who come to the Cocoa big tent.

This isn’t to say that there aren’t advantages to Cocoa over Java Swing or MFC. However, there are many places where Java’s framework has a clear advantage: the JScrollPane class, for example, is a lot more flexible to work with than the NSScrollView class. And don’t even get me started on NSTreeController, though Rentzsch’s comments on enabling NSZombieEnabled were a god-send.

A corollary to this is:

(2) Macintosh applications look great because Macintosh programmers sweat the details, not because the Macintosh environment makes sweating the details any easier than in Java or Windows MFC. It could very well be the Macintosh market: people expect pretty applications, and so a lot of attention goes into making pretty applications on the Macintosh. This attention to detail, however, doesn’t really exist in the Windows market–and don’t even get me started on Enterprise Java applications. (I will agree with Gruber on this: a lot of Java Enterprise Swing applications do look like crap.)

However, this does not mean that the look and feel of Java applications is doomed, any more than it means Cocoa applications are uniformly great–nor does it mean that to get the same level of greatness in a Java application you must spend two or three times the effort of a corresponding Cocoa application. (And it’s not like Cocoa has inherently better support for creating icons like Panic’s beautiful ones, as opposed to the type of 1980s icons you see in Java: nowadays they’re both PNG files.)

This attention to UI detail, by the way, predates Cocoa, and even predates NeXT, as anyone who ever read the original Apple Macintosh Human Interface Guidelines would attest. In fact, if anything, NeXT programmers and Cocoa programmers prior to the Apple merger weren’t exactly producing stellar examples of UI goodness: I’d say the biggest problem that I saw as an outsider when Apple and NeXT merged was having Apple’s attention to UI detail and usability drilled into the NeXT programmers–much to their shock. Even today I’d say that Apple’s attention to UI detail is only about 90% of what it used to be in the Mac OS System 7 days.

And that attention to UI detail wasn’t because Mac OS System 7 was a fantastic object-oriented development environment. (I should know; I was writing software for System 6 and earlier.) It was because you would sweat details like the fact that on System 6, an OK button was offset 14 pixels–not 15, not 13, not 8–from the lower right of the dialog, and was 58 pixels wide, not 57, not 59. (Now I will say that System 6 was superior in one aspect to Microsoft Windows–a superiority which existed for quite a few years: Macintosh QuickDraw’s CopyBits() routine was light-years ahead of any bit blitter available on Windows or on X Windows: it would anti-alias, crop, stretch, extrapolate, and even fold your laundry and put it away for you, giving the Mac OS System 6 platform a clear advantage over Windows 3.1. But Windows and X Windows caught up in that department long ago.)

So to anyone from the Macintosh Technorati who suggests that Cocoa is inherently superior: I’m used to religious wars, even from people who should be intelligent enough to know better.

Oh, and for the curious, I’ve attached my GradientFill class in Objective-C, which uses the CGShading routines to handle my custom control shading. It’s really not as flexible as the 10.5-only NSGradient class, but it does the trick.

So… How many languages can you use?

Let’s see: Java + JSP for the server. C++ for the core desktop and phone engine. C++ for the Windows and WinCE version. Objective-C++ for the Mac and iPhone version.

Sounds just about right for a custom client-server system targeting multiple platforms: it’s about using the right tool for the job, and keeping things as simple as they can be–but no simpler.

Today’s WTF? moment is brought to you by Yahoo!

Today’s WTF? moment, found in some code I’m trying to sort out (specific actual implementation hidden to protect the clueless):

private static void method(ItemInterface item)
{
    ItemImplementation impl;
    if (item instanceof ItemImplementation) {
        impl = (ItemImplementation) item;
    } else {
        return;
    }
    // ... some action on impl ...
}

Here’s the thing. An interface provides an abstraction to hide the implementation, so you can swap implementations around without anyone being the wiser.
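Here’s a minimal sketch of that idea, reusing the ItemInterface name from the snippet above with two hypothetical implementations: the caller works against the interface alone, so either implementation can be swapped in without the caller knowing or caring.

interface ItemInterface {
    String describe();
}

class DatabaseItem implements ItemInterface {
    public String describe() { return "item loaded from the database"; }
}

class CachedItem implements ItemInterface {
    public String describe() { return "item served from a local cache"; }
}

class Client {
    // No casts, no knowledge of the concrete class: either implementation works here.
    static void report(ItemInterface item) {
        System.out.println(item.describe());
    }
}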

If you are “cracking the interface” by casting an interface reference to the underlying implementation, then why the fuck do you have an interface in the first place?

All it means is that you break the possibility of using an interface for what it is supposed to be used for, while preserving the illusion that you have some competence as a programmer.

Feh!

[Update: The code I was referring to was the client of a framework, and the client was casting the interface to the framework’s underlying implementation.]

Anti-Patterns: Roll Your Own

Not quite the class inheritance error I had originally planned to write about, but something that is currently biting me in the ass.

The project I’m working on uses a Java Swing table. The way they use the table is, um, “unique,” in that “Oh My God What Were They Thinking!” sort of way. The fundamental problem here is that rather than using the existing Swing class and interface hierarchy to create a new table model to feed a JTable object, they instead rolled their own code that sort of kinda uses the JTable stuff, but in their own unique way.

They have their own ‘model’ object, for example, which doesn’t have anything to do with the TableModel interface in Swing, and which uses their own ColumnSpecification class–which again has nothing to do with the TableColumnModel interface in Swing. Perhaps once I wade through this pile of code I’ll find that their way is more streamlined and more efficient for the task at hand. But right now, none of the stuff makes any sense. I even found one case where they appear to be using a table model object as a ‘flyweight’ table object–which makes no sense, as there are only two tables here.
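For reference, the accepted route is tiny by comparison: subclass AbstractTableModel, answer a handful of questions about rows and columns, and hand the model to a JTable. A minimal sketch with made-up columns and data:

import javax.swing.table.AbstractTableModel;

class OrderTableModel extends AbstractTableModel {
    private final String[] columns = { "Order #", "Customer", "Total" };
    private final Object[][] rows = {
        { 1001, "Acme Corp.", 250.00 },
        { 1002, "Widgets Inc.", 99.95 },
    };

    public int getRowCount()                      { return rows.length; }
    public int getColumnCount()                   { return columns.length; }
    public String getColumnName(int column)       { return columns[column]; }
    public Object getValueAt(int row, int column) { return rows[row][column]; }
}

// Elsewhere: JTable table = new JTable(new OrderTableModel());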

At home I’ve been tinkering with a piece of code which creates a database connection to a remote SQL server and runs a whole bunch of queries in parallel. I came up with what I thought was an extremely clever way to queue requests into a pooled connection manager, which would pull my requests off and run each one by translating it into a SQL query on one of several pooled background threads.

I plan to rip it all apart.

Why? Because there is an accepted way to do J2EE cached connections and cached prepared statements–and while my design is probably fairly clever, it doesn’t do things the way they are normally done in the J2EE world, with a DataSource object handing out connections to the database.
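The accepted way looks roughly like this sketch, with a hypothetical JNDI name and query: the container owns the connection pool behind the DataSource, and the code simply borrows a connection, runs a prepared statement, and hands the connection back.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

class CustomerQueries {
    static String lookupCustomerName(int id) throws Exception
    {
        DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/ReportDB");
        Connection conn = ds.getConnection();        // borrowed from the container's pool
        try {
            PreparedStatement stmt = conn.prepareStatement(
                "SELECT name FROM customer WHERE id = ?");
            stmt.setInt(1, id);
            ResultSet rs = stmt.executeQuery();
            return rs.next() ? rs.getString("name") : null;
        } finally {
            conn.close();                            // hands the connection back to the pool
        }
    }
}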

And that’s the anti-pattern: rather than sort your way through a whole bunch of puzzling interfaces and use the accepted pattern that the designers of a class expected you to use when writing SQL queries or creating JTables in Swing, you instead decide to roll your own. Your creation may be quite solid and impressive–but because it’s not the standard way to do things, the next maintenance developer to come along will bang his head against a brick wall trying to sort out what you did, while you move on to create new ‘self-rolled’ anti-patterns elsewhere.

For me, what should have been a 30-minute change to their source base–adding two new columns which can be dynamically turned on and off as needed–has turned into a three-day nightmare of wading through foreign (and completely unnecessary) class hierarchies and odd-ball problems, such as the apparent abuse of the fireTableStructureChanged() method to indicate a request to change the structure rather than a notification that the structure has changed.
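The intended order of operations is mundane; a sketch, assuming the model extends AbstractTableModel and keeps a hypothetical showExtraColumns flag:

// Inside the table model (an AbstractTableModel subclass):
public void setExtraColumnsVisible(boolean visible)
{
    this.showExtraColumns = visible;      // 1. actually change the column structure...
    fireTableStructureChanged();          // 2. ...then notify listeners that it has already changed
}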

Many years ago Apple had a technical note describing this very problem. They called it a failure to understand not just a particular interface, but the “gestalt” of that interface. You may be using the interface in a way which accomplishes what you want–but if you don’t use the interface in the way it was intended to be used (because you don’t understand the gestalt of that interface), you may be screwed heavily later on down the road.

Sun provides a number of Java tutorials on the different Swing components–including tutorials on the JTable class. What these tutorials are good for is not learning how to build a table–someone could in theory roll their own table class without ever touching the JTable class–but learning the gestalt of the existing table implementation.

And even though you may not run into dire problems by not understanding the gestalt of an interface (Apple’s tech note was dealing more with interface abuse, such as reaching into a memory handle and fiddling with the bits there in a way which was documented as private–but documented nonetheless), you will make the maintenance programmer’s life (or your own, six months or a year later) a living hell.

Anti-patterns: improper inheritance.

While examining some other code I’ve encountered a bunch of stuff which makes me wonder what they’re teaching people in school nowadays. Rather than quietly complain about the problem to my co-workers, I decided instead to use this forum to rant.

Today’s anti-pattern: improper inheritance.

The problem: A class ‘ClassA’ is defined which contains two features–call them ‘feature 1’ and ‘feature 2’. A class ‘ClassB’ needs to adopt ‘feature 1’ in ‘ClassA’, but also add new features which are somewhat orthogonal to the functionality in ‘ClassA’, ignoring ‘feature 2.’

When a software maintainer comes around to change ‘feature 2’ in ClassA, he finds that ClassB breaks–ClassB was written in such a way as to hide or never initialize ‘feature 2’ within ClassA in the first place.

Why this happened: The software developer writing ClassB saw some of the code he wanted to reuse, but rather than deal with the problem of properly abstracting ‘feature 1’ out of ClassA, he instead took advantage of subclass-level visibility of the internal fields in order to bypass ‘feature 2’.

What the developer should have done: take ‘feature 1’ out of ClassA and create an abstract super-class ‘ClassC’, then rewrite ‘ClassA’ and create ‘ClassB’ to inherit from ‘ClassC.’ That way it’s clear that the functionality in ‘feature 2’ isn’t used in ClassB, and ClassB doesn’t have to rely upon supernatural knowledge of ‘ClassA’ which could be broken in an upgrade or bug fix.
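In code, the refactor looks something like this sketch, using the placeholder names above and reducing the ‘features’ to empty methods:

// 'feature 1' lives in an abstract superclass...
abstract class ClassC {
    void featureOne() { /* shared behavior both subclasses want */ }
}

// ...ClassA layers 'feature 2' on top of it...
class ClassA extends ClassC {
    void featureTwo() { /* the behavior ClassB never wanted */ }
}

// ...and ClassB adds its own orthogonal functionality without ever touching feature 2.
class ClassB extends ClassC {
    void somethingOrthogonal() { /* new, unrelated functionality */ }
}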

What this illustrates: a class in object-oriented design should be a self-contained unit of functionality. If a class is well designed, it should have well-designed extension points and a feature set which acts as a single unit.

When a class is well designed, other classes do not have to rely upon supernatural knowledge of its implementation in order to operate correctly; the implementation–while relatively transparent, so that users can know how the black box should operate–should be treated as if it were a black box.

When a subclass has to rely upon supernatural knowledge, knowing which calls touch certain internal states related to a feature that the subclass doesn’t want to enable or use, that clearly shows the need to refactor the superclass.

Buzzword Compliance Library

So I’m looking at someone else’s code, and they make extensive use of a particular jar file. Time to poke around and figure out what that jar file is.

To paraphrase the info page, which should tell me what the library provides:

FooLib is an enterprise-class open source library to help deploy and integrate services in a service-oriented architecture. FooLib can help you decrease development complexity while improving end-user experience by coordinating business processes with unparalleled flexibility and reduce overall total cost of ownership.

In other words, FooLib is a module which helps my code maintain buzz-word compliance with most IT pointy-haired managers.

*sigh*

WTF?!?

Java’s Exceptions are an extremely powerful way to figure out what went wrong. Too bad many Java programmers go out of their way to hide what went wrong from their fellow developers.

If you’ve got to recast an exception:

try {
    doThing();
}
catch (RandomLocalException e) {
    throw new OuterException("Some mysterious message");
}

At least have the decency to write

try {
    doThing();
}
catch (RandomLocalException e) {
    OuterException oute = new OuterException("Some mysterious message");
    oute.initCause(e);
    throw oute;
}

That way I don’t have to chase down the rabbit hole that is doThing() to find out why it barfed.
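And if the wrapper exception happens to provide a (String, Throwable) constructor, which a made-up OuterException may or may not, the chaining collapses into the throw itself:

try {
    doThing();
}
catch (RandomLocalException e) {
    throw new OuterException("Some mysterious message", e);   // the cause travels with the wrapper
}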

Things that irritate me.

So I’m looking through some source code and I come across an unsatisfied reference to a library. The imported package name starts with ‘org.lwes’. Custom and convention say that when you name a package in Java, you use the domain name of your company or organization, reversed: thus, stuff I write for chaosinmotion.com goes into the top-level package com.chaosinmotion. If I were extra tricky (but I’m not) I would create a section on my web page for each component: thus, com.chaosinmotion.UITools would be documented at http://UITools.chaosinmotion.com.

So I go to http://lwes.org, thinking I’d encounter some package maintainer for some random Java tools package–only to be greeted with the message “Living Water International – El Salvador’s web page has moved.”

Given that the code appears to be an event system (I’d guess ‘lwes’ is “Light Weight Event System”), I wasn’t expecting to encounter the Living Waters-El Salvador web site. Grrrrrrr…

One reason why I hate beans.

I’m now poking around some source code for a project that was built with a bunch of little beans. I understand the theory of a Java bean: segregating your code into small, self-contained modules makes development easier (the person developing the bean only needs to worry about that bean), and it makes debugging easier (you can host the bean in a test framework and test it in isolation).

However (and there is always a ‘however’ in my rants), I’m reminded of a post on Cocoa development: Thoughts about large Cocoa projects:

My app isn’t huge compared to Photoshop or Word—it’s teeny in comparison—but it’s large compared to some Cocoa apps. (It has 345 .m files and an executable size of 3.2MB when stripped.)

It’s big enough that, were you to ask me how _____ works, I’d have to go look. There’s no way I can remember, with any level of detail, how every part of it works.

I call it the Research Barrier, when an app is big enough that the developer sometimes has to do research to figure things out. (“Research” just means reading the code and following some paths of execution, sometimes running in the debugger.)

To summarize: my problem with beans, the Spring framework, and other systems designed to encapsulate your Java program into a bunch of small objects interconnected by a mechanism that uses Java reflection to hook things together is that they break the “research barrier.” They make the questions “who calls this?” and “what does it call?” much harder–if not impossible–to answer. Rather than ‘who calls this?’ being a matter of right-clicking on the method call in Eclipse and doing a search, you cannot answer the question in any reasonable way anymore: the entire application has been reduced to a complex dance of dozens or hundreds of jar files. Worse: the overall ‘flow’ of the application is no longer encapsulated in a handful of classes but instead lives in some configuration file which–by design–can change without notice.
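Here’s the shape of the problem, sketched with hypothetical names: nothing in the class below names the concrete implementation it ends up calling, so neither question can be answered by searching the Java code; the answer lives in an external configuration file.

interface ReportSource {
    String fetch();
}

public class ReportGenerator {
    private ReportSource source;             // interface only; the concrete class is named in a config file

    public void setSource(ReportSource s) {  // invoked reflectively by the container, not by any visible caller
        this.source = s;
    }

    public String run() {
        return source.fetch();               // which implementation? grep won't tell you
    }
}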

So here I am with a bunch of beans and all I want to answer is “what database tables are queried to generate a report?” The answer? With all these beans scattered around, apparently the answer changes with the phase of the moon.