Immediate feedback.

Invasion Defense feedback thus far:

(1) My brother wants to hear audio of the people screaming as they die a fiery death when a city gets blasted. He also wants to watch martians walk around when the lander lands, blasting cities from the ground.

(2) Things I forgot to mention in the instructions which I will revise:

You only get 20 missiles. Ten on the left, ten on the right. If a missile silo gets destroyed, the missiles in that silo are destroyed with it, and your missiles fire from the silo on the other side of the screen, making things harder.

In order to advance past the lander stage and the flying saucer stage, you must destroy all the landers and all the flying saucers.

I’ll revise the artwork for the ‘about page’ in the next few weeks to make it more obvious. I also think an “Out” sign floating above the missile silo when that silo is out of missiles will reinforce the idea that you only have 20 missiles.

Renaming an iPhone executable within a project

By trial and error I figured out the easiest way to rename an executable in Xcode for an iPhone.

Step 1: Select “Edit Active Target ‘MyApp’” under the Project menu in Xcode.
Step 2: Select “Product Name” under “Packaging” within the build settings for the target.
Step 3: Rename the product name to your desired application name.
Step 4: Delete the “Build” directory in your project folder or do a “Clean All”.
Step 5: (And this is where things get tricky) Quit Xcode.
Step 6: Start up Xcode with your project.
Step 7: Build.

For some strange reason the product name appears to be cached by whatever system handles code signing. If you don’t quit Xcode after doing a clean all (which I’m pulling off by deleting the build directory), the old iPhone app name is used for code signing–which means you wind up with two .app folders: one containing most of your application, and the other containing just the signing certificate.

Go figure.

You know credit is tight when…

I got an offer from my bank to refinance my home mortgage. I’ve gotten these offers before, and the last time I did it I literally signed a single piece of paper–and the paperwork went through without any question.

This time I got an offer to refinance the adjustable mortgage into a fixed-rate one. Thinking it’d be the same process, I said sure.

Twenty-three pages of supporting documentation later, with two documents still missing (I need to call a couple of people to forward them), I’m starting to feel like I’m receiving the financial equivalent of a proctological examination. Just as well: even though my current mortgage doesn’t adjust until 2011, I’d rather have the fixed mortgage–and I suspect my bank would like the updated financial information so they can repackage my mortgage as a high-quality MBS rather than the junk that is floating around out there, killing our banks.

So here is an interesting experiment.

I have a meeting in downtown Los Angeles, right off a subway stop at 7th street. So I thought I’d take the subway.

Why not? If I’m going to help pay the fourth billion-dollar tax bill for the subway, I may as well get some value out of it, right?

A few notes: first, it feels like any other subway I’ve taken in Europe or back east. Except for the turnstiles, which do not exist: the whole thing is run on the honor system. Well, there are cops, but they seem more interested in stopping me from taking pictures than they are in making sure I have a valid day pass.

I take pictures anyway.

If I had taken my car to the office building at this time of day, it would have taken me 20 minutes to get there. As it is, it took me 10 minutes to get to the parking lot, another 10 minutes to park, and about a 10-minute wait for the train to arrive. I gave myself an hour. Let’s see how well this pans out.

A day pass is only $5, making this cost effective if I’m paying for parking. And so far the train seems to be running efficiently. My meeting ends at five, so my theory is that it may save time on the return trip, since downtown traffic is a nightmare at rush hour.

Later…

It took about 25 minutes, making me about five minutes late to my meeting. Not bad, not great.

Later…

Now the real test: leaving downtown Los Angeles. Right now trains are running every 10 minutes; if I can get back to my car in a timely fashion, then this will be–well, not a win, but at least a draw.

It’s clear to me that the real purpose of the subway is not transportation, though it serves that role. The real purpose is infill redevelopment in the Los Angeles area. I may just pop my head up at a couple of stops to see if I find what I did in Hollywood: massive development at each stop.

Huh. The train is late.

Addendum: I’m playing subway “whack-a-mole” and poking my head up every few subway stops to test my theory that the subway is about redevelopment. This may take a little longer, but it is more entertaining.

Later…

I think the LA subway is interesting. As a transportation method it is clearly a failure: for what we spent on it, we could have improved the freeways a lot. But for encouraging infill development, it works very well.

Self-employment health care

The first thing I did as a 42-year-old with a wife when I left my job to go down the self-employment route was to get health care insurance. And boy is it complicated!

To summarize, I managed to get Aetna with a moderately high deductible ($5k/year) for both of us for around $300/month.

What made the decision-making process complicated is not really understanding the various deductibles and coverage exemptions. But it boils down to this: the various Aetna plans essentially have two rate structures. The first rate structure applies when you see providers who are part of the Aetna system; the second applies when you see providers who are not. And each rate structure includes a deductible (the maximum you will have to pay out of pocket for medical care per year), a “major medical” deductible (the maximum additional amount you’ll have to pay out of pocket if you get something like surgery), a visit co-pay (what you’ll have to pay when you see a doctor) and a prescription co-pay (what you’ll pay for a prescription). Each of these four numbers (deductible, major medical deductible, co-pay, prescription) appears twice: once for in-system and once for out-of-system.

Once you’ve hit the deductible, of course, Aetna covers the rest up to $5 million–at which point, I presume, you’re screwed. (A friend of mine died a few years ago; his medical bill exceeded $1 million for a 90-day stay in the hospital with a constant stream of tests and the attention of a whole bunch of specialists. That means that while a $5 million cap isn’t impossible to hit, it’d require a pretty nasty health care disaster on the scale of 15 months of hospitalization.)

Each of the different plans I saw from other companies looked roughly the same: about a dozen or so confusing numbers, all of which boil down to your own responsibility before the health care package kicks in. (I only went with Aetna because I was covered by Aetna when I worked for Yahoo! and Symantec, so I wanted to keep things simple.) Ultimately, however, after wading through each of the numbers, health care insurance boiled down to how much I was willing to pay each month versus how much risk (how high a deductible) I was willing to live with. For someone who is self-employed with a huge pile of savings, it seemed a high deductible made the most sense: in the event of a disaster I can easily write a check for $8k (my max out of pocket if I’m hit by a bus, which is the deductible amount plus the “major medical” deductible on my policy), and the lower monthly premium that comes with a high deductible matters because, in the starting stage of my startup, I’m not making any income.

A New Venture

Over the past few years while working for Symantec and Yahoo! I managed to save some money, which gives me the space to try out something new: writing software on my own. We’ll see if it works, or if I starve. 🙂

As I encounter more oddities in Objective-C, Java, or on the Windows platform, of course, I will post them here.

Things I Don’t Get.

Suppose there is this system which is designed to process data.

Suppose data goes into that system via a JMS queue, where that data is processed, and depending on the results of that processing, the data is put into one of two outgoing queues.

Now suppose the following: (1) if the data isn’t correctly pulled off of the queue for whatever reason, the item is requeued, and (2) the system is massively parallel: the system doesn’t run on one computer, but on a half dozen.

Now suppose there is a bug in the system: on occasion, for reasons no one really understands, items become ‘stuck’ in the queue: they get pulled off, then they get pushed back. This behavior should never happen–but it does.
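To make the setup concrete, here is a minimal sketch of the kind of worker described above, written against the classic javax.jms API. The queue names, the process() method, and the acknowledgement strategy are my own assumptions, not a description of the actual system:

    import javax.jms.*;

    // A sketch only: a worker that pulls items off an incoming queue, processes
    // them, and routes each one to one of two outgoing queues. Queue names and
    // the process() logic are hypothetical.
    public class DataWorker implements MessageListener {
        private final Session session;
        private final MessageProducer passedQueue;
        private final MessageProducer failedQueue;

        public DataWorker(Connection connection) throws JMSException {
            // CLIENT_ACKNOWLEDGE: an item only comes off the incoming queue for
            // good once we explicitly acknowledge it.
            session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            passedQueue = session.createProducer(session.createQueue("data.passed"));
            failedQueue = session.createProducer(session.createQueue("data.failed"));
            MessageConsumer consumer =
                session.createConsumer(session.createQueue("data.incoming"));
            consumer.setMessageListener(this);
        }

        @Override
        public void onMessage(Message message) {
            try {
                boolean passed = process(message);            // the actual data processing
                (passed ? passedQueue : failedQueue).send(message);
                message.acknowledge();                        // now it is off the incoming queue
            } catch (Exception e) {
                // Processing failed: ask the broker to redeliver the unacknowledged
                // item. If process() fails the same way every time, this is exactly
                // how an item gets "stuck": pulled off, pushed back, forever.
                try { session.recover(); } catch (JMSException ignored) { }
            }
        }

        private boolean process(Message message) throws JMSException {
            return true; // placeholder for the real processing logic
        }
    }

You would still need to start() the Connection, and you would run a half dozen of these against the same broker to get the massively parallel version; in this sketch, the ‘stuck’ behavior comes entirely from that catch block redelivering the same poison message over and over.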

What do you do?

Well, hypothetically you could debug the system and fix it, so items don’t get stuck in the queue.

Or you could create a “stuck in queue” report for management, to let them know that while there is a problem, it’s under control.

Really: does a report indicating the volume and scope of your problem really mean the problem is solved? Gah!

Having a consistent UI and design language.

I’m heartened by the rumor that Apple is now struggling with how to add cut and paste to the iPhone. I’m heartened not because Apple is going to add cut and paste–but because (so the rumor goes) the delay comes from Apple sweating the details of the UI design:

The trouble it is having is implementation. How to easily call up a copy or cut option and then the paste action. It’s probable that the zoom bubble (the one that brings up the edit cursor) is the issue as it has removed the obvious tap and hold position from Apple to use for a pop-up menu of some sort. Text selection is another difficulty to sort out. Certainly, the cursor could be added to the menu selection; however, Apple wants to keep this as simple as possible and that added step would not lend itself to simple.

John Gruber, in his comments about the LG Voyager being a potential iPhone killer, observes that it is crap:

I actually got to use one of these turds for a few minutes at Macworld two weeks ago, and it’s a joke. You know the iPhone-like home screen? The one LG and Verizon show in all their promotional photos? That’s actually not the main UI of the phone. That’s just the interface for accessing secondary features of the phone. The main UI is just like that of any other crap LG phone, and one of the “apps” you can launch is the iPhone knock-off “shortcut” mode. And, when you open the slider, the inside screen has a third different UI. The overall experience is worse, way worse, than that of a typical LG phone.

I’m also reminded of the complaints about the BMW iDrive:

Since that time we’ve driven the new 5 and 6 Series and found similar issues with iDrive. I noted one specific issue while trying to adjust the audio system’s bass and treble settings (after wading through multiple LCD screens, of course). In this case, the graphical representations of the bass and treble settings on the LCD screen, along with the actual changes in the settings, were lagging behind the action of my hand turning the iDrive dial. So as I tried to listen for when the bass and treble were properly adjusted, I noticed that although my hand was turning the dial, no change in settings was occurring, either on the screen or in the sound quality. Naturally I tuned the dial further when I saw this and then — WHAM! — the system caught up quickly, pushing the sound of David Bowie from a Barry White-like low to an Alvin and the Chipmunks-high in a fraction of a second.

Two thoughts occurred to me as I experienced this. First, how ironic is it that BMW has invested all those countless man-hours and untold resources in creating the latest batch of high-fidelity Harman Kardon sound systems, only to pair it with a user interface that makes it nearly impossible to properly adjust the tonal qualities? Second, this has never happened to me in a $20,000 Honda Civic, a $12,000 Hyundai Elantra or a 31-year-old $1,700 Saab Sonett.

To me, all of these illustrate the basic problem most software developers and software designers have with user interface design. To most developers I’ve known, a user interface lands somewhere between iCandy (oooh, look at the pretty animation when I press the ‘OK’ button) and a sort of graphical ‘man’ page where instead of just telling you about the command-line flags, you can click on them. And while on a desktop computer, treating UI design as a secondary and nearly worthless exercise in visual flash sometimes produces products that are tolerable to users (because most users today suffer from computer Stockholm Syndrome, where they idealize their abusers), it is the kiss of death for any mobile or auto-based user interface which uses a different set of interfaces than a desktop keyboard and mouse.

Sadly, there are only a few rules that really need to be adhered to, and (like Jesus’ famous axiom about loving God and loving thy neighbor) all the rest flows from these ideas; yet most developers don’t understand these rules and don’t appreciate why they should follow them. So users get subjected to the BMW iDrive, which on the 3 series has three different ways to escape from a sub-menu, depending on which sub-menu you are in. (And of course, two of those give no visual feedback as to which method you should use: it’s simply assumed you somehow know the magic sequence, which you are to guess while hurtling down the freeway at 70 miles/hour, navigating traffic and watching out for obstacles which can kill you.)

Consistent Interface Language. Every user interface requires a consistent interface language–by which I mean that no matter where you are in the interface, performing the same action produces the same result.

Take, for example, the humble dialog box. Thirty-something years of dialog boxes, and we all just “intuitively” know what is supposed to happen when you click the “OK” button in a dialog box: the button briefly flashes, and the window closes itself (meaning the window removes itself from the screen, and the window that was behind it comes forward, hiliting itself to indicate it is active), with the settings in the dialog box saved away or updating the state of your application.

“Of course,” you say, “dialog boxes are always supposed to do this.” But think: who was the first person to decide that this was supposed to be the behavior of a dialog box? I mean, we could have saved the state of each button change in the dialog box right away–and the “OK” button would be superfluous. We could have made updating the state of the dialog box triggered by closing the box. Hell, we could have made creating a dialog box automatically append a new menu to the Macintosh menu bar, unhiliting the rest of the menus–and accepting (and dismissing) the dialog could have hinged upon pulling down that menu and selecting the “Dismiss” item.

But instead, we click “OK.” (Except for some applications on MacOS X, which actually use the “update as soon as the control is clicked” mode.)
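As a purely illustrative sketch of that convention (in Java Swing, with a made-up SettingsDialog and an apply callback of my own invention), the dialog stages its state and only commits it when “OK” is clicked:

    import javax.swing.*;
    import java.awt.*;
    import java.util.function.Consumer;

    // Illustrative only: a dialog that follows the conventional interface
    // language described above: "OK" commits the staged settings and closes
    // the window; "Cancel" discards them.
    public class SettingsDialog extends JDialog {
        private final JCheckBox option = new JCheckBox("Enable the thing");

        public SettingsDialog(Frame owner, boolean current, Consumer<Boolean> apply) {
            super(owner, "Settings", true);           // modal, like a classic dialog box
            option.setSelected(current);              // stage the current value; nothing saved yet

            JButton ok = new JButton("OK");
            ok.addActionListener(e -> {
                apply.accept(option.isSelected());    // commit the staged state...
                dispose();                            // ...and only then close the window
            });

            JButton cancel = new JButton("Cancel");
            cancel.addActionListener(e -> dispose()); // discard the staged state entirely

            JPanel buttons = new JPanel();
            buttons.add(ok);
            buttons.add(cancel);

            setLayout(new BorderLayout());
            add(option, BorderLayout.CENTER);
            add(buttons, BorderLayout.SOUTH);
            getRootPane().setDefaultButton(ok);       // Enter means "OK": another piece of the convention
            pack();
            setLocationRelativeTo(owner);
        }
    }

The alternative language imagined above (save each change the instant the control is clicked) would simply call apply.accept() inside the checkbox’s own listener and drop the “OK” button entirely.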

Today that design language is so ingrained into the very fabric of our being that the idea of adding a new menu to the menu bar (or perhaps updating Windows’ “Start” button with state to manipulate the current dialog box) sounds so counter-intuitive that we immediately dismiss the very idea.

That is what I mean by “interface language”: we have so ingrained into us the idea that there is one way to do something that we mentally cannot conceive of a different way to say the same thing.

On a desktop computer, of course, decades of interface language have burned themselves into us: we use a mouse one way; we know the difference between left click and right click; we know what a button is and how it is supposed to behave.

But what about a mobile device, or a car–where there is no mouse and no cursor to drag around on the screen? That’s when you have to invent a consistent language of gestures and actions–and stick to it.

The BMW iDrive and the LG phones both suffer from the same problem, as does Windows Mobile 5’s dialog box handling: how you dismiss a dialog box or a modal state varies depending on what you’re playing with. To pop out of a menu state on the iDrive in the “settings” menu, you press the menu key. In the navigation screen you select the “return” arrow by sliding left or right until you hilite the navigation component of the screen and then pressing the knob. In the entertainment console you select the “return” arrow by sliding the knob up or down.

On Windows Mobile 5, to dismiss a dialog box you either press the hot-key with the “Cancel” label above it, or you press the hot-key with the “Menu” label, press the up/down arrows until you select “Cancel”, then press the ‘enter’ key in the middle of the arrow keys.

And by varying each of these actions, you make it impossible to figure out what to do without looking at the device and figuring out what mode you’re in–which, in a BMW driving at high speed in traffic on a rainy day, is fucking dangerous!!!

All for want of a meeting with a designer who said “the menu key will pop the sub-menu up a level.”

This is even important with desktop applications. Even though much of the low-level language for desktop applications has been codified by convention (“OK” dismisses a dialog box, ‘double-click’ opens a thing, ‘click-drag’ of a selected item causes drag-and-drop), for some specialized applications the interface language is less well defined. Anywhere you’re subclassing JComponent or NSView, you need to think about the interface language you’re using–and whether it is consistent everywhere.
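For example (a hypothetical sketch, not anything from a real codebase): a custom JComponent should map the gestures users already know onto its own objects, rather than inventing a new vocabulary:

    import javax.swing.*;
    import java.awt.Point;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;

    // Hypothetical custom view: the point is that its mouse handling reuses the
    // conventions the platform already taught the user: click selects,
    // double-click opens, dragging a selected item starts a drag.
    public class ItemListView extends JComponent {
        public ItemListView() {
            MouseAdapter mouse = new MouseAdapter() {
                @Override
                public void mousePressed(MouseEvent e) {
                    selectItemAt(e.getPoint());       // single click selects
                }

                @Override
                public void mouseClicked(MouseEvent e) {
                    if (e.getClickCount() == 2) {
                        openItemAt(e.getPoint());     // double-click opens
                    }
                }

                @Override
                public void mouseDragged(MouseEvent e) {
                    beginDragOfSelection(e);          // click-drag of a selection drags it
                }
            };
            addMouseListener(mouse);
            addMouseMotionListener(mouse);
        }

        // Placeholders: the real selection, opening, and drag-and-drop logic
        // would live here.
        private void selectItemAt(Point p) { }
        private void openItemAt(Point p) { }
        private void beginDragOfSelection(MouseEvent e) { }
    }

Nothing in Swing forces you to do any of this, which is exactly why it has to be a conscious decision.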

Eye Candy Enforces Relationships. By this I mean that eye candy exists in order to provide subtle cues as to the relationship between objects on the screen.

Look at the iPhone. Applications zoom in and out from the middle of the screen (a visual metaphor for task switching that is consistently used), submenus slide from side to side (a visual metaphor for drilling down a hierarchical structure), secondary pages flick from side to side (a visual metaphor for selecting different pages–such as Safari pages or home screen pages), and application modes or commands are selected by picking one of the black and white icons along the bottom of the screen.

Because of the consistency it takes a new user a few minutes to “get the lay of the land”–and then suddenly you go from “new user” to “iPhone user.” The language is easy to get as well: once you understand flicking from page to page, you can create multiple Safari windows, multiple home pages, multiple picture galleries… It’s easy.

Unfortunately some of the eye candy in other applications does less to help form relationships between behavior and action–and the lack of eye candy can sometimes hurt understanding. For example, when you click on a dialog button, you expect an immediate reaction: even if that reaction is just turning the mouse cursor into a “wait” cursor. It can make the difference between thinking an application is doing something and thinking the application has crashed. On the iPhone, clicking the “search” button in Google Maps immediately replaces the button with a spinning wait cursor: you know it’s doing something. On many mobile devices, however, after selecting a state the embedded application just “sits there”, leaving you wondering if you just broke it: the delay between giving a voice command and getting a response from the BMW iDrive, for example, leaves you hanging, wondering if you said the right phrase.
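On the desktop, at least, the fix is cheap. Here is a hedged Swing sketch (the performSearch() call and the wiring method are mine, not from any real application): acknowledge the click instantly, then do the slow work off the event thread:

    import javax.swing.*;
    import java.awt.Cursor;

    // Illustrative only: give the user immediate feedback the moment the button
    // is clicked, before the slow work has even started.
    public class SearchFeedback {
        public static void wireUp(JFrame frame, JButton searchButton) {
            searchButton.addActionListener(e -> {
                searchButton.setEnabled(false);                                  // immediate reaction...
                frame.setCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR)); // ...before any work happens

                new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() {
                        performSearch();               // hypothetical slow operation, off the event thread
                        return null;
                    }

                    @Override
                    protected void done() {            // back on the event thread
                        frame.setCursor(Cursor.getDefaultCursor());
                        searchButton.setEnabled(true);
                    }
                }.execute();
            });
        }

        private static void performSearch() {
            // placeholder for the real work
        }
    }

The spinner-in-place-of-the-button trick on the iPhone is the same idea: within a frame or two the user knows the tap registered.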

What is particularly sad–and the Verizon phone demonstrates this in spades–is that a lot of eye candy (especially with mobile devices) is driven by marketing rather than by good user design: the Verizon phone has a sort of “iPhone-like” navigation screen that serves no purpose whatsoever–except to look good in the Verizon ads. Otherwise, it’s useless eye candy that actually detracts from the two other user interfaces used by the phone.

Provide Immediate Feedback. This should go without saying–but it doesn’t, as adjusting the bass and treble on a BMW using iDrive demonstrates at times.

On Mac OS 9 and earlier, the highest interrupt priority in the interface went to…the mouse cursor. Meaning that no matter what the computer was doing, no matter how much CPU was in use, if the user moved the mouse, the mouse cursor was updated. Period. Immediate feedback caused the Macintosh to seem responsive–even though System 7 and earlier ran on computers that were (by today’s standards) unimaginably slow.

You’d think that when you turn a knob you’d get an immediate response: you turn the volume knob and the sound gets louder or softer right away. But today, with multi-tasking and multi-threaded embedded systems which do not guarantee real-time processing, sometimes this isn’t the case: for the first 15 seconds as WinCE Automotive boots on the 325i’s iDrive system, turning the volume knob doesn’t necessarily change the volume, and pressing the “next track” button doesn’t necessarily go to the next track.

And this is a frustrating problem.

Sticking to these three things helps reduce frustration and provides a nice experience for users: a consistent interface language (the button you press does the same thing regardless of what mode you’re in), useful eye candy that reinforces contextual relationships (rather than being driven by marketing), and immediate feedback (even if that means putting up a “hang on a second” alert).

It’s a shame that most people don’t do this.

That’s why I’m heartened that Apple is delaying cut and paste on the iPhone: it’s more important to get the details right than to hack something together. After all, for something like a UI change, if Apple screws it up, they’ve screwed up the iPhone. And while the code to add a button and change the behavior of a drag operation might take a good programmer a week to mock up, it could destroy a multi-billion dollar business if it’s not done the right way.