May you live in interesting times and come to the attention of people in power.

Microsoft makes a bid to acquire Yahoo!.

Ballmer’s letter to Yahoo: We want you
Ballmer’s memo to his troops

Which, in a way, is irritating to me: I had planned to write a very long post about the sorts of questions one should ask when one is interviewing at a company–which was going to include a bunch of snide remarks about the hand that is currently feeding me. (Things like asking managers you’re talking to about their plans to promote from within, the types of software methodologies they use–and what to watch out for.) But today: um, I like my paycheck. And the announcement means anyone connected to the purple Y! will probably be receiving extra scrutiny.

Including idiots with tiny little know-nothing blogs like me.

Happy.

I find that I’m spending more and more of my time studying computational algebra, which makes me incredibly happy. When I was a student at Caltech I took an abstract algebra class–which I did miserably at, because the material was presented without motivation. Computational algebra has the advantage of covering much of the same material I didn’t get the first time around–but this time, with a motivating reason which allows me to actually have a model for what the different types of objects really represent.

As a side note, I find it extremely interesting how much a background in object-oriented programming helps in understanding group theory. In a way, a group, a ring and a field are all essentially like interfaces: declarations of objects and operators which observe certain well-defined properties, but with different types of objects that can be plugged into our operators. For example, we could define a ‘group’ as a set of objects and an operator:

template <class Element> class Group
{
	public:
		virtual Element add(Element a, Element b) = 0;
};

Then the Group Z of integers is:

class Integer: public Group<int>
{
	public:
		inline int add(int a, int b)
		{
			return a+b;
		}
};

And the group ZN of integers from 0 to N-1 under addition modulo N (the finite cyclic group) could be represented by:

class IntegerN: public Group<int>
{
	public:
		IntegerN(int m)
		{
			fModulo = m;
		}
	
		inline int add(int a, int b)
		{
			return (a+b)%fModulo;
		}
	private:
		int fModulo;
};
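The payoff of treating a group as an interface is that code written against Group works unchanged in any group. Here's a compilable sketch of that idea (restating the classes so it stands alone, with the template parameter spelled out):

```cpp
// The Group 'interface' restated so this sketch stands alone:
// a set of elements plus one binary operator.
template <class Element> class Group
{
	public:
		virtual ~Group() {}
		virtual Element add(Element a, Element b) = 0;
};

// The integers Z under ordinary addition.
class Integer: public Group<int>
{
	public:
		int add(int a, int b) { return a + b; }
};

// The integers modulo N under addition: the group ZN.
class IntegerN: public Group<int>
{
	public:
		IntegerN(int m): fModulo(m) {}
		int add(int a, int b) { return (a + b) % fModulo; }
	private:
		int fModulo;
};

// The payoff: code written against Group<int> works in *any*
// group of ints. Here, a added to itself n times, computed
// without knowing which group we are in.
static int Multiple(Group<int> &g, int a, int n)
{
	int sum = a;
	for (int i = 1; i < n; ++i)
		sum = g.add(sum, a);
	return sum;
}
```

Calling Multiple(z, 5, 7) on the plain integers gives 35; the same call on an IntegerN(12) gives 35 mod 12 = 11: the same code, a different group.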

Having a consistent UI and design language.

I’m heartened by the rumor that Apple is now struggling with how to add cut and paste to the iPhone. I’m heartened not because Apple is going to add cut and paste–but because (so the rumor goes) the delay comes from Apple sweating the details of the UI design:

The trouble it is having is implementation. How to easily call up a copy or cut option and then the paste action. It’s probable that the zoom bubble (the one that brings up the edit cursor) is the issue as it has removed the obvious tap and hold position from Apple to use for a pop-up menu of some sort. Text selection is another difficulty to sort out. Certainly, the cursor could be added to the menu selection; however, Apple wants to keep this as simple as possible and that added step would not lend itself to simple.

John Gruber, commenting on the LG Voyager as a potential iPhone killer, observes that it is crap:

I actually got to use one of these turds for a few minutes at Macworld two weeks ago, and it’s a joke. You know the iPhone-like home screen? The one LG and Verizon show in all their promotional photos? That’s actually not the main UI of the phone. That’s just the interface for accessing secondary features of the phone. The main UI is just like that of any other crap LG phone, and one of the “apps” you can launch is the iPhone knock-off “shortcut” mode. And, when you open the slider, the inside screen has a third different UI. The overall experience is worse, way worse, than that of a typical LG phone.

I’m also reminded of the complaints about the BMW iDrive:

Since that time we’ve driven the new 5 and 6 Series and found similar issues with iDrive. I noted one specific issue while trying to adjust the audio system’s bass and treble settings (after wading through multiple LCD screens, of course). In this case, the graphical representations of the bass and treble settings on the LCD screen, along with the actual changes in the settings, were lagging behind the action of my hand turning the iDrive dial. So as I tried to listen for when the bass and treble were properly adjusted, I noticed that although my hand was turning the dial, no change in settings was occurring, either on the screen or in the sound quality. Naturally I tuned the dial further when I saw this and then — WHAM! — the system caught up quickly, pushing the sound of David Bowie from a Barry White-like low to an Alvin and the Chipmunks-high in a fraction of a second.

Two thoughts occurred to me as I experienced this. First, how ironic is it that BMW has invested all those countless man-hours and untold resources in creating the latest batch of high-fidelity Harman Kardon sound systems, only to pair it with a user interface that makes it nearly impossible to properly adjust the tonal qualities? Second, this has never happened to me in a $20,000 Honda Civic, a $12,000 Hyundai Elantra or a 31-year-old $1,700 Saab Sonett.

To me, all of these illustrate the basic problem most software developers and software designers have with User Interface design. To most developers I’ve known, a user interface lands somewhere between iCandy (oooh, look at the pretty animation when I press the ‘OK’ button) and a sort of graphical ‘man’ page where instead of just telling you about the command-line flags, you can click on them. And while on a desktop computer, treating UI design as a secondary and nearly worthless exercise in visual flash sometimes produces products that are tolerable to users (because most users today suffer from computer Stockholm Syndrome, where they idealize their abusers), it is the kiss of death for any mobile or auto-based user interface which uses a different set of interfaces than a desktop keyboard and mouse.

Sadly, while there are only a few rules that really need to be adhered to and (like Jesus’ famous axiom about loving God and loving thy neighbor) all the rest flows from these ideas–most developers don’t understand these rules and don’t appreciate why they matter. So users get subjected to the BMW iDrive, which on the 3 series has three different ways to escape from a sub-menu, depending on which sub-menu you are in. (And of course, two of those give no visual feedback as to which method you should use: it’s simply assumed you somehow know the magic sequence, which you are to guess while hurtling down the freeway at 70 miles/hour, navigating traffic and watching out for obstacles which can kill you.)

Consistent Interface Language. Every user interface requires a consistent interface language–by which I mean that when you are at a particular point in the interface, performing the same action always produces the same result.

Take, for example, the humble dialog box. Thirty-something years of dialog boxes, and we all just “intuitively” know what is supposed to happen when you click the “OK” button in a dialog box: the button briefly flashes, and the window closes itself (meaning the window removes itself from the screen, and the window that was behind it comes forward, hiliting itself to indicate it is active), with the settings in the dialog box saved away or updating the state of your application.

“Of course”, you say, “dialog boxes are always supposed to do this.” But think: who was the first person to decide that this was supposed to be the behavior of a dialog box? I mean, we could have immediately saved the state of each button change in the dialog box right away–and the “OK” button would be superfluous. We could have made updating the state of the dialog box triggered by closing the box. Hell, we could have made creating a dialog box automatically append a new menu in the Macintosh menu bar, unhiliting the rest of the menus–and accepting (and dismissing) the dialog could have hinged upon pulling down the dialog box menu bar and selecting the “Dismiss” item.

But instead, we click “OK.” (Except for some applications on MacOS X, which actually use the “update as soon as the control is clicked” mode.)

Today that design language is so ingrained into the very fabric of our being that the idea of adding a new menu bar (or perhaps giving the Windows “Start” button state to manipulate the current dialog box) sounds so counter-intuitive that we immediately dismiss the very idea.

That is what I mean by “interface language”: we have so ingrained into us the idea that there is one way to do something that we mentally cannot conceive of a different way to say the same thing.

On a desktop computer, of course, decades of interface language have burned themselves into us: we use a mouse one way; we know the difference between left click and right click; we know what a button is and how it is supposed to behave.

But what about a mobile device, or a car–where there is no mouse and no cursor to drag around on the screen? That’s when you have to invent a consistent language of gestures and actions–and stick to it.

The BMW iDrive and the LG phones both suffer from the same problem, as does Windows Mobile 5’s dialog box handling: how you dismiss a dialog box or a modal state varies depending on what you’re playing with. To pop out of a menu state on the iDrive in the “settings” menu you press the menu key. In the navigation screen you select the “return” arrow by sliding left or right until you hilite the navigation component of the screen, and pressing the knob. In the entertainment console you select the “return” arrow by sliding the knob up or down until you select the return arrow.

On Windows Mobile 5, to dismiss a dialog box you either press the hot-key with the “Cancel” label above it, or you press the hot-key with the “Menu” label, press the up/down arrows until you select “Cancel”, then press the ‘enter’ key in the middle of the arrow keys.

And by varying each of these actions, you make it impossible to figure out what to do without looking at the device and figuring out what mode you’re in–which, on a BMW driving at high speed in traffic during a rainy day–is fucking dangerous!!!

All for want of a meeting with a designer who said “the menu key will pop the sub-menu up a level.”

This is even important with desktop applications. Even though much of the low-level language for desktop applications has been codified by convention (“OK” dismisses a dialog box, ‘double-click’ opens a thing, ‘click-drag’ of a selected item causes drag-and-drop), for some specialized applications the interface language is less well defined. Anywhere you’re subclassing JComponent or NSView you need to think about the interface language you’re using–and whether it is consistent everywhere.

Eye Candy Reinforces Relationships. By this I mean that eye candy exists in order to provide subtle cues as to the relationship between objects on the screen.

Look at the iPhone. Applications zoom in and out from the middle of the screen (a visual metaphor for task switching that is consistently used), submenus slide from side to side (a visual metaphor for drilling down a hierarchical structure), secondary pages flick from side to side (a visual metaphor for selecting different pages–such as Safari pages or home screen pages), and application modes or commands are selected by picking the black and white icon along the bottom of the screen.

Because of the consistency it takes a new user a few minutes to “get the lay of the land”–and then suddenly you go from “new user” to “iPhone user.” The language is easy to get as well: once you understand flicking from page to page, you can create multiple Safari windows, multiple home pages, multiple picture galleries… It’s easy.

Unfortunately some of the eye candy in other applications does less to help form relationships between behavior and action–and the lack of eye candy can sometimes hurt understanding. For example, when you click on a dialog button, you expect an immediate reaction: even if that reaction is just turning the mouse cursor into a “wait” cursor. It can make the difference between thinking an application is doing something and thinking the application has crashed. On the iPhone, clicking the “search” button in Google Maps immediately replaces the button with a spinning wait cursor: you know it’s doing something. On many mobile devices, however, after selecting a state the embedded application just “sits there”, leaving you wondering if you just broke it: the delay after using voice command and getting a response from the BMW iDrive, for example, leaves you hanging, wondering if you said the right phrase.

What is particularly sad–and the Verizon phone demonstrates this in spades–is that a lot of eye candy (especially with mobile devices) is driven by marketing rather than by good user design: the Verizon phone has a sort of “iPhone-like” navigation screen that serves no purpose whatsoever–except to look good in the Verizon ads. Otherwise, it’s useless eye candy that actually detracts from the two other user interfaces used by the phone.

Provide Immediate Feedback. This should go without saying–but it doesn’t, as adjusting the bass and treble on a BMW using iDrive demonstrates at times.

On MacOS 9 and earlier, the highest-priority interrupt thread in the interface went to…the mouse cursor. Meaning that no matter what the computer was doing, no matter how much CPU was in use, if the user moved the mouse, the mouse cursor was updated. Period. Immediate feedback made the Macintosh seem responsive–even though MacOS 7 and earlier ran on computers that were (by today’s standards) unimaginably slow.

You’d think that turning a knob would get an immediate response: you turn the volume knob and the sound gets louder or softer in direct response. But today, with multi-tasking and multi-threaded embedded systems which do not guarantee real-time processing, sometimes this isn’t the case: for the first 15 seconds as WinCE Automotive boots on the 325i’s iDrive system, turning the volume knob doesn’t necessarily change the volume, and pressing the “next track” button doesn’t necessarily go to the next track.

And this is a frustrating problem.

Sticking to these three items–a consistent interface language (what a button does is the same regardless of what mode you’re in), useful eye candy reinforcing contextual relationships (rather than being driven by marketing), and immediate feedback (even if that means putting up a “hang on a second” alert)–helps reduce frustration and provides a nice experience to users.

It’s a shame that most people don’t do this.

That’s why I’m heartened that Apple is delaying cut and paste on the iPhone: it’s more important to get the details right than to hack something together. After all, for a UI change like this, if Apple screws it up, they’ve screwed up the iPhone. And while the code to add a button and change the behavior of a drag operation would take a good programmer maybe a week to mock up, it could destroy a multi-billion dollar business if it’s not done the right way.

Productivity Killers.

Cube Farms. The research that has been used to justify putting programmers into ‘veal-fattening pens’ is flawed; most of that research was done using graduate students or undergraduates attempting to solve a common problem. The thing is, if you have two or three people who are trying to solve a problem that none of them have ever seen before, cooperation is usually better than isolation–but if you have one person who is on a roll writing code and doesn’t have any real unknowns (other than the specific problem he’s writing code for), then interruptions are bad. (If interruptions were not bad, then they’d rig bells that ring every five minutes.)

If you are good at what you do, you do not need to be pestered every five minutes.

Underpowered machines. I cannot believe in this day and age that a 40 year old software developer making $100K+ a year (where “+” is a fairly large “plus”) writing Java code would be given a three-year-old underpowered computer that cost only $600 new. But that’s where I am–and we’re not talking about some dumb-shit hole in the wall somewhere–we’re talking Ya-friggin’-hoo!, for Christ’s sake!

A Pentium 4 at 2.6GHz with 512meg of RAM and a 40gig drive with a 17inch monitor is simply unacceptable as a development platform–especially with a model that steals 32meg of that for video RAM–yet here I am. And all requests to IT have been met so far with stonewalling and silence.

Fucking nuts.

Why the Sony Connect Failed.

Simple: no content.

The technical offerings by Sony sucked. The Science Fiction section was small. The other sections may have been fine for all I know–but if they followed the model of those two sections, they sucked as well.

I have a Sony Reader, and the primary thing I use it for is to view specially formatted RTF and PDF files which I’ve transferred from the ‘net: every time I log onto the Sony web site, I’m disappointed at the lack of content. It could very well be that Sony’s primary deals are with Japanese publishers.

The Sony Reader itself is a wonderful electronic device, and if they had simply made a deal with anyone–even the eReader folks for content–it would have been more successful.

Which is why I think the Amazon Kindle may have a better chance: at launch, Amazon already has more content than the Sony Connect site has after a year of being out.

Why I hope Amazon’s Kindle succeeds.

At Daring Fireball, John Gruber hopes that Amazon’s Kindle fails because the books being sold by Amazon are wrapped in DRM and cannot be loaned or reasonably transferred to another format.

I appreciate and understand his misgivings. I really do. I have three walls full of books which is evidence of how much I appreciate the power of the written word. And there are some books which I hope will always remain in book form which, in my opinion, are timeless: mathematics and mathematical reference books, books on algorithms, classical books on philosophy–all of which deserve to be presented on the printed page, and saved and cherished like a fine wine which can be endlessly savored without the bottle ever going empty.

But to me there are three classes of books, and (pardon the heresy that I’m about to speak) only one of them deserves the permanence of the printed page. There are timeless works: Donald Knuth’s “The Art of Computer Programming” for example, which deserve to be printed on paper and bound in a book.

Then there are ‘trash’ entertainment novels; books which will be read for enjoyment and, chances are, stored in a basement until they are rediscovered years later soaked through by a water leak, occupying space which could have been used by something else.

And there are ‘reference’ books: technical books which describe a technique, programming API, chip set, or other thing which will find itself out of date perhaps a half dozen years from now.

For the first type of book, yes: I want to own them in book form. I want to put them on my bookshelf on display, take them down and refer to them. They speak volumes as to the type of person I am and the type of person I hope to be, and they are books I hope to pass on when I no longer need them. My copy of Aho, Sethi and Ullman’s “Dragon Book” contains information which was valid twenty years ago and will be valid twenty years from now–even when new techniques are invented which improve on the basic design. Knuth’s books are also essentially timeless: even though most modern programming languages now incorporate many of his algorithms in a standard run-time library, they remain worth owning if only so one can know how these things work. Numerical Recipes will always be useful.

But there are a number of other books which are not so timeless. Any book on writing software for MacOS 7 is simply a waste of paper and, the dogma that all books are timeless and important aside, richly deserves to be turned into pulp and recycled into drink containers for fast food restaurants. While my copy of Logic: Form and Function is still quite useful after being first published nearly a half-century ago, my copy of the pre-election biography of George W. Bush (published in early 2000) was a waste of shelf space and was recycled years ago. And who really needs a book on Windows 3.11 software development and using DDE for exchanging messages between top-level windows?

So the concern that John Gruber has with respect to the Kindle may be well founded–but there are quite a few “disposable” books out there that really are ephemeral–and if they were to disappear a decade after they were published, would probably never be missed other than by those people who like collecting obscure historical footnotes.

Portable readers, players and video devices.

It’s clear now. The RIAA and the Vivendi Corporation (who backs NBC and NBC Universal) are friggin’ morons.

Really, though, it’s not their fault. Most broadcast networks, music companies and book printers have built their business model on the simple premise that they are, first and foremost, distribution companies. And until recently, distribution has been a very hard nut to crack: to distribute television you were required to win approval for a broadcast tower from the FCC, build a broadcast tower (which is damned expensive), and line up material to broadcast, which required both local producers of content and being part of a franchise for content to rebroadcast. Put together a line of advertisers (which requires a sales department), and you have yourself something which may cover one town. Wash, rinse and repeat a few thousand times with capital costs in the billions to cover a nation.

To distribute music or books required a nation-wide series of stores to sell the material at, and a line of companies which could press the music, and an army of technicians who could create the content–a barrier to entry which implies that you also need an army of managers who can make sure the content you sell makes the entire enterprise profitable.

Today, slap it up on YouTube or upload it to your MySpace account.

And here’s the real problem with these distributors. They’ve built their entire business model–their livelihoods–based on a whole bunch of assumptions: it’s nearly impossible (or at least very difficult) to time-shift broadcast material. You either watch it live, or you don’t watch it at all. You either buy a large disk from a local store, or you hear it live on the radio. You buy the bound material from a local store, or you read it at the library (at great inconvenience). And that’s it.

It’s not that they wanted to be inconvenient: it’s just that their business model was built on the fact that there wasn’t a more convenient way to watch television, listen to music, or read a book.

What’s most interesting to me about television is that its great success came from the fact that television is a more convenient way to watch material than a movie theater, which was (prior to the invention of the television set) the only way you could watch newscasts and movies and mini-plays. (And prior to the invention of the motion picture, the only way you could watch a mini-play was live, waiting for a troupe of actors to come to town.)

So Vivendi’s television arm owes its existence to the fact that it is more convenient to watch television than to drive down to the off-Broadway district in your local community.

Here’s an irony for everyone involved. The Internet is more convenient.

It’s more convenient to download a television show and watch it than it is to watch it live on television: I can watch the show when I want to, and not when I’m supposed to. It’s more convenient to download an MP3 track and transfer it to whatever device I want–and have thousands of tracks on that player–than it is to lug around a whole stack of CDs (and hope I don’t scratch one). It’s more convenient to download my technical books onto a slim device like the Sony eReader or the Amazon Kindle than it is to carry around tens of pounds of books.

And the old players are all fighting this.

And worse: they’re fighting companies like Apple who see that the point of being able to download music or TV content is not one of digital distribution (or digital promotion)–it’s one of convenience. The Apple iTunes Music Store and the iPod and the iPhone were not purchased because Apple spread some strange pixie dust or Jobs distortion field into each device: they succeeded because they are pleasantly designed devices which are easy for the average person to use. They deliver on that convenience.

And NBC wants to route people to the Amazon Unbox system which is a pain in the ass to navigate and use–because they’re afraid of Apple. Because they don’t understand why Apple succeeded in music and think that whatever it is, it has to be stopped.

By failing to understand this, Vivendi has essentially declared war on convenience.

The reason why people download content from Bittorrent is not because they want to steal material. Often the cost of setting up a Bittorrent client to download content is higher than simply giving Apple a buck to download a song–though the people using Bittorrent have realized that setup and use is a cost that they’re trying to lower. No; they’re doing it because it is more convenient than the established distribution channels.

I’m excited about the Amazon Kindle because it’s convenient: I can download and buy my books anywhere. I can keep hundreds of books on me at any time. And what makes that nice is not that I’m a voracious reader of fiction: I want a way to store all of my technical books that is more convenient than devoting a whole bookshelf. I want a reference library which can be accessed easily without having to find a book on a whole row of bookshelves in the other room. I want to carry my collection of technical books with me without having to use a wheelbarrow to carry them all.

It was the promise of the Sony eReader–but that failed because Sony couldn’t make the deals with publishers that Amazon may be able to make. (Sony’s selection of technical books sucks hard–though Sony has the advantage of being able to read PDF files, which means I can convert technically-minded web sites to something I can carry with me–though that advantage may be negated by the iPhone, which can simply go to those web sites.)

I’m going to take a wait-and-see attitude–but hopefully Amazon will do for books what Apple did for music (and what the television producers are fighting like cats and dogs): getting all that content into a form factor that is more convenient than hundreds of pounds of dead tree matter.

Cheap Two Factor Authentication

Security authentication relies upon three factors: what you know, what you are, and what you have.

What you know: the canonical example of this is a password. It’s something you know: something you’ve memorized and, when asked, you can repeat it. This is a PIN number on an ATM card, or the answer to things like “your mother’s maiden name”.

What you are: this is some physical attribute about yourself. Your fingerprint, your eye color or the pattern of veins at the back of your eye–or the relative length of the fingers on your hand: these are all attributes which are things that you are.

What you have: this is a physical object that is in your possession. The perfect example of this is the key to your house: it is a physical object that allows you to get into your home.

The idea of two-factor authentication is simple: by having two of the three items above, you can then prove that you’re ‘you’ so you can get access to your account, your money, or your property. A perfect example of two-factor authentication is using your ATM card to withdraw money: you cannot withdraw money unless you can show you have something (the card) and you know something (the PIN number). The strength of two-factor authentication relies upon the relative strength of each factor of authentication: in a sense, the overall strength of two-factor authentication is the strength of the first times the strength of the second.

Which is why ATMs are secure even though their passwords (4 to 6 digit numbers) are so bloody weak: even though the password itself is extremely weak, you also need to have something in order to withdraw money. Having something times knowing something is stronger than just knowing something by itself–even if the thing you know can be easily guessed.

This also illustrates the danger of some types of two-factor authentication: they can easily collapse into one-factor authentication (thus making it extremely easy to steal your money) through a simple act. In the case of an ATM card, two-factor authentication becomes one-factor authentication if you write your PIN number on your ATM card. Now anyone who has your ATM card can withdraw money–and they don’t have to know your PIN, they can just read it off your card.

Another example of two-factor authentication which collapses into one-factor authentication would be a pre-printed card with a random number: you can memorize the random number on the card–essentially turning your ‘two-factor’ authentication into a memorized one-factor authentication. Not that this is bad: generally longer passwords are more secure than shorter passwords, and banks are missing a bet when they limit ATM passwords to 4 to 8 numbers. Even so, this really is no longer two-factor authentication–which is why there are devices out there (such as key fobs) which randomly generate a number on a synchronized clock: the number constantly changes in a seemingly random way, forcing you to have the device in your possession so you can enter the randomly generated number.
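A toy sketch of the clock-synchronized idea (this is not a real OTP algorithm such as an actual fob would use; the hash and constants are illustrative only): both the fob and the server derive the code from a shared secret plus the current 30-second window, so only someone holding the fob knows the current value.

```cpp
#include <ctime>

// Toy sketch of a clock-synchronized key fob. The fob and the
// server share a secret; both derive a six-digit code from the
// secret plus the current 30-second window, so the code changes
// every 30 seconds and agrees on both ends.
static int FobCode(unsigned secret, time_t now)
{
	unsigned window = (unsigned)(now / 30);    // 30-second window
	unsigned h = secret ^ window;

	// A simple integer hash to scramble the bits
	// (illustrative only; a real fob uses a keyed
	// cryptographic hash).
	h = (h ^ (h >> 16)) * 0x45d9f3b;
	h = (h ^ (h >> 16)) * 0x45d9f3b;
	h = h ^ (h >> 16);

	return (int)(h % 1000000);                 // six digits
}
```

The server computes the same function with its copy of the secret and compares: a matching code proves possession of the device, since the value is useless 30 seconds later.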

Banks have been required for the past year to come up with two-factor authentication for on-line accounts–and they have failed dismally at this, as illustrated here: Wish-It-Was Two-Factor, where banks essentially require you to pick three passwords rather than one: a real password, a ‘question’ and an ‘answer’. It’s not really two-factor authentication: it’s simply a much longer (and harder to remember) password which frustrates people. And again, while having a longer password is generally more secure, it’s not two-factor authentication.

It struck me that there is one cheap way banks can create two-factor authentication from something you know and something you have. It’s easy, really: when you open an account with the bank, they send you a piece of paper with a table of fifty random numbers or words or phrases, all numbered on the page. So, for example, on that page you’d see:

1: 148191     2: 381912    3: 108910

and so forth.

When you’re asked to log in, the login dialog box then asks for three things: your username, your password, and the number that is in cell N on your access page.

By making this list long enough, you virtually have to have possession of the paper to answer the challenge. And it’s relatively easy for the legitimate user to find the number on the page–easier than remembering the answer to the question “what is your third-favorite existential author” or some other nonsense. Which makes it honest-to-goodness two-factor authentication–and the cost to administer this is no greater than the cost of printing an additional page to send to the user.
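The bank-side bookkeeping for this scheme is about as simple as it sounds. Here's a sketch using the fifty-cell, six-digit layout from the example (the class and method names are my own illustration):

```cpp
#include <cstdlib>
#include <vector>

// Sketch of the paper-table scheme: fifty numbered cells of
// random six-digit numbers, generated at account creation and
// mailed to the user, with the server keeping a copy.
class AccessTable
{
	public:
		// At account creation: fill cells 1..50 with random
		// six-digit numbers; the printed page goes to the user.
		AccessTable()
		{
			for (int i = 0; i < 50; ++i)
				fCells.push_back(100000 + rand() % 900000);
		}

		// At login: the server picks a cell to challenge...
		int PickChallenge() const
		{
			return 1 + rand() % (int)fCells.size();
		}

		// ...and checks the number the user reads off the paper.
		// Possession of the page is the second factor; username
		// and password remain the "what you know" factor.
		bool Verify(int cell, int answer) const
		{
			return fCells[cell - 1] == answer;
		}

		// What the printed page shows for a given cell.
		int CellValue(int cell) const { return fCells[cell - 1]; }

	private:
		std::vector<int> fCells;
};
```

The server stores the table alongside the password hash; each login asks for the username, the password, and the value in one randomly chosen cell.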

If you use the idea, you don’t have to credit me.

I think the wave of the future…

I’ve been giving some thought as to what I think the wave of the future will be, in part because if I were to come up with a good idea for a product, I’d contemplate leaving here in a heartbeat and go build whatever it is I think will be the wave of the future.

And I think I know what that is.

As an aside, if you’re going to build the next great product–the next Google or the next Apple–the best place to start is with “I want.” As in “I want a great camera with a built-in GPS so I know where I took my pictures.” Don’t build a product because you think someone else will buy it–build something that you want to buy yourself. Unless you’re completely weird, chances are someone else will also want to buy what you want to buy.

And I know what I want.

I want easy to use consumer electronics. I want an answering machine that doesn’t require a friggin’ Ph.D. to use. I want a digital recorder that doesn’t make me feel stupid. I want a way to get e-mail without feeling dumber than a pile of rocks.

Actually, what I want, honestly speaking, is stuff which is designed around ease of use.

I think, by the way, that this has been Google’s secret sauce. Say you’re looking for widgets. Go to Yahoo! (disclaimer: I work at Yahoo) then go to Google, and tell me which is easier for your grandmother to use.

Yahoo!: a window full of crap; hard to identify which of a dozen boxes I should type into to find something.

Google: a single edit form. Type ‘widgets’ and go.

It’s Apple’s secret sauce: build easy to use (and I mean easy to use for your grandmother, not easy to use for your hacker friends who think “easy” is writing specialized JavaScript to a web site’s back door) software in beautiful looking hardware.

And that’s what I want as well: stuff around my house that my grandmother could use if she were still alive. Not stuff full of a thousand flashing lights which to a demented geek says “cool high tech” but to someone who is older, intimidates her back to her bedroom under the comforting warmth of a comforter while she wishes for the good ol’ days when things didn’t flicker, flash, and beep at her like demented little monsters with trapped fireflies for eyes.

Our components and gadgets are designed for people with attention deficit disorder, and they don’t have to be.

Today’s WTF? moment is brought to you by Yahoo!

Today’s WTF? moment, found in some code I’m trying to sort out: (specific actual implementation hidden to protect the clueless)

private static void method(ItemInterface item)
{
    ItemImplementation impl;
    if (item instanceof ItemImplementation) {
        impl = (ItemImplementation)item;
    } else {
        return;
    }
    // ... some action on impl ...
}

Here’s the thing. An interface provides an abstraction to hide the implementation, so you can swap implementations around without anyone being the wiser.

If you are “cracking the interface” by casting an interface reference to the underlying implementation, then why the fuck do you have an interface in the first place?

All it means is that you break the possibility of using an interface for what it is supposed to be used for, while preserving the illusion that you have some competence as a programmer.

Feh!

[Update: The code I was referring to was the client of a framework, where the client was casting the interface to the framework’s underlying implementation.]