Is the iPad the end of the WIMP interface? No.

With the announcement of Apple’s iPad, a recurring theme among both its supporters and its detractors is the fact that the iPad runs the iPhone OS, a task-centric (and non-windowing) operating system. For example:

I need to talk to you about computers. …

After defining the “old world” computing experience (“In the Old World, computers are general purpose, do-it-all machines”) and the “new world” (“In the New World, computers are task-centric. We are reading email, browsing the web, playing a game, but not all at once.”), we find:

Apple is calling the iPad a “third category” between phones and laptops. I am increasingly convinced that this is just to make it palatable to you while everything shifts to New World ideology over the next 10-20 years.

Those who are complaining about the iPad have primarily complained about the lack of multitasking:

What We Didn’t Get: Multitasking, Notifications, …
Understanding Multi-tasking on the iPad: What is it really?
iPad for Business? Not Without Multitasking

Each of these (and I picked the more polite ones) points out the iPad’s lack of multitasking–which the second article correctly notes is really two things: the ability to run two applications at once, and the ability to see two applications at once.

It is entirely conceivable that we could have the second type of multitasking without the first: imagine an operating system where a process is only marked as active (and swapped in from storage) when its window has the focus.
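That hypothetical can be sketched in a few lines (a toy model, not any real OS scheduler): every window stays on screen, but only the process behind the focused window is allowed to run.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of "see many windows, run only one": switching focus
// suspends the previously focused process and resumes the new one.
class AppProcess {
    final String name;
    boolean running = false;
    AppProcess(String name) { this.name = name; }
    void resume()  { running = true;  } // a real OS would swap in and schedule
    void suspend() { running = false; } // a real OS would freeze and swap out
}

class WindowManager {
    private final Map<String, AppProcess> windows = new LinkedHashMap<>();
    private AppProcess focused;

    void open(String title) { windows.put(title, new AppProcess(title)); }

    // Giving a window focus resumes its process and suspends the old one,
    // so all windows remain visible but only one is ever executing.
    void focus(String title) {
        if (focused != null) focused.suspend();
        focused = windows.get(title);
        focused.resume();
    }

    boolean isRunning(String title) { return windows.get(title).running; }
}
```

Under this model the user sees a windowing interface, while the system pays the cost of only one running application at a time.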

In both cases, it really boils down to the WIMP (Windows, Icons, Menus, Pointer) interface: the supporters are looking for the “next new hotness” beyond the WIMP interface, one which relieves them of the task of managing multiple windows on their desktop. The detractors dislike the fact that showing a single window at a time prevents us from (for example) browsing the web while writing in a text editor–which forecloses on the possibility of taking notes in an outliner based on research material in a web browser. My common workflow as a developer is to have multiple windows open: one on a documentation site for an API, another on the code I’m writing, and several others on classes I may be using while writing my code.

But I don’t think either camp is correct.

It is clear that the small size of the iPhone’s screen makes the second mode of multitasking, where windows from multiple applications coexist on screen at the same time, impossible. The iPhone’s design decision was to have only one application running at a time–but each application is required to save its complete state as it shuts down, as fast as possible, which (when done right) preserves the illusion of multitasking.
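The save-and-restore trick can be sketched as follows (a toy model in plain Java; Apple’s actual APIs work differently): the application persists its complete state when told to quit, then reloads it on the next launch, so it appears to have been running the whole time.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Toy sketch (not Apple's actual API): write complete state to storage
// on quit, restore it on launch, preserving the illusion of multitasking.
class StatefulApp {
    private final Path stateFile;
    private final Properties state = new Properties();

    StatefulApp(Path stateFile) {
        this.stateFile = stateFile;
        if (Files.exists(stateFile)) {
            try (InputStream in = Files.newInputStream(stateFile)) {
                state.load(in); // pick up exactly where the user left off
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }

    void set(String key, String value) { state.setProperty(key, value); }
    String get(String key) { return state.getProperty(key, ""); }

    // Called when the OS asks us to quit: must complete as fast as possible.
    void shutDown() {
        try (OutputStream out = Files.newOutputStream(stateFile)) {
            state.store(out, "saved app state");
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```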

A desktop computer, on the other hand, has much greater resources–including screen real estate. It would make absolutely no sense to have one task at a time running in full-screen mode on my 30″ display–and there have been plenty of people who have built prototype window managers for Linux with a “one window visible at a time” model, all of which have utterly failed. If a one-task-at-a-time model were superior to a windowing model for desktop computers, we would have had that model long ago, given that people have been experimenting with it since the 1980s.

The iPad is clearly between the two in size and resources. It could (with its 1024×768 display) easily display a windowing interface. But the decision to do away with a WIMP interface probably has less to do with the death of windowing interfaces (as folks like John Gruber apparently believe), and more to do with product positioning: the iPad is a peripheral device, auxiliary to your main computer, which requires your computer to function. As such, it makes sense to position the iPad as a one-task-at-a-time device like the iPhone (despite the large display) and instead encourage developers to use the large display to provide a richer one-application-at-a-time experience.

But just because the iPad is a peripheral device doesn’t mean your desktop computer is next in line for a one-application-at-a-time experience.

It’s clear from Apple’s design sensibilities that a main computer (your laptop, your desktop) is a device capable of doing several things at once. And a peripheral computing device (the iPhone, iPod Touch, iPad, and iPod) is a one-task-at-a-time device which requires your main computer, but which can be carried along separately after having been docked with it. (I count the iPod here because I believe it started the trend within Apple of building peripheral devices that do only one thing at a time.)

And it makes complete sense to me that Apple would do this.

If your screen is not big enough to display multiple windows at a time, then why build the infrastructure to support multiple windows? (I’m looking at you, Windows Mobile.)

And with the exception of two categories of applications (those which play music in the background, like Pandora, and those which periodically poll services, like a mail or IM application), why even support multiple applications at once? Do we really need (as we have in Android) your game staying resident in memory when you’ve swapped tasks to read your e-mail? (After all, the fact that applications stay resident in memory on Android means your process memory space is limited to 16 megabytes of RAM–as opposed to the 100+ megabytes you get on the iPhone.)

A peripheral computing device which does not support multitasking can also be made with less RAM, less CPU power–since it only needs to do one thing at a time. Because it is peripheral and only supports one thing at a time, applications can be built which take full control of the display without worrying about other applications. And the UI can be streamlined and use a completely different model than the WIMP environment.

If you were expecting the power of a tablet computer (à la a Lenovo ThinkPad running Windows 7) because you see a tablet computer as a main desktop computer that uses a pen or your finger, and not as a peripheral device–then buy a tablet computer. Lenovo makes very nice laptop computers.

But I’m glad Apple decided to make the iPad a peripheral device: I think it was the right decision to release a new type of device rather than releasing a small notebook computer without a keyboard.

Because that’s what a tablet computer (à la Lenovo) is: a laptop computer either without a keyboard or with a keyboard that can be tucked away.

Why I don’t like SmartGWT.

I spent the day playing with SmartGWT, and came to the conclusion that I’m not particularly a fan.

Don’t get me wrong: the widgets it produces are sexy as hell. And if the development work you’re doing fits into SmartGWT’s client/server model and you are willing to use SmartGWT’s server-side libraries to handle network communications, it’s probably the best way to go.

But ours doesn’t. And when our model of obtaining data from the server doesn’t fit their model, it’s an uphill fight–much harder, for example, than just building our own custom widget set. (I have the advantage over most in that building custom widgets in GWT is pretty straightforward for me at this point, so your cost/benefit ratio may vary.)

The fundamental problem I have with SmartGWT is that it is just a thin Java wrapper over the SmartClient system, which is written entirely in JavaScript. While SmartClient is probably a great piece of software for JavaScript developers, wrapped for GWT it’s a royal pain in the ass to use. Rather than giving you a rich library of various controls (and perhaps a class hierarchy of grids, each of which provides some specialized function or override of the basic component), it provides a very small set of highly customizable controls.

And that is where I have problems. If I want a dialog, I don’t open a DialogWindow; I open a Window and set three or four different settings to make the Window look like a dialog. If I want a dynamically updating table that takes its information from a remote server data source, I don’t create an instance of a dynamic table class and provide the right interface; instead, I create a Grid and set about a half-dozen settings which turn the Grid into a dynamic grid.
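To make the stylistic complaint concrete, here is a sketch of the two API styles. The class and field names below are made up for illustration; they are not SmartGWT’s actual API.

```java
// Configuration style: one general-purpose widget plus many settings.
class Window {
    String title = "";
    boolean modal = false;
    boolean autoCenter = false;
    boolean showMinimizeButton = true;
}

class Dialogs {
    // A "dialog" is just a Window with the right combination of flags set.
    static Window configureAsDialog(String title) {
        Window w = new Window();
        w.title = title;
        w.modal = true;
        w.autoCenter = true;
        w.showMinimizeButton = false;
        return w;
    }
}

// Hierarchy style: the intent lives in the type itself, and the reader
// knows what this widget is without decoding a pile of flags.
class DialogWindow extends Window {
    DialogWindow(String title) {
        this.title = title;
        modal = true;
        autoCenter = true;
        showMinimizeButton = false;
    }
}
```

Both produce the same widget; the difference is whether the intent is encoded in a type name or in a pile of settings the reader must decode.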

Now of course this complaint is stylistic: SmartGWT’s sample app makes it pretty clear how to create a dialog or a dynamic grid–and after a day I had a fairly complete screen with a tree navigation widget, a table, and a modal search window with a dynamically updating table.

But it’s beyond just being stylistic: the DataSource in SmartGWT is not an interface with a half-dozen different implementations, but a singular object which is configured (through some JavaScript mechanism I don’t grok, because the whole point of this exercise is to insulate myself from JavaScript) to speak JSON, XML-RPC, or whatever. So creating a new DataSource to feed a dynamic table is not just a matter of creating an instance of an interface and filling in the blanks.

In fact, without diving into the underlying JavaScript, it seems impossible to create a new interface to speak the particular flavor of JSON we’re using. (And while SmartGWT does provide a mechanism to speak JSON, it also defines the specific JSON requests and responses, which doesn’t work for us.)
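Here, roughly, is what I mean by an interface-based DataSource. The names below are hypothetical, not SmartGWT’s actual classes; the point is that a new wire format would be just one more implementation rather than a fight with the framework.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical interface-style DataSource: small contract, many
// implementations, one per wire format.
interface DataSource {
    List<Map<String, String>> fetch(String query);
}

// Speaking our particular flavor of JSON becomes filling in the blanks.
class CustomJsonDataSource implements DataSource {
    @Override
    public List<Map<String, String>> fetch(String query) {
        // A real client would issue an HTTP request here and parse our own
        // JSON envelope; canned data keeps this sketch self-contained.
        Map<String, String> row = new HashMap<>();
        row.put("name", query);
        return Collections.singletonList(row);
    }
}
```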

It is this inflexibility, created because they’ve simply wrapped an existing JavaScript framework with a thin layer of GWT Java stuff, which makes SmartGWT a pain in the ass for me to use–and it’s why I don’t like SmartGWT.

Why I like GWT.

I’ve built web sites using PHP and JSP, with ColdFusion, and now with GWT.

One of the biggest problems I’ve run into with PHP, JSP, and ColdFusion is that when you obtain information from a user, you commonly wind up flowing your logic through multiple pages. For example, with a shopping cart, checkout flows from a page with a form (and a JSP or PHP landing page which processes the results) to a second page with a form, which flows (via another landing page) to yet another page with a form, and so forth. The business logic winds up scattered across multiple HTML pages. We excuse dividing one operation across four separate screens by inventing all sorts of “MVC”-like architectures that attempt to justify it–but we’re still dividing one operation across four separate screens.

What I like about GWT (and AJAX in general, I suppose–though I’m a Java guy, not a JavaScript guy) is that I can now put all of this logic on one page. If I need to make an intermediate transaction to the server to validate some element of the form–so be it; it’s not that big a deal to do a quick AJAX request and response.

Some people may claim that this is still not as easy as having an application with everything on one box–after all, if you do have to make an intermediate round trip, you wind up writing something like doMyRequestWithResponse(new ResponseInterface() {…}), with the response coming back some time later. But I find that sort of event-driven programming far easier than the “event-driven” page-flow style offered by the earlier frameworks.
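That round-trip pattern looks roughly like this. The sketch below is plain Java standing in for GWT: the AsyncCallback interface mirrors the shape of GWT’s, but it is not the real GWT class, and no actual network request is made.

```java
// Plain-Java stand-in for the GWT-style async round trip described above.
interface AsyncCallback<T> {
    void onSuccess(T result);
    void onFailure(Throwable caught);
}

class ValidationService {
    // In GWT this call would go over the wire; the page keeps running,
    // and the callback fires whenever the response arrives.
    void validateZipCode(String zip, AsyncCallback<Boolean> callback) {
        // Validation logic that would normally live on the server.
        callback.onSuccess(zip.matches("\\d{5}"));
    }
}
```

The form-validation logic stays on the same page that drew the form, instead of being split across a landing page that processes the previous screen.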

Sure, it’s a style thing. But I’m far faster throwing together a bunch of widgets in GWT (and debugging them on the fly in Eclipse–cool!) than I am trying to lay out a bunch of HTML for a flow of forms.

In other words, client-server is far easier than server-driven against a dumb (HTML-driven) display engine.

The similarities between team architect and college professor.

I just spent the morning putting together a syllabus document describing the design parameters of the user interface I’m working on, complete with references to a number of research articles describing different aspects of visual design, usability, and the importance of speed.

In the next few weeks I have members of a team I’m building who will be brought up to speed on what we’re doing.

The only thing that is missing is classes–and I’m sure that if I offered to teach them (one every few weeks on different topics in usability), my manager would be elated.

Things that perplex me.

So the people I’ve worked with hate Eclipse because Eclipse insists upon imposing (by default) a standard project structure on a Java project.

It gets in the way and prohibits the flexibility necessary to get work done.

And they love Maven because it imposes a standard project structure on a Java project, in order (to quote the Maven web site) “to allow a developer to comprehend the complete state of a development effort in the shortest period of time.”

What the hell?

Big Company/Small Company

I’ve worked for a variety of corporations at a variety of points in my career. I’ve encountered people who claim large companies are soulless mind-sucking entities, and small startups are the way to go. I’ve also encountered people who claim small startups are underpaid stressful environments, and the security and camaraderie of a large company is the way to go.

Both suck.

Well, let me qualify that. In both environments you have to deal with people. And people can be bureaucratic, officious, mean-spirited and obnoxious, as well as obtuse, bull-headed, and just plain mean–regardless of the environment in which you work.

And people can also be gracious, kind, polite, helpful, intelligent, and otherwise great to work with–again, regardless of the environment in which you work.

I’ve noticed two differences between a small company and a large company. Small companies mean more risk taking–but with the potential of a massive upside. On the other hand, there is nothing inherent in a small company being “small”–meaning agile, non-bureaucratic, or capable of quickly recognizing and moving on decisions. All it takes is an idiot who is willing to do stupid things to break one’s day.

One could argue that idiots are more rife in large corporations–after all, the other difference between a small company and a large company is that a large company provides security; or rather, large companies have a greater vested interest in keeping people who understand how things work there, while small companies have a vested interest in getting rid of bad people.

Or rather, a small company may think it has a vested interest in getting rid of bad people–but often small companies lack the experience, or have people at the top who are unable, to recognize that vested interest. We may hear about the successful small companies that go on to make millions for their founders–but how many do we never hear of, the ones which fail because of bad decisions or bad execution caused by bad people who lingered on?

(I’m not speaking of any company in particular, by the way–I’ve worked for several very large companies and very small startups. And it’s always the same.)

In both environments, however, there are really cool projects and really crappy projects. Small companies tend to generate excitement: “hey! we’re on the cutting edge!!!”–but crappy projects are crappy projects. Some large companies like Google have also managed to capture that excitement–but what Google doesn’t tell you is that they’re like the military: unless you are extremely exceptional, you’re going to be fodder for the Google Advertising Serving System. Sure, Google may lure you in with “Go” and “GWT” and cutting edge algorithm research–but chances are you’re going to be working on how to optimize advertising keywords and making billing systems more scalable.

Okay, so I’m working on an advertising system. I’m working on Lead Generation–which, from the outside, is about as exciting as watching paint dry and grass grow. But there are a few cool things going on here: (1) we’re building a system from the ground up, and (2) we have a chance to make a real difference within the organization through excellent customer-oriented design. What I want (and what I intend to fight for) is an Apple-like, design-centric approach. Sure, it’s an ad system, but that doesn’t mean it has to be a poorly designed ad system.

Live in the Glendale/Burbank/Pasadena area and want to play with GWT? Got Java ski11z? Want to make a difference by providing an ad system that your corner drug store owner or restaurant owner can actually use? E-mail me at my AT&T Interactive e-mail address.

The View from AT&T Interactive.

Pictures from the new job.

I’m actually quite happy right now: sensible people who have solid experience putting together an advertising system that is well understood, with plenty of time to do it in. And while it won’t have the same “cachet” as working for a Google, the biggest win of working for AT&T Interactive is the ability to challenge Google by connecting customers with local businesses and making it as easy as possible for advertisers to get the word out.

I have always strongly believed that if you want to conquer the world, make difficult things easy. That’s why the iPhone is taking over the world: Apple no more invented smart mobile phones than they did the MP3 player. But Apple made it easy, and when you consider the millions of cumulative hours spent in frustration by users who wanted something cool and got hacker crap, it’s easy to understand why people were willing to pay hundreds of dollars for an Apple iPhone when they weren’t willing to spend $50 on a Windows Mobile device.

And that’s my goal at AT&T Interactive: make it as easy as humanly possible for advertisers to reach customers. A large part of this involves designing the system so that it makes sense, and it gives people the power to run the ads they want while giving the restaurant owner, the local carpet cleaner and the clothing store owner the easiest tools possible.

You shouldn’t have to be a gear-head geek to use a cell phone, and you shouldn’t have to be a graduate of MIT (or as bright as a Googler) to place local ads.

Whether Mobile.

So the decision to accept a new job at AT&T Interactive as an Architect of Lead Generation means I’m moving away from mobile after three years of doing mobile development or mobile-related development. It doesn’t mean I’m giving up writing software for the iPhone or for Android or Windows Mobile: I have an idea I’m tinkering with in my spare time.

But it does mean I won’t be doing it professionally.

The decision to move away from mobile development when it is glowing red hot is a deliberate one. I believe mobile development is currently a bubble: I do not believe–with perhaps some unforeseen exception–that some mobile startup will grow to become the next Google, Microsoft, or Amazon. At best I believe you can make a nice living at it–but you’ll never reach a billion-dollar market cap, unless we have a replay of the dot-com bubble of the late 1990s.

The rush into mobile has been driven by the observation that the iPhone has finally created an environment that is relatively “open” (in the sense that the development tools are free, not the thousands of dollars Microsoft and Nokia used to charge–buying a basic Mac Mini capable of running Xcode is still cheaper than Visual Studio Professional for Windows Mobile) and in which shipping an application is relatively “easy” (in the sense that it only costs $99 a year to obtain a signing key, unlike Microsoft and Nokia, who used to charge more). Apple has also created a mobile computing environment that is powerful enough to run sizable mobile applications, with an always-on Internet connection which makes creating internet-enabled mobile applications a no-brainer. Say what you will about Android and the improvements in Windows Mobile 6.5 and Nokia–but Apple showed the way.

And in the process Apple has made a ton of money.

The belief is that by extension there is a ton of money to be made in mobile: if Apple made a lot of money, so can other software developers. But what they don’t realize is that Apple is making money hand over fist by creating an extremely well-designed product (one that, at the time, was a revolution in design) which disrupted the traditional relationship between MNOs (Mobile Network Operators) and handset manufacturers. Never before had a handset manufacturer demanded as much control as Apple did–Apple even went so far as to control the pricing structure, the options, and the initial sign-up relationship with AT&T’s customers.

Riding Apple’s coattails may make you rich–but it probably won’t. And thinking that Apple succeeded because it just happened to be the right company in the right place at the right time denies Apple’s contributions to the state of the art: nothing prevented Nokia or Microsoft or Sony or Motorola from doing what Apple did and starting the revolution without Apple.

Which means the analysts claiming the Mobile Revolution is finally here–and Apple just happened by coincidence to stumble, like some drunken idiot, onto the center stage just as the elevator started to rise–those analysts are complete fucking idiots. It’s not a Mobile Revolution–though it is a disruption in Mobile. No, it’s a consumer design revolution. Other companies still haven’t figured that out.

Further, and more importantly, I believe development in mobile is–career-wise–a dead end. Basically, any mobile company doing repeat and sustained business will take one of two shapes: a company which repeatedly puts out new products (a game company), or a company providing mobile-based information. The former will probably consist of a handful of two- or three-man teams. The latter, at best, will consist of a two- to three-person client team and a two- to three-dozen-person server team.

The smallness of the teams comes from the reality that there is just so much code that you can put on a mobile device. The mobile software I worked on for Geodelic, for example, was perhaps 15,000 lines total–the server side, on the other hand, winds up being megabytes and megabytes of stuff in the source control worked on by a dozen or so great engineers. And ours was a “fat” client: I suspect programs like Where or the MasterCard Priceless Picks clients weigh in at under 8,000 lines total. Even something relatively complicated as the YPmobile app can’t weigh in at more than 12,000 to 15,000 lines.

Mobile information companies like Where are like icebergs: the 10% that is visible is kept afloat by the 90% you don’t see. Companies as ambitious as Geodelic–which are seeking to integrate all sorts of stuff into their data stream–are even more so: I suspect once Geodelic has built out the system they are seeking to build, by line count the Geodelic Mobile Client will probably be less than 0.25% of the total lines of code that comprise the overall system.

Such companies are extremely ambitious, and perhaps they may grow–not to Google size, but at least to a respectable middle-sized company. But that means that while the server team may serve to occupy two or three floors of a major office complex, the client side of the company will still comfortably fit into a small 8′ by 10′ corner office in that complex.

I love doing mobile work. I intend to keep doing mobile work in my spare time. But my ambitions are greater than being the lead guy of a three person team when I’m in my 50’s. And I just don’t see that happening in mobile.

A change of scenery.

I was just offered the position “Architect: Lead Generation” at AT&T Interactive.

Changing jobs is hard. Really, when you change jobs, you’re taking a real risk: can you do the work you’re being hired for? Can you come up to speed fast enough and understand the problem set well enough to do the work in a timely fashion? Can you adapt to the new corporate culture–or, if you’re high enough up on the totem pole, create a micro-culture around you that you can live with? With larger companies–like a Symantec, a Sony, or a Yahoo!–can you navigate and understand the corporate bureaucracy? And–worst of all–is the project you’re being placed on one that is favored by upper management, or are you being set up to fail because of an internal bureaucratic struggle at a level beyond what you can influence?

With small companies, of course, there are different problems: do the people there have a sufficient skill set to execute? Can you grow the staff fast enough to satisfy the needs of sales and marketing? Are your initial design assumptions correct, or did you just build something that is five degrees off from what the market needed? And of course you still have dysfunction; though the dysfunction often looks different, it comes from the same source: people are imperfect, and any organization will inherently have internal competition among its members as people feel the need (due to greed or ego) to assert themselves.

But the change is always difficult: vacation pay is paid out and you have no accumulated vacation at the new job–meaning that two-week vacation you wanted to take is now a year away. There is plenty of time spent navigating the new environment–oh, and the constant “well, of course everyone knows” stuff which bites you in the ass.

For me the decision to change jobs has always been predicated upon a personal evaluation: is the potential reward of a new job greater than the lost opportunity costs of the current job and the risks of a transition? My transition from Symantec to Yahoo! was driven by two factors: a very long commute, and not working on a project that interested me–together they made the opportunity costs of walking away from Symantec relatively low. My departure from Yahoo! was driven by a firm belief in a dysfunctional management structure which (I believed) set up our project for failure–a dysfunction that appeared to me to come from the executive suite. (From what I’ve seen of the recent quarterly results, it appears that dysfunction may be over. But once burnt, twice shy.)

And for me, as someone who appears to be perpetually pegged as a sole contributor, the ability to move into an architectural role at a large company, where I would have managerial and design responsibilities, greatly advances my own career path–that’s a big deal. That’s a big upside.

Yes, there is always a lost opportunity cost: leaving Geodelic means I’m giving up working on a product I know like the back of my hand, where all of the major problems have been solved, and trading it for The Great Unknown. I’m also walking away from a potentially extremely lucrative equity stake.

But in the end, I believe moving to AT&T Interactive will put my career farther forward in the direction I want to go–a leadership position–than if I stayed at Geodelic. I’ll have the opportunity to provide significant influence to the front-end interfaces people will use when they interact with our product, and I’ll have the opportunity to build a team and establish a culture which promotes design excellence–which, I believe, is the “next big thing” in the tech industry.