Does someone have some clarification on this?

New iPhone Developer Agreement Bans the Use of Adobe’s Flash-to-iPhone Compiler

Damn.

I am this close, and I really mean this close to having a working cross compiler system that can parse Java class files and convert them to Objective C. The goal is to allow me to write a computational module in Java, then recompile it into Objective C. (I have a working iPhone App that takes advantage of this, and I think it’s mostly bug free–it’s just a matter of finding time to package the system up and upload it for folks.)

My goal was never to write a Java layer to the iPhone API or build a framework to replace the API. All I wanted was to write a symbolic math module in Java and use that module on the iPhone.

But between the CoverFlow Design Patent which may require me to take down the FlowCover source kit, and the changes which require all iPhone code to be written in Objective C, C++ and C, I’m not sure if I’m a happy camper.

Does anyone know if I’ll be allowed to use my J2oc library to recompile non-UI libraries to run on the iPhone?

If you’re a head hunter, read this.

Yes, I do mobile development. But right now I’m working for ATTi in Glendale as a software architect–and I’m assembling a team.

What this means for head hunters:

(1) Unless you’re offering six figures and the first digit is a ‘2’ or greater–please don’t bother. It’s not that I’m making that much money at ATTi, it’s that this is probably the amount it will take to convince me to go somewhere else–especially given the fact that I’m a bike ride from work. (Never underestimate the value of a very short commute to a job you love!)

(2) Don’t expect me to send you any references. We’re hiring, and I’d be stupid to forward the good resumes to you, since we need as many good resumes as we can get.

(3) Even if you have the temerity to contact me, trying to entice me to jump ship with the promise of a 20% pay cut, and expect me to start forwarding your resumes (that is, expecting me to do your job for you), at least don’t do it using some sort of automated mailer which–when I politely tell you to go away–then forwards me a form letter asking for my resume for processing purposes.

If you’re going to get 10% or 20% of my salary as a bonus for placing me at another company, then do the work necessary to actually earn that bonus. There is nothing in the world more annoying than a lazy and sloppy head hunter…

Is the iPad the end of the WIMP interface? No.

With the announcement of Apple’s iPad, a recurring theme among both its supporters and its detractors is the fact that the iPad uses the iPhone OS, which is a task-centric (and non-windowing) operating system. For example:

I need to talk to you about computers. …

After defining the “old world” computing experience as “In the Old World, computers are general purpose, do-it-all machines” and the “new world” as “In the New World, computers are task-centric. We are reading email, browsing the web, playing a game, but not all at once,” we find:

Apple is calling the iPad a “third category” between phones and laptops. I am increasingly convinced that this is just to make it palatable to you while everything shifts to New World ideology over the next 10-20 years.

Those who are complaining about the iPad have primarily complained about the lack of multitasking:

What We Didn’t Get: Multitasking, Notifications, …
Understanding Multi-tasking on the iPad: What is it really?
iPad for Business? Not Without Multitasking

Each of these (and I picked the more polite ones) points out the iPad’s lack of multitasking–which the second article correctly notes is really two things: the ability to run two applications at once, and the ability to see two applications at once.

It is entirely conceivable that we could have the second type of multitasking without the first: imagine an operating system where a process is only marked as active (and swapped in from storage) when its window has the focus.

In both cases, it really boils down to the WIMP (Windows, Icons, Menus, Pointer) interface: the supporters are looking for the “next new hotness” beyond the WIMP interface, one which relieves them of the task of managing multiple windows on their desktop. The detractors don’t like the fact that showing a single window at a time prevents us from (for example) browsing the web while writing in a text editor–which forecloses on the possibility of writing notes in an outliner based on research material in a web browser. My common workflow as a developer is to have multiple windows open: one on a documentation site for an API, another on the code I’m writing, and several others on classes I may be using while writing my code.

But I don’t think either camp is correct.

It is clear that the iPhone’s small screen makes the second mode of multitasking impossible: windows from multiple applications coexisting on screen at the same time. The iPhone’s design decision was to have only one application running at a time–but each application is required to save its complete state and shut down as fast as possible, which (when done right) preserves the illusion of multitasking.

A desktop computer, on the other hand, has much greater resources–including screen real estate. It would make absolutely no sense to have one task at a time running in full screen mode on my 30″ display–and there have been plenty of people who have tried to build prototype window managers for Linux with a “one window visible at a time” model, all of which have utterly failed. If a one-task-at-a-time model were superior to a windowing model for desktop computers, we would have had that model long ago, given that people have been experimenting with it since the 1980s.

The iPad is clearly between the two in size and resources. It could (with its 1024×768 display) easily display a windowing interface. But the decision to do away with a WIMP interface probably has less to do with the death of windowing interfaces (as folks like John Gruber apparently believe), and more to do with product positioning: the iPad is a peripheral device, auxiliary to your main computer, which requires your computer to function. As such, it makes sense to position the iPad as a one-task-at-a-time device like the iPhone (despite the large display), and instead encourage developers to use the large display to provide a richer one-application-at-a-time experience.

But just because the iPad is a peripheral device doesn’t mean your desktop computer is next in line for a one-application-at-a-time experience.

It’s clear from Apple’s design sensibilities that a main computer (your laptop, your desktop) is a general-purpose device capable of doing several things at once. And a peripheral computing device (the iPhone, iPod Touch, iPad, and iPod) is a one-task-at-a-time device which requires your main computer, but which can be carried along separately after having been docked with it. (I count the iPod here because I believe it started the trend within Apple of considering peripheral devices that do only one thing at a time.)

And it makes complete sense to me that Apple would do this.

If your screen is not big enough to display multiple windows at a time, then why put the infrastructure in place to support multiple windows? (I’m looking at you, Windows Mobile.) And if your device is not big enough to display windows at all, why build the windowing infrastructure in the first place?

And with the exception of two categories of applications (those which play music in the background, like Pandora, and those which periodically poll services, like a mail or IM application), why even support multiple applications at once? Do we really need (as we have in Android) the ability to keep a game resident in memory while you’ve switched tasks to read your e-mail? (After all, the fact that an application stays resident in memory on Android means your process memory space is limited to 16 megabytes of RAM–as opposed to the 100+ megabytes you get on the iPhone.)

A peripheral computing device which does not support multitasking can also be made with less RAM, less CPU power–since it only needs to do one thing at a time. Because it is peripheral and only supports one thing at a time, applications can be built which take full control of the display without worrying about other applications. And the UI can be streamlined and use a completely different model than the WIMP environment.

If you were expecting the power of a tablet computer (à la a Lenovo ThinkPad running Windows 7) because you see a tablet computer as a main desktop computer that uses a pen or your finger, and not as a peripheral device–then buy a tablet computer. Lenovo makes very nice laptop computers.

But I’m glad Apple decided to make the iPad a peripheral device: I think it was the right decision to release a new type of device rather than releasing a small notebook computer without a keyboard.

Because that’s what a tablet computer (à la Lenovo) is: a laptop computer either without a keyboard or with a keyboard that can be tucked away.

Love-hate relationship with reflection, proxy objects and annotations in Java.

I’m finding I have a love-hate relationship with proxy objects, reflection and annotations.

For our project I need to build a module which is capable of serializing our Java objects over JSON. The specifics of our protocol are a little different from most, so I rolled our own parser. No big deal.

But then our requirements on what can be serialized changed; the server team wants to represent the objects passed to me by interface only. So I started playing with using annotations to mark the interfaces and classes that are serialized, and mark how they should be serialized.
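Roughly, the idea looks like this. (This is a minimal sketch: the @JSONObject and @JSONField annotations and the naive serializer below are hypothetical stand-ins for what we actually use.)

import java.lang.annotation.*;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.TYPE)
@interface JSONObject {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface JSONField { String name(); }

@JSONObject
interface Account {
    @JSONField(name = "id")      long getId();
    @JSONField(name = "balance") double getBalance();
}

class Serializer {
    // Walk an @JSONObject-marked interface and emit each @JSONField getter
    // as a (very naive) JSON name/value pair.
    static String toJSON(Class<?> iface, Object obj) throws Exception {
        if (!iface.isAnnotationPresent(JSONObject.class))
            throw new IllegalArgumentException("not a serializable interface");
        StringBuilder json = new StringBuilder("{");
        boolean first = true;
        for (Method m : iface.getMethods()) {
            JSONField field = m.getAnnotation(JSONField.class);
            if (field == null) continue;
            if (!first) json.append(",");
            first = false;
            json.append("\"").append(field.name()).append("\":").append(m.invoke(obj));
            // (real code would quote strings and recurse into nested objects)
        }
        return json.append("}").toString();
    }
}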

It’s convenient: we can declare a class, sprinkle a few annotations in, and call it a day.

Which I love.

The part I hate, however, is that I’ve now, very easily, with just a few lines of code, created a specialized “meta-language” on top of the Java language to handle the serialization code.

And I despise meta-languages with a fiery hatred that could power a thousand suns.

The problem with meta-languages is that they violate the very notion of “discoverability” which is at the heart of code maintenance. No longer can you sort out the functionality of a class or an interface by looking at its declaration–now you also have to understand a domain-specific meta-language which alters the behavior of those classes and interfaces in some hard-to-understand way.

Yes, I know: the solution to such a thing is documentation, documentation, documentation. I get that. And I certainly plan to spend more time on the documentation than I did adding the annotations in the first place, and to put plenty of references to that documentation in our code.

But there’s the problem: most programmers despise documentation. They think code should be self-documenting. (Which is another way of saying “I’m a damned lazy fool who is too stupid to recognize those shadows on my retinas as fellow co-workers.”) And so we wind up with very strange meta-language components that are impossible to discover, doing tricky things that–without proper documentation–are impossible to understand.

Come on, guys; spell check!

Just saw a résumé cross my desk today, with “JavaScript” spelled “JAVA Scripts.” (Yes, from the context of the sentence, the guy was clearly referring to JavaScript.)

Naturally I bounced it.

I’m a terrible speller. My English skills are at best “okay.” But come on: if you’re going to send off a résumé to impress a hiring manager in order to get a higher-paying job, at least spell the damned technology correctly!

Why I don’t like SmartGWT.

I spent the day playing with SmartGWT, and came to the conclusion that I’m not particularly a fan.

Don’t get me wrong: the widgets it produces are sexy as hell. And if the development work you’re doing fits into SmartGWT’s client/server model and you are willing to use SmartGWT’s server-side libraries to handle network communications, it’s probably the best way to go.

But ours doesn’t. And when our model of obtaining data from the server doesn’t fit SmartGWT’s model, it’s an uphill fight–much harder, for example, than just building our own custom widget set. (I have the advantage over most that building custom widgets in GWT is, at this point, pretty straightforward for me, so your ‘cost/benefit’ ratio may vary.)

The fundamental problem I have with SmartGWT is that it is just a thin Java wrapper over the SmartClient system, which is all written in JavaScript. While SmartClient is probably a great piece of software if you’re writing JavaScript, wrapped for GWT it’s a royal pain in the ass to use. Rather than giving you a rich library of controls (and perhaps a class hierarchy of grids, each of which provides some specialized function or override of the basic component), it provides a very small set of highly customizable controls.

And that is where I have problems. If I want a dialog, I don’t open a DialogWindow. I open the Window and set three or four different settings to make the Window look like a dialog. If I want a dynamically updating table that takes its information from a remote server data source, I don’t create an instance of a dynamic table and provide the right interface; instead, I create a Grid and set about a half dozen settings which turn the Grid into a dynamic grid.

Now of course this complaint is stylistic: SmartGWT’s sample app makes it pretty clear how to create a dialog or a dynamic grid–and after a day I had a fairly complete screen with a tree navigation widget, a table, and a modal search window with a dynamically updating table.

But it’s beyond just being stylistic: because the DataSource for SmartGWT is not an interface implemented by a half-dozen different implementations, but a singular object which is configured (through some JavaScript mechanism I don’t grok because the whole point of this exercise is to insulate myself from JavaScript) to be a JSON or an XML/RPC or a whatever interface, creating a new DataSource to feed a dynamic table is not just a matter of creating an instance of the interface and filling in the blanks.
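What I kept wishing for was something closer to the following: a purely hypothetical sketch of an interface-based data source (this is not SmartGWT’s actual API), where feeding a grid really would be a matter of implementing an interface and filling in the blanks.

import java.util.List;
import java.util.Map;

// Hypothetical: a data source as a plain interface you implement and hand to a grid.
interface GridDataSource {
    // Fetch one page of rows; call back when the asynchronous request completes.
    void fetchRows(int startRow, int endRow, FetchCallback callback);

    interface FetchCallback {
        void onRows(List<Map<String, String>> rows, int totalRowCount);
        void onError(Throwable cause);
    }
}

// Our particular flavor of JSON would then be just one implementation among many.
class OurJSONDataSource implements GridDataSource {
    public void fetchRows(int startRow, int endRow, FetchCallback callback) {
        // Issue the request in our own JSON dialect, parse the response,
        // and hand the rows back through the callback.
    }
}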

In fact, without diving into the underlying JavaScript, it seems impossible to create a new interface to speak the particular flavor of JSON we’re using. (And while SmartGWT does provide a mechanism to speak JSON, it also defines the specific JSON requests and responses, which doesn’t work for us.)

It is this inflexibility, created because they’ve simply wrapped an existing JavaScript framework with a thin layer of GWT Java stuff, which makes SmartGWT a pain in the ass for me to use–and it’s why I don’t like SmartGWT.

Why I like GWT.

I’ve built web sites using PHP and JSP, with ColdFusion, and now with GWT.

One of the biggest problems I’ve run into with PHP, JSP and ColdFusion is that when you obtain information from a user, you commonly wind up flowing your logic through multiple pages. For example, with a shopping cart and checkout, you wind up flowing a checkout page via a form (and a JSP or PHP landing page which processes the results) to a second page with a form, which flows (via another JSP or PHP page) to yet another page with a form, and so forth. The business logic winds up being scattered across multiple HTML pages. We excuse dividing one operation up across a whole bunch of pages and landing pages (each of which processes the result of the previous page) by inventing all sorts of “MVC”-like logic that attempts to justify the fact that we have to serve up four separate screens to do essentially one operation. But we’re still dividing up one operation across four separate screens.

What I like about GWT (and AJAX in general, I suppose–though I’m a Java guy, not a JavaScript guy) is that I can now put all of this logic on one page. If I need to make an intermediate transaction to the server to validate some element of the form–so be it; it’s not that big a deal to make a quick AJAX request and handle the response.

Some people may claim that this is still not as easy as having an application with everything on one box–after all, if you do have to make an intermediate round-trip, you wind up writing something like “doMyRequestWithResponse(new ResponseInterface() {…});”, with the response coming back some time later. But I find that sort of event-driven programming far easier than the sort of “event-driven” programming the earlier, page-flow frameworks seem to offer.
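For example, validating a single field against the server without ever leaving the page ends up looking something like this (a sketch using GWT’s RequestBuilder; the URL, text box and status label are made up for illustration):

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.TextBox;

class ZipCodeValidator {
    private final TextBox zipField = new TextBox();
    private final Label zipStatus = new Label();

    // Ask the server whether the zip code is valid, and update the page in
    // place when the asynchronous response comes back.
    void validateZip() {
        RequestBuilder builder = new RequestBuilder(
                RequestBuilder.GET, "/validate/zip?value=" + zipField.getText());
        // (real code would URL-encode the value)
        try {
            builder.sendRequest(null, new RequestCallback() {
                public void onResponseReceived(Request request, Response response) {
                    zipStatus.setText(response.getStatusCode() == 200
                            ? "Looks good." : "Please enter a valid zip code.");
                }
                public void onError(Request request, Throwable exception) {
                    zipStatus.setText("Could not reach the server; try again.");
                }
            });
        } catch (RequestException e) {
            zipStatus.setText("Could not send the request.");
        }
    }
}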

Sure, it’s a style thing. But I’m far faster throwing together a bunch of widgets in GWT (and debugging them on the fly in Eclipse–cool!) than I am trying to lay out a bunch of HTML for a flow of forms.

In other words, client-server is far easier than server-driven against a dumb (HTML-driven) display engine.


One fix for installations stuck at ‘Preparing…’

Since moving to Snow Leopard, I’ve noticed innumerable times when a software installation would stall at the “Preparing…” stage. If you’ve noticed it as well, chances are you’re an iPhone developer, too…

It turns out the simple fix for this problem is to quit the iPhone Simulator, and then try the installation again.

Women in Computer Science

“Typical” computer science workspaces off-putting to women

This is something that has concerned me greatly. In part because my wife (who graduated from Caltech and who is better at mathematics than I am) would do quite well as a software developer–and she won’t touch the industry with a ten-foot pole. And in part because, underneath it all, there is a definite culture which I do not like, and which I tolerate because it is the price I have to pay in order to do the work I love.

The study quoted in Ars only covers the outward displays of a culture: science fiction memorabilia, snack food. But I suspect the problem runs much deeper than the outward signs.

The bottom line is that I have yet to work for any software development firm which didn’t have a strong whiff of the adolescent college-boy frat house hanging in the air–in terms of the interaction of team members, in terms of project planning (and putting out fires), and even in terms of the way projects are managed (like “scrum,” a term that comes out of rugby).

No one thing, of course, is at fault: I suspect that if it were just a matter of one too many models of the Enterprise sitting in the corner, or someone using baseball terms to discuss a project, most women would be just as happy to ignore the occasional infraction against good taste. But it’s the overall culture that creates the problem.

And I don’t know how you change it.

I will note, however, that it’s not a lack of intelligence or a difference in education or something intrinsic about women–outside of a lack of willingness to put up with an adolescent male frat-club culture: the majority of software developers writing the software for the Space Shuttle are women. I strongly suspect the three things that make working on the Shuttle appealing are (a) a lack of “cowboy” programmers (with their ‘dick length’ contests), (b) complete predictability in the workday (and no “burn the midnight oil” rush sessions, since rushing kills astronauts), and (c) a culture of review and oversight that treats a failure as a fault in the process, rather than seeking to assign blame to individuals who fail to be “cowboy” enough.

My goal with my own development group is to lead through teaching and by setting an example–and the example I want to set is one of predictability (through proper prior planning) and one where no one is expected to “rush.” Next week I’m putting together what is essentially “course material” on how to write code within our project, on the theory that anyone who is interested in technology and who can write Java and use Eclipse can “turn the crank” without having to burn the midnight oil or be a self-directed “cowboy.”

We’ll see how this theory works in practice.

But I do know I completely despise the frat-house atmosphere at most software development companies. (It’s why I hated college: I loved the classes, I hated the frat-house college culture.) And if I can play a small role in banishing part of it in my own little corner of the universe, I will be a happy person.

User Interface Design Anti-Pattern: Pecked to death by ducks.

Here’s a user interface design anti-pattern I just ran into.

So I’m trying to complete my company’s on-line sexual harassment training program. I guess they teach you how to be better at engaging in sexual harassment; I dunno. And the reason I don’t know is that when I logged into the program using Safari on the Macintosh, I got the error message:

“We’re sorry but your browser is not supported. We support Firefox and Internet Explorer.”

So I called up my copy of Firefox on the Macintosh, and got:

“We’re sorry, but your operating system is not supported. We only run on Windows XP, Windows 2000 or Windows Vista.”

Okay, so I called up my trusty copy of Parallels, launched Windows XP, and got:

“We’re sorry, but the training program requires RealPlayer or Windows Media Player.”

Which I’m now installing.

None of the error landing pages mentioned the subsequent requirements. Instead, each was just a terse error message, undoubtedly arrived at with pseudocode like:

if (browser not in approved list) then
    print error "browser not in approved list"
    exit
else if (operating system not in approved list) then
    print error "operating system not in approved list"
    exit
else if (player plugin not installed) then
    print error "plugin not installed"
    exit
else ...

How stupid is this anti-pattern? I get pecked to death by ducks–making the whole on-line experience quite painful.

And it would have been so easy instead to write something like:

list missing = new list;
if (browser not in approved list) then
    add "browser not in approved list" to missing
end
if (operating system not in approved list) then
    add "operating system not in approved list" to missing
end
if (player plugin not installed) then
    add "plugin not installed" to missing
end
...
if (missing is not the empty list) then
    show missing on error page
    exit
end

That way you can display all of the problems to the user at once. Each test could easily have been run in quick succession, and the user warned about every missing requirement in a single pass.
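In real code the fix is only a few lines. Here’s a sketch in Java, where the boolean checks stand in for whatever browser, OS and plugin sniffing the vendor already does:

import java.util.ArrayList;
import java.util.List;

class RequirementsCheck {
    // Run every check, collect everything that is missing, and report it all
    // in one pass instead of one complaint per page load.
    static List<String> findMissing(boolean browserSupported,
                                    boolean osSupported,
                                    boolean playerInstalled) {
        List<String> missing = new ArrayList<String>();
        if (!browserSupported)
            missing.add("a supported browser (Firefox or Internet Explorer)");
        if (!osSupported)
            missing.add("a supported operating system (Windows XP, 2000 or Vista)");
        if (!playerInstalled)
            missing.add("RealPlayer or Windows Media Player");
        return missing;   // an empty list means every requirement is met
    }
}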

Better yet, land the person on a succinct “requirements” page which lists, in simple bullet form, the specific requirements of your product, after noting the specific things that are missing.

The rules for the pattern that should be applied here are:

  • Make what the user is missing clear
  • Make the requirements obvious
  • Make the corrective actions the user should take immediately obvious

And don’t peck your users to death.