Apparently the folks running the various MNOs are idiots.

Verizon Drafts Developers Into Mobile Software War on Apple

The companies appear to be responding to Apple, which announced this morning that its iPhone App Store, now only one year old, has surpassed 1.5 billion downloads and is serving 65,000 applications.

“The App Store is like nothing the industry has ever seen before in both scale and quality,” Apple CEO Steve Jobs said in a press release. “With 1.5 billion apps downloaded, it is going to be very hard for others to catch up.”

Though press releases are inherently boastful, Jobs is correct that Apple is well ahead of its competitors in the mobile software space. The company launched its application store in July 2008 with the release of the iPhone 3G. The App Store’s consumer friendly interface, which makes purchasing and downloading applications as easy as downloading songs in the iTunes Store, is benefiting software developers, some of whom have become rich thanks to explosive sales of their apps.

I can almost hear Steve Jobs cackle in his private office, realizing how badly the MNOs and MVNOs have missed the mark.

It’s because they all assume that the iPhone is a phenomenal success because of the App Store, when the reality is that the iPhone has succeeded despite the App Store.

And of course, by continuing to remind the fellows at Verizon about the App Store, Steve Jobs completely diverts their attention from what made the iPhone a successful platform: usability, usability, usability.

What’s particularly amazing to me is that most application stores are not usable in the least. Apple’s App Store famously requires an editorial review before an application can be posted, which has resulted in applications being rejected over their content. Apple demands 30% for distribution. And there is the famous “drive to 0” in the price point, combined with the top sellers being fart applications and things like “Pocket God” (a great game, but not an intellectually expansive selection), which together have prevented more expensive and interesting vertical-market applications from appearing. And the on-phone App Store experience is, at best, usable: if you know the name of the application you’re looking for you can search for it, but the small screen means you’ll only see 7 applications out of 65,000 at a time.

So now Verizon is going to (a) piss off the developers who are used to being able to control the relationship with their clients (a valid criticism of the App Store: I cannot ship a demo which requires registration), (b) make it harder for developers to push their product (another criticism, given that in order for a user to buy my iPhone app I have to hand control to Apple, losing control of the experience and potentially handing the sale to a competitor), and (c) charge them 30% for the privilege–and for what?

The iPhone App Store succeeds because the iPhone is succeeding–and the iPhone’s initial launch succeeded without the App Store, which didn’t even exist during the iPhone’s first year.

Why we shouldn’t unify mobile and desktop UI frameworks.

John Gruber: Google’s Microsoft Moment

It makes no sense to me why Chrome OS isn’t based on Android. Maybe there’s a good answer to this, but Google hasn’t given it.

While I don’t understand why Google has two completely separate operating systems, one based on Dalvik (a Java-compatible virtual machine) and another based on JavaScript (sharing more in common with Palm’s WebOS platform than with Android), I do know why no one will ever be able to successfully create a framework that unifies mobile and desktop operating systems. It’s why Apple maintains separate UI frameworks for MacOS and iPhone OS.

It’s because of the needs of each platform.

Bottom line is this: on a desktop application you can create a rich window and environment that displays everything at once. The best example of this is the Apple Mail application: in one window I see all of my mailboxes, the status of each mailbox, the list of mail in the selected mailbox, and the currently selected message.

Mobile applications, on the other hand, have limited screen real estate to work with. Thus, instead of a single window, the iPhone version of Apple Mail has separate screens for showing accounts, showing the mailboxes in an account, showing the messages in a mailbox, and showing an individual message. Within the mobile version of the software we also have the notion of a “view stack”: modal views which push onto and pop off of a stack–something that has no counterpart in the desktop world.
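To make the view stack concrete, here is a minimal Android sketch (the Activity and screen names are mine, invented purely for illustration): each level of the drill-down is its own screen with its own controller, and the platform’s back stack does the pushing and popping.

    // Sketch only: MailboxListActivity and MessageListActivity are hypothetical names,
    // and each would normally live in its own source file and be declared in the manifest.
    public class MailboxListActivity extends android.app.Activity {
        // Called when the user taps a mailbox; this "pushes" the next screen onto the stack.
        void showMailbox(long mailboxId) {
            android.content.Intent intent =
                    new android.content.Intent(this, MessageListActivity.class);
            intent.putExtra("mailboxId", mailboxId);  // tell the next controller what was selected
            startActivity(intent);                    // push
        }
    }

    class MessageListActivity extends android.app.Activity {
        @Override
        protected void onCreate(android.os.Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            long mailboxId = getIntent().getLongExtra("mailboxId", -1);
            // ...build the list of messages for this mailbox...
        }
        // Pressing Back "pops" this screen; calling finish() does the same thing in code.
    }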

The development model also differs: on the desktop I may have a model and controller object (from the MVC design pattern) which drives multiple views simultaneously. But on the phone, I need multiple controllers attached to the same model, each controller carrying information about what was selected higher up in the view stack.

Knowing that, we can answer the question that John Gruber links: “If I make a screen two inches smaller, should I use Android instead of Chrome OS? If the keyboard works with my fingers instead of my thumbs, I should use Chrome OS and not Android?”

The answer is simple: is the screen so small that your application must be represented as a stack of views (like the iPhone mail application), or can everything relevant be placed into a single window (like the desktop Apple Mail application)? If the former, use the mobile version of the operating system. If the latter, use the desktop version.

JavaScript? Has it come to that?

Sometimes I really have to wonder at the sanity of the computer industry. It’s almost as if we do things because it’s the flavor of the week, not because it actually makes any damned sense.

Now for me, my own belief in what it takes to build good software boils down to (1) an expressive programming language, (2) adequate operating system support for building the type of software I want to build, and (3) good IDE support. (By that I mean a development environment that allows me one-button compile and test. Because the key to writing great software is making the ‘edit-compile-debug’ cycle as short and painless as possible, with debugging tools that allow a direct correlation between what I write and what I debug. MacsBug is a last resort, not a development tool.)

I like the iPhone. It meets all three criteria. I like Android for the same reason.

I’m not a fan of JavaScript because, so far, I haven’t seen an adequate development environment. One may exist that I’m not aware of–but thus far, the best I’ve seen is a combination of Coda and Safari’s debug tools. Part of my problem is that a major part of JavaScript programming involves manipulating a web page’s DOM, and, um, it’s not like Microsoft has ever played well with this whole “standards” thing.

I’m also not a fan of weakly typed languages. (One place where I draw an exception here is LISP and LISP variants such as Scheme–but then, really, all variables are references to objects, so it’s not as much “weakly typed” as it is “everything is either an ATOM or an object cell.”) And I’m really not a fan of Perl’s variable prefixing system.

So the idea of Palm’s WebOS using JavaScript for programming, or of Google’s vaporware Chrome OS using JavaScript: it just seems, um, weak.

Though one upside (I guess) would be a demand for a much better IDE for these web-based operating systems. Because right now, Dashcode and Google’s GWT both seem like hacks to me; rather than helping me write great JavaScript, they hide everything behind a “helpful” layer of abstraction.

Further thoughts on Security.

An additional thought on my Security Rules Of Thumb:

The DMCA’s anti-circumvention clause, which makes it illegal to create or even investigate circumvention tools, exists because stupid people were building their own encryption algorithms without having a clue how to do it, and after the inevitable occurred (I mean, really: ZIP’s flimsy password protection is more secure than CSS on DVD movies ever was–and ZIP is a compression format, not an encryption system), they got their lawyers to pass a law to cover their stupidity.

If I were King for a day I would (a) repeal the DMCA’s anti-circumvention clause, and (b) force any idiot complaining about how that allows “hackers” to “break their security” (*cough* *cough*) to attend a week-long seminar on encryption algorithms taught by folks from the NSA who have a friggin’ clue.

At their own expense, of course.

Further thoughts on GPS receivers as I remember being lost on Market Street.

So I landed in San Francisco, and took the BART to Market Street to go to my hotel on 3rd street. And I’m fully prepared, too: iPhone 3G, Garmin Legend GPS, laptop computer, Kindle, and a large bag full of other, more mundane things. (Pants, shirts, shampoo, toothpaste.) Bag is large and unwieldy, laptop bag is digging a trench into my shoulder, but I’m there at Market Street.

I pick what I think is the nearest BART exit to my hotel (a selection which was made by an uneducated guess), I go up the stairs, and I’m now on Market Street.

With the location of my hotel programmed into my cell phone and into my Garmin, I turn both on in order to get my current location and figure out which way I need to go.

The Garmin does its little satellite animation status thingy. I watch as it captures and syncs with one satellite, two, three,… finally, it knows where I am: near Pier 29.

Huh?

I’m standing here with a huge pile of stuff in a large bag, looking like a cleaner version of the random homeless folks who have taken an interest in extracting my spare change from me; I’m tired; there are only 15 minutes left before early registration closes (because my flight was an hour late); and my Garmin thinks I’m two miles from where I know I am–somewhere on Market Street?!?

So on comes the iPhone. Same result.

Damn!

Here’s the problem: the way GPS works is by measuring your distance to multiple satellites in orbit around the earth. Each GPS satellite is an ultra-precise atomic clock and a transmitter: in the transmitted signal are the orbital elements of the satellite and the precise time to the nanosecond. (The time transmitted, by the way, is adjusted to the relativistic frame of the Earth’s surface. It turns out that the combined effects of gravity and the motion of the satellites mean that clocks in GPS orbit run about 38 microseconds per day faster than clocks on the surface of the earth. Which means the clocks on the GPS satellites have to be tuned slightly slow so that the transmitted time matches time on the surface of the Earth. Yes, that blows my mind too.)

So your receiver gets the ultra-precise time to the nanosecond from the satellite and the ultra-precise position of the satellite to within a foot, repeats this with at least three more satellites, and–using the fact that light travels about one foot per nanosecond–does a little bit of geometry and figures out where you are.
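The back-of-the-envelope numbers are worth writing down, because they explain both why GPS can be accurate to a few feet and why a reflected signal ruins everything. The figures below are rough approximations, and the snippet is napkin arithmetic, not a working receiver:

    // Rough arithmetic only -- not a GPS solver.
    public class GpsNapkinMath {
        static final double SPEED_OF_LIGHT_M_PER_S = 299_792_458.0;

        public static void main(String[] args) {
            // Light travels roughly one foot (0.3 m) per nanosecond:
            double metersPerNanosecond = SPEED_OF_LIGHT_M_PER_S / 1e9;
            System.out.printf("1 ns of timing error ~ %.2f m of range error%n", metersPerNanosecond);

            // A pseudorange is just (receive time - transmit time) * c; GPS orbits are
            // roughly 20,000 km up, so the signal takes on the order of 67 ms to arrive:
            double travelTimeSeconds = 0.067;
            System.out.printf("67 ms of travel time ~ %.0f km to the satellite%n",
                    travelTimeSeconds * SPEED_OF_LIGHT_M_PER_S / 1000);

            // A reflected path only a few microseconds longer is kilometers of error:
            double extraPathMicroseconds = 10;
            System.out.printf("10 us of extra path  ~ %.1f km of position error%n",
                    extraPathMicroseconds * 1e-6 * SPEED_OF_LIGHT_M_PER_S / 1000);

            // And the relativistic drift (~38 us/day), if left uncorrected, would
            // accumulate kilometers of error every day:
            double driftSecondsPerDay = 38e-6;
            System.out.printf("38 us/day of drift   ~ %.0f km/day of accumulated error%n",
                    driftSecondsPerDay * SPEED_OF_LIGHT_M_PER_S / 1000);
        }
    }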

But if you are in the canyon of buildings that is Market Street, you run into signal reflections. Signal reflections create noise on the signal, so you don’t necessarily get the ultra-precise time, and they add distance to the signal’s travel path, which screws up your location. Thus in the canyon of Market Street GPS is useless: the GPS receiver does the math and thinks you’re around Pier 29, give or take fifty feet, when in reality you’re two miles away.

The error circle the GPS reports doesn’t help, either: that error is calculated from the expected precision of the fix, assuming you have a clear view of the sky. The error calculation cannot take building reflections into account, because the GPS receiver has no way of knowing whether the signal bounced around before being received.

The upshot of this is that I had two GPS receivers in complete agreement, both utterly useless, and worse: both leaving me lost.

And so I resorted to the crudest of navigational tools: reading street signs and walking two blocks in order to get my bearings.

A generation from now our children will forget about street signs and we’ll wind up like London, where street signs were put on the sides of buildings rather than on stand-alone sign posts. Over the years, renovations led building owners to tear down the signs and never replace them–which means that in many places in London there are no street signs at all.

If everyone in the United States has a GPS receiver–they’re very cheap and are starting to be included in every phone–then when will city governments decide to save a little money and stop replacing street signs? Will we get to a day and age where the only way to figure out where you are is to pull the electronic gadget out of your pocket and ask it?

And what happens if you’re like me, lost on Market Street, and there are no street signs? Do you just find a warm grate along the sidewalk while your hotel room goes unfilled?

More importantly, what does that mean for the location-based marketing software and location-based navigation tools of tomorrow, not to mention the new iPhone “Find my phone” service, when the phone lies about where it’s located?

Filtering Resumés.

I hate doing it.

But I had to.

We got a whole bunch of resumés and I have one job. So I filtered out half–sorry! And I plan to spend a few minutes with the other half to weed it down to two or three people to bring in.

However, here’s a hint for anyone writing a resumé. If you don’t want to find yourself on the “immediate discard” pile, don’t write that you’ve worked with “Android OS v3.0”. I didn’t bother to read past that one line.

I can’t spell worth a damn. My English skills aren’t the best in the world. So I don’t care about typos. But something like this–I don’t think so.

Thoughts on dealing with multiple Activities and UIViewControllers

In a typical model/view/controller implementation (setting aside, of course, the debate over what MVC really is), we implement a model (which maps to a document, file, or database), a controller (which manages the views), and a collection of views, generally contained within a window.

Now this model maps nicely onto the iPhone API and the Android API: the basic controller class for the iPhone is UIViewController; for Android it is android.app.Activity. And of course the view hierarchy is represented by the UIView and android.view.View objects, both built-in and custom. The controller object also makes the ideal target for the iPhone’s delegate and data source objects, and for the various Android listener interfaces which do essentially the same thing.
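A minimal Android sketch of that mapping (the class names and the trivial model are mine, invented for illustration): the Activity plays the controller, registers itself as the listener, and pushes model changes back out to the views.

    // Sketch: the Activity as the C in MVC.  CounterModel is a made-up model class.
    public class CounterActivity extends android.app.Activity
            implements android.view.View.OnClickListener {

        private final CounterModel model = new CounterModel();   // the M
        private android.widget.TextView countLabel;              // one of the Vs
        private android.widget.Button incrementButton;

        @Override
        protected void onCreate(android.os.Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            countLabel = new android.widget.TextView(this);
            incrementButton = new android.widget.Button(this);
            incrementButton.setText("Increment");
            incrementButton.setOnClickListener(this);             // the Activity is the "delegate"

            android.widget.LinearLayout layout = new android.widget.LinearLayout(this);
            layout.setOrientation(android.widget.LinearLayout.VERTICAL);
            layout.addView(countLabel);
            layout.addView(incrementButton);
            setContentView(layout);
            refresh();
        }

        @Override
        public void onClick(android.view.View v) {                // view event -> controller
            model.increment();                                    // controller updates the model
            refresh();                                            // model change pushed to the view
        }

        private void refresh() {
            countLabel.setText("Count: " + model.getCount());
        }
    }

    // Trivial model, just for the sketch.
    class CounterModel {
        private int count;
        void increment() { count++; }
        int getCount()   { return count; }
    }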

On the desktop, most applications are built with a single model and a single controller. Rarely do we have multiple controllers opened onto the same model, and when that happens, more often than not the multiple controllers are multiple instances of the same class: multiple text editors opened onto the same text file, for example.

Mobile devices are different. On a mobile device a top-level screen with one controller can drill down to a separate screen (with a separate controller) which displays an individual item, which drills down to a third screen showing information about that individual item, which drills down into a fourth showing the setting for a single field.

In this example, we have four (increasingly fine-grained) views into the same model: in the first screen we have a top-level view; in the next, a view of an item; and so forth. The original method I was using to code for this particular model was to create multiple model-like objects and pass them down to individual view controllers.
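In Android terms, the approach looks something like this (a sketch only; the class names are invented, and the Item class is a stand-in for whatever the real model object is):

    // Sketch of the "pass the sub-model down" approach, with hypothetical names.
    // A made-up model object; it must be Serializable (or Parcelable) to ride in an Intent.
    class Item implements java.io.Serializable {
        String name;
        int calories;
    }

    public class ItemListActivity extends android.app.Activity {
        private final java.util.ArrayList<Item> items = new java.util.ArrayList<Item>();

        // The list screen owns the ArrayList and hands a single Item to the editor.
        void editItem(int position) {
            Item item = items.get(position);
            android.content.Intent intent =
                    new android.content.Intent(this, ItemEditActivity.class);
            intent.putExtra("item", item);
            startActivity(intent);
        }
    }

    class ItemEditActivity extends android.app.Activity {
        @Override
        protected void onCreate(android.os.Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Item item = (Item) getIntent().getSerializableExtra("item");
            // ...edit the item; note that serialization handed us a *copy*, so getting
            //    the changes back into the ArrayList that owns it is exactly the messy part...
        }
    }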

Now this model tends to be somewhat informal: my top-level model is (for example) an ArrayList of Items. When it’s time to edit an item, we pass the Item as the sub-model into the editing controller. And when it’s time to edit some piece of the Item, we pull that piece out and pass it down to the next controller.

The whole process becomes messy for a variety of reasons. First, it creates a strict dependency in the flow of views; even if we use interfaces and protocols to isolate a controller (and its views), we still have a lot of work to do if we want to rearrange views. Second, it creates the problem that we somehow need to pass information about which sub-item was updated back to the previous view controller. This becomes problematic because the lifecycle events for UIViewController and Activity don’t map well to the notion of “saving” or “marking dirty”: there is no single event we can easily grab that consistently means “You’re going away and the view above you is becoming active.”

Android presents a few twists on this as well. First, when your Android Activity goes into the background it can be destroyed in a low-memory situation. That is, you can find yourself in the situation where your Activity goes away and has to be rebuilt from scratch. This implies that you must strictly separate the sub-model from the controller (as opposed to the iPhone, where you can informally combine the model and view controller into the same UIViewController class). Second, well-factored activities can be set up to be launched by external applications: your Item editor activity can be brought up by a separate application if that application knows how to launch you with the correct Intent.

It strikes me that the proper way to handle this is to have a single unified model, and instead of passing sub-models down, to pass a small record indicating which part of the model we are working on.
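Something along these lines (again a sketch, with invented names throughout; I’m assuming the shared model is reachable through a singleton, though it could just as easily hang off the Application object):

    // The delta is a tiny, serializable description of *which part* of the shared
    // model a controller is working on -- not the data itself.
    class ModelDelta implements java.io.Serializable {
        final long itemId;        // which Item in the journal
        final String fieldName;   // which field of that Item, or null for the whole item

        ModelDelta(long itemId, String fieldName) {
            this.itemId = itemId;
            this.fieldName = fieldName;
        }
    }

    // Hypothetical shared model singleton; it owns the data and notifies listeners.
    class JournalModel {
        interface Listener { void modelChanged(); }

        private static final JournalModel INSTANCE = new JournalModel();
        private final java.util.List<Listener> listeners =
                new java.util.concurrent.CopyOnWriteArrayList<Listener>();

        static JournalModel getInstance()   { return INSTANCE; }
        void addListener(Listener l)        { listeners.add(l); }
        void removeListener(Listener l)     { listeners.remove(l); }
        void fireChanged() { for (Listener l : listeners) l.modelChanged(); }
        // ...plus getItem(long id), updateField(ModelDelta d, Object value), and so on...
    }

    public class ItemEditActivity extends android.app.Activity
            implements JournalModel.Listener {
        private ModelDelta delta;

        @Override
        protected void onCreate(android.os.Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // The delta rides along in the Intent, so if Android tears this Activity down
            // in a low-memory situation and later rebuilds it, we can still find our
            // place in the one shared model.
            delta = (ModelDelta) getIntent().getSerializableExtra("delta");
            JournalModel.getInstance().addListener(this);
            modelChanged();
        }

        @Override
        protected void onDestroy() {
            JournalModel.getInstance().removeListener(this);
            super.onDestroy();
        }

        @Override
        public void modelChanged() {
            // ...refresh this screen's views from the part of the model named by delta...
        }
    }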

The idea is that the delta object is a small record which tells the sub-controller which part of the model it is working on. This has the advantage that updates are communicated to each of the controllers working on the model as the model changes. It also works well on Android, since individual controller/view components can disappear and re-register themselves: the delta record is passed in the Intent which created the component, so when the Activity is reconstructed the Intent can be used to put the view controllers back into the correct state.

I have a simple nutrition journal application I’ve been building for the iPhone, which I was going to rewrite from the ground up using this model. In future posts I’ll report whether this works well or not.

It’s a shame they allow complete idiots access to a loud microphone.

Original iPhone owners & Push Notifications

One of the most awaited features, push notifications, requires a constant data connection.

*Rolls eyes*

It’s a shame Mr. Bohon was handed a megaphone without having any bloody knowledge.

The Wireless Application Protocol (WAP) is the cellular equivalent of the TCP/IP protocol suite: it describes a wireless protocol stack from the low-level data transport up to the application environment.

At the bottom of this stack we have the Wireless Datagram Protocol (WDP), akin to UDP in the TCP/IP suite: it permits the delivery of data packets to an arbitrary mobile device without the entire WAP protocol stack being turned on, a connection being opened, or the application processor being powered up. This mechanism is used by WAP Push, which essentially defines a WDP port and the contents of the packet.
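As an analogy only (this is plain desktop Java sending a UDP datagram, not actual WAP Push code, and the address is a placeholder): the character of the thing is a small, connectionless, fire-and-forget packet addressed to a port, cheap to send and cheap to resend until it gets through.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // UDP stand-in for a WDP datagram: no connection setup, no session, no handshake.
    public class DatagramAnalogy {
        public static void main(String[] args) throws Exception {
            byte[] payload = "new-mail-waiting".getBytes("UTF-8");   // a tiny notification payload
            DatagramSocket socket = new DatagramSocket();
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("203.0.113.10"),   // placeholder address
                    2948);                                   // 2948: the registered WAP Push port
            socket.send(packet);   // fire and forget; the sender retries if nothing comes back
            socket.close();
        }
    }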

The only component on the phone that needs to be powered on for a WDP datagram to be received is the cellular radio, and the delivery mechanism is related to the one used for SMS messages. While it is true that you cannot receive an incoming SMS or WDP datagram while on the phone if you have an EDGE connection open, and while it’s also true that you cannot receive an incoming phone call at the very moment a WDP datagram is being received, the datagram is short enough that it shouldn’t cause more than a fraction of a second of delay on that incoming phone call, and the datagram can be repeated until delivered, the same way SMS messages are.

Bottom line: push interferes with incoming calls in the same way SMS messages do: damned near not at all.

The best part of this exercise in cell phone stupidity passing itself off as expert advice comes in the form of the mea culpa at the start of this article:

We have received multiple reports from 3.0 firmware users on original iPhones who are NOT experiencing the problems described, and who do receive calls without difficulty with the push notification service turned on. Cory’s original post is left as-is below; however, we no longer believe the issue is widespread or will affect most original iPhone users. Our apologies for any undue anxiety or confusion.

They make it sound like their massive stupidity was actually a real bug in Apple’s software implementation which was later fixed–and so the problem “is no longer widespread”, as opposed to “speculative bullshit pulled out of our ass.”

I swear to God I’m sick to death of morons who think a cell phone is the same thing as a networked computer–and when things start coming down the pike that don’t quite map onto the desktop computing realm, they start making up nonsense out of whole cloth.

Process NAZIs

While formalism is gone when it comes to the science of computer programming, it appears the pent-up desire for formalism has exerted itself in spades when it comes to the management of computer programming.

The latest fad in this formalism is the Scrum process. The idea is simple enough: you maintain two lists–a product backlog and a sprint backlog. The first is a list of long-term ideas, the second a list of things being worked on in the current work iteration, called a “sprint.” Daily scrum meetings are held to make sure everyone touches base, and at the end of each sprint everyone meets to discuss what was accomplished and to plan the next sprint.

There are some good ideas here. But many of the ideas which make the Scrum process work are the same ideas that allow Waterfall to work, and that allow various agile processes such as XP to work. They can be boiled down to the following observations about software development:

(1) Gall’s Law: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

(2) The human mind can only keep track of 7 things, plus or minus 2. This means that any complex task must be broken down so it fits within our ability to track it.

(3) An intellectual task begun on one day is best finished the same day, or at least breadcrumbs should be left behind to help you pick up where you left off the day before.

(This third point is why my work day tends to be variable: once I know what I’m working on for that day, I need to finish those tasks: if I’m done by 4, then I’ll just goof around until quitting time. However, if the task isn’t done until 10, I’ll call my wife and let her know I’m going to be late. It freaked out my bosses at various jobs that for no reason I’d just stay in the office until really late–they were worried there was a deadline they didn’t know about. No; it just happened that the task I set for myself that morning was bigger than I anticipated.)

Any good process also takes into account the following managerial realities:

(4) Good inter-team communications is necessary for team cohesion. A cohesive team is a team which supports its members.

(5) Corporate memory (meaning the collective memory of the team) needs to be maintained in a reasonable way, so that new members can be brought up to speed, and leaving members do not cause the team to fall apart.

(6) Management must understand the business goals of the team, and effectively communicate them to the team. (Which means business goals should be well known by management.) Management must also understand the development bandwidth of the team and effectively communicate those limitations to those establishing the business goals. And management must effectively collect both of these pieces of information in order to establish a reasonable development timeline for product deliverables.

Now any process that acknowledges these realities will allow you to effectively manage the team. But–and this is important–when the process serves itself and no longer serves the realities above, then the process should be pared down or scrapped.

For example, today I encountered a complaint to the effect of “‘//TODO’ considered harmful”–an assertion that if you are putting ‘//TODO’ markers in your code, you’re somehow not properly following the Scrum process. In an attempt to keep the “purity” of the Scrum process (all information about future tasks and stories should be in the product backlog), the assertion is that ‘TODO’ markers in code (which mark areas that probably need to be revisited) should be removed. This way, you guarantee everything is in the product backlog.

But in an attempt to maintain the “purity” of the process, we violate rule 2 above: rather than maintaining an informal pointer into the code (and thus lumping all future potential modifications under a single umbrella that can be tracked), we’re forcing developers to keep track of all of that functionality outside of their code–preventing them, in other words, from putting disparate things into a common bucket and setting it aside so they can work on other things. (If you have more than 7 areas in your code you have to mentally track because you’re prohibited from using a bookmark, then you’ve filled your “7 +/- 2” bucket with information that is not immediately relevant.)

We also violate rule 3 above by taking away breadcrumbs. And we potentially violate rules 4 and 5: ‘//TODO’ markers are one form of inter-team communication and corporate memory, which is then simply lost down the rabbit hole–for no better reason than that it doesn’t fit into the process model.
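For what it’s worth, this is all a ‘//TODO’ breadcrumb is (the nutrition-journal example here is mine): a one-line bookmark that costs nothing to leave behind, and that can be harvested mechanically into the product backlog later if the team decides it belongs there.

    public class CalorieTally {
        double computeDailyCalories(java.util.List<Double> entries) {
            double total = 0;
            for (double entry : entries) {
                total += entry;
            }
            // TODO: handle entries logged in kilojoules once the unit-preference setting lands.
            return total;
        }
    }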

I’ve also encountered the assertion that no defects should ever be fixed until a story is created for them. While this is all well and good for large defects which take some time to resolve, there are plenty of bugs that go into the bug tracking system which are quicker to resolve than answering an e-mail. So do we also schedule time for answering e-mail? Enforcing the notion that bugs are “out of sight, out of mind”–rather than scrubbing them on a daily basis, assigning them to the developers regularly, and letting the developers decide if they can answer the issue quickly or if the bug needs to go in as a story–strikes me as a violation of rule 4 above: it essentially isolates the testing team from the development team, and it contributes to the fallacy that testing and development are separate groups with incompatible goals rather than groups in a symbiotic and mutually supporting relationship.

(In my opinion, testing exists to take certain development tasks off the shoulders of development–such as verifying that code changes work correctly–in the same way that a squire supports a knight by taking some of the tasks of battle off the knight’s shoulders. Developers are expensive, specialized resources and should be treated as such–and an entire support network should be built around them, including squires (who could eventually become knights if they so choose). By separating the two into separate teams you leave the knights isolated while their support staff is put into an adversarial role–and it leaves upper management with the illusion that their Knights are replaceable, overpaid cogs rather than the people who actually do the work the rest of the staff is there to support. And this misalignment of responsibilities violates rules 4 and 6, since it prevents cohesion and misrepresents to management the development bandwidth available.)

In medieval times, a Knight was supported by up to seven separate support personnel, from squires to horse keepers. In a modern army, each front-line soldier is supported by an average of seven support personnel, from logistics to planning.

Unfortunately most development processes have a lifecycle: they go from being the flavor of the week, to being abused, to being discarded as working failures. The number of shops out there that have scrapped XP or other Agile processes speaks to the wreckage: after all, most processes grow out of the desire to address my six rules above–but eventually they’re usurped by people who worship the process for its own sake, and by people who wish to isolate themselves from the reality of the Knights and Soldiers of old.

Because most processes eventually die at the hands of managers who wish to isolate themselves from the reality that their Developers are like the Knights of old: expensive resources requiring a support staff to maintain maximum efficiency. And because most people working the process eventually decide gaming the bureaucracy is more important than winning the war.

The Death of Formalism.

To me, Computer Science died with the out-of-hand dismissal of proper NFA theory as applied to regular expression matching by the implementers of Perl. While the original code that wound up in Perl’s regular expression engine came from an open-source implementation of a backtracking algorithm (which exhibits poor performance under some conditions), it’s clear there are a number of people who neither understand nor respect the underlying theory.
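You can watch what the theory buys you from any JVM, since java.util.regex is also a backtracking engine. The pattern below is the standard classroom demonstration of catastrophic backtracking (nothing specific to Perl); a Thompson-NFA style engine answers the same question in linear time.

    import java.util.regex.Pattern;

    // Classic catastrophic-backtracking demonstration: (a+)+b against a string of a's
    // with no 'b'.  A backtracking engine retries every way of splitting the a's
    // between the groups, so each extra 'a' roughly doubles the running time.
    public class BacktrackingDemo {
        public static void main(String[] args) {
            Pattern evil = Pattern.compile("(a+)+b");
            for (int n = 10; n <= 26; n += 4) {
                StringBuilder input = new StringBuilder();
                for (int i = 0; i < n; i++) input.append('a');
                long start = System.nanoTime();
                boolean matched = evil.matcher(input).matches();   // guaranteed to fail
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("n=" + n + "  matched=" + matched + "  " + elapsedMs + " ms");
            }
        }
    }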

Computer Science to me is the injection of some mathematical formalism into the art of writing software. And I suspect part of the reason why that formalism died is because there is so much money in the field now: instead of teaching formal theory we teach Java programming. Instead of surveying various programming languages and learning about Turing Machines, students want to understand the business of startups or how to engage in “social good” by the second semester they’re on campus. And the money that can be made represents an economic demand which attracts not only the brightest and best, but also the average and mediocre, who are ground through the mill and spat out the other end, fed into jobs in the same way that early 20th century colleges trained mechanics to work on the farm machinery and mechanical monsters of old.

Oddly enough, it looks like I’ve been given carte blanche to start looking to hire people to work on a mobile division which I would lead. I’ve even thought about seeing if I can talk folks into opening a small office within driving distance of USC, so I can take advantage of the iPhone development classes there to draw off talent.

Which means I intend to be part of the problem.

But the lack of formalism bothers me. It bothers me that there is a lack of formalism in the art of creating a user interface framework: as far as I know, no one working on the new frameworks (such as Android) has done a formal survey of prior user interface frameworks. At best, within user interface design, the only “formalism” is a grudging respect for the MVC design pattern–but without any good idea of what MVC really is. And while there have been some scattered attempts to achieve a degree of formalism in development and design models, as well as in software processes and management, most of these are flavor-of-the-week fads rather than formal representations of best practices.

It’s a shame, really, because some degree of formalism here would allow a more unified approach to user interface framework design, which would simplify the process of creating cross-framework and cross-platform software.