Process Nazis

While formalism has all but vanished from the science of computer programming, the pent-up desire for formalism appears to have exerted itself in spades in the management of computer programming.

The latest fad in this formalism is the Scrum process. The idea is simple enough: you maintain two lists–a product backlog and a sprint backlog. The first is a list of long-term ideas; the second, a list of things being worked on in the current work iteration, called a “sprint.” Daily sprint meetings are held to make sure everyone touches base, and at the end of each sprint cycle everyone meets to discuss what was accomplished and to plan the next sprint.

There are some good ideas here. But many of the ideas that make the Scrum process work are the same ideas that allow Waterfall to work, and that allow various agile processes such as XP to work. They can be boiled down to the following observations about software development:

(1) Gall’s Law: “A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.”

(2) The human mind can only keep track of 7 things, plus or minus 2. This means that any complex task must be broken down so it fits within our ability to track it.

(3) An intellectual task begun on one day is best finished the same day–or at least breadcrumbs should be left behind to help you pick up where you left off the day before.

(This third point is why my work day tends to be variable: once I know what I’m working on for that day, I need to finish those tasks: if I’m done by 4, then I’ll just goof around until quitting time. However, if the task isn’t done until 10, I’ll call my wife and let her know I’m going to be late. It freaked out my bosses at various jobs that for no reason I’d just stay in the office until really late–they were worried there was a deadline they didn’t know about. No; it just happened that the task I set for myself that morning was bigger than I anticipated.)

Any good process also takes into account the following managerial realities:

(4) Good communication within a team is necessary for team cohesion. A cohesive team is a team which supports its members.

(5) Corporate memory (meaning the collective memory of the team) needs to be maintained in a reasonable way, so that new members can be brought up to speed, and leaving members do not cause the team to fall apart.

(6) Management must understand the business goals of its team and effectively communicate them to the team. (Which means business goals should be well known to management.) Management must also understand the development bandwidth of its team and effectively communicate those limitations to those establishing the business goals. And management must effectively combine both of these pieces of information in order to establish a reasonable development timeline for product deliverables.

Now any process that acknowledges these realities will allow you to effectively manage the team. But–and this is important–when the process serves itself and no longer serves the realities above, then the process should be pared down or scrapped.

For example, today I encountered a complaint that ‘//TODO’ is “considered harmful”–an assertion that if you are putting ‘//TODO’ markers in your code, you’re somehow not properly following the Scrum process. In an attempt to keep the “purity” of the Scrum process (all information about future task stories should be in the product backlog), the assertion is that ‘TODO’ markers in code (which mark areas that probably need to be revisited) should be removed. This way, you guarantee everything is in the product backlog.

But in an attempt to maintain the “purity” of the process, we violate rule 2 above: rather than maintaining an informal pointer in the code (and thus lumping all future potential modifications under a single umbrella that can be tracked), we’re forcing developers to keep track of all of that functionality outside of their code–preventing them, in other words, from putting disparate things into a common bucket and setting it aside so they can work on other things. (If you have more than 7 areas in your code you have to mentally track because you’re prohibited from using a bookmark, then you’ve filled your “7 +/- 2” bucket with information that is not immediately relevant.)

We also violate rule 3 above by taking away breadcrumbs. And we potentially violate rules 4 and 5: //TODO markers are one form of inter-team communication and corporate memory, which is then simply lost down the rabbit hole–for no better reason than that it doesn’t fit into the process model.
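To make the marker concrete, here’s a hedged sketch (the function and ticket ID are hypothetical): the comment is a breadcrumb in the code, a bucket for deferred work, and a greppable pointer back to the backlog, so a periodic sweep can collect every one of them without anyone holding them in their head.

```python
# Sketch of the marker at issue. The function and the ticket ID are
# invented for illustration; the point is that the TODO comment is a
# breadcrumb, a bucket for deferred work, and greppable all at once.

def apply_discount(order, rate):
    # TODO(BACKLOG-1234): handle stacked discounts; for now the last
    # rate applied simply wins.
    order["total"] = order["subtotal"] * (1 - rate)
    return order

print(apply_discount({"subtotal": 100}, 0.5)["total"])  # 50.0
```

A periodic `grep -rn "TODO(" src/` then sweeps the whole bucket into the product backlog in one pass, rather than banning the bookmark outright.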

I’ve also encountered the assertion that no defect should ever be fixed until a story is created for it. While this is all well and good for large defects which take some time to resolve, there are plenty of bugs that go into the bug tracking system which are quicker to resolve than answering an e-mail. So do we also schedule time for answering e-mail? Enforcing the notion that bugs are “out of sight, out of mind”–rather than scrubbing them daily, assigning them to developers regularly, and letting the developers decide whether they can answer an issue quickly or whether the bug needs to go in as a story–strikes me as a violation of rule 4 above: it essentially isolates the testing team from the development team, and contributes to the fallacy that testing and development are separate groups with incompatible goals rather than partners in a symbiotic and mutually supporting relationship.

(In my opinion, testing exists to take certain development tasks off the shoulders of development–such as verifying code changes work correctly–in the same way that a squire supports a knight by taking some of the tasks of battle off the shoulders of the knight. Developers are expensive, specialized resources and should be treated as such–an entire support network should be built around them, including squires (who could eventually become knights if they so choose). By separating the two into separate teams you leave the knights isolated while their support staff is put into an adversarial role–and it creates the illusion for upper management that their knights are replaceable, overpaid cogs rather than the people who actually do the work that the rest of the staff is there to support. And this misalignment of responsibilities violates rules 4 and 6, since it prevents cohesion and misrepresents to management the development bandwidth available.)

In medieval times, a knight was supported by up to seven separate support personnel, from squires to horse keepers. In today’s modern army, each front-line soldier is supported by an average of seven support personnel, from logistics to planning.

Unfortunately most development processes have a life cycle: they go from being the flavor of the week, to being abused, to being discarded as working failures. The number of shops out there that have scrapped XP or other agile processes speaks to the wreckage: after all, most processes grow out of the desire to address my six rules above–but eventually they’re usurped by people who worship the process for its own sake, and by people who wish to isolate themselves from the reality of the knights and soldiers described above.

Because most processes eventually die at the hands of managers who wish to isolate themselves from the reality that their developers are like the knights of old: expensive resources requiring a support staff to maintain maximum efficiency. And because most people working the process eventually decide gaming the bureaucracy is more important than winning the war.

The Death of Formalism.

To me, Computer Science died with the out-of-hand dismissal of proper NFA theory as applied to regular expression matching by the implementers of Perl. While the original code that wound up in Perl’s regular expression engine came from an open-source implementation of a backtracking algorithm (which exhibits catastrophic performance on some patterns), it’s clear there are a number of people who neither understand nor respect the underlying theory.
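The theory being dismissed is worth a sketch. In Thompson-style NFA simulation, the matcher advances a *set* of states across the input one character at a time, so matching time is linear in the input length–whereas a backtracking engine can go exponential on pathological patterns like (a*)*b. Here is a minimal illustration (the NFA is hand-built for the classic pattern (a|b)*abb; a real engine would compile it from the regex):

```python
# Thompson-style NFA simulation: track the SET of reachable states
# after each input character. Worst-case time is O(len(text) * states),
# never exponential, unlike backtracking on pathological patterns.

def simulate_nfa(transitions, start, accept, text):
    """Advance a set of NFA states across text, one character at a time."""
    states = {start}
    for ch in text:
        states = {nxt
                  for st in states
                  for nxt in transitions.get((st, ch), ())}
        if not states:           # no state can consume ch: fail early
            return False
    return accept in states

# Hand-built NFA for (a|b)*abb.
# State i means "the last i characters matched a prefix of abb".
NFA = {
    (0, 'a'): (0, 1),   # stay in the (a|b)* loop, or start matching "abb"
    (0, 'b'): (0,),
    (1, 'b'): (2,),
    (2, 'b'): (3,),
}

print(simulate_nfa(NFA, start=0, accept=3, text="aababb"))  # True
print(simulate_nfa(NFA, start=0, accept=3, text="abba"))    # False
```

The nondeterminism (state 0 on ‘a’ goes to both 0 and 1) is handled by carrying all alternatives forward at once instead of trying one and backtracking on failure–which is precisely the point of the theory.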

Computer Science, to me, is the injection of some mathematical formalism into the art of writing software. And I suspect part of the reason that formalism died is that there is so much money in the field now: instead of teaching formal theory, we teach Java programming. Instead of surveying various programming languages and learning about Turing machines, students want to understand the business of startups or how to engage in “social good” in their second semester on campus. And the money that can be made represents an economic demand which attracts not only the brightest and best, but also the average and mediocre, ground through the mill and spat out the other end, feeding people into jobs the same way early 20th century colleges trained mechanics to work on the farm machinery and mechanical monsters of their day.

Oddly enough, it looks like I’ve been given carte blanche to start hiring people to work on a mobile division which I would lead–which means I’m going to be contributing to the problem. I’ve even thought about seeing if I can talk folks into opening a small office within driving distance of USC, so I can take advantage of the iPhone development classes there to draw off talent.

Which means I intend to be part of the problem.

But the lack of formalism bothers me. It bothers me that there is no formalism in the art of creating a user interface framework: as far as I know, no one working on new frameworks (such as Android) has done any formal survey of other user interface frameworks. And at best, within user interface design, the only “formalism” is a grudging respect for the MVC design model–but without any good idea of what MVC really is. And while there have been some attempts to achieve a degree of formalism in development and design models, as well as in software processes and management, most of these are more “flavor of the week” than a formal representation of best practices.

It’s a shame, really, because some degree of formalism here would allow a more unified approach to user interface framework design, which would simplify the process of creating cross-framework and cross-platform software.
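For what it’s worth, the core of MVC fits in a few lines. Here is a minimal sketch of the classic split (all class names are mine, invented for illustration): the model holds state and knows nothing about display, the view renders the model when notified, and the controller translates user gestures into model changes.

```python
# Minimal MVC sketch. The model knows nothing about the view; the view
# observes the model; the controller turns user input into model calls.

class CounterModel:
    def __init__(self):
        self.value = 0
        self.observers = []        # views register themselves here

    def increment(self):
        self.value += 1
        for observer in self.observers:
            observer.model_changed(self)

class CounterView:
    def __init__(self, model):
        self.rendered = ""
        model.observers.append(self)
        self.model_changed(model)  # render the initial state

    def model_changed(self, model):
        self.rendered = f"count = {model.value}"

class CounterController:
    def __init__(self, model):
        self.model = model

    def handle_click(self):        # a user gesture arrives here...
        self.model.increment()     # ...and becomes a model change

model = CounterModel()
view = CounterView(model)
CounterController(model).handle_click()
print(view.rendered)  # count = 1
```

The discipline the pattern actually demands–the model never reaches into the view–is exactly the part that tends to get lost in frameworks that claim MVC.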

Thoughts on Netbooks.

One of the things that the company I’m working for has considered is creating a Netbook version of our location-based software. So I bought a Netbook (The MSI Wind 120) in order to understand what all the fuss is about.

My observations:

(1) This is not a new product category.

To me, a new product category is a product which I interact with differently than with existing products. For example, I interact with my iPhone in a completely different way than I do with my 17″ laptop computer: I keep my iPhone in my pocket and pull it out to look things up quickly, while my laptop gets pulled out of a briefcase and unfolded on a table.

A Netbook is a small laptop. It doesn’t fit in my pocket; I have to take it out and put it down on my lap or a table. I have to boot it up. It’s more convenient: I can use a smaller briefcase. It’ll more easily fit on the airline tray table in coach. But it’s a laptop computer.

(2) The best part about a Netbook computer is not that it is small, but that it is cheap. If you need a second computer or you cannot afford a good desktop computer, and you’re using it primarily for text editing or web browsing, a Netbook computer makes an excellent low-cost choice: I only paid $300 for mine.

(3) Other than that, the keys are cramped (making it a little harder to touch-type), the screen is small (1024×600 pixels, making it inconvenient for text editing), and there is no built-in CD-ROM drive. (The Samsung SE-S084B external USB DVD-writer is around $60 and works great with the MSI Wind.) Thus, while it is an excellent low-cost choice, it’s clear it’s a low-cost choice: you are giving up a lot to save the $500-$1000 that separates the 10″ laptop from a well-equipped 12″ laptop.

(4) A cheap laptop will never dethrone the iPhone in the mobile space. On the other hand the eagerness of mobile carriers to come up with something to dethrone the iPhone may force them to consider lowering the price of an all-you-can-eat data plan for laptops, which means a wireless cell card and/or built-in 3G wireless in laptops will undoubtedly be coming down the pike in the near future.

The real question, however, is whether it will be too little, too late: a proliferation of free and cheap Wifi hotspots may make all-you-can-eat 3G wireless for laptops a terrible value proposition unless you need to surf the net out in the boondocks. (On the other hand, if you are in construction or farming, where you routinely work in the boondocks, 3G wireless for laptops will be a godsend.)

(5) A small form-factor touch-screen tablet will be a new product category if it satisfies the following requirements:

(a) Fast boot time. A touch-screen tablet needs to go from off to on in 2 seconds or less.

(b) It should be two to three times the size of the original iPhone. To get an idea of what I mean, here is a picture showing the relative sizes of my iPhone, an HP 50g calculator, the original Kindle, the MSI Wind, and my 17″ Macbook Pro.

The iPhone is an ideal size to fit in a pocket, but once you get to the size of the HP calculator (one of the larger calculators out there), you need to put it in a backpack or a briefcase or purse. Around the size of the MSI Wind, and you need a dedicated carrying case for the device.

To me, an ideal “new product category” item would be somewhere between the size of the HP 50g calculator and the Kindle, with the Kindle being the top size for such a device.

(c) Battery capacity should be enough to allow the device to surf the ‘net and use the CPU full-bore for a minimum of 4 hours. The iPhone gets its monumental lifetime between charges from very clever power utilization: when surfing the ‘net, once a page is downloaded to your phone, the CPU is turned off. (It’s why Javascript animation execution stops after 5 seconds.) But if you write software that constantly runs and is not event-driven–especially software that uses the ‘net at the same time–the iPhone battery will drain in less than an hour.

I believe for such a small form factor touch screen device to do the trick it needs about 4 times the battery capacity of the iPhone.

Once you reach this size and have something that is “instant-on,” you now have a device that is big enough to work on wherever you are–and perhaps balance in one hand while you use it with the other–but not so big that you need to find a table at Starbucks to pull it out. In fact, such a device would occupy the same product category space (in terms of size, form factor, and how a user interacts with it) as a large calculator.

Which means one application that would be ideal for such a device is a port of Mathematica or some other calculator software that would put the HP 50g to shame. Another ideal application would be web surfing; ideally such a device would devote more storage to caching web pages than the iPhone does. Vertical software for engineers, and e-book readers, would also work.

The idea here is to create a device that straddles the mid point between the iPhone “pull it out, look it up, put it away” 30 second use cycle, and the laptop “gotta find a table at Starbucks so I can pull it out of my briefcase” 1-5 hour use cycle.

And the MSI Wind (and other clamshell shaped cheap laptops) ain’t it.

Update: However, the CrunchPad very well may be the product I’m thinking about, assuming there is a way to install new software on the unit.

Fixed a memory leak in Flow Cover.

Meh. This one was a nasty little bug. And now it’s fixed.

The problem is that in FlowCoverView, the dealloc routine was resetting the EAGLContext object too early. This caused the rest of the dealloc call to fail, which in turn caused the overall view to never be released.

The fix is to rewrite the dealloc routine for FlowCoverView. Around line 230, replace the old -dealloc routine with:

- (void)dealloc
{
    // Make our context current, so the OpenGL teardown below
    // operates on the right context
    [EAGLContext setCurrentContext:context];

    [self destroyFrameBuffer];
    [cache release];

    // Only clear the current context once all the GL objects are gone
    [EAGLContext setCurrentContext:nil];

    [context release];
    context = nil;
    [super dealloc];
}

The first call to EAGLContext setCurrentContext makes sure the context is current, so all operations done on the various texture maps and OpenGL objects are done in the right context. Then everything is released–and only at the end is the EAGLContext itself released, followed by the view.

I’ve also uploaded a new version of FlowCover which puts the FlowCover view on a second view; press the “test” button to have it pop up.

A new version of the code has been uploaded here. Download and incorporate into your code as you will.

Oh, and I also modified the code license; essentially you do not have to include mention of me in the binary redistribution; just the source code.