NSOutlineView

I’m spending a lot of time now with my new friend, the AppKit object NSOutlineView. Right now I’m trying to implement drag and drop, which is easy if you actually find the documentation for NSOutlineViewDataSource–which, silly me, isn’t mentioned in the “Drag and Drop Programming Topics for Cocoa.” Instead, that document only covers drag and drop for tables–and silly me, knowing that NSOutlineView inherits from NSTableView, I started writing the data source methods for table drag and drop instead of the outline view versions.

The other thing that is extremely disappointing is NSViewController. When I first read about it I thought “oh, cool; the perfect class thingy to insert into the view hierarchy to provide the functional equivalent of an NSWindowController, but for just a subtree of the window’s view hierarchy!”

Uh, no.

First, NSViewController only exists on 10.5 as far as I can tell–and any application development I want to do should be backwards compatible with 10.4. (Many corporations, such as Yahoo!, are still using 10.4 internally; corporations tend to be about one to two years behind in deploying new operating systems.) Second, it seems NSViewController objects are designed to load and control views that are “out in space”–that is, views that are not associated with a window, or that are programmatically associated with a window later on.

To do what I want to do, I simply created a custom class and hooked it into the responder chain. It’s ugly as hell–but it’s exactly what I want: a way to handle just a subset of controls within a region of my window and properly manipulate their events. (Think of a group of radio buttons or check boxes which change the state of other radio buttons or check boxes in a group box of controls. At a higher level you may just want the abstraction “I’m in state ‘N’”, which translates to some combination of check boxes being enabled or disabled at the lower level. By putting the logic that translates “N” into the state of a bunch of controls in one class, I’ve isolated key functionality somewhere else–which is, to me, the whole point of object-oriented programming: hiding the ugly details in reusable classes.)
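A minimal sketch of that responder-chain hook. All the names here (GroupBoxController, -setState:, the enable-everything mapping) are made up for illustration; the splicing pattern is the point:

```objc
// Hypothetical sketch: an NSResponder subclass that splices itself into
// the responder chain just after the view it manages.
#import <Cocoa/Cocoa.h>

@interface GroupBoxController : NSResponder
{
	NSView *fGroupView;		// the box of controls we manage (not retained)
}
- (id)initWithView:(NSView *)view;
- (void)setState:(int)state;	// translate "N" into control settings
@end

@implementation GroupBoxController
- (id)initWithView:(NSView *)view
{
	self = [super init];
	if (self) {
		fGroupView = view;
		/* Splice ourselves into the responder chain: events the view
		 * doesn't handle pass through us before continuing upward. */
		[self setNextResponder:[view nextResponder]];
		[view setNextResponder:self];
	}
	return self;
}

- (void)setState:(int)state
{
	/* Walk the subviews and enable/disable controls to match 'state'.
	 * The ugly mapping lives here, hidden from the rest of the window;
	 * enabling everything when state != 0 is just a stand-in. */
	NSEnumerator *e = [[fGroupView subviews] objectEnumerator];
	id c;
	while ((c = [e nextObject]) != nil) {
		if ([c isKindOfClass:[NSControl class]])
			[c setEnabled:(state != 0)];
	}
}
@end
```

The controller can also override action or event methods to intercept them before they travel further up the chain.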

Things I’ve learned while playing with AppKit

Every framework I have ever encountered, every API, Toolkit, or whatever, consists of (1) a specification, (2) an implementation, and (3) a whole bunch of bugs which require experience to understand and work around. In many ways one could say that the difference between one who is just learning a framework and one who understands the framework is in part measured by how many bugs one “just knows how to work around.”

Here is what I’ve learned so far about Apple’s AppKit:

(1) If you want to make Apple’s synchronized scrolling hint work, you need to make sure the document views across the different scroll windows have the same frame origin. In order to guarantee this, the following snippet of code seems to work for me:

- (id)initWithFrame:(NSRect)frame
{
	/* Kludge: Force the origin to (0, 0) so we can do synchronized
	 * scrolling across multiple disparate document views. Otherwise we
	 * don't have a common origin, and things don't stay in sync because
	 * the origin from the nib is essentially random.
	 */

	frame.origin.x = 0;
	frame.origin.y = 0;

	self = [super initWithFrame:frame];
	if (self) {
		/* ... the rest of your initialization goes here ... */
	}
	return self;
}

(2) (And the subject of my last rant post) If you embed your own custom view within an NSScrollView, and your view implements either -(NSView *)headerView or -(NSView *)cornerView, you’ll get a horizontal header or a corner view, à la NSTableView’s documentation of both of these methods. (In other words, NSScrollView handles and manages the clip views for the table view by seeing if the document view implements these methods.) Handy if you need a custom header along the top of your document view–though honestly, if you’re creating a table or a drawing program, you’re better off using the built-in NSTableHeaderView or NSRulerView classes instead.

For example:

- (NSView *)headerView
{
	if (fRuler == nil) {
		NSRect r;
		
		r.origin.x = 0;
		r.origin.y = 0;
		r.size.width = 320; // default width; resize frame with correct width as frame resizes
		r.size.height = 16;
		
		fRuler = [[ResourceRulerView alloc] initWithFrame:r];
	}
	return fRuler;
}

(3) Oh, and to answer the oft-asked question: how do you get your document view to grow from the top down rather than from the bottom up, and how do you make it stick to the top?

(a) Implement -(BOOL)isFlipped in your custom view and return YES. Yes, this flips the coordinate system so the upper left corner is the origin instead of the lower left corner. No, I don’t know how to do this without flipping the coordinate system: I’m new to this whole thing. 🙂

(b) Make sure you flip the autosize parameters in Interface Builder: pin your custom view to the lower left rather than the upper left, so that once the coordinate system is flipped the document stays put at the top as the window resizes.
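In code, the pair of settings looks something like this. MyDocumentView is a made-up name, and the autoresizing mask shown is my best guess at the programmatic equivalent of the Interface Builder springs-and-struts setting:

```objc
// Sketch of a flipped document view that grows downward from the top.
#import <Cocoa/Cocoa.h>

@interface MyDocumentView : NSView
@end

@implementation MyDocumentView
- (BOOL)isFlipped
{
	return YES;		// origin at top-left; y grows downward
}

- (id)initWithFrame:(NSRect)frame
{
	self = [super initWithFrame:frame];
	if (self) {
		/* I believe this matches pinning to the (pre-flip) lower left
		 * in Interface Builder: the top and right margins are the
		 * flexible ones, so the visible top edge stays put as the
		 * document grows. */
		[self setAutoresizingMask:(NSViewMaxXMargin | NSViewMaxYMargin)];
	}
	return self;
}
@end
```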

Disclaimer: All of these things seemed to work for me. However, I reserve the right (as always) to be a complete idiot and get the details wrong.

The Objective-C/Java Flame Wars: Uh, huh?

Several months ago there was a flame war amongst many of the better-read technical blogs, and the consensus I read in places such as Daring Fireball and Thought Palace was that Java sucked–and part of the reason why it sucked was that Java Swing sucked.

By extension, of course, the Objective-C Application Framework was a very well thought out framework designed by people who knew how to do user interface design–and beautiful user interface design–as opposed to those neanderthal Java developers at Sun who, in a massive language land grab, built a user interface environment which was so horrific that it doesn’t even entertain being taken seriously as a user interface API.

Okay, I will admit Swing has its places where the designers made some really bad design decisions. The whole Swing pluggable L&F environment may be cool in theory on platforms like Linux (where the idea of a design language is either C or C++), but it really blows on Windows or on the Macintosh, as it requires extra work to make your application fit in rather than look like a refugee from a 1980’s Motif concentration camp. And then there’s the fact that it took Sun until v5.0 of Java to realize that floating windows aren’t a bad thing–a fact that some Window Managers still haven’t figured out–combined with the lack of a reasonable serialization standard… okay, I could bitch about Swing on and on and on. (And please don’t get me started on keyboard shortcuts, or the fact that you have to write extra code to make the escape key cancel a modal dialog box.)

But now that I’m starting to crack the mysteries of the Objective-C Application Framework, I’m not exactly seeing the nerdvana of a perfect and well designed UI construction environment created by experienced user interface designers.

To wit, I give you–NSScrollView.

I only have three words for those who think the Objective C Application Framework is superior to Java Swing: What The Frack?

First, NSScrollView really really wants to pin your document to the lower left, according to the Application Framework’s insistence on using a mathematically correct coordinate system with the origin to the lower left. That’s great for the 5 programmers out there who are creating statistics and mathematical plotting packages. But for the rest of us, who are creating text and textually-based packages, flipping the Y axis isn’t just an exercise in mathematically imprecise masturbation. Documents–and by extension computer displays–tend to be read top down. Scrolling tends to focus on top-down reading. Documents should start at the top and go down.

And don’t give me this “just use -isFlipped;” nonsense, as without knowing the magic size settings to fiddle in the Interface Builder, when you have a document that is bigger than the window, on resize you’ll find that your document expands upwards, not downwards like every fracking window out there. (You need to select your custom view, then set its size in the inspector to pin to the lower left instead of the upper left. By pinning it to the lower left, after the coordinate system is flipped, document resizing will fix the upper left as you expand your window. Yes, that made no sense. But it works.)

But that’s not what brings me here today; oh, no. I can live with the quirkiness of some stupid design decision made early on that now everyone winds up living with because it’s too deep in the underlying framework to fix. And God knows there are plenty of these in any framework, where you wish some early on design decision could be taken back, but it’s too late now, and now everyone has to just learn the quirk to get along with your code.

No, what I want to talk about is creating custom horizontal and vertical headers.

In Swing, this is a relatively simple proposition: when building your JScrollPane, you also create a view which manages your custom column header, then add it with jScrollPane.setColumnHeaderView(). This may make creating a table view a little harder–you have to tell the JScrollPane holding your table to use the header from your constructed table–but it’s all fairly well documented in Sun’s “How to Use Tables” docs. And for the small percentage of us who want to roll our own tables–because we have a complex drawing task we need to accomplish, or because we need to draw something that is almost, but not entirely like, a table–well, we can just build a document view and a header view, and pass them both to the JScrollPane.

So I went to figure out how to create my own custom header view for my NSScrollView. Okay, perhaps I was going about it the wrong way: perhaps I should have just constructed a table view then used a very large hammer to bang the damn thing into the precise shape I wanted. However, that’s not what brings me here either.

What brings me here is the following snippet of code which I ganked from GNUStep. (As a footnote: I find GNUStep invaluable for understanding the fringe cases and the assumed design models of an application framework–and every application framework has underlying assumptions as to how the pieces will be strung together, assumptions which seem obvious to the initiated but are often completely undocumented by the developers.) What’s great about GNUStep is that, at least for the core elements, you can look at the code and learn how things are actually built under the hood; so when gaps show up in Apple’s documentation (and Apple’s documentation has gaps you can drive a truck through–which disappoints me, given how great pre-NeXT Apple docs were), you at least have a fighting chance of figuring out what’s going on without wasting a day.

And here is what I found:

- (void) _synchronizeHeaderAndCornerView
{
  BOOL hadHeaderView = _hasHeaderView;
  BOOL hadCornerView = _hasCornerView;
  NSView *aView = nil;

  _hasHeaderView = ([[self documentView] 
                        respondsToSelector: @selector(headerView)]
                    && (aView=[(NSTableView *)[self documentView] headerView]));
  if (_hasHeaderView == YES)
    {
      if (hadHeaderView == NO)
        {
...

In other words, deep in the bowels of NSScrollView, in order to allow an NSTableView to build a custom table column header, NSScrollView gets the current document being scrolled–and if that document responds to headerView, uses that view as the header.

And notice that unnecessary cast to NSTableView. Not to belabor the point here, but what the hell is this? You have a hook, undocumented in NSScrollView and mentioned only in passing in NSTableView, which lets you give a header to your custom view–to me, that’s a sure sign of a hack. A kludge. A piece of code tossed in (and this is not GNU’s fault–GNUStep is simply implementing, to the best of their ability, the OpenStep specification on which Cocoa is based) to support one specific case behind the scenes–the need to build a custom table view–rather than creating a general solution and then implementing the specific instances for specific cases.

WTF?

Any time in your class hierarchy where you use “supernatural knowledge” of another class–and one of the things that irritates me about Objective-C is how easily someone can create hooks which rely on this “supernatural knowledge” by querying an object to see if it implements a poorly documented method–it’s a sure sign of something that congealed, rather than something that was designed.

And that wouldn’t matter much to me–after all, any software project is about making compromises to get something out the door–if the accusations of poor design weren’t leveled so fiercely by those who think Objective C is the epitome of good UI framework design. And I speak as someone who once built my own framework.

Feh!

So… How many languages can you use?

Let’s see: Java + JSP for the server. C++ for the core desktop and phone engine. C++ for the Windows and WinCE version. Objective-C++ for the Mac and iPhone version.

Sounds just about right for a custom client-server system targeting multiple platforms: it’s about using the right tool for the job, and keeping things as simple as they can be–but no simpler.

Heh.

Two used G4-based Mac Minis: $550.
One 17″ monitor: $180
Cheap keyboard and mouse: $30
Two resistors to enable the G4s to run headless: $1.90 (for two packs, one at 270ohms, one at 120ohms).

Instant testbed environment to permit simultaneous debugging MacOS X software for v10.3, v10.4 and v10.5 compatibility: Priceless.

Frustration.

One of the most frustrating things in the world is having what you believe is a brilliant idea for a startup company and a strong drive to create a startup–yet having no one you know whom you think you can approach to go in on such a project.

Bah!

Computer Languages and Entry Points

Every computer language has what I would call an “entry point”: a set of things which you need to ‘grok’ in order to understand that computer language. They’re fundamental assumptions made about the shape of the universe which is just “off” enough that you’ll forever be confused if you don’t get them. (Or at least they’re things that confused me until I got them.)

Here are a few languages I spent time learning, and the ‘entry points’ for those languages.

LISP
* It’s all about recursion. Learn to live by, and love, recursion.
* Because there is no real natural ‘syntax’ per se, pick a ‘pretty print’ style and stick with it.
* There is no such thing as a single “program”; applications just sort of ‘jell’. This is unlike C or Pascal programs, which definitely have a beginning, middle, and end to development. (This fact made me wonder why, besides Paul Graham, no one uses LISP for web development.)

Pascal and C
* The heap is your friend. Understand it, appreciate it, love it.
* The heap is your enemy; it is a mechanism designed to kill your program at every opportunity if given a chance.

C++
* Objects are your friends. Instead of telling the program the steps to accomplish something, you describe the objects which are doing things. To someone who is used to thinking procedurally, this is a really fantastic brain fuck.
* While a->foo() looks like a pointer to a method, it is really a reference to a vtable which happens to hold a pointer to a method with an invisible argument. If you’re used to thinking about C as a glorified portable assembly language (and can even remember which form of a++ for a pointer a translates into a single assembly instruction on a 680x0 processor), vtables help bridge the gap from “talking to bare metal” to “abstract object oriented programming.”
* Resource Acquisition Is Initialization.

Java
* The package hierarchy is like a parallel “code-space” file system.
* When learning Java, also learn the run-time library. Love it or hate it, but you should first learn the java.lang package, followed by java.util, java.io, and then java.net. All the rest is icing.

Objective C
* Learn to love the Smalltalk-like syntax. For someone who cut their eye-teeth on LISP, C++ and Java, the Smalltalk-like syntax leaves you wondering what to call a method; after all, every language except Smalltalk has just one name for a function, and–like any good language based on math–the name prefixes the parameters. Not Smalltalk, and not Objective-C.
* If you cut your eye-teeth on the AddRef()/Release() rules in Microsoft’s COM environment, the retain/release rules are fucking odd. With Microsoft, the rule was simple: if something hands you a pointer to an object, you own it; you release it. If you pass a pointer to an object, you must first increment its reference count, because passing the object means passing ownership.

Not Objective C, which in adding ‘autorelease’ has made the rules a little bit more complicated. Now the rules seem to be:

(1) If you are handed a pointer to an object, you must explicitly retain it if you are holding it.
(2) If you pass an object to another object, it will retain it, so you need to do nothing.
(3) If you create an object out of thin air, you own it–and if you don’t need to hold it, you must autorelease it so it gets released.

Now (3) gets real tricky: basically, there is a naming convention which tells you whether you ‘created an object out of thin air.’ If the name of the method you called starts with ‘alloc’ or ‘new’, or contains the word ‘copy’, you own the object–and if you don’t want to keep it, you have to let it go with autorelease. (Why autorelease instead of ‘release’? Because ‘release’ causes it to go away now, while autorelease causes it to go away “eventually.”)
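A small example of these rules in action. The helper MakeGreeting is a made-up name; the ownership pattern follows the naming convention above:

```objc
// Illustrating the Cocoa retain/release/autorelease conventions.
#import <Foundation/Foundation.h>

NSString *MakeGreeting(NSString *name)
{
	/* Rule 3: -initWithFormat: follows +alloc, so we own the result.
	 * We're not keeping it, so we autorelease before handing it back;
	 * it stays alive "eventually"-long enough for our caller to use. */
	NSString *s = [[NSString alloc] initWithFormat:@"Hello, %@!", name];
	return [s autorelease];
}

int main(void)
{
	NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

	/* Rule 1: we didn't alloc/new/copy this, so we don't own it; if we
	 * wanted to keep it past this pool's lifetime we'd -retain it. */
	NSString *greeting = MakeGreeting(@"world");
	NSLog(@"%@", greeting);

	/* Rule 3 again: -copy means we own the copy, so we must release it
	 * (plain release here, since we're done with it right now). */
	NSString *kept = [greeting copy];
	[kept release];

	[pool drain];	// everything autoreleased above goes away here
	return 0;
}
```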

I long for the simplicity of COM, even though it creates additional verbiage in your code. Though I’d much rather have the garbage collection of Java or LISP: just drop the damned thing, and it’ll go away on its own.

Multi-threaded programming

While not strictly a language, it does impose its own rules on the universe, and there are a few key things to ‘grok’ to get it:

* There really are only a few “safe” design models for building multi-threaded code. You have thread pools, single-threaded event pumps, and re-entrant data structures (which can be as simple as using a semaphore to grant exclusive access to as complicated as using read/write locks on a complex data structure)–and that’s it. In other words, you either have units of work which can be isolated into a single object and worked in parallel (because they’re unrelated)–good for a thread pool–or you have a process (such as a user interface) where you shove all your parallelism into a queue which is operated on in a single thread (as is done in most UI libraries), or you have a specialized case where you have a re-entrant data structure which was designed from the ground up to be multi-thread ready.

And that’s it.

Anything else is a bug waiting to be discovered.

* If you need to create a re-entrant data structure that is more complicated than a semaphore-protected object, you really really need to remember to avoid deadlocks by making sure you use a consistent locking order.
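For instance, here’s a toy sketch of the classic trick of ordering locks by address so every thread takes them in the same order. Account and Transfer are made-up names:

```objc
// Toy transfer between two lock-protected accounts.  Taking the
// lower-addressed lock first gives every thread the same lock order,
// so two opposing transfers can't deadlock on each other.
#import <Foundation/Foundation.h>

@interface Account : NSObject
{
@public
	NSLock *fLock;
	int fBalance;
}
@end

@implementation Account
- (id)init
{
	self = [super init];
	if (self) fLock = [[NSLock alloc] init];
	return self;
}
- (void)dealloc
{
	[fLock release];
	[super dealloc];
}
@end

void Transfer(Account *from, Account *to, int amount)
{
	/* Consistent locking order: always lock the lower address first. */
	Account *first  = ((void *)from < (void *)to) ? from : to;
	Account *second = ((void *)from < (void *)to) ? to : from;

	[first->fLock lock];
	[second->fLock lock];
	from->fBalance -= amount;
	to->fBalance   += amount;
	[second->fLock unlock];
	[first->fLock unlock];
}
```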

* Oh, and it is quite helpful to create a semaphore object or a locking object wrapper, for testing purposes, which times out (and fails noisily) after some set period, such as five minutes. Preferably blowing up by killing all threads waiting on this object with a complete stack trace.
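A sketch of such a wrapper using NSLock’s lockBeforeDate:. DebugLock is a made-up name, and raising an exception here stands in for “blowing up noisily”–a real test build might dump all thread stacks instead:

```objc
// Debug-build lock wrapper that fails noisily instead of hanging forever.
#import <Foundation/Foundation.h>

@interface DebugLock : NSObject
{
	NSLock *fLock;
	NSTimeInterval fTimeout;
}
- (id)initWithTimeout:(NSTimeInterval)seconds;
- (void)lock;
- (void)unlock;
@end

@implementation DebugLock
- (id)initWithTimeout:(NSTimeInterval)seconds
{
	self = [super init];
	if (self) {
		fLock = [[NSLock alloc] init];
		fTimeout = seconds;
	}
	return self;
}

- (void)lock
{
	NSDate *deadline = [NSDate dateWithTimeIntervalSinceNow:fTimeout];
	if (![fLock lockBeforeDate:deadline]) {
		/* Probable deadlock: blow up now rather than hang silently. */
		[NSException raise:@"DebugLockTimeout"
		            format:@"lock not acquired within %g seconds", fTimeout];
	}
}

- (void)unlock
{
	[fLock unlock];
}

- (void)dealloc
{
	[fLock release];
	[super dealloc];
}
@end
```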

Internet Development verses Product Development

I used to think developing for the Internet would be no different than any other sort of product development: there would be some sort of software methodology used to write products, which would ship on a given timetable, with features reduced and a complete QA cycle done prior to releasing a product. The only difference here is what “product release” means: for an Internet company it means uploading your software to a server or bank of servers, while for product development it means pressing a CD master and sending it off for duplication.

In many ways I thought Internet development would be easier than Product development: you only have to make sure your software works on one server rather than thousands (or millions) of individual computers, each with their own quirks and oddities. And while various operating systems go out of their way to help isolate your product from other software running on the same system, various ‘hacks’ tend to burrow into the operating system, creating unpredictable features which may or may not break whatever API you are using.

What I didn’t appreciate is how different Internet development is to Product development.

At first I thought that somehow management where I work was screwed up: everyone was all into “agile” development (as opposed to “waterfall” where I worked before)–and their implementation of “agile” seemed to borrow all the worst elements without any of the benefits. (Waterfall is also a silly methodology, but at least it implied forcing everyone to a similar timetable so different teams didn’t stomp arbitrarily over other teams, as seems to be the case here.) But now I’m realizing that two features of the Internet make the risk/reward equation such a completely different animal that, aside from the fact that you write code and test it, Internet development and Product development share damned near nothing in common.

The first factor is the fact that you only push your software on one computer, a computer you own. This means that it is possible to push incremental improvements damned near daily.

Yeah, various software update panels (Symantec’s LiveUpdate, Apple’s Software Update panel, Microsoft’s Windows Updater) allow your company to push patches out to your customers. But at best this is a mechanism that can be used quarterly: because you are pushing software to potentially millions of computers, you really only have one chance to get it right, and that sort of testing takes time. And even then you still run the risk of screwing it up, as Microsoft learned with its latest Vista patch.

But if your target system is just one server–a server that you own and control–the natural risk of pushing software out is significantly lowered. Sure, large Internet companies put up arbitrary barriers to pushing patches outside the normal delivery cycle (where I now work, there is a committee which approves such things), but even so–where I work there are at most 8 server locations with two boxes each–it’s not the same as pushing software down LiveUpdate.

And that leads to the second difference, which puts incredible pressure on extremely short delivery cycles: software bugs are easier to fix and push out to production.

Because the cost of delivering a fix is extremely low (relative to a product update), the threshold for what constitutes an emergency–requiring everyone to drop everything and push out a fix–is lowered, and thus everything becomes an emergency. Likewise, because everything is an emergency and fixing it in production is easier, there is (a) pressure to shorten the development cycle, and (b) less pressure to think through a problem to make sure the solution being implemented is the correct one.

In the systems I’ve worked on, the code quality is (from my perspective, with four and a half years creating products and another nine years consulting) complete crap: there are places where deadlocks are just patched over, and fundamental flaws are ignored. One system I’ve done some work on desperately needs a complete rewrite, and other systems I’ve seen I’d never in a million years engineer that way.

And while it’s easy to chalk these things up to incompetence, I’ve realized that the economics of an Internet company forces these sorts of shortcuts: experiment, get something out the door, and move on–and if it breaks, take a couple of days and upload a fix. We don’t get six months or even six weeks to think through a problem–fix it in six days or else you’re too slow.

I don’t think I like working for an Internet company.

Sadly, what worries me more is that the attitudes people are learning at Internet companies are being brought over to Product companies–and rather than think through problems, this new generation of software developers slaps crap together because they’re more used to a two-week development cycle than a six-month development cycle. And God help us if critical software systems (or, for that matter, consumer electronics) are developed on Internet time!

Things I Don’t Get.

Suppose there is this system which is designed to process data.

Suppose data goes into that system via a JMS queue, where that data is processed, and depending on the results of that processing, the data is put into one of two outgoing queues.

Now suppose the following: (1) if the data isn’t correctly pulled off of the queue for whatever reason, the item is requeued, and (2) the system is massively parallel: the system doesn’t run on one computer, but on a half dozen.

Now suppose there is a bug in the system: on occasion, for reasons no one really understands, items become ‘stuck’ in the queue: they get pulled off, then they get pushed back. This behavior should never happen–but it does.

What do you do?

Well, hypothetically you could debug the system and fix it, so items don’t get stuck in the queue.

Or you could create a “stuck in queue” report for management, to let them know that while there is a problem, it’s under control.

Really: does a report indicating the volume and scope of your problem really mean the problem is solved? Gah!