Geek Moment: LISP IDE.

Excuse me while I have a geeky moment.

So, in the background, a couple of hours here and there, I’ve been tinkering with a LISP interpreter, with the goal of eventually embedding the LISP engine in an iOS application to make a programmable calculator: one with some ability to do basic calculus, standard scientific calculator functionality, and automatic unit conversion. (I even have some code somewhere which can handle arbitrary unit conversions, including conversions in and out of standard scientific units.)

(Well, properly it’s a LISP compiler which compiles to a VM instruction set.)

A little while ago I managed to write a LISP compiler in LISP, compiled by a bootstrap compiler written in C, which was capable of compiling itself.

Well, now I have a primitive IDE for writing LISP code, which supports syntax coloring and automatic indentation, as well as setting breakpoints in my LISP code (step over and step out turn out to be a bitch) and examining the variable stack.

Breakpoints are managed by maintaining a table (including PC references into the instruction stream) and patching in a special instruction which halts the VM; “single step” is handled by executing one VM instruction at a time until I hit one of the PC breakpoints.

And when not single-stepping instructions, the VM runs on a second thread, allowing me to stop execution arbitrarily (in case of infinite loops).

Some pictures: showing breakpoints


Showing the debugger in progress, examining the stack and variables.


Of course there are some usability issues; I certainly wouldn’t want to ship this as a product. (In particular, you need to specify the entry point of a routine to debug, and breakpoints require the code to be compiled first–and compilation is a manual process; you have to press the compile button before adding breakpoints or starting to debug.)

And there are some gaps–the LISP implementation is incomplete, closures haven’t been written into the compiler (yet), and I don’t have a means to write to standard out via a LISP instruction.

But there is enough to start actively building these components and testing them within the IDE, and fleshing out my LISP implementation–particularly since my LISP compiler is built in LISP, so adding features can be done with my IDE.

Next steps: include a window in my IDE which emulates the planned UI for the iOS product–a calculator keyboard, menu and display system, with keyboard events executing predefined LISP instructions, the menu built from a LISP global variable, and the display formatting its contents from another LISP global variable–as well as fixing bugs, fleshing out the LISP interpreter, and creating a mechanism for printing from LISP into my output window.

After that I can start writing the calculator code in LISP.

The syntax coloring was hung off the NSTextView (by setting a 1-second timer on text-update events, and running a simplified LISP parser to do syntax coloring and variable detection to color local variables), and autoindent was handled by listening for newline events and calculating the proper indentation for the current parenthesis level.
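The indentation calculation itself is simple; here is a rough Python sketch (the real hook is Objective-C listening for newline events, and the function name here is made up):

```python
def indent_for_newline(line_so_far, indent_width=2):
    """Indentation for a fresh line: indent_width spaces per unclosed
    parenthesis, ignoring parentheses inside string literals."""
    depth = 0
    in_string = False
    for ch in line_so_far:
        if ch == '"':
            in_string = not in_string   # toggle on string delimiters
        elif not in_string:
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
    return " " * (indent_width * max(depth, 0))

print(repr(indent_for_newline('(defun sq (x)')))   # '  '  (one unclosed paren)
print(repr(indent_for_newline('(let ((x 1)')))     # '    ' (two unclosed parens)
```

A production version would also skip comments and handle bracket styles, but the paren-depth count is the core of it.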

And the nice part is that similar hooks exist in UITextView on iOS, and the debugger wrapper class should port fairly easily as well–meaning the functionality of the IDE should carry over to iOS without much work.

Things to remember: Time Zones.


Okay, here are some things to remember about time zones, jotted out in no particular order.

  • On most modern operating systems, “time” is an absolute quantity, usually measured as the number of seconds from some fixed time in the past, for example, since midnight on January 1, 1970 GMT. This fixed time is called the “epoch.”

This time is an absolute quantity regardless of where you are in the world or how you represent time at your location.

  • Time zones are the means by which we represent an absolute time in a human-readable format, appropriate for the location of the person on the Earth.

So, for example, 1,431,385,083 seconds since our epoch may be displayed as “May 11, 2015 @ 22:58:03 UTC”, or as “May 11, 2015 @ 6:58:03 PM EDT”, or as “May 11, 2015 @ 3:58:03 PM PDT”.

While it is easy to think of time zones as an offset from GMT, I find that it’s better to treat time zones as a sort of “black box,” since the rules as to which offset applies at a given date can be quite complex. (And if you are in one of the counties in the part of Indiana which cannot make up its mind which time zone it wants to be in–EST or CST–the actual rules can be quite arbitrary and bizarre.)

So, in a sense, a time zone is a thing which takes an absolute number of seconds since the epoch and presents a time suitable for display to the user.
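A quick illustration of that black box in Python, using the standard zoneinfo module (Python 3.9 and later): the same absolute number of seconds renders three different ways depending only on which time zone formats it (note that in May the daylight abbreviations EDT and PDT are in effect):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library as of Python 3.9

epoch_seconds = 1_431_385_083  # absolute, regardless of where you are

utc = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
ny  = utc.astimezone(ZoneInfo("America/New_York"))
la  = utc.astimezone(ZoneInfo("America/Los_Angeles"))

print(utc.strftime("%b %d, %Y @ %H:%M:%S %Z"))  # May 11, 2015 @ 22:58:03 UTC
print(ny.strftime("%b %d, %Y @ %H:%M:%S %Z"))   # May 11, 2015 @ 18:58:03 EDT
print(la.strftime("%b %d, %Y @ %H:%M:%S %Z"))   # May 11, 2015 @ 15:58:03 PDT
```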

  • When dealing with web applications or mobile applications, the time zone to use when displaying a time depends on the application.

Most of the time, the appropriate time zone to use when displaying a time is the device’s native time zone. This is the default on most platforms: the classes which display or parse times use the time zone the device is in unless told otherwise.

Sometimes, however, it may be appropriate to display the time in a different time zone. For example, an application that may display the time of an event at a park or for a conference or at a particular movie theater should use the time zone associated with the location of the park or conference or movie theater.

And sometimes I would argue it would be appropriate to display the time twice in two separate time zones. For example, a chat application which notes the sender and receiver of a message are in two different time zones may wish to display the time in the receiver’s time zone and a second time in the sender’s time zone.

Rarely, you may need to do something else. For example, an application may track an event that happens at the same time of day across multiple dates, regardless of the time zone in effect. If that is the case, you may wish to do something other than store the time as an absolute number of seconds and translate it to the appropriate time zone.
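To see why an absolute-seconds representation fails for “same time of day every day,” consider two 9:00 AM events straddling the spring 2015 DST transition in the United States (a Python sketch using the standard zoneinfo module):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library as of Python 3.9

tz = ZoneInfo("America/New_York")

# 9:00 AM local time on the day before and the day of the United States
# spring-forward transition (March 8, 2015).
before = datetime(2015, 3, 7, 9, 0, tzinfo=tz)
after  = datetime(2015, 3, 8, 9, 0, tzinfo=tz)

hours_apart = (after.timestamp() - before.timestamp()) / 3600
print(hours_apart)   # 23.0 -- not 24: the clock "sprang forward" overnight
```

Store such an event as “9:00 AM plus a date plus a zone,” not as an epoch offset, or the event drifts by an hour twice a year.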

  • If you ever need to ask for the time zone a user is in–for example, you need to set the preferred time zone for a movie theater–remember: in the United States there are only 10 time zones.

If you look through the IANA Time Zone Database, you may see dozens and dozens of time zones for the United States. But the reality is, there are only 10 time zones that you need to worry about: the nine mentioned in this article, plus Arizona, which does not observe daylight saving time.

If the user lives in the areas of Arizona which do observe DST, they can manually select “Mountain Time”; users in Indiana can sort out their own state’s issues.

(Many of the entries in the IANA database deal with historic timezone issues as individual areas of the country tried to sort out which time zone they wanted to be in.)

Similarly, Canada has only 6 time zones, and other nations really only observe a handful of time zones.

  • In general, time zones in the IANA Time Zone Database are named after the most populous city in the zone. Which means if you want to present a more user-friendly label, you’ll need to roll something which translates between the IANA identifier and a friendly time zone name.

For example, PST/PDT is actually labeled “America/Los_Angeles”; Los Angeles is the most populous city in the area which observes Pacific Time.
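Such a translation can be as simple as a lookup table; the entries below are a hypothetical sketch, not an exhaustive list:

```python
# Hypothetical lookup table mapping IANA identifiers to user-facing labels;
# in practice you'd cover the ten United States zones discussed above.
FRIENDLY_NAMES = {
    "America/New_York":    "Eastern Time",
    "America/Chicago":     "Central Time",
    "America/Denver":      "Mountain Time",
    "America/Phoenix":     "Mountain Time (no DST)",  # Arizona
    "America/Los_Angeles": "Pacific Time",
    "America/Anchorage":   "Alaska Time",
    "Pacific/Honolulu":    "Hawaii-Aleutian Time",
}

def friendly_name(iana_id):
    # Fall back to the raw identifier for zones we haven't labeled.
    return FRIENDLY_NAMES.get(iana_id, iana_id)

print(friendly_name("America/Los_Angeles"))  # Pacific Time
```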

Things to remember: IB_DESIGNABLE

Some random notes about creating designable custom views in Xcode:

  1. Adding the symbol IB_DESIGNABLE just before the @interface declaration in your view’s header will mark your view as designable in Interface Builder; the view will then be rendered live in Interface Builder.
  2. Each property that is declared with the IBInspectable keyword will then be inspectable and can be set in Interface Builder.
  3. The properties that can be inspected this way are: “boolean, integer or floating point number, string, localized string, rectangle, point, size, color, range, and nil.”
  4. The preprocessor macro TARGET_INTERFACE_BUILDER is defined to be true when the code is compiled to run inside Interface Builder. Use this to short-circuit logic which shouldn’t run in IB.
  5. Likewise, the method - (void)prepareForInterfaceBuilder; is called (if present) when the class is initialized, but only when it is loaded in Interface Builder.
  6. You can debug your views within Interface Builder; Apple documents the process here.

Things to remember: Apple Store and TestFlight Edition.

  1. The Apple ID you sign in to your developer account with and the Apple ID you sign in to iTunes Connect with must be the same ID. If they are different, you cannot submit through Xcode.
  2. You cannot get around this by doing an ad-hoc export of your .IPA file and submitting it with Apple’s Application Loader. For some reason your .IPA will not be signed with the correct entitlements to enable TestFlight.
  3. Until you get all the stars to correctly align:
    • Use the same Apple ID for your developer and iTunes Connect accounts,
    • Set up a new application in iTunes Connect for the application you wish to test,
    • Correctly build your .IPA with a distribution certificate and a provisioning profile which is set up correctly,
    • Correctly submit your .IPA through Xcode (rather than exporting the .IPA and submitting with Apple’s Application Loader),
    • Correctly update your build numbers as you submit incremental builds,

    then the Apple TestFlight UI makes absolutely no fucking sense.

    There is no hint how to get through the happy path, or why things go wrong. You just see weird errors that make no sense, and instead of seeing a line in your pre-release software list saying why something cannot be tested, you instead just see a blank line and errors which make little sense and give zero hint as to what you did wrong.

Apple sure can suck the wind out of the fun of writing iOS software.

Update: Even if you do manage to finally work the ‘happy path’ correctly, (a) Apple still does a review on the application before they allow it to go out to “external testing” (really?!? isn’t the point of ad-hoc distribution for your own people to test features for incomplete applications???), and (b) it’s not entirely clear why some of your team members do not have the ability to become “internal testers.”

Combine this with the fact that each Apple device generally is held by a single owner with a single Apple ID, and that makes even accepting invites (assuming invites are sent out) for a consultant who may have multiple Apple IDs a bit problematic. Especially if I want to test for a client on my own private iPhone or iPad.

Why design patterns from web development make crappy iOS or Android apps.

Increasingly I’ve been encountering software developed for iOS which was written by someone who cut their eye-teeth on writing UI software for the web in Javascript. And I’ve noticed a common pattern arising in that code which makes it overly complex and hard to maintain on iOS.

Basically, they treat UIView-based objects as passive objects.

And if you started life writing software on the web using Javascript, this makes sense. After all, the view structure represented in the DOM hierarchy is by and large a passive participant in the user interface. When you want to present a list of items in a scrolling region, for example, you create the scrolling region, then populate the items in that region, setting the attributes of each of the <div> and <li> sections with the appropriate CSS styles for overflow and font information.

(In something like GWT it’s not hard to create your own custom view objects, just as you can create custom view objects in Javascript. But at the bottom of the stack, the Menu class or the Table class you built simply declares a <div> and manipulates the subset of the DOM within that <div> tag. The view hierarchy is still a passive participant; you simply wrap part of that passive content into something with some logic.)

The problem is that when you move all this onto iOS or onto Android or onto Windows or onto MacOS X, you lose one of the most powerful elements of those platforms–and that is that views are active objects rather than passive participants.

Meaning that a child class of a UITableViewCell in iOS can actively know how to present a data record passed to it; you do not need to put the code to populate the table view cell within the UITableViewController class. You can build an image view which knows how to make a network call and load and cache an image without putting image loading and caching code into the UIViewController. You can create a single view which has complex behavior–without having to put the behavior code inside the view controller.

And this allows you to create very complex user interface elements and reuse them throughout your code.

Of course this also needs to be balanced with the available tools. For example, it makes no sense to create a custom UIButton which presents a button with a different font, when you can set the font and appearance of a default button within the NIB.

But it does indicate that in many cases the proper thing to do is to push functionality down into the view, rather than make the view controller responsible for that logic.

You can’t do this in Javascript. You cannot create a new <myTableTag> which takes special arguments and presents the contents in a unique way, which can be reused throughout your site.

And there is nothing inherently “wrong” with putting all the logic for your button’s behavior, the way table view cells are laid out, and the responsibility for loading images in the background into your view controller.

It just makes life far more complex than it has to be, because you’re leaving one of the most important tools in your arsenal on the floor.

Design affordances

An “affordance” is an element of a design which suggests to the user how to use or operate an object. For example, the handle of a teapot suggests how to grab the teapot to lift it; a doorknob suggests a means to opening a closed door.

Affordances are inherently cultural: an Amazonian tribesman who has never visited the modern world may need help discovering how to open a closed door. But affordances that work well are common: once he learns how to open one door he should be able to open all doors–even those with slightly different affordances. (A lever rather than a knob, for example, so long as it is located where the knob would be located.)

Because they are cultural, like all cultural things we take them for granted: we don’t recognize that, for example, there are multiple different ways to latch a closed door and an accident of history could have us reaching down for the floor latch to unlatch a door, rather than rotating a knob or lever.

Design failures are those where affordances are not clearly discoverable or do not follow the design language we’ve learned. For example, if one were to design a door with a latch at the floor, and no knob or push plate, how long would it take you to discover that the thing holding the door shut was a latch at the floor rather than a knob at waist height? Probably more than a few moments–and you’d probably be very frustrated.

Now unlike the physical world, where there really are only so many interactions you can have to open a door or a window or lift a teapot from a hot surface, the software world is completely abstract. There are no limits, really, on what you display or how you interact with that display. In custom installations (such as in your car or at a kiosk), you can literally use any hardware interaction you like: the BMW iDrive system, for example, uses a rotary knob that can be pushed around like a joystick and depressed, as well as two separate buttons. Modern touch screens make it possible to tap with multiple fingers and tap, press and hold, tap and drag, or any combination of those motions.

And because affordances are, in some sense, cultural (there is no reason why a door knob is at waist level except custom, for example), affordances become part of a “design language”–a means of representing things on a screen that say “I can be touched” or “I can be clicked” or “This is how you back up a level on the screens.”

A well designed set of affordances as part of a well designed visual language can do three things:

They can provide a way for users to quickly discover how to use your interface, thus reducing user frustration. (A common ‘back’ button to back up from a modal display in a car would make it easier on a driver who may be operating the car’s systems while driving down the road at 70 miles/hour.)

They simplify the design of a new application. A common visual design language gives designers and developers a short recipe book of design objects and design attributes to follow.

They simplify user interface development, by allowing software developers to build common control elements which embody these design elements. For example, if all action buttons look the same, a developer can write a single action button class (or use one provided by the UI framework), saving himself the task of building separate buttons for each screen of the application.

Likewise, a poorly executed visual design language creates problems on each of these fronts: it increases user frustration, since none of the elements of an app (or of multiple apps on the same system) work enough alike for the user to know what is happening. It complicates design, since designers feel free to lay out each page of an app separately, without following similar design patterns. And it increases the workload on developers, who cannot use common controls or common design elements in the application.

The most frustrating part to me is that at some level the computer software industry has come to consider these three failures features rather than bugs: a lack of a common design language allows elements of the application to be partitioned across different teams without having to worry about those teams needing to communicate. And at some level some users have come to believe some level of difficulty in using an application is a feature; a list of hurdles to overcome in order to become an “expert” on a product.

Though why one must become an “expert” to use a poorly designed radio function embedded in a car computer boggles my mind.

My problem with Apple’s iOS design guidelines is twofold.

First, while Apple’s section on UI elements is rather complete in describing each of the built-in controls provided by iOS, it provides no unifying theory behind those controls to suggest how designers seeking to build custom controls should proceed.

For example, there is no unified theory as to how tap-able items should stand out that says to the user “I can be tapped on.”

As it turns out, that’s because there is no unified theory: along the top and bottom of the screen, controls are tappable based solely on their placement. In a tab bar at the bottom, all icons are assumed tappable and represent switching to a different tab; at the top, in a navigation bar, the words in the left and right corners are tappable–the left generally takes you back a level, and the right generally represents an action appropriate to the screen you are looking at. Tappable areas in an alert view are suggested by a gray hairline breaking the alert box into sections, and often (but not always) icons suggest tappability.

Second, the only suggestion Apple provides for custom controls is to “Consider choosing a key color to indicate interactivity and state,” and to “Avoid using the same color in both interactive and noninteractive elements.”

In other words, the only design affordance Apple talks about is using color to indicate interactivity: a colored word may indicate a verb for an action, a colored icon may indicate a control that can be tapped on.

Which, in my opinion, directly conflicts with: “Be aware of color blindness.”

My own rule of thumb, by the way, is that if the affordances of your application cannot be discovered when your interface is rendered in black and white, then they cannot be discovered by people who are color blind.

Color, in other words, is a terrible affordance.

Google, apparently, is trying to resolve this by proposing a different user interface paradigm of “material.”

Many of the suggestions seem a little whacky (such as the idea that larger objects should animate more slowly as if they have greater mass), but the underlying theory of using shadows and hairlines to subtly indicate components of a user interface goes a long way towards solving the problem with affordances on a touch display.

(Which is why the idea that material can “heal itself” bothers me, because if an item is separate and can be moved, it should be represented as a separate object.)

Which is why I’ve taken the time to write this long blog post: because at least Google is trying to create a unified theory behind its user interface choices, and that suggests that Google may be heading in a completely different direction than Apple when it comes to user design.

In other words, Apple appears to be headed towards using the design elements of book design (typography, color, subtle use of shapes and typeface that may change from product to product) to suggest functionality, while Google is taking a more traditional HIG approach of creating an underlying unified theory as to what it is we’re manipulating, then building the basic building blocks off of this.

So, for example, a table cell in a table with a detail button may look like this on Apple’s iOS devices, but on Google the theory of “material” suggests we may separate the individual rows and split out the right part of each row to form a separate, clickable pane:


And while the design on the right, the Google “material” style, looks considerably busier than the design on the left, its functionality makes itself apparent rather quickly.

It’s also, in a sense, far easier to design: icons do not need to be designed to differentiate a ‘verb’ action button from a status icon. Components can be separated out so that they are obviously clickable by simply bordering the control appropriately. Actions can be made apparent on the screen.

And while it is true that this gives rise to what appears to be a more “clunky” screen, the reality is that on any mobile device–especially a phone form-factor device–the user will be confronted with dozens (or even hundreds) of different screens, each capturing one bit of information (a single item from a list of choices, for example); configuring the equivalent of a single dialog box on a desktop machine may involve passing in and out of dozens of different screens on a mobile device.

Quickly being able to design those screens and make their functionality apparent is extremely important, and if you don’t have the aesthetic design sense of a Jonathan Ive, having guide rails that result in some degree of clunkiness may be more desirable to the end user than a design which is completely unusable.


One of the biggest mistakes Apple made in the design of the iOS API was to expose the component views within a UITableViewCell.

Okay, so it really wasn’t Apple’s fault; they wanted to solve the problem of providing access to the components of a table view cell so we could use the off-the-shelf parts easily in our application. By exposing the textLabel, detailTextLabel and imageView UIView components, it makes it easy for us to modify the off-the-shelf parts to our own needs, providing a look which is consistent with the rest of the iOS universe.

But it was a mistake because it taught an entire generation of iOS developers some really bad habits.

One of the things that I greatly appreciate about object-oriented programming is that it allows us to easily design applications as a collection of distinct, separate objects with well-defined purposes or tasks. This separation of concerns permits us to create modular classes, which have the following advantages:

  • Maintainable code. By isolating components into well-defined objects, a modification that needs to be made to one section of the code will, in a worst-case scenario, require minor modifications in a few other isolated areas of the code. (This, as opposed to “spaghetti code,” where one change ripples throughout the entire application.)
  • Faster development. Modules which are required by other areas of the application but which haven’t been developed yet can be “mocked up” in the short term to permit testing of other elements of the code.
  • Simplified testing. Individual modules can be easily tested and verified as correct within their limited area of concern. Changes that need to be made to fix a bug in an individual module or object can be made without major changes in the rest of the application.

There are others, but these are the ones I rely upon on a near daily basis as I write code.

Now why do I believe Apple screwed up?

Because it discourages the creation of complex table views (and, by extension, complex view components in other areas of the application) as isolated and well-defined objects which are responsible for their own presentation, and instead encourages “spaghetti code” in the view controller module.

Here’s a simple example. Suppose we want to present a table view full of stock quotes, consisting of the company name, company ticker symbol, current quote and sparkline–a small graph which shows the stock’s activity for the past 24 hours.

Suppose we code our custom table cell in the way that Apple did: by creating our custom view, custom table layout–and then exposing the individual elements to the table view controller.

Our stock data object looks like:

@interface GSStockData : NSObject

@property (readonly) NSString *ticker;
@property (readonly) NSString *company;
@property (readonly) double price;
@property (readonly) NSArray *history;

@end

This would give us a stock table view cell that looks like this:

#import "GSSparkView.h"

@interface GSStockTableViewCell : UITableViewCell

@property (strong) IBOutlet UILabel *tickerLabel;
@property (strong) IBOutlet UILabel *companyLabel;
@property (strong) IBOutlet UILabel *curQuoteLabel;
@property (strong) IBOutlet GSSparkView *sparkView;

@end


And our table view controller would look like this:

#import "GSStockTableViewController.h"
#import "GSStockData.h"
#import "GSStocks.h"
#import "GSStockTableViewCell.h"

@interface GSStockTableViewController ()
@end

@implementation GSStockTableViewController

#pragma mark - Table view data source

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [[GSStocks shared] numberStocks];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
	GSStockTableViewCell *cell = (GSStockTableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"StockCell" forIndexPath:indexPath];

	GSStockData *data = [[GSStocks shared] dataAtIndex:(int)indexPath.row];

	cell.tickerLabel.text = data.ticker;
	cell.companyLabel.text =;
	cell.curQuoteLabel.text = [NSString stringWithFormat:@"%.2f",data.price];
	[cell.sparkView setValueList:data.history];

	return cell;
}

@end

Seems quite reasonable, and very similar to what Apple does. No problems.

Until we learn that the API has changed, and now in order to get the price history for our stock we must call an asynchronous method which queries a remote server for that history. That is, instead of having a handy history of stocks in ‘history’, we have something like this:

#import "GSStockData.h"

@interface GSStocks : NSObject

+ (GSStocks *)shared;

- (int)numberStocks;
- (GSStockData *)dataAtIndex:(int)index;

- (void)stockHistory:(GSStockData *)data withCallback:(void (^)(NSArray *history))callback;

@end

That is, in order to get the history we must obtain it asynchronously from our stock API.

What do we do?

Well, since the responsibility for populating the table data lies with our view controller, we must, for each table cell, figure out if we have the history, pull the history if we don’t, then refresh the table cell once the data arrives.

So here’s one approach.

(1) Add an NSCache object to the table view controller, and initialize in viewDidLoad:

@interface GSStockTableViewController ()
@property (strong) NSCache *cache;
@end

@implementation GSStockTableViewController

- (void)viewDidLoad
{
	[super viewDidLoad];
	self.cache = [[NSCache alloc] init];
}

(2) Pull the history from the cache as we populate the table contents. If the data is not available, set up an asynchronous call to get that data.

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
	GSStockTableViewCell *cell = (GSStockTableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"StockCell" forIndexPath:indexPath];

	GSStockData *data = [[GSStocks shared] dataAtIndex:(int)indexPath.row];

	cell.tickerLabel.text = data.ticker;
	cell.companyLabel.text =;
	cell.curQuoteLabel.text = [NSString stringWithFormat:@"%.2f",data.price];

	/*
	 *	Determine if we have the contents and if not, pull asynchronously
	 */

	NSArray *history = [self.cache objectForKey:@( indexPath.row )];
	[cell.sparkView setValueList:history];		// set history (nil clears the view)
	if (history == nil) {
		/* Get data for cache */
		[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
			/*
			 *	Now pull the cell; we cannot just grab the cell from above,
			 *	since it may have been reused in the time we were loading
			 */

			GSStockTableViewCell *stc = (GSStockTableViewCell *)[tableView cellForRowAtIndexPath:indexPath];
			if (stc) {
				[stc.sparkView setValueList:fetchedHistory];
			}
			[self.cache setObject:fetchedHistory forKey:@( indexPath.row )];
		}];
	}

	return cell;
}

Notice the potential problems that can happen here. For example, if a developer didn’t understand that a cell may be reused by the table view, the wrong sparkline could be placed in the table if the user scrolls rapidly:

	NSArray *history = [self.cache objectForKey:@( indexPath.row )];
	[cell.sparkView setValueList:history];		// set history
	if (history == nil) {
		/* Get data for cache */
		[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
			// The following is wrong: the cell may have been reused, and this
			// will cause us to populate the wrong cell...
			[cell.sparkView setValueList:fetchedHistory];
			[self.cache setObject:fetchedHistory forKey:@( indexPath.row )];
		}];
	}
And that’s just one asynchronous API with a simple interface. How do we handle errors? What if there are multiple entry points? What if other bits of the code are using the table view?

Now if we had put all the responsibility for displaying the stock quote into the table view cell itself:

#import "GSStockData.h"

@interface GSStockTableViewCell : UITableViewCell

- (void)setStockData:(GSStockData *)data;

@end


Then none of the asynchronous calling to get values and properly refreshing the cells is the responsibility of the table view controller. All it has to do is:

#pragma mark - Table view data source

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [[GSStocks shared] numberStocks];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
	GSStockTableViewCell *cell = (GSStockTableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"StockCell" forIndexPath:indexPath];

	GSStockData *data = [[GSStocks shared] dataAtIndex:(int)indexPath.row];
	[cell setStockData:data];

	return cell;
}

This is much cleaner. And for our table cell, because it has all the responsibility of figuring out how to populate the contents, if we have to rearrange the cell, it has zero impact on our table view controller.

Our table view cell becomes relatively straightforward as well:

#import "GSStockTableViewCell.h"
#import "GSSparkView.h"
#import "GSStocks.h"

@interface GSStockTableViewCell ()
@property (strong) GSStockData *currentData;

@property (strong) IBOutlet UILabel *tickerLabel;
@property (strong) IBOutlet UILabel *companyLabel;
@property (strong) IBOutlet UILabel *curQuoteLabel;
@property (strong) IBOutlet GSSparkView *sparkView;
@end

@implementation GSStockTableViewCell

- (void)setStockData:(GSStockData *)data
{
	self.tickerLabel.text = data.ticker;
	self.companyLabel.text = data.company;
	self.curQuoteLabel.text = [NSString stringWithFormat:@"%.2f",data.price];

	/*
	 *	If the history was in our object then we'd write the following:
	 */

	// [self.sparkView setValueList:data.history];

	/*
	 *	Instead we get it through a call to our data source
	 */

	self.currentData = data;	/* Trick to make sure we still want this data */
	[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
		/*
		 *	Verify that the history that was returned is the same as the
		 *	history we're waiting for. (If another call to this is made with
		 *	different data, self.currentData != data, as data was locally
		 *	cached in our block.)
		 */

		if (self.currentData == data) {
			[self.sparkView setValueList:fetchedHistory];
		}
	}];
}

@end

And what about our caching?

Well, if we’re using proper separation of concerns, we’d create a new object responsible for caching the data from our API. And that has the side effect of being usable everywhere throughout the code.

My point is simple: unless you are using the canned UITableViewCell, make your table view cell responsible for displaying the contents of the record passed to it. Don’t just expose all of the internal structure of that table cell to your view controller; this also pushes responsibility for formatting to the view controller, and that can make the view controller a tangled mess.

Make each object responsible for one “thing”: the table view cell is responsible for displaying the data in the stock record, including fetching it from the back end. The view controller simply hands off the stock record to the table view cell. A separate class can be responsible for data caching. And so forth.

By making each object responsible for its “one thing”, it makes dealing with changes (such as an asynchronous fetch of history) extremely easy: instead of having to modify a view controller which slowly becomes extremely overloaded, we simply add a few lines of code to our table view cell–without actually expanding our class’s “responsibility” for displaying a single stock quote.

And in the end you’ll reduce spaghetti code.


Just read the following article: Security Trade-Offs, which refers to an original article claiming that people need to stop trusting Apple with their data because Apple, as a purveyor of “Shiny Objects”, doesn’t understand security.

Which is funny because the original article shows a lack of knowledge of computer security.

I’ve encountered this lack of understanding of security when talking to friends and co-workers as well, and it irritates me. Worse, they “know” they’re right, because of course it’s incredibly obvious, and they’ve read all sorts of stuff which reaffirms their ignorance while they think they’re learning something new.


Security involves three aspects, not just one–and to better understand each of these issues we can think of a house rather than a computer. After all, for your house to be a home, it’d be nice to know it was physically secure, right?

The three fundamental aspects of security are Confidentiality, Integrity and Availability.

For your house to be secure, it needs to be “confidential”: meaning access controls need to be implemented to prevent people who do not have access from getting in. That’s the lock on your front door: the house needs to be locked, your house (just one of a bunch in a neighborhood) is somewhat anonymous, and perhaps drapes on the front windows keep people from seeing whether you have an expensive stereo system and big screen TV inside.

Now the mistake most people make is that they stop here: as long as people can’t break into my house, all is well. But keeping people out of your house is dirt simple: just cover the front door with cement. Bury your house under a mound of dirt. No-one can enter your house if it is encased in a sealed metal box–not even you.

Sure, we can talk about two factor authentication and if our default of putting patio furniture outside makes sense or we should chain your patio furniture down with bolts in your back yard or keep your patio furniture inside your house so someone can’t jump the fence and steal it.

But all this ignores the two other dimensions of security: availability and integrity.

Availability means can you get into your house easily, or are you going to be outside fumbling with your multiple keys and trying to remember button combinations while standing in the rain? Does your house do what you want–can you move from room to room easily and look at the view from the bedroom window–or are the windows encased in bars and are you constantly having to unlock the door to your bathroom? (After all, your house would be more secure if every door was equipped with a combination lock which automatically locked when the door automatically closed.)

And integrity: does all the weight of those metal plates and the bars on the window corrupt the appearance of your house or make part of it structurally weak and cause the back bedroom to collapse? Sure, you can reduce the attack profile of your house by barring up all the windows–preventing crooks from breaking in by breaking through the window. But you’ve corrupted the functionality of your house: you’ve made it difficult to evacuate your house in the event of a fire. (People have died because of this.)

And sure, the default of putting your patio furniture unlocked in the back yard where it can be stolen by any miscreant capable of climbing a fence makes no sense if you only look at access controls–but if your guests cannot move the patio furniture around freely or you’re constantly dragging the furniture out from a locked garage (locked with a combination lock and separate deadbolt lock), you’re not going to use your patio.

The lack of easy availability of your patio furniture, in other words, means you may as well not have any furniture outside.

Think of Apple as the developer of a subdivision of homes. They all have similar front door locks, similar patios, similar layouts. Apple made those houses convenient and pretty and nice for people to use.

And now a bunch of homes got broken into, and people are (rightfully) asking Apple if somehow the design of the house made it easy for the burglars to break in. And after looking at the corner security cameras, Apple concluded that only specific homes were targeted by burglars who somehow copied the keys to the front door using ‘social engineering’ means: people followed the owners of those homes around with silly putty, and managed to get an imprint of the front door key.

The fact that Apple made it easy to open the front door with just a key is not a stupid vulnerability, as the original Slate article implies.

And in fact, the Slate article ignores the single most common security attack used against computer systems and homes alike: social engineering. Which is just a fancy way of saying that if someone spends the time befriending you at the local bar, and then asks you to show him what’s on your computer system or inside your house–you’ll happily bypass security for them by holding the front door open, or handing him your unlocked phone.

Social engineering was apparently used for most of the break-ins to get into Apple iCloud accounts–by determining the e-mail address of various celebrity accounts (stalking the neighborhood to see who lives where), and then following those celebrities around trying to guess their password (taking pictures of the people who live in the neighborhood hoping to get a glance of the key so one can be made from the photo).

This sort of thing happened over the course of years and wasn’t limited to just Apple’s neighborhood; Android and Windows Mobile devices were also hacked, by a sophisticated ring which had gathered stolen images over that time.

In fact, the only place where the analogy breaks down is that until the leak this past weekend, we had no idea stuff had been stolen.

Does this mean we should start blaming Apple for building houses with front door windows whose drapes are sometimes left open, which can be opened with a single front door key? Should we demand Apple go back and board up the windows and put multiple locks on everyone’s door? Should Apple patrol the neighborhood and force people who leave patio furniture out in the back yard to bolt it down with bolts sunk in cement, or at least move them into the garage when not in use?

Should Apple go back and retrofit interior doors with automatically locking locks and automatically closing doors?

Or does this mean those who have more to protect should be more thoughtful about protecting their stuff?

Bringing it back to the computer model, should we all be forced to use two factor authentication to access our photos on the Cloud, using a password that may be forgotten and cannot be reset, and a thumb print reader that flakes out when the sensor gets dirty–just to protect the occasional picture I may snap while hiking or the occasional selfie someone may take in front of a museum exhibit–simply because a few starlets took naked pictures of themselves they intended only for their boyfriends, pictures that were then stuck on a server using an inadequate password managed by a handler who then scraped the naked photos off the phone without the starlet knowing?

After all, it was not my iCloud account that was targeted. And even if my photos happened to be scraped from iCloud–all you would see are bird photos and hiking photos and the occasional photo shot from an airplane: worthless crap to anyone who is not me or in my immediate circle of friends.

And while we’re on the topic, let’s talk “factors.”

All access control boils down to three factors: who you are, what you know and what you have.

An ATM is “two factor authentication”: it relies on you having an ATM card, and knowing your PIN. Scanners which know “who you are” rely on some physical attribute: think finger print scanners or retina scanners or devices that measure relative finger lengths.

The problem with always falling back on multiple “factors” is that it ignores the fact that some factors are strong, and some are weak. A 4 digit PIN is a weak password. As Mythbusters showed, fingerprint scanners are moderately weak. And the ready availability of card readers has made ATM cards weak: given that most ATM cards now double as debit cards, it would be easy for a waitress to scan your card at a restaurant, duplicating your ATM/debit card with just a swipe.

Combining factors makes things stronger: combining a 4 digit PIN with your ATM card makes it safe to hand just your card to your waitress. Using a key fob which generates a cryptographically pseudo-random number combined with a relatively strong password makes an even stronger security gateway. One place I knew of which hosted servers required you to present an identity card, get through a palm scanner, and know an 8 digit PIN to enter the cage.

But all of this is worthless if the guard leaves the door open–which they did on occasion.

The point is two factor authentication is not a cure-all. Two factor authentication can make weak security (such as a 4 digit pin) stronger (by requiring a card as well), but it doesn’t save you from social engineering, such as asking a security guard to keep a door open as you bring in a bunch of boxes. Having a combination padlock on your front door along with a key-activated deadbolt doesn’t help if you leave your front door unlocked.

And worse, two factor authentication violates availability. Why availability is important is simple: the harder it is to get through the front door lock of your house, the less likely you are to lock the front door at all.

Perhaps the real lesson here is twofold. First, a dedicated burglar will break into your house if the incentive is high enough, regardless of the security checks: remember, while everyone is looking at Apple’s iCloud, other services were broken into as well: these photos came from a variety of sources, and iCloud was a common target only because it was the largest neighborhood.

So part of the problem is just a matter of keeping the drapes closed, so the burglars can’t see your expensive stereo and big-screen TV.

Second, it means you need to be a little bit aware of your neighborhood: if you have something important to secure, perhaps a little additional attention is in order. If you’re a young and beautiful starlet taking naked pictures of yourself, you may want to consider not putting those photos up on the Internet.

But then, most starlets who took these photos did not consider security at all–they did not consider these photos as having any value. And so even if Apple made two factor authentication dirt simple, passwords would have been set to “1234” and the option to double-encrypt the individual files would have been set to “no”. After all, it was just a fun nude photo between her and her boyfriend–no biggie. Right?

Because we don’t understand security: we think it’s okay to leave the front drapes open and the front doors unlocked, until someone breaks in–at which point we demand our houses be encased in cement.

Finding the boundary of a one-bit-per-pixel monochrome blob

Recently I’ve had need to develop an algorithm which can find the boundary of a 1-bit per pixel monochrome blob. (In my case, each pixel had a color quality which could be converted to a single-bit ‘true’/’false’ test, rather than a literal monochrome image, but essentially the problem set is the same.)

In this post I intend to describe the algorithm I used.

Given a 2D monochrome image with pixels aligned in an X/Y grid:

Arbitrary 2D Blob

We’d like to find the “boundary” of this blob. For now I’ll ignore the question of what the “boundary” is, but suffice it to say that with the boundary we can discover which pixels inside the blob border the exterior, which pixels on the outside border the pixels inside, or find other attributes of the blob such as a rectangle which bounds it.

If we treat each pixel as a discrete 1×1 square rather than a dimensionless point, then I intend to define the boundary as the set of (x,y) coordinates which define the borders around each of those squares, with the upper-left corner of the square representing the coordinate of the point:

Pixel Definition

So, in this case, I intend to define the boundary as the set of (x,y) coordinates which define the border between the black and the white pixels:

outlined blob

We do this by walking the edge of the blob through a set of very simple rules which effectively walk the boundary of the blob in a clock-wise fashion. This ‘blob walking’ only requires local knowledge at the point we are currently examining.

Because we are walking the blob in a clock-wise fashion, it is easy to find the first point in a blob we are interested in walking: through iteratively searching all of the pixels from upper left to lower right:

(Algorithm 1: Find first point)

-- Given: maxx the maximum width of the pixel image
          maxy the maximum height of the pixel image

   Returns the found pixel for an eligible blob or 'PixelNotFound'.

for (int x = 0; x < maxx; ++x) {
    for (int y = 0; y < maxy; ++y) {
        if (IsPixelSet(x,y)) {
            return Pixel(x,y);
        }
    }
}
return PixelNotFound;

Once we have found our eligible pixel, we can start walking clockwise, tracking each of the coordinates we have found as we walk around the perimeter of the blob, traversing either horizontally or vertically.

Given the way we’ve encountered our first pixel in the algorithm above, the pixels around the immediate location at (x,y) looks like the following:

Point In Center

That’s because the way we iterated through, the first pixel we encountered at (x,y) implies that (x-1,y), (x,y-1) and (x-1,y-1) must be clear. Also, if we are to progress in a clock-wise fashion, clearly we should move our current location from (x,y) to (x+1,y):


The design of our algorithm proceeds in a similar fashion: by examining each of the 16 possible pixel configurations we can find in the surrounding 4 pixels, and tracking which of the 4 possible incoming paths we took, we can construct a matrix of directions to take in order to continue progressing in a clockwise fashion around our eligible blob. In all but two configuration cases, there was only one possible incoming path we could have taken to get to the center point, since we presume we are following the edge of the blob and could not have entered between two black pixels or between two white pixels. Some combinations are also illegal because we presume we are walking around the blob in a clockwise fashion rather than a counter-clockwise fashion. (This means that, standing at the current point and facing the direction of travel, set pixels should always be on the right and never on the left. There is a proof of this which I will not sketch here.)

The 16 possible configurations and the outgoing paths we can take are illustrated below:

All directions

Along the top of this table are the four possible incoming directions: from the left, from the top, from the right and from the bottom. Each of the 16 possible pixel combinations is shown from top to bottom, and blanks indicate where an incoming path was illegal–either because it comes between two blacks or two whites, or because the path would have placed the black pixel on the left of the incoming line rather than on the right.

Note that with only two exceptions, each possible combination of pixels produces only one valid outgoing path. For those two exceptions we arbitrarily pick the one of the two possible outgoing paths which excludes the diagonal pixel; we could just as easily have gone the other way and included the diagonal, but that may have had the property of including blobs with holes. (Whether this is acceptable depends on how you are using the algorithm.)

This indicates that we could easily construct a switch statement, converting each possible row into an integer from 0 to 15:

Algorithm 2: converting to a pixel value.

-- Given: IsPixel(x,y) returns true if the pixel is set and false 
          if it is not set or if the pixel is out of the range from
          (0,maxx), (0,maxy)
   Return an integer value from 0 to 15 indicating the pixel combination

int GetPixelState(int x, int y)
{
    int ret = 0;
    if (IsPixel(x-1,y-1)) ret |= 1;
    if (IsPixel(x,y-1)) ret |= 2;
    if (IsPixel(x-1,y)) ret |= 4;
    if (IsPixel(x,y)) ret |= 8;
    return ret;
}


We now can build our switch statement:

Algorithm 3: Getting the next pixel location

-- Given: the algorithm above to get the current pixel state,
          the current (x,y) location,
          the incoming direction dir, one of LEFT, UP, RIGHT, DOWN

   Returns the outgoing direction LEFT, UP, RIGHT, DOWN or ILLEGAL if the
   state was illegal.

Note: we don't test the incoming path when there was only one choice. We
could, for testing purposes, at the cost of some added complexity.
The values below are obtained from examining the table above.
The values below are obtained from examining the table above.

int state = GetPixelState(x,y);
switch (state) {
    case 0:    return ILLEGAL;
    case 1:    return LEFT;
    case 2:    return UP;
    case 3:    return LEFT;
    case 4:    return DOWN;
    case 5:    return DOWN;
    case 6:    {
                   if (dir == RIGHT) return DOWN;
                   if (dir == LEFT) return UP;
                   return ILLEGAL;
               }
    case 7:    return DOWN;
    case 8:    return RIGHT;
    case 9:    {
                   if (dir == DOWN) return LEFT;
                   if (dir == UP) return RIGHT;
                   return ILLEGAL;
               }
    case 10:   return UP;
    case 11:   return LEFT;
    case 12:   return RIGHT;
    case 13:   return RIGHT;
    case 14:   return UP;
    case 15:   return ILLEGAL;
}

From all of this we can now easily construct an algorithm which traces the outline of a blob. First, we use Algorithm 1 to find an eligible point. We then set ‘dir’ to UP, since the pixel state we discovered was pixel state 8, and the only legal direction had we been tracing around the blob was UP.

We store away the starting position (x,y), and then we iterate through Algorithm 3, getting new directions and updating (x,y): moving to (x+1,y) for RIGHT, (x,y-1) for UP, (x-1,y) for LEFT and (x,y+1) for DOWN, until we arrive back at our starting point.

As we traverse the perimeter, we could either build an array of found (x,y) values, or we could do something else–such as maintaining a bounding rectangle, bumping the boundaries as we find (x,y) values that fall outside the rectangle’s current bounds.

First pass at a more formal language for JSON.

So the single most common thing I run into, which is a source of all sorts of headaches when writing custom software for clients, is hooking into their back end system.

A very common pattern for me is to create a single interface which can perform an HTTP ‘get’ or ‘post’ call to obtain the contents, run everything through a JSON parser, and then hand the resulting NSDictionary or NSArray to an object which converts the results into a set of Objective C classes.

Up until now I’ve been using JSON Accelerator, which is a really nice little tool for converting JSON into a set of classes. But this runs into a couple of problems.

(1) A number of sites I integrate with have multiple JSON endpoints, each of which returns subtly different JSON results. Using JSON Accelerator, I wind up generating a lot of duplicate classes which represent more or less the same thing.

(2) Often those sites will change; after all, the back end is under development as well as the front end. I often have a hard time seeing the structure from the JSON; sometimes buried in a few hundred lines is a field that contains a null pointer or which was changed from a string to a JSON field–and tracking those bugs down can be a pain in the ass.

It seemed to me the best way to handle this is to have an intermediate representational language which allows me to see what it is that I’m working with, and to ‘tweak’ the results, so I can point out that the ‘Person’ record in call A is the exact same thing as the ‘Person’ record in call B, except for one of the fields being omitted.

So I built a simple analysis app and a simple compiler app to resolve this problem.

You can download the compiled tools and read the documentation (such as it is) from here.

The representational language is fairly simple: a set of objects, which can be compiled into Objective C and (when I have time) into Java. Each field in an object can be a primitive, an object or an array of objects. So, for example:

/*  Feed
 *      Top level of the feed
 */

Feed {
    id: integer,
    name: string,
    date: string,
    active: boolean,
    addressList: arrayof Address,
    phoneList: arrayof Phone
}

/*  Address
 *      The user's address
 */

Address {
    id: integer,
    name: string,
    address: string,
    address2: (optional) string, // optional in the data stream
    city: string,
    state: string,
    zip: string
}

/*  Phone
 *      The user's phone
 */

Phone {
    id: integer,
    name: string,
    phone: string
}
Note that fields can also be marked as ‘nullable’:

Feed {
    id: integer,
    name: string,
    value: (nullable) real
}
This will translate into an NSNumber * field rather than into a double.

There are also a couple of tools: one that generates the Objective C code, and one which reads in a bunch of JSON (in fact, it will read multiple JSON objects all in a row), and makes a best guess at the underlying structure, collapsing common objects as needed, and even noting when the same field appears to contain ambiguous content.

At some point I will need to clean this up, add Java support, and push this out to Git. But for now, there you go.

Let me know if this seems useful.
