Apologies.

For some reason or another I am still not receiving notifications when comments are left on this blog.

So my apologies for not responding to questions or comments left here. I’ve also changed the settings to immediately approve all comments (except those containing links and those which look like spam), because for some reason the e-mails alerting me to new comments haven’t been arriving either.

MVVM, iOS and Design Patterns

So on the project I’m working on, an architect has asked us to use the MVVM model on iOS to develop certain components within the application. If those components work out correctly, then eventually we’ll be asked to refactor the rest of the application to use the same design pattern.


Okay, so here are some random thoughts about this.

First, I’m not a fan of “Design Patterns” as we currently seem to be using the term.

To me, a “design pattern” is essentially a technique for solving a problem.

And for those who think this is a difference without a difference, Design Patterns are not just useful ways to think about a problem in order to solve it; they represent specific codified solutions which are intended to be used relatively unmodified. Techniques, on the other hand, represent ways of thinking about a problem which may or may not be reproduced with perfect fidelity from place to place.

To give a concrete example of what I mean by this, take the current Model-View-Controller Design Pattern. As described on Wikipedia, it represents a Core Solution to building User Interfaces, where each component is well defined: a “view” which generates an output representation, showing data from a “model” which passively stores data the user manipulates using a “controller” which mediates messages between the two.

But if you go back to the original papers discussing Model-View-Controller, you see something much less rigid in thought: it was a way to separate the functionality used to drive a user interface into three loosely grouped ideas: views which show things, models which store things and controllers which manipulate things.

A technique, in other words, to help you organize your thoughts and your code better.

Second, not all techniques are “One Size Fits All.”

Take MVC again. There is nothing that requires your user interface application to use all the pieces of a model view controller: in fact, one could very easily write a simple calculator application that has no model at all.

For example, here is the UIViewController class of a trivial application which takes the input of a text field and converts from Celsius to Fahrenheit:

#import "ViewController.h"

@interface ViewController ()
@property (weak, nonatomic) IBOutlet UITextField *centigrade;
@property (weak, nonatomic) IBOutlet UITextField *fahrenheit;
@end

@implementation ViewController
- (IBAction)doConvert:(id)sender
{
	double f = 32 + (9/5) * c;
	self.fahrenheit.text = [NSString stringWithFormat:@"%.2f",f];
}
@end

Now the most pedantic jerk would say “well, technically the above is a perfect example of the MVC Design Pattern, with your model implicit in the method doConvert:, in the variables c and f.”

To which I’d respond really??? Are you so hell bent to squeeze everything into an artificially strict interpretation that you must find a model where one doesn’t really exist?

And thus, the difference between “technique” and “Design Pattern.”

Third, there are far more techniques under the sun than the ones the so-called “Gang of Four” first espoused, techniques that we have forgotten are design patterns in their own right.

Remember: techniques are ways of thinking about a problem that help solve a problem, rather than strictly formed legos in the toy chest that must be assembled in a particular way.

So, for example, “separation of concerns” is a design pattern in its own right: a way to think about code that involves separating it into distinct separate components which are responsible for their own, more limited jobs.

Take the TCP/IP software stack, for example. The power of the stack comes from the fact that each layer in the protocol is responsible for a very limited job. But when assembled into a stack it creates a rather powerful communications paradigm that underlies the Internet today.

So, for example, the link layer is responsible for talking to the actual physical hardware. The IP layer is responsible for routing; in essence it is responsible for converting an IP address into the appropriate hardware device to talk to on the local network, and for receiving incoming IP packets addressed to this computer.

But the IP layer makes no guarantees the message is sent successfully; instead, that lies on the TCP layer, which chops large messages up into packets that fit into an IP frame, and which tracks which packets have been successfully sent and received, sending an acknowledgement when a packet is received successfully. This allows TCP to note when a packet goes missing and trigger a resend of that packet.

And on top of these three simple, relatively straightforward components a global Internet was built.

The thing about patterns like Separation of Concerns is that, because they’re so fundamental to the way we think about software development, we forget that it is yet another technique, yet another design pattern that developers use. In fact, we’ve taken it so for granted we no longer really teach the concept in school. We just assume new developers will understand how to break their code into separate distinct modules, each reflecting a specific concern.

Other techniques we’ve simply dropped on the floor, forgetting their value.

For example, we’ve forgotten the power of finite state machines to represent computational state when performing a task. Yet the tool YACC rests on the work done on finite state machines, converting a representation of a language into a state machine which can be used to parse that language. Similar state machine representations have been used when building parsers for ASN.1 communication protocols, are often used to represent the internal workings of IP, and are implicit in the design of virtual machines, such as the Java VM.

But because there is no One True Way to implement a state machine, it’s seldom thought of as a Design Pattern, if it is even thought of at all.
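
To make that concrete, here is a toy table-driven state machine, written in plain C (which is also valid Objective-C), that recognizes an optionally signed integer. The states and character classes are my own invented example, not anything lifted from YACC; the point is the technique of expressing computation as a table of states and transitions.

#include <stdio.h>

/* Each row of the table is a state; each column is a class of input
   character; each entry is the next state to move to. */

enum { START, SIGN, DIGITS, REJECT, NSTATES = 4 };
enum { C_SIGN, C_DIGIT, C_OTHER, NCLASSES = 3 };

static const int transition[NSTATES][NCLASSES] = {
	/* START  */	{ SIGN,   DIGITS, REJECT },
	/* SIGN   */	{ REJECT, DIGITS, REJECT },
	/* DIGITS */	{ REJECT, DIGITS, REJECT },
	/* REJECT */	{ REJECT, REJECT, REJECT }
};

static int classify(char ch)
{
	if ((ch == '+') || (ch == '-')) return C_SIGN;
	if ((ch >= '0') && (ch <= '9')) return C_DIGIT;
	return C_OTHER;
}

static int isInteger(const char *s)
{
	int state = START;
	while (*s) state = transition[state][classify(*s++)];
	return state == DIGITS;	/* DIGITS is the only accepting state */
}

int main(void)
{
	printf("%d %d %d\n", isInteger("-42"), isInteger("17"), isInteger("4x2"));
	return 0;
}

The table is the machine: change the table and you change the language accepted, which is essentially what YACC does on a much grander scale.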


Let’s go back to MVC for a moment.

The original idea behind Model View Controller was simply as a technique to think about how to organize your code into separate concerns: one which handles the data model, one which handles the views the user sees, and one which controls how views and models interact.

Think of that in the context of the following article: Model-View-ViewModel for iOS

Missing Network Logic

The definition of MVC – the one that Apple uses – states that all objects can be classified as either a model, a view, or a controller. All of ‘em. So where do you put network code? Where does the code to communicate with an API live?

You can try to be clever and put it in the model objects, but that can get tricky because network calls should be done asynchronously, so if a network request outlives the model that owns it, well, it gets complicated. You definitely should not put network code in the view, so that leaves… controllers. This is a bad idea, too, since it contributes to our Massive View Controller problem.

HEADBANG!

If you think of MVC as an organizational principle, the question “where should you put your network code?” becomes painfully obvious. It belongs in the model code.

But it also assumes the model code may contain business logic which affects how objects within the model may be manipulated, as well as alternate representations of the data within the model.

But if you think of MVC in the way we’ve grown accustomed to, then the “model” is a bunch of passive objects, no better than a file storage system. And if you think of the model code as a passive collection of data objects to be manipulated and perhaps serialized–then of course “where should you put your network code?” becomes a pressing concern.

Just as if you think of a kitchen as being a room that only contains a stove, refrigerator and a microwave, the question “where should I store my pots” becomes a pressing question.

“Well, in the cabinets.”

“But kitchens don’t have cabinets! They only have stoves, refrigerators and microwaves!”

HEADBANG!
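
To sketch what a model which owns its own network logic might look like (the class name and endpoint here are hypothetical; NSURLSession does the heavy lifting):

#import <Foundation/Foundation.h>

/* A hypothetical model object which owns its own network logic. */
@interface GSAccountModel : NSObject
- (void)refreshWithCompletion:(void (^)(NSDictionary *account, NSError *error))completion;
@end

@implementation GSAccountModel

- (void)refreshWithCompletion:(void (^)(NSDictionary *account, NSError *error))completion
{
	NSURL *url = [NSURL URLWithString:@"https://api.example.com/account"];	/* hypothetical endpoint */
	[[[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
		NSDictionary *account = nil;
		if (data) {
			account = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
		}

		/* Hand the result back on the main queue; the caller never cares
		   that a network round trip happened. */
		dispatch_async(dispatch_get_main_queue(), ^{
			completion(account, error);
		});
	}] resume];
}

@end

A view controller calls refreshWithCompletion: and populates its views in the callback; whether the data came from memory, disk or a network round trip is the model’s business, not the controller’s.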


But okay, I guess we’re in a world that tries very hard to squeeze the World Wide Web into the Model-View-Controller paradigm (which, when you think about it, doesn’t make a whole lot of sense outside of a JavaScript-based AJAX-style web page–and please, don’t talk to me about the FuBar XYZ framework that promises to let you write HTML-style pages which use the MVC pattern without redefining the terms “view” and “controller” beyond recognition), so if we have stupid views which cannot participate in the UI, I guess we also must deal with stupid models which cannot participate in the UI.

Which is why, when you think about it, MVC now seems to stand for “Massive View Controllers”–because if you don’t allow any logic in your view and you don’t allow any logic in your model, then you’re stuck slamming everything into the controller code, including shit that doesn’t belong there, like model business logic.


And into this world, we see MVVM.

After two or three ill-considered days of staring at this for the project I’ve come to some conclusions:

First, MVVM makes sense if you consider the responsibility of a controller to both handle the interactions of views within a view controller, and to handle the business logic for communicating with the model code.

Again, I believe this distinction is only necessary because we’ve come to think of views as stupid (and in an iOS application, generally tinker-toys we drag off the Xcode storyboard palette), and because we’ve come to think of models as stupid–at best serializable collections of Plain Ordinary Objects.

(Personally I like thinking of a model as being more than a pile of POO, but that’s just me.)

MVVM, as handled in environments like iOS, is really MVCVM.

In other words, you don’t get rid of view controllers. Instead, you separate the code so that view controllers handle the user interface (such as making sure table views are populated, that events repopulate changed controls, etc), and the “ViewModel” code handles the view-specific interaction logic with the model code.

Again, I believe the model code should be more than a pile of POO. But as an organizational principle it’s not a bad one, putting the code which manipulates the model separate from the code which manipulates the views, and having them communicate through a well described interface.

MVVM assumes a “binder”, or rather, a means by which changes in the ViewModel are sent to the View Controller/View combination.

So inherent in the design of MVCVM is the notion that changes made to a publicly exposed value in the ViewModel will be announced to the view controller, so the views handled by the view controller will be automatically updated. Some documents describe using ReactiveCocoa for this; one page suggested something as simple as a delegate could be used, and one could also use KVO to observe changes.
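
As a minimal sketch of the KVO flavor of that binding (class and property names here are hypothetical, and a real project might prefer ReactiveCocoa), the view controller observes a display-ready property on the ViewModel:

#import <UIKit/UIKit.h>

/* Hypothetical ViewModel: exposes display-ready values. */
@interface GSStockViewModel : NSObject
@property (copy) NSString *formattedPrice;	/* automatically KVO-compliant */
@end

@implementation GSStockViewModel
@end

@interface GSStockViewController : UIViewController
@property (strong) GSStockViewModel *viewModel;
@property (weak) IBOutlet UILabel *priceLabel;
@end

@implementation GSStockViewController

- (void)viewDidLoad
{
	[super viewDidLoad];

	/* KVO is our "binder": NSKeyValueObservingOptionInitial fires once
	   immediately, so the label starts out in sync. */
	[self.viewModel addObserver:self
	                 forKeyPath:@"formattedPrice"
	                    options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew
	                    context:NULL];
}

- (void)dealloc
{
	[self.viewModel removeObserver:self forKeyPath:@"formattedPrice"];
}

/* Any change the ViewModel makes to formattedPrice lands here, and the view updates. */
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
	self.priceLabel.text = self.viewModel.formattedPrice;
}

@end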

In some example code I’ve seen the View Controller instantiate the ViewModel object. In others, I’ve seen the ViewModel object passed to the initializer of the View Controller.

I gather that (a) there is supposed to be a “correct” (*ugh*) way to do this, but (b) if you want to use Storyboards or NIBs in order to create your view controllers, you’re sort of stuck with having the View Controller create the ViewModel. (Besides, being able to instantiate the ViewModel without the View Controller is supposed to allow us to test our user interface without having a user interface…)

On the other hand, you can always attach your ViewModel to the View Controller in the prepareForSegue:sender: method.
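
Reusing the hypothetical classes from the sketch above, that looks something like:

- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
	if ([segue.identifier isEqualToString:@"ShowStock"]) {	/* hypothetical segue identifier */
		GSStockViewController *vc = segue.destinationViewController;
		vc.viewModel = [[GSStockViewModel alloc] init];
	}
}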


And finally:

This feels like it’s solving a problem we shouldn’t have if we weren’t being such pedantic assholes.

Meaning if we hadn’t forgotten that MVC is an organizational principle rather than a strict formula, and hadn’t forgotten that our Views don’t need to be stupid and our Models don’t need to be a pile of POO, then we wouldn’t be left wondering where our network code belongs or where to put our business logic.

Because the answer to that question would be immediately obvious.

But since this is where we are, separating out what properly belongs in the model code and calling it something new may help a new generation of developers realize they don’t need to build a big pile of spaghetti code to make their applications work.

This doesn’t guarantee better code.

The real problem, of course, is that code will be no better than the programmer who writes it, no matter how many different techniques they try. A good, well organized programmer will produce good, well organized code. A poor, disorganized programmer will produce poor, disorganized code.

Unknown Knowns.

While Donald Rumsfeld was blasted for discussing “known knowns” and the like, the idea is a core principle that you see in things like the CISSP security training materials and in other areas where security is a factor. It also factors into things like scientific research, where we constantly push the boundaries of what we know.

And the idea goes something like this:

There are known knowns, that is, things we know that we know. For example, I know the lock on the back door of the building is locked.

There are known unknowns, that is, things we know that we don’t know. For example, I don’t know if the lock on the back door of the building is locked–and knowing I don’t know that, I can go down and check.

And there are unknown unknowns, that is, things we don’t know that we don’t know. For example, I’ve never been in this building before, and I have no idea that there is even a back door to this building that needs to be locked.

Unknown unknowns can often be uncovered through a systematic review and through imagining hypothetical scenarios. We could, for example, on moving into this building, walk the perimeter of the building looking for and noting all the doors that go in and out of the building: we use the fact that we don’t know the building (a known unknown) to uncover facts about the building which then help us make informed decisions in the future–converting the unknown unknown about back doors into a known quantity.

Though that doesn’t help us if there is a tunnel underneath.


If we put these into a graph it may look something like this:

 
                          Known knowledge            Unknown knowledge

What we know:             Known knowns               Unknown knowns
                          Things we know we know.    ???
                          (Example: we know the
                          back door is locked.)

What we don't know:       Known unknowns             Unknown unknowns
                          Things we know we don't    Things we don't know we
                          know. (Example: we don't   don't know. (Example: we
                          know if the back door is   don't know there is a
                          locked.)                   back door to lock.)

Now the forefront of scientific knowledge lives in the lower right quadrant: we’re constantly pushing the frontier of knowledge–and part of that means figuring out what are the right questions to ask.

The forefront of security disasters also live in that lower right quadrant: this is why, for example, safety regulations for airplanes are often written only after a few hundred people die in a terrible airplane accident. Because until the accident took place we didn’t even know there was an issue in the first place.

But what is the upper right quadrant? What is an “unknown known”?


The side axis: what we know and what we don’t know–this is pretty straightforward. I either know if the back door is locked or I don’t.

The top axis: I would propose that this is really a discussion about our self-awareness or self-knowledge: do we even know the right question to ask? Do we even have a full enough mental model to know what we should know?

Do we even know there is a back door on the building?

Have you ever gone to a lecture and, at the end of a long and involved technical discussion, the presenter turns to the audience and asks if there are any questions–and there is nothing but silence? And afterwards you realize the reason you’re not asking questions is not because the information presented was so obvious that you managed to absorb it all, but because you didn’t even know the right questions to ask? That’s “unknown knowledge”: you don’t really know what you just learned, and of course you are unable to formulate a question, because to do so would require that you knew what you knew and what you didn’t know, and that you knew what you didn’t understand.

It takes time to learn.


So I would propose that an unknown known is something we know–but we are unaware of the fact that we know it. Our lack of self-knowledge keeps us from knowing that we know something–or rather, from knowing that we have learned something or are aware of something that perhaps others may not be aware of.

Meaning “unknown knowns” are any bit of knowledge that, when someone asks about it, we respond with “Well, of course everyone knows that!”–which is a genuine falsehood, since clearly the person asking didn’t know.

Unknown knowns are things we are unaware that we have to communicate. Unknown knowns are things we don’t know require documentation, or which, when we are made aware of them, we think are either stupid or obvious.

Unknown knowns include things like:

Undocumented software which has no comments or documentation in it, because “of course everyone should be able to read the code.”

Undocumented corporate procedures, which is just “part of the corporate culture” that “everyone” should just understand.

Anything we think just “should be obvious.”

Nothing is obvious.

Yes, we need signs in a bathroom reminding workers at a restaurant to wash their hands–because some people may not know the effects of hygiene on food preparation and it is better to constantly remind people than to suffer the consequences. Yes, we need corporate policy manuals, ideally abstracted for ease of reading, which reminds people that sexual harassment is not welcome–and defines what sexual harassment actually is. Yes, developers need to document their code: code is not “self-documenting.”

And that arrogant feeling that rises up in most people when they respond “well, of course everyone knows that!” is a natural response to the embarrassment of discovering an “unknown known”–of discovering that you were so unaware of yourself that you couldn’t catalog a piece of vital knowledge and share it properly with someone who needed it.

And by placing the blame on someone else with your “well, of course!” statement you deflect blame from your own lack of self-awareness.

Worse: because we don’t know what we know, and because for many of us the natural reaction is to place blame on the person who genuinely didn’t know, we create barriers when training new hires or when teaching new developers or when bringing on new players onto the team.

Because by telling new people “it’s your fault that I didn’t tell you what I didn’t know I should tell you” we diminish others for what is, essentially, our own lack of self-awareness.

Things to remember: Time Zones.

*sigh*

Okay, here are some things to remember about time zones, jotted out in no particular order.

  • On most modern operating systems, “time” is an absolute quantity, usually measured as the number of seconds from some fixed time in the past, for example, since midnight on January 1, 1970 GMT. This fixed time is called the “epoch.”

This time is an absolute quantity regardless of where you are in the world or how you represent time at your location.

  • Time Zones are a way by which we represent an absolute time in a human-readable format, appropriate for the location where the person is on the Earth.

So, for example, 1,431,385,083 seconds since our epoch may be displayed as “May 11, 2015 @ 22:58:03 UTC”, as “May 11, 2015 @ 6:58:03 PM EDT”, or as “May 11, 2015 @ 3:58:03 PM PDT”.

While it is easy to think of Time Zones as an offset from GMT, I find that it’s better to treat Time Zones as a sort of “black box”, since the rules as to which time zone offset applies at a given date can be quite complex. (And if you are in one of the counties in Indiana which cannot make up their minds which time zone they want to be in–EST or CST–the actual rules can be quite arbitrary and bizarre.)

So in a sense a time zone is a thing which takes an absolute number of seconds since the epoch and presents a time suitable for display to the user. (There is a short sketch of this in code at the end of this section.)

  • When dealing with web applications or mobile applications, the time zone to use when displaying a time depends on the application.

Most of the time, the appropriate time zone to use when displaying a time is the device’s native time zone–which is why, on most platforms, the classes which display or parse times default to whatever time zone the device is in.

Sometimes, however, it may be appropriate to display the time in a different time zone. For example, an application that may display the time of an event at a park or for a conference or at a particular movie theater should use the time zone associated with the location of the park or conference or movie theater.

And sometimes I would argue it would be appropriate to display the time twice in two separate time zones. For example, a chat application which notes the sender and receiver of a message are in two different time zones may wish to display the time in the receiver’s time zone and a second time in the sender’s time zone.

Rarely, you may need to do something else: for example, an application which tracks an event that happens at the same time of day across multiple dates, regardless of the actual time zone. If that is the case, then you may wish to do something other than store the time as an absolute number of seconds and translate to the appropriate time zone.

  • If you ever need to ask for the timezone a user is in–for example, you need to set the preferred time zone for a movie theater–remember: in the United States there are only 10 time zones.

If you look through the IANA Time Zone Database, you may see dozens and dozens of time zones for the United States. But the reality is, there are only 10 time zones that you need to worry about: the nine mentioned in this article, and Arizona, which does not observe daylight saving time.

If the user lives in the areas of Arizona which do observe DST, they can manually select “Mountain Time”; users in Indiana can sort out their own state’s issues.

(Many of the entries in the IANA database deal with historic timezone issues as individual areas of the country tried to sort out which time zone they wanted to be in.)

Similarly Canada has only 6 time zones, and other nations really only observe a handful of time zones.

  • In general, time zones mentioned in the IANA Time Zone database are named after the most populous city in that time zone. Which means if you want to present a more user friendly label you’ll need to roll something which translates between the time zone name and the IANA time zone title.

For example, PST/PDT is actually labeled “America/Los_Angeles”; Los Angeles is the most populous city in the area which observes Pacific Time.
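
As a short sketch of treating the time zone as a black box, here is the epoch value from earlier rendered through three IANA zones with NSDateFormatter, along with the friendlier label NSTimeZone can generate for a zone (exact output will vary with your locale):

#import <Foundation/Foundation.h>

int main(int argc, char *argv[])
{
	@autoreleasepool {
		/* The absolute quantity: 1,431,385,083 seconds since the epoch */
		NSDate *when = [NSDate dateWithTimeIntervalSince1970:1431385083];

		NSDateFormatter *fmt = [[NSDateFormatter alloc] init];
		fmt.dateFormat = @"MMM d, yyyy @ h:mm:ss a zzz";

		for (NSString *name in @[ @"UTC", @"America/New_York", @"America/Los_Angeles" ]) {
			NSTimeZone *tz = [NSTimeZone timeZoneWithName:name];
			fmt.timeZone = tz;

			/* The same instant, displayed three ways, plus a user-friendly
			   label for the zone itself. */
			NSLog(@"%@ (%@)", [fmt stringFromDate:when],
					[tz localizedName:NSTimeZoneNameStyleGeneric locale:[NSLocale currentLocale]]);
		}
	}
	return 0;
}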

Things to remember: IB_DESIGNABLE

Some random notes about creating designable custom views in Xcode, with a short sketch at the end of this list:

  1. Adding the symbol IB_DESIGNABLE just before the @interface declaration in your view’s header will mark your view as designable in Interface Builder; the view will then be rendered live in Interface Builder itself.
  2. Each property that is declared with the IBInspectable keyword will then be inspectable and can be set in Interface Builder.
  3. The properties that can be inspected this way are: “boolean, integer or floating point number, string, localized string, rectangle, point, size, color, range, and nil.”
  4. The preprocessor macro TARGET_INTERFACE_BUILDER is defined to be true when the code is compiled to run in Interface Builder. Use this to short-circuit logic which shouldn’t run in IB.
  5. Likewise, the method - (void)prepareForInterfaceBuilder is called (if present) when the view is initialized, but only when it is loaded in Interface Builder.
  6. You can debug your views within Interface Builder; Apple documents the process here.
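
Putting those notes together, a minimal designable view might look something like this; the class and its properties are illustrative, not from any Apple sample:

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

/* A minimal designable view: a colored, rounded badge. */
IB_DESIGNABLE
@interface GSBadgeView : UIView
@property (nonatomic) IBInspectable UIColor *badgeColor;
@property (nonatomic) IBInspectable CGFloat cornerRadius;
@end

@implementation GSBadgeView

- (void)layoutSubviews
{
	[super layoutSubviews];
	self.backgroundColor = self.badgeColor;
	self.layer.cornerRadius = self.cornerRadius;

#if TARGET_INTERFACE_BUILDER
	/* Skip anything (network calls, live data) that makes no sense in IB. */
#endif
}

/* Called only when the view is instantiated inside Interface Builder;
   gives IB a sensible placeholder appearance. */
- (void)prepareForInterfaceBuilder
{
	[super prepareForInterfaceBuilder];
	if (self.badgeColor == nil) self.badgeColor = [UIColor redColor];
}

@end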

Design affordances

An “affordance” is an element of a design which suggests to the user how to use or operate an object. For example, the handle of a teapot suggests how to grab the teapot to lift it; a doorknob suggests a means to opening a closed door.

Affordances are inherently cultural: an Amazonian tribesman who has never visited the modern world may need help discovering how to open a closed door. But affordances that work well carry over: once he learns how to open one door he should be able to open all doors–even those with slightly different affordances. (A lever rather than a knob, for example, so long as it is located where the knob would be located.)

Because they are cultural, like all cultural things we take them for granted: we don’t recognize that, for example, there are multiple different ways to latch a closed door and an accident of history could have us reaching down for the floor latch to unlatch a door, rather than rotating a knob or lever.

Design failures are those where affordances are not clearly discoverable or do not follow the design language we’ve learned. For example, if one were to design a door with a latch at the floor, and no knob or push plate, how long would it take you to discover that the thing holding the door closed was a latch at the floor rather than a knob at waist height? Probably more than a few moments, and you’d probably be very frustrated.


Now unlike the physical world, where there really are only so many interactions you can have to open a door or a window or lift a teapot from a hot surface, the software world is completely abstract. There are no limits, really, on what you display or how you interact with that display. In custom installations (such as in your car or at a kiosk), you can literally use any hardware interaction you like: the BMW iDrive system, for example, uses a rotary knob that can be pushed around like a joystick and depressed, as well as two separate buttons. Modern touch screens make it possible to tap with multiple fingers and tap, press and hold, tap and drag, or any combination of those motions.

And because affordances are, in some sense, cultural (there is no reason why a door knob is at waist level except custom, for example), affordances become part of a “design language”–a means of representing things on a screen that say “I can be touched” or “I can be clicked” or “This is how you back up a level on the screens.”

A well designed set of affordances as part of a well designed visual language can do three things:

They can provide a way for users to quickly discover how to use your interface, thus reducing user frustration. (A common ‘back’ button to back up from a modal display in a car would make it easier on a driver who may be operating the car’s systems while driving down the road at 70 miles/hour.)

They simplify the design of a new application: a common visual design language lets designers and developers proceed by essentially following a short recipe book of design objects and design attributes.

They simplify user interface development, by allowing software developers to design common control elements which embody these design elements. For example, if all action buttons look the same, a developer could write a single action button class (or use one provided by the UI framework), thus saving himself the task of building separate buttons for each screen of the application.

Likewise, a poorly executed visual design language creates problems for each of the items above: it can increase user frustration, as none of the elements of an app (or of multiple apps on the same system) work similarly enough for the user to know what is happening. It complicates the job of designers, who feel free to design each page of an app separately and may not follow similar design patterns. It increases the workload on developers, who cannot use common controls or common design elements in the application.

The most frustrating part to me is that at some level the computer software industry has come to consider these three failures features rather than bugs: a lack of a common design language allows elements of the application to be partitioned across different teams without having to worry about those teams needing to communicate. And at some level some users have come to believe some level of difficulty in using an application is a feature; a list of hurdles to overcome in order to become an “expert” on a product.

Though why one must become an “expert” to use a poorly designed radio function embedded in a car computer boggles my mind.


My most serious problem with Apple’s iOS design guidelines is twofold.

First, while Apple’s section on UI elements is rather complete in describing each of the built-in controls provided by iOS, it provides no unifying theory behind those controls which would suggest to designers seeking to build custom controls how to proceed.

For example, there is no unified theory as to how tap-able items should stand out that says to the user “I can be tapped on.”

As it turns out, that’s because there is no unified theory: along the top and bottom of the screen, controls are tappable based solely on their placement. For example, a tab bar at the bottom assumes all icons are tappable and represent switching to a different tab; at the top, in a navigation bar, the words in the left and right corners are tappable: the left generally takes you back a level, and the right generally represents an action appropriate to the screen you are looking at. Tappable areas in an alert view are suggested by a gray hairline breaking the alert box into sections, and often (but not always) icons suggest tappability.

Second, the only suggestion Apple provided for custom controls is to “Consider choosing a key color to indicate interactivity and state,” and “Avoid using the same color in both interactive and noninteractive elements.”

In other words, the only design affordance Apple talks about is using color to indicate interactivity: a colored word may indicate a verb for an action, a colored icon may indicate a control that can be tapped on.

Which, in my opinion, directly conflicts with: “Be aware of color blindness.”

My own rule of thumb, by the way, is that if the affordances of your application cannot be discovered when your interface is rendered in black and white, then they cannot be discovered by people who are color blind.

Color, in other words, is a terrible affordance.


Google, apparently, is trying to resolve this by proposing a different user interface paradigm of “material.”

Many of the suggestions seem a little whacky (such as the idea that larger objects should animate more slowly as if they have greater mass), but the underlying theory of using shadows and hairlines to subtly indicate components of a user interface goes a long way towards solving the problem with affordances on a touch display.

(Which is why the idea that material can “heal itself” bothers me, because if an item is separate and can be moved, it should be represented as a separate object.)

Which is why I’ve taken the time to write this long blog post: because at least Google is trying to create a unified theory behind its user interface choices, and that suggests that Google may be heading in a completely different direction than Apple when it comes to user design.

In other words, Apple appears to be headed towards using the design elements of book design (typography, color, subtle use of shapes and typeface that may change from product to product) to suggest functionality, while Google is taking a more traditional HIG approach of creating an underlying unified theory as to what it is we’re manipulating, then building the basic building blocks off of this.

So, for example, a table cell in a table with a detail button may look like this on Apple’s iOS devices, but on Google the theory of “material” suggests we may separate the individual rows and separate out the right part of each table row to form a separate pane which is clickable:

[Image: the same table cell rendered in Apple’s style (left) and in Google’s “material” style (right)]

And while the design on the right, the Google “material” style looks considerably busier than the design on the left, functionality makes itself extremely apparent rather quickly.

It’s also, in a sense, far easier to design: icons do not need to be designed to differentiate a ‘verb’ action button from a status icon. Components can be separated out so that they are obviously clickable by simply bordering the control appropriately. Actions can be made apparent on the screen.

And while it is true that it gives rise to what appears to be a more “clunky” screen, the reality is that on any mobile device–especially a phone form factor device–the user will be confronted with dozens (or even hundreds) of different screens, each one capturing one bit of information (a single item from a list of choices, for example), and configuring the equivalent of a single dialog box on a desktop machine may involve passing in and out of dozens of different screens on a mobile device.

Quickly being able to design those screens and make their functionality extremely apparent is extremely important, and if you don’t have the aesthetic design sense of a Jonathan Ive, having guide rails that result in some degree of clunkiness may be more desirable to the end user than a design which is completely unusable.

UITableViewCells

One of the biggest mistakes Apple made in the design of the iOS API was to expose the component views within a UITableViewCell.

Okay, so it really wasn’t Apple’s fault; they wanted to solve the problem of providing access to the components of a table view cell so we could use the off-the-shelf parts easily in our application. By exposing the textLabel, detailTextLabel and imageView UIView components, it makes it easy for us to modify the off-the-shelf parts to our own needs, providing a look which is consistent with the rest of the iOS universe.

But it was a mistake because it taught an entire generation of iOS developers some really bad habits.


One of the things that I greatly appreciate about object oriented programming is that it allows us to easily design applications as a collection of distinct, separate objects with well-defined purposes or tasks. This separation of concerns permits us to create modular classes, which have the following advantages:

  • Maintainable code. By isolating components into well-defined objects, a modification that needs to be made to one section of the code will, in a worst-case scenario, require minor modifications in some other isolated areas of the code. (This, as opposed to “spaghetti code” where one change ripples throughout the entire application.)
  • Faster development. Modules which are required by other areas of the application but which haven’t been developed yet can be “mocked up” in the short term to permit testing of other elements of the code.
  • Simplified testing. Individual modules can be easily tested and verified as correct within their limited area of concern. Changes that need to be made to fix a bug in an individual module or object can be made without major changes in the rest of the application.

There are others, but these are the ones I rely upon on a near daily basis as I write code.

Now why do I believe Apple screwed up?

Because it discourages the creation of complex table views (and, by extension, complex view components in other areas of the application) as isolated and well-defined objects which are responsible for their own presentation, and instead encourages “spaghetti code” in the view controller module.

Here’s a simple example. Suppose we want to present a table view full of stock quotes, consisting of the company name, company ticker symbol, current quote and sparkline–a small graph which shows the stock’s activity for the past 24 hours.

Suppose we code our custom table cell in the way that Apple did: by creating our custom view, custom table layout–and then exposing the individual elements to the table view controller.

Our stock data object looks like:

@interface GSStockData : NSObject

@property (readonly) NSString *ticker;
@property (readonly) NSString *company;
@property (readonly) double price;
@property (readonly) NSArray *history;
...
@end

This would give us a stock table view cell that looks like this:

#import <UIKit/UIKit.h>
#import "GSSparkView.h"

@interface GSStockTableViewCell : UITableViewCell

@property (strong) IBOutlet UILabel *tickerLabel;
@property (strong) IBOutlet UILabel *companyLabel;
@property (strong) IBOutlet UILabel *curQuoteLabel;
@property (strong) IBOutlet GSSparkView *sparkView;

@end

And our table view controller would look like this:

#import "GSStockTableViewController.h"
#import "GSStockData.h"
#import "GSStocks.h"
#import "GSStockTableViewCell.h"

@interface GSStockTableViewController ()

@end

@implementation GSStockTableViewController

#pragma mark - Table view data source

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [[GSStocks shared] numberStocks];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
	GSStockTableViewCell *cell = (GSStockTableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"StockCell" forIndexPath:indexPath];

	GSStockData *data = [[GSStocks shared] dataAtIndex:(int)indexPath.row];

	cell.tickerLabel.text = data.ticker;
	cell.companyLabel.text = data.company;
	cell.curQuoteLabel.text = [NSString stringWithFormat:@"%.2f",data.price];
	[cell.sparkView setValueList:data.history];

    return cell;
}

@end

Seems quite reasonable, and very similar to what Apple does. No problems.

Until we learn that the API has changed, and now in order to get the stock price history for our stock, we must call an asynchronous method which queries a remote server for that history. That is, instead of having a handy history of stocks in ‘history’, instead we have something like this:

#import <Foundation/Foundation.h>
#import "GSStockData.h"

@interface GSStocks : NSObject

+ (GSStocks *)shared;

- (int)numberStocks;
- (GSStockData *)dataAtIndex:(int)index;

- (void)stockHistory:(GSStockData *)data withCallback:(void (^)(NSArray *history))callback;
@end

That is, in order to get the history we must obtain it asynchronously from our stock API.

What do we do?

Well, since the responsibility for populating the table data lies with our view controller, we must, on each table cell, figure out if we have history, pull the history if we don’t have it, then refresh the table cell once the data arrives.

So here’s one approach.

(1) Add an NSCache object to the table view controller, and initialize in viewDidLoad:

@interface GSStockTableViewController ()
@property (strong) NSCache *cache;
@end

@implementation GSStockTableViewController

- (void)viewDidLoad
{
	[super viewDidLoad];
	self.cache = [[NSCache alloc] init];
}

(2) Pull the history from the cache as we populate the table contents. If the data is not available, set up an asynchronous call to get that data.

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
	GSStockTableViewCell *cell = (GSStockTableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"StockCell" forIndexPath:indexPath];

	GSStockData *data = [[GSStocks shared] dataAtIndex:(int)indexPath.row];

	cell.tickerLabel.text = data.ticker;
	cell.companyLabel.text = data.company;
	cell.curQuoteLabel.text = [NSString stringWithFormat:@"%.2f",data.price];

	/*
	 *	Determine if we have the contents and if not, pull asynchronously
	 */

	NSArray *history = [self.cache objectForKey:@( indexPath.row )];
	[cell.sparkView setValueList:history];		// set history
	if (history == nil) {
		/* Get data for cache */
		[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
			/*
			 *	Now pull the cell; we cannot just grab the cell from above, since
			 *	it may have changed in the time we were loading
			 */

			GSStockTableViewCell *stc = (GSStockTableViewCell *)[tableView cellForRowAtIndexPath:indexPath];
			if (stc) {
				[stc.sparkView setValueList:fetchedHistory];
			}
			[self.cache setObject:fetchedHistory forKey:@( indexPath.row )];
		}];
	}

    return cell;
}

Notice the potential problems that can happen here. For example, if a developer didn’t understand that a cell may be reused by the table view, the wrong sparkline could be placed in the table if the user scrolls rapidly:

	NSArray *history = [self.cache objectForKey:@( indexPath.row )];
	[cell.sparkView setValueList:history];		// set history
	if (history == nil) {
		/* Get data for cache */
		[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
			/*
			 *	Now pull the cell; we cannot just grab the cell from above, since
			 *	it may have changed in the time we were loading
			 */

			// The following is wrong: the cell may have been reused, and this
			// will cause us to populate the wrong cell...
			[cell.sparkView setValueList:fetchedHistory];
			[self.cache setObject:fetchedHistory forKey:@( indexPath.row )];
		}];
	}

And that’s just one asynchronous API with a simple interface. How do we handle errors? What if there are multiple entry points? What if other bits of the code are using the table view?

Now if we had put all the responsibility for displaying the stock quote into the table view cell itself:

#import <UIKit/UIKit.h>
#import "GSStockData.h"

@interface GSStockTableViewCell : UITableViewCell

- (void)setStockData:(GSStockData *)data;

@end

Then none of the asynchronous calling to get values and properly refreshing the cells is the responsibility of the table view controller. All it has to do is:

#pragma mark - Table view data source

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return [[GSStocks shared] numberStocks];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
	GSStockTableViewCell *cell = (GSStockTableViewCell *)[tableView dequeueReusableCellWithIdentifier:@"StockCell" forIndexPath:indexPath];

	GSStockData *data = [[GSStocks shared] dataAtIndex:(int)indexPath.row];
	[cell setStockData:data];

    return cell;
}

@end

This is much cleaner. And for our table cell, because it has all the responsibility of figuring out how to populate the contents, if we have to rearrange the cell, it has zero impact on our table view controller.

Our table view cell becomes relatively straightforward as well:

#import "GSStockTableViewCell.h"
#import "GSSparkView.h"
#import "GSStocks.h"

@interface GSStockTableViewCell ()
@property (strong) GSStockData *currentData;

@property (strong) IBOutlet UILabel *tickerLabel;
@property (strong) IBOutlet UILabel *companyLabel;
@property (strong) IBOutlet UILabel *curQuoteLabel;
@property (strong) IBOutlet GSSparkView *sparkView;
@end

@implementation GSStockTableViewCell

- (void)setStockData:(GSStockData *)data
{
	self.tickerLabel.text = data.ticker;
	self.companyLabel.text = data.company;
	self.curQuoteLabel.text = [NSString stringWithFormat:@"%.2f",data.price];

	/*
	 *	If the history was in our object then we'd write the following:
	 */

	// [self.sparkView setValueList:data.history];

	/*
	 *	Instead we get it through a call to our data source
	 */

	self.currentData = data;	/* Trick to make sure we still want this data */
	[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
		/*
		 *	Verify that the history that was returned is the same as the
		 *	history we're waiting for. (If another call to this is made with
		 *	different data, self.currentData != data, as data was locally
	 *	cached in our block.)
		 */

		if (self.currentData == data) {
			[self.sparkView setValueList:fetchedHistory];
		}
	}];
}

@end

And what about our caching?

Well, if we’re using proper separation of concerns, we’d create a new object which was responsible for caching the data from our API. And that has the side effect of being usable everywhere throughout the code.
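
A sketch of that separate caching object (the class name is mine) might wrap the GSStocks API from above behind an NSCache keyed by ticker symbol:

#import <Foundation/Foundation.h>
#import "GSStocks.h"

/* Hypothetical cache: any part of the app can ask for a history without
   caring whether the answer comes from memory or the network. */
@interface GSHistoryCache : NSObject
+ (GSHistoryCache *)shared;
- (void)historyForStock:(GSStockData *)data withCallback:(void (^)(NSArray *history))callback;
@end

@implementation GSHistoryCache
{
	NSCache *cache;
}

+ (GSHistoryCache *)shared
{
	static GSHistoryCache *instance;
	static dispatch_once_t once;
	dispatch_once(&once, ^{ instance = [[GSHistoryCache alloc] init]; });
	return instance;
}

- (instancetype)init
{
	if (nil != (self = [super init])) {
		cache = [[NSCache alloc] init];
	}
	return self;
}

- (void)historyForStock:(GSStockData *)data withCallback:(void (^)(NSArray *history))callback
{
	NSArray *history = [cache objectForKey:data.ticker];
	if (history) {
		callback(history);		/* cache hit: no network round trip */
	} else {
		[[GSStocks shared] stockHistory:data withCallback:^(NSArray *fetchedHistory) {
			if (fetchedHistory) [cache setObject:fetchedHistory forKey:data.ticker];
			callback(fetchedHistory);
		}];
	}
}

@end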


My point is simple: unless you are using the canned UITableViewCell, make your table view cell responsible for displaying the contents of the record passed to it. Don’t just expose all of the internal structure of that table cell to your view controller; this also pushes responsibility for formatting to the view controller, and that can make the view controller a tangled mess.

Make each object responsible for one “thing”: the table view cell is responsible for displaying the data in the stock record, including fetching it from the back end. The view controller simply hands off the stock record to the table view cell. A separate class can be responsible for data caching. And so forth.

By making each object responsible for its “one thing”, it makes dealing with changes (such as an asynchronous fetch of history) extremely easy: instead of having to modify a view controller which slowly becomes extremely overloaded, we simply add a few lines of code to our table view cell–without actually expanding our class’s “responsibility” for displaying a single stock quote.


And in the end you’ll reduce spaghetti code.

Security

Just read the following article: Security Trade-Offs, which refers to an original article claiming that people need to stop trusting Apple with their data because Apple, as a purveyor of “Shiny Objects”, doesn’t understand security.

Which is funny because the original article shows a lack of knowledge of computer security.

I’ve encountered this lack of understanding of security when talking to friends and co-workers as well, and it irritates me. Worse, they “know” they’re right, because of course it’s incredibly obvious, and they’ve read all sorts of stuff which reaffirms their ignorance while they thought they were learning something new.

(Sigh.)


Security involves three aspects, not just one–and to better understand each of these issues we can think of a house rather than a computer. After all, for your house to be a home, it’d be nice to know it was physically secure, right?

The three fundamental aspects of security are Confidentiality, Integrity and Availability.

For your house to be secure, it needs to be “confidential”: meaning access controls need to be implemented to prevent people who do not have access from getting in. That’s the lock on your front door: the house needs to be locked, your house (just one of a bunch in a neighborhood) is somewhat anonymous, and perhaps drapes on the front windows will keep people from seeing the expensive stereo system and big screen TV inside.

Now the mistake most people make is that they stop here: as long as people can’t break into my house, all is well. But keeping people out of your house is dirt simple: just cover the front door with cement. Bury your house under a mound of dirt. No-one can enter your house if it is encased in a sealed metal box–not even you.

Sure, we can talk about two factor authentication and if our default of putting patio furniture outside makes sense or we should chain your patio furniture down with bolts in your back yard or keep your patio furniture inside your house so someone can’t jump the fence and steal it.

But all this ignores the two other dimensions of security: availability and integrity.

Availability means can you get into your house easily, or are you going to be outside fumbling with your multiple keys and trying to remember button combinations while standing in the rain? Does your house do what you want–can you move from room to room easily and look at the view from the bedroom window–or are the windows encased in bars and are you constantly having to unlock the door to your bathroom? (After all, your house would be more secure if every door was equipped with a combination lock which automatically locked when the door automatically closed.)

And integrity: does all the weight of those metal plates and the bars on the window corrupt the appearance of your house or make part of it structurally weak and cause the back bedroom to collapse? Sure, you can reduce the attack profile of your house by barring up all the windows–preventing crooks from breaking in by breaking through the window. But you’ve corrupted the functionality of your house: you’ve made it difficult to evacuate your house in the event of a fire. (People have died because of this.)

And sure, the default of putting your patio furniture unlocked in the back yard where it can be stolen by any miscreant capable of climbing a fence makes no sense if you only look at access controls–but if your guests cannot move the patio furniture around freely or you’re constantly dragging the furniture out from a locked garage (locked with a combination lock and separate deadbolt lock), you’re not going to use your patio.

The lack of easy availability of your patio furniture, in other words, means you may as well not have any furniture outside.


Think of Apple as the developer of a subdivision of homes. They all have similar front door locks, similar patios, similar layouts. Apple made those houses convenient and pretty and nice for people to use.

And now a bunch of homes got broken into, and people are (rightfully) asking Apple if somehow the design of the house made it easy for the burglars to break in. And after looking at the corner security cameras, Apple concluded that only specific homes were targeted by burglars who somehow copied the keys to the front door using ‘social engineering’ means: people followed the owners of those homes around with silly putty, and managed to get an imprint of the front door key.

The fact that Apple made it easy to open the front door with just a key is not a stupid vulnerability, as the original Slate article implies.

And in fact, the Slate article ignores the single most common security attack used against computer systems and homes alike: social engineering. Which is just a fancy way of saying that if someone spends the time befriending you at the local bar, and then asks you to show him what’s on your computer system or inside your house–you’ll happily bypass security for them by holding the front door open, or handing him your unlocked phone.

Social engineering was apparently used for most of the break-ins to get into Apple iCloud accounts–by determining the e-mail address of various celebrity accounts (stalking the neighborhood to see who lives where), and then following those celebrities around trying to guess their password (taking pictures of the people who live in the neighborhood hoping to get a glance of the key so one can be made from the photo).

This sort of thing happened over the course of years and wasn’t limited to just Apple’s neighborhood; Android and Windows Mobile devices were also hacked. And this was a sophisticated ring who had gathered stolen images over years.

In fact, the only place where the analogy breaks down is that until the leak this past weekend, we had no idea stuff had been stolen.


Does this mean we should start blaming Apple for building houses with front door windows whose drapes are sometimes left open, which can be opened with a single front door key? Should we demand Apple go back and board up the windows and put multiple locks on everyone’s door? Should Apple patrol the neighborhood and force people who leave patio furniture out in the back yard to bolt it down with bolts sunk in cement, or at least move them into the garage when not in use?

Should Apple go back and retrofit interior doors with automatically locking locks and automatically closing doors?

Or does this mean those who have more to protect should be more thoughtful about protecting their stuff?

Bringing it back to the computer model, should we all be forced to use two factor authentication to access our photos on the Cloud, using a password that may be forgotten and cannot be reset, and a thumb print reader that flakes out when the sensor gets dirty–just to protect the occasional picture I may snap while hiking or the occasional selfie someone may take in front of a museum exhibit–simply because a few starlets took naked pictures of themselves they intended only for their boyfriends, pictures that were then stuck on a server using an inadequate password managed by a handler who then scraped the naked photos off the phone without the starlet knowing?

After all, it was not my iCloud account that was targeted. And even if my photos happened to be scraped from iCloud–all you would see are bird photos and hiking photos and the occasional photo shot from an airplane: worthless crap to anyone who is not me or in my immediate circle of friends.


And while we’re on the topic, let’s talk “factors.”

All access control boils down to three factors: who you are, what you know and what you have.

An ATM is “two factor authentication”: it relies on you having an ATM card, and knowing your PIN. Scanners which know “who you are” rely on some physical attribute: think finger print scanners or retina scanners or devices that measure relative finger lengths.

The problem with always going back to using multiple “factors” is that it ignores the fact that some factors are strong, and some are weak. A 4 digit PIN is a weak password. As Mythbusters showed, fingerprint scanners are moderately weak. And the ready availability of card readers has made ATM cards weak: given that most ATM cards now double as debit cards, it would be easy for a waitress to scan your card at a restaurant, duplicating your ATM/debit card with just a swipe.

Combining factors make things stronger: combining a 4 digit PIN with your ATM card makes it safe to hand just your card to your waitress. Using a key fob which generates a cryptographically pseudo-random number combined with a relatively strong password makes an even stronger security gateway. One place I knew which hosted servers required you to present an identity card, get through a palm scanner, and know an 8 digit PIN to enter the cage.

But all of this is worthless if the guard leaves the door open–which they did on occasion.

The point is two factor authentication is not a cure-all. Two factor authentication can make weak security (such as a 4 digit pin) stronger (by requiring a card as well), but it doesn’t save you from social engineering, such as asking a security guard to keep a door open as you bring in a bunch of boxes. Having a combination padlock on your front door along with a key-activated deadbolt doesn’t help if you leave your front door unlocked.

And worse, two factor authentication violates availability. And why availability is important is simple: the harder it is to get into the front door lock of your house, the less likely you are to lock the front door.


Perhaps the real lesson here is twofold. First, a dedicated burglar will break into your house if the incentive is high enough, regardless of the security checks: remember, while everyone is looking at Apple’s iCloud, other services were broken into as well: these photos came from a variety of sources, and iCloud was a common target only because it was the largest neighborhood.

So part of the problem is just a matter of keeping the drapes closed, so the burglars can’t see your expensive stereo and big-screen TV.

Second, it means you need to be a little bit aware of your neighborhood: if you have something important to secure, perhaps a little additional attention is in order. If you’re a young and beautiful starlet taking naked pictures of yourself, you may want to consider not putting those photos up on the Internet.

But then, most starlets who took these photos did not consider security at all–they did not consider these photos as having any value. And so even if Apple made two factor authentication dirt simple, passwords would have been set to “1234” and the option to double-encrypt the individual files would have been set to “no”. After all, it was just a fun nude photo between her and her boyfriend–no biggie. Right?

Because we don’t understand security: we think it’s okay to leave the front drapes open and the front doors unlocked, until someone breaks in–at which point we demand our houses be encased in cement.

Finding the boundary of a one-bit-per-pixel monochrome blob

Recently I’ve had need to develop an algorithm which can find the boundary of a 1-bit per pixel monochrome blob. (In my case, each pixel had a color quality which could be converted to a single-bit ‘true’/’false’ test, rather than a literal monochrome image, but essentially the problem set is the same.)

In this post I intend to describe the algorithm I used.


Given a 2D monochrome image with pixels aligned in an X/Y grid:

Arbitrary 2D Blob

We’d like to find the “boundary” of this blob. For now I’ll ignore the question of what the “boundary” is; suffice to say that with the boundary we can discover which pixels inside the blob border the exterior, which pixels outside border the interior, or compute other attributes of the blob, such as a rectangle which bounds it.

If we treat each pixel as a discrete 1×1 square rather than a dimensionless point, then I intend to define the boundary as the set of (x,y) coordinates which define the borders around each of those squares, with the upper-left corner of the square representing the coordinate of the point:

Pixel Definition

So, in this case, I intend to define the boundary as the set of (x,y) coordinates which define the border between the black and the white pixels:

outlined blob

We do this by walking the edge of the blob using a set of very simple rules which effectively trace the boundary of the blob in a clockwise fashion. This ‘blob walking’ requires only local knowledge at the point we are currently examining.
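To keep the code fragments below concrete, here is a minimal sketch of the definitions they assume; the names (Point, the direction constants, Pixel and PixelNotFound) are my own shorthand, not part of the original write-up:

/* Minimal scaffolding assumed by the fragments below; names illustrative. */

typedef struct Point {
    int x;          /* corner coordinate; pixel (x,y) lies below and right */
    int y;          /* y grows downward, as is usual for images */
} Point;

/* The four edge-walking directions, plus a marker for configurations
   which cannot legally occur while following an edge. */
enum { LEFT, UP, RIGHT, DOWN, ILLEGAL };

/* Constructor and sentinel used by Algorithm 1. */
#define Pixel(px,py)  ((Point){ (px), (py) })
#define PixelNotFound ((Point){ -1, -1 })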

Because we are walking the blob in a clockwise fashion, it is easy to find the first point of a blob we are interested in walking: simply search the pixels iteratively from upper left to lower right (here, column by column):

(Algorithm 1: Find first point)

-- Given: maxx the maximum width of the pixel image
          maxy the maximum height of the pixel image
          IsPixel(x,y) returns true if the pixel at (x,y) is set

   Returns the found pixel for an eligible blob or 'PixelNotFound'.

for (int x = 0; x < maxx; ++x) {
    for (int y = 0; y < maxy; ++y) {
        if (IsPixel(x,y)) {
            return Pixel(x,y);
        }
    }
}
return PixelNotFound;

Once we have found our eligible pixel, we can start walking clockwise, tracking each of the coordinates we find as we walk around the perimeter of the blob, traversing either horizontally or vertically.


Given the way we’ve encountered our first pixel in the algorithm above, the pixels around the immediate location at (x,y) look like the following:

Point In Center

That’s because, given the order in which we iterated, the first pixel we encountered at (x,y) implies that (x-1,y), (x,y-1) and (x-1,y-1) must all be clear. Also, if we are to progress in a clockwise fashion, clearly we should move our current location from (x,y) to (x+1,y):

Moving clockwise to (x+1,y)


The design of our algorithm proceeds in a similar fashion: by examining each of the 16 possible configurations of the surrounding 4 pixels, and tracking which of the 4 possible incoming paths we took, we can construct a table of directions to take in order to continue progressing clockwise around our eligible blob. In all but two of the configurations there is only one possible incoming path we could have taken to reach the center point, since, as we are following the edge of the blob, we could not have entered between two black pixels or between two white pixels. Some combinations are also illegal because we are walking around the blob clockwise rather than counter-clockwise. (This means that, standing at the point location and facing the direction of travel, set pixels should always be on the right and never on the left. There is a proof of this which I will not sketch here.)

The 16 possible configurations and the outgoing paths we can take are illustrated below:

All directions

Along the top of this table are shown the four possible incoming directions: from the left, from the top, from the right and from the bottom. Each of the 16 possible pixel combinations is shown from top to bottom, and blanks indicate where an incoming path is illegal–either because it comes in between two black or two white pixels, or because the path would have placed the black pixel on the left of the incoming line rather than on the right.

Note that, with only two exceptions, each possible combination of pixels produces only one valid outgoing path. For those two exceptions we arbitrarily pick the one of the two possible outgoing paths which excludes the diagonal pixel; we could just as easily have gone the other way and included the diagonal, but that may have had the property of tracing blobs which contain holes. (Whether this is acceptable depends on how you are using the algorithm.)

This indicates that we could easily construct a switch statement, converting each possible row into an integer from 0 to 15:

(Algorithm 2: Converting to a pixel state)

-- Given: IsPixel(x,y) returns true if the pixel is set, and false
          if it is not set or lies outside the range [0,maxx) by
          [0,maxy)
   Returns an integer value from 0 to 15 indicating the pixel combination

int GetPixelState(int x, int y)
{
    int ret = 0;
    if (IsPixel(x-1,y-1)) ret |= 1;   /* upper-left pixel */
    if (IsPixel(x,y-1))   ret |= 2;   /* upper-right pixel */
    if (IsPixel(x-1,y))   ret |= 4;   /* lower-left pixel */
    if (IsPixel(x,y))     ret |= 8;   /* lower-right pixel */
    return ret;
}
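As an aside, IsPixel itself might look something like the following. This is just a sketch, assuming the image is stored one byte per pixel in a row-major array; the bitmap, maxx and maxy globals are my own placeholders:

#include <stdbool.h>

/* Placeholder storage for the image; in real code these would live
   in whatever image structure you already have. */
static const unsigned char *bitmap;    /* maxx * maxy bytes, row-major */
static int maxx, maxy;

/* True if (x,y) is inside the image and set. Out-of-range coordinates
   read as clear, which GetPixelState relies on at the image edges. */
bool IsPixel(int x, int y)
{
    if (x < 0 || x >= maxx || y < 0 || y >= maxy) return false;
    return bitmap[y * maxx + x] != 0;
}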

We now can build our switch statement:

(Algorithm 3: Getting the next direction)

-- Given: the algorithm above to get the current pixel state,
          the current (x,y) location,
          the incoming direction dir, one of LEFT, UP, RIGHT, DOWN

   Returns the outgoing direction LEFT, UP, RIGHT, DOWN or ILLEGAL if the
   state was illegal.

Note: we don't test the incoming direction in the cases where there is
only one legal choice; we could, at the cost of some added complexity,
if we wanted the extra validation. The values below are obtained by
examining the table above.

int state = GetPixelState(x,y);
switch (state) {
    case 0:    return ILLEGAL;
    case 1:    return LEFT;
    case 2:    return UP;
    case 3:    return LEFT;
    case 4:    return DOWN;
    case 5:    return DOWN;
    case 6:    {
                   if (dir == RIGHT) return DOWN;
                   if (dir == LEFT) return UP;
                   return ILLEGAL;
               }
    case 7:    return DOWN;
    case 8:    return RIGHT;
    case 9:    {
                   if (dir == DOWN) return LEFT;
                   if (dir == UP) return RIGHT;
                   return ILLEGAL;
               }
    case 10:   return UP;
    case 11:   return LEFT;
    case 12:   return RIGHT;
    case 13:   return RIGHT;
    case 14:   return UP;
    case 15:   return ILLEGAL;
}

From all of this we can now easily construct an algorithm which traces the outline of a blob. First, we use Algorithm 1 to find an eligible point. We then set ‘dir’ to UP, since the pixel state we discovered was pixel state 8, and the only legal incoming direction, had we been tracing around the blob, was UP.

We store away the starting position (x,y), and then we iterate through Algorithm 3, getting a new direction at each step and updating (x,y) accordingly: we move to (x+1,y) for RIGHT, (x,y+1) for DOWN, (x-1,y) for LEFT and (x,y-1) for UP, continuing until we arrive back at our starting point. (These are image coordinates, with y growing downward, so DOWN increases y.)

As we traverse the perimeter, we could either build an array of found (x,y) values, or we could do something else–such as maintaining a bounding rectangle, bumping its boundaries as we find (x,y) values that fall outside it.
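Putting it all together, the tracing loop might look something like this sketch, where GetNextDirection is assumed to be Algorithm 3 wrapped in a function taking (x, y, dir), and the direction constants are the ones sketched earlier:

/* Walk the boundary clockwise from the starting corner found by
   Algorithm 1, invoking 'visit' for each corner coordinate. */
void TraceBoundary(int startx, int starty, void (*visit)(int x, int y))
{
    int x = startx, y = starty;
    int dir = UP;    /* the starting state is 8; we 'arrived' here moving up */

    do {
        visit(x, y);
        dir = GetNextDirection(x, y, dir);
        if (dir == ILLEGAL) break;   /* cannot happen on a well-formed blob */
        switch (dir) {
            case RIGHT: ++x; break;  /* (x+1, y) */
            case DOWN:  ++y; break;  /* (x, y+1) */
            case LEFT:  --x; break;  /* (x-1, y) */
            case UP:    --y; break;  /* (x, y-1) */
        }
    } while (x != startx || y != starty);
}

A visit callback which simply grows a min/max rectangle as points arrive yields the bounding-box variant mentioned above.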

Handy trick: trap back button in a UINavigationController stack

- (void)didMoveToParentViewController:(UIViewController *)parent
{
	[super didMoveToParentViewController:parent];

	// When this view controller is pushed, 'parent' is the containing
	// navigation controller; when it is popped, 'parent' is nil, so
	// the test below fires only when we're removed from the stack.
	if (![parent isEqual:self.parentViewController]) {
		NSLog(@"Back");
	}
}

Inserted into a view controller pushed onto a UINavigationController stack, this will fire the ‘back’ handling when the user presses the back button to pop the view controller off the stack.

As seen on Stack Overflow
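For what it’s worth, a closely related approach (equally a sketch) is to check isMovingFromParentViewController inside viewWillDisappear:, which returns YES while the view controller is in the process of being popped:

- (void)viewWillDisappear:(BOOL)animated
{
	[super viewWillDisappear:animated];

	// YES only while this view controller is being removed from its
	// parent, i.e. popped off the navigation stack.
	if ([self isMovingFromParentViewController]) {
		NSLog(@"Back");
	}
}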