Apologies.

For some reason or another I am still not receiving notifications when comments are left on this blog.

So my apologies for not responding to questions or comments left here. I’ve also changed the settings to immediately approve all comments (except those containing links and those which look like spam), because for some reason the e-mails alerting me to new comments haven’t been arriving either.

Creating a custom Key in Objective C

When creating a custom key in Objective C for NSDictionary or NSCache or the like, you need to create an object which does the following:

Implement the <NSCopying> protocol.

If your key is immutable, you can implement the method copyWithZone: as follows:

- (id)copyWithZone:(NSZone *)zone
{
    return self;
}

Of course, if your key is immutable, ideally you would create the key entirely in a custom init method, and mark all the properties readonly.

Implement the isEqual: method.

This is part of the NSObject protocol. Note that any object (or nil) could be passed in as the argument to the isEqual: method, so you may want to use isKindOfClass: to verify that you got what you expected as the parameter.

Implement the hash method.

This is also part of the NSObject protocol. The hash should be computed from the data stored in your key, and two keys which compare equal under isEqual: must return the same hash.

The hash function doesn’t need to be complicated. For example, if your key is three integers, your hash function could be as simple as:

- (NSUInteger)hash
{
    return (self.a << 8) ^ (self.b << 4) ^ self.c;
}

What is important is that two equal keys always produce the same hash, and that two distinct keys passed into your system are unlikely to collide on the same hash value.

Also note that many of the classes that you see used routinely as keys (such as NSString or NSNumber) also follow these rules. Meaning if your custom key has a string in it, you can use NSString’s hash as one of the inputs to your own hash function:

- (NSUInteger)hash
{
    return (self.intVal << 16) ^ [self.stringVal hash];
}
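Putting all three requirements together, here is a minimal sketch of a custom key class (the class name, initializer and properties are invented for illustration):

	// ThreeIntKey -- a hypothetical immutable key wrapping three integers.
	@interface ThreeIntKey : NSObject <NSCopying>
	@property (readonly) NSInteger a;
	@property (readonly) NSInteger b;
	@property (readonly) NSInteger c;
	- (instancetype)initWithA:(NSInteger)a b:(NSInteger)b c:(NSInteger)c;
	@end

	@implementation ThreeIntKey
	- (instancetype)initWithA:(NSInteger)a b:(NSInteger)b c:(NSInteger)c
	{
		if (nil != (self = [super init])) {
			_a = a; _b = b; _c = c;
		}
		return self;
	}

	// Immutable, so we can return ourselves rather than a real copy.
	- (id)copyWithZone:(NSZone *)zone
	{
		return self;
	}

	// Verify the class before touching the other object's properties.
	- (BOOL)isEqual:(id)object
	{
		if (![object isKindOfClass:[ThreeIntKey class]]) return NO;
		ThreeIntKey *other = (ThreeIntKey *)object;
		return (self.a == other.a) && (self.b == other.b) && (self.c == other.c);
	}

	// Equal keys must return equal hashes.
	- (NSUInteger)hash
	{
		return (self.a << 8) ^ (self.b << 4) ^ self.c;
	}
	@end

With this in place, instances of the class can be used directly as keys in an NSDictionary or NSCache.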

Thinking about mobile, tablets, desktops and TVs

So I got an Apple TV and the necessary cables in order to sideload software to it. It’s a very interesting product.

But it’s a product which I’m having a hard time wrapping my head around, so here are my thoughts.

Think, for a moment, about how you interact with your mobile device. You may be waiting for a bus or you may be waiting at an airport–so you pull your mobile device out and maybe kill 5 minutes surfing the web or playing a game. (Thus, games that are easy to learn and which have a short play cycle–meaning a game where you can play a level in 30 seconds or so–are quite popular. That is what makes games like Candy Crush or Caesar III so popular.)

Now a tablet combined with a keyboard would make a good device for creating some content–and in some ways it occupies the same space as a small laptop computer, which is equally hard to pull out and set up. So a tablet with a keyboard is like a laptop computer: you’re not pulling it out of your pocket like a cell phone. You’re not pulling it out of a backpack and holding it like a paperback book. Instead, you’re pulling it out, putting it together (a tablet with a keyboard) or opening it up, and you’re setting it on a desk.

At which point it’s time to start creating content–even if that’s just a blog post or a long response to an e-mail from work.

Desktop computers, of course, sit on your desk; they’re ideal for creating content, and since they are not mobile, they can be far more powerful since there are fewer constraints on power consumption and size. And being the most powerful, they are ideal for high-powered games–games which require far more computational power than can run on a laptop computer. (Though today most processor manufacturers are concentrating on energy efficiency over raw performance, so the gap between desktop and laptop computers is not as wide as it used to be.)

Desktop computers are ideal for software developers, for running video and photo editing, and for sophisticated music editing. (I have a MacPro with 64GB of RAM as my primary development computer, and it can compile a product like JDate’s mobile app in moments, where my 13-inch Mac Air takes several minutes to do the same task. I also have a 21″ monitor and a 27″ monitor attached to my MacPro–which means I can easily open Xcode on one monitor, have the app I’m debugging on the second, and have documentation open while I’m debugging the code.)


The Apple TV is not something you pull out of your pocket and fiddle with for 5 minutes. It’s not something you pull out of your backpack or purse and open up like a paperback book. It’s not even a laptop or tablet with a keyboard that you pull out of your backpack and set up on a convenient desk. It isn’t even a desktop computer, since the monitor is across the room and being watched by several people rather than sitting a couple of feet from your face on your desk.

And that makes the use case of the Apple TV quite different than the device you pull out and fiddle with for 5 minutes while waiting for a train, or pull out of your backpack and read like a paperback book.


Think of how you use your TV. You may pop some popcorn, or grab something to eat (my wife and I routinely eat dinner in front of the TV), sit down and eat while watching the TV. You may play a video game in front of the TV–but ideally the best video games for a TV can be a social experience.

But sitting down in front of the TV is not as trivial a process for many of us as even sitting down in front of a desktop computer. (A desktop computer you may sit down in front of in order to check your e-mail, but chances are you’re not sitting down for the long haul. So you’re not relaxing as you would on a couch, settling in and leaning back, sometimes with pillows or a blanket. Your desktop chair is probably far more utilitarian than your couch.)

That means you’ve sat down for the long haul, and you’re seeking some degree of entertainment. Even if it is interactive–a video game–you’re not sitting down on a couch for 5 minutes to check your e-mail.


So I would contend that the Apple TV is ideal for the following types of things:

(1) Watching video content. (Duh.) And it’s clear the primary use case Apple has for the Apple TV is to help Millennials “cut the cord” by allowing individual content providers to build their own content apps.

(2) Playing interactive games with high production value and deep and involving storylines. (Think Battlefront or Fallout 4.)

On this front I’m concerned Apple’s limits on the size of shipping apps may hinder this, since a lot of modern games are larger than the current Apple TV app size constraints allow.

People are complaining that Apple is also hobbling app developers by requiring all games to also work in some limited mode with the Apple remote–but realize that the serious gamers who are your target market will quickly upgrade to a better input device, so think of using the Apple remote as a sort of “demo mode” for your game. Yeah, you can play with the Apple remote, but to really enjoy the game you need a joystick controller.

(3) Browsing highly interactive content that may also work on other form factors. (I could envision, for example, a shopping app that runs on your TV that is strongly tied with video content–such as an online clothing web site with lots of video of models modeling the clothing, or a hardware store web site tied with a lot of home improvement videos. Imagine, for example, a video showing how to install a garbage disposal combined with online ordering for various garbage disposals from the site.)

I think this is a real opportunity for a company with an on-line shopping presence to provide engaging content which helps advertise their products, though it does increase the cost of reaching users in an era where margins are getting increasingly thin.

(4) Other social content which may involve multiple people watching or flipping through content. Imagine, for example, a Domino’s Pizza Ordering app for your Apple TV, or a version of Tinder that runs on the TV.

MVVM, iOS and Design Patterns

So on the project I’m working on, an architect has asked us to use the MVVM model on iOS to develop certain components within the application. If those components work out correctly, then eventually we’ll be asked to refactor the rest of the application to use the same design pattern.


Okay, so here are some random thoughts about this.

First, I’m not a fan of “Design Patterns” as we currently seem to be using the term.

To me, a “design pattern” is essentially a technique for solving a problem.

And for those who think this is a distinction without a difference: Design Patterns are not just useful ways to think about a problem in order to solve it, but actually represent specific codified solutions which are intended to be used relatively unmodified. Techniques, on the other hand, represent ways of thinking about a problem which may or may not be reproduced with perfect fidelity from place to place.

To give a concrete example of what I mean by this, take the current Model-View-Controller Design Pattern. As described on Wikipedia, it represents a Core Solution to building User Interfaces, where each component is well defined: a “view” which generates an output representation, showing data from a “model” which passively stores data the user manipulates using a “controller” which mediates messages between the two.

But if you go back to the original papers discussing Model-View-Controller, you see something much less rigid in thought: it was a way to separate the functionality used to drive a user interface into three loosely grouped ideas: views which show things, models which store things and controllers which manipulate things.

A technique, in other words, to help you organize your thoughts and your code better.

Second, not all techniques are “One Size Fits All.”

Take MVC again. There is nothing that requires your user interface application to use all the pieces of a model view controller: in fact, one could very easily write a simple calculator application that has no model at all.

For example, here is the UIViewController class of a trivial application which takes the input of a text field and converts from Celsius to Fahrenheit:

#import "ViewController.h"

@interface ViewController ()
@property (weak, nonatomic) IBOutlet UITextField *centigrade;
@property (weak, nonatomic) IBOutlet UITextField *fahrenheit;
@end

@implementation ViewController
- (IBAction)doConvert:(id)sender
{
	double c = [self.centigrade.text doubleValue];
	double f = 32 + (9.0 / 5.0) * c;	// 9.0/5.0, not 9/5, which would be integer division
	self.fahrenheit.text = [NSString stringWithFormat:@"%.2f",f];
}
@end

Now the most pedantic jerk would say “well, technically the above is a perfect example of the MVC Design Pattern, with your model implicit in the method doConvert:, in the variables c and f.”

To which I’d respond really??? Are you so hell bent to squeeze everything into an artificially strict interpretation that you must find a model where one doesn’t really exist?

And thus, the difference between “technique” and “Design Pattern.”

Third, there are far more techniques under the sun than the ones the so-called “Gang of Four” first catalogued–techniques that we have forgotten are design patterns in their own right.

Remember: techniques are ways of thinking about a problem that help solve a problem, rather than strictly formed legos in the toy chest that must be assembled in a particular way.

So, for example, “separation of concerns” is a design pattern in its own right: a way to think about code that involves separating it into distinct separate components which are responsible for their own, more limited jobs.

Take the TCP/IP software stack, for example. The power of the stack comes from the fact that each layer in the protocol is responsible for a very limited job. But when assembled into a stack it creates a rather powerful communications paradigm that underlies the Internet today.

So, for example, the link layer is responsible for talking to the actual physical hardware. The IP layer is responsible for routing; in essence it is responsible for converting an IP address into the appropriate hardware device to talk to on the local network, and for receiving incoming IP packets addressed to this computer.

But the IP layer makes no guarantees the message is sent successfully; instead, that lies on the TCP layer, which chops large messages up into packets that fit into an IP frame, and which tracks which packets have been successfully sent and received, sending an acknowledgement when a packet is received successfully. This allows TCP to note when a packet goes missing and trigger a resend of that packet.

And on top of these three simple, relatively straightforward components, a global Internet was built.

The thing about patterns like Separation of Concerns is that, because it’s so fundamental to the way we think about software development, we forget that it is yet another technique, yet another design pattern that developers use. In fact, we’ve taken it so for granted we no longer really teach the concept in school. We just assume new developers will understand how to break their code into separate distinct modules, each reflecting a specific concern.

Other techniques we’ve simply dropped on the floor, forgetting their value.

For example, we’ve forgotten the power of finite state machines to represent computational state when performing a task. Yet the tool YACC rests on the work done on finite state machines, converting a representation of a language into a state machine which can be used to parse that language. Similar state machine representations have been used when building parsers for ASN.1 communication protocols, are often used to represent the internal workings of IP, and are implicit in the design of virtual machines, such as the Java VM.

But because there is no One True Way to implement a state machine, it’s seldom thought of as a Design Pattern, if it is even thought of at all.
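To make that concrete, here is a minimal sketch of a hand-rolled state machine–the grammar and names are invented for illustration–which recognizes simple decimal numbers like “-12.5”:

	#include <ctype.h>	// for isdigit()

	// Hypothetical states for scanning a signed decimal number.
	typedef NS_ENUM(NSInteger, NumberState) {
		NumberStateStart,     // nothing seen yet
		NumberStateSign,      // saw a leading '-'
		NumberStateInteger,   // in the integer part
		NumberStateFraction,  // past the decimal point
		NumberStateError
	};

	static BOOL IsValidNumber(NSString *str)
	{
		NumberState state = NumberStateStart;
		for (NSUInteger i = 0; i < str.length; ++i) {
			unichar ch = [str characterAtIndex:i];
			switch (state) {
				case NumberStateStart:
				case NumberStateSign:
					if ((ch == '-') && (state == NumberStateStart)) state = NumberStateSign;
					else if (isdigit(ch)) state = NumberStateInteger;
					else state = NumberStateError;
					break;
				case NumberStateInteger:
					if (ch == '.') state = NumberStateFraction;
					else if (!isdigit(ch)) state = NumberStateError;
					break;
				case NumberStateFraction:
					if (!isdigit(ch)) state = NumberStateError;
					break;
				case NumberStateError:
					return NO;
			}
		}
		// Accept only if we ended in the middle of a number.
		// (This sketch tolerates a trailing '.', which is good enough for illustration.)
		return (state == NumberStateInteger) || (state == NumberStateFraction);
	}

The whole machine is just a state variable and a transition function–no One True Way required.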


Let’s go back to MVC for a moment.

The original idea behind Model View Controller was simply as a technique to think about how to organize your code into separate concerns: one which handles the data model, one which handles the views the user sees, and one which controls how views and models interact.

Think of that in the context of the following article: Model-View-ViewModel for iOS

Missing Network Logic

The definition of MVC – the one that Apple uses – states that all objects can be classified as either a model, a view, or a controller. All of ‘em. So where do you put network code? Where does the code to communicate with an API live?

You can try to be clever and put it in the model objects, but that can get tricky because network calls should be done asynchronously, so if a network request outlives the model that owns it, well, it gets complicated. You definitely should not put network code in the view, so that leaves… controllers. This is a bad idea, too, since it contributes to our Massive View Controller problem.

HEADBANG!

If you think of MVC as an organizational principle, the question “where should you put your network code?” becomes painfully obvious. It belongs in the model code.

But it also assumes the model code may contain business logic which affects how objects within the model may be manipulated, as well as alternate representations of the data within the model.

But if you think of MVC in the way we’ve grown accustomed to, then the “model” is a bunch of passive objects, no better than a file storage system. And if you think of the model code as a passive collection of data objects to be manipulated and perhaps serialized–then of course “where should you put your network code?” becomes a pressing concern.

Just as if you think of a kitchen as being a room that only contains a stove, refrigerator and a microwave, the question “where should I store my pots” becomes a pressing question.

“Well, in the cabinets.”

“But kitchens don’t have cabinets! They only have stoves, refrigerators and microwaves!”

HEADBANG!


But okay, I guess we’re in a world that tries very hard to squeeze the World Wide Web into the Model-View-Controller paradigm (which, when you think about it, doesn’t make a whole lot of sense outside of a Javascript-based AJAX-style web page–and please, don’t talk to me about the FuBar XYZ framework that promises to let you write HTML-style pages which use the MVC pattern without redefining the terms “view” and “controller” beyond recognition). So if we have stupid views which cannot participate in the UI, I guess we must also deal with stupid models which cannot participate in the UI.

Which is why, when you think about it, MVC now seems to stand for “Massive View Controllers”–because if you don’t allow any logic in your view and you don’t allow any logic in your model, then you’re stuck slamming everything into the controller code, including shit that doesn’t belong there, like model business logic.


And into this world, we see MVVM.

After two or three ill-considered days of staring at this for the project I’ve come to some conclusions:

First, MVVM makes sense if you consider the responsibility of a controller to both handle the interactions of views within a view controller, and to handle the business logic for communicating with the model code.

Again, I believe this distinction is only necessary because we’ve come to think of views as stupid (and in an iOS application, generally tinker-toys we drag off the Xcode storyboard palette), and because we’ve come to think of models as stupid–at best serializable collections of Plain Ordinary Objects.

(Personally I like thinking of a model as being more than a pile of POO, but that’s just me.)

MVVM, as handled in environments like iOS, is really MVCVM.

In other words, you don’t get rid of view controllers. Instead, you separate the code so that view controllers handle the user interface (such as making sure table views are populated, that events repopulate changed controls, etc), and the “ViewModel” code handles the view-specific interaction logic with the model code.

Again, I believe the model code should be more than a pile of POO. But as an organizational principle it’s not a bad one, putting the code which manipulates the model separate from the code which manipulates the views, and having them communicate through a well described interface.

MVVM assumes a “binder”, or rather, a means by which changes in the ViewModel are sent to the View Controller/View combination.

So inherent in the design of MVCVM is the notion that changes made to a publicly exposed value in the ViewModel will be announced to the view controller so the views handled by the view controller will be automatically updated. Some documents describe using ReactiveCocoa; one page suggested something as simple as a delegate could be used; and one could also use KVO to observe changes.
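As a sketch of the simplest possible “binder”, plain KVO is enough. (The viewModel and titleLabel properties, and the title key on the view model, are invented for illustration.)

	// In the view controller: observe the view model's title property.
	- (void)viewDidLoad
	{
		[super viewDidLoad];
		[self.viewModel addObserver:self
		                 forKeyPath:@"title"
		                    options:NSKeyValueObservingOptionInitial
		                    context:NULL];
	}

	// Push each change from the view model into the view.
	- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
	                        change:(NSDictionary *)change context:(void *)context
	{
		if ([keyPath isEqualToString:@"title"]) {
			self.titleLabel.text = self.viewModel.title;
		}
	}

	- (void)dealloc
	{
		[self.viewModel removeObserver:self forKeyPath:@"title"];
	}

The NSKeyValueObservingOptionInitial flag means the view is populated immediately when observation begins, not just on later changes.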

In some example code I’ve seen the View Controller instantiate the ViewModel object. In others, I’ve seen the ViewModel object passed to the initializer of the View Controller.

I gather that (a) there is supposed to be a “correct” (*ugh*) way to do this, but (b) if you want to use Storyboards or NIBs in order to create your view controllers, you’re sort of stuck with having the View Controller create the ViewModel. (Besides, being able to instantiate the ViewModel without the View Controller is supposed to allow us to test our user interface logic without having a user interface…)

On the other hand, you can always attach your ViewModel to the View Controller in the prepareForSegue:sender: method.
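For instance, assuming a hypothetical DetailViewController with a viewModel property, a DetailViewModel class, and a segue named “showDetail”, the hand-off might look like:

	- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
	{
		if ([segue.identifier isEqualToString:@"showDetail"]) {
			// Hand the destination controller a fully constructed view model.
			DetailViewController *dest = (DetailViewController *)segue.destinationViewController;
			dest.viewModel = [[DetailViewModel alloc] initWithModel:self.selectedItem];
		}
	}
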


And finally:

This feels like it’s solving a problem we shouldn’t have if we weren’t being such pedantic assholes.

Meaning if we hadn’t forgotten that MVC is an organizational principle rather than a strict formula, and hadn’t forgotten that our Views don’t need to be stupid and our Models don’t need to be a pile of POO, then we wouldn’t be left wondering where our network code belongs or wondering where to put our business logic.

Because the answer to that question would be immediately obvious.

But since this is where we are, separating out what properly belongs in the model code and calling it something new may help a new generation of developers realize they don’t need to build a big pile of spaghetti code to make their applications work.

This doesn’t guarantee better code.

The real problem, of course, is that code will be no better than the programmer who writes it, no matter how many different techniques they try. A good, well organized programmer will produce good, well organized code. A poor, disorganized programmer will produce poor, disorganized code.

Random thoughts.

  1. You have to write the code to understand if the algorithm is correct.
  2. There are no one-size-fits-all solutions, especially when it comes to design patterns. Solutions which solve most problems are general; the more specific the solution (that is, the more restrictions placed on how the solution is to be implemented), the fewer problems that solution will solve.

Why I hate Cocoapods.

(1) As a general rule I dislike any build system which requires a list of third party libraries to be maintained separate from the source base you’re building.

This violates the rule that you should be able to rebuild your source kit at any time so long as you have the compiler tools and the contents of your source repository, and get the exact same executable every time–because part of your source kit is stored elsewhere, in off-line databases scattered across the Internet.

(2) Because your libraries are stored elsewhere, unless you are very careful with your configuration settings you cannot know whether, when you rebuild your project in the future, you will get the exact same application. This makes final testing impossible, because you cannot know that the code you compile when you build for shipping your product is the exact same as the code you compiled when entering the final QA phase.

(3) Cocoapods in particular works by manipulating the settings file and project files of your project in invisible ways. I currently have a project where Cocoapods broke my ability to view inspectable views, and fixing it required some fairly obscure settings hacking.

Unless a tool like Cocoapods is built into Xcode and is made an integral element of the Xcode ecosystem as it ships from Apple (as Gradle is a part of Android Studio), it is inevitable that future releases of Xcode will be broken by Cocoapods, and broken in ways which are nearly impossible for all but the most skilled hacker (or user of Google Search) to resolve.

(4) There is an entire mentality that has evolved around Cocoapods and Maven and other such library management tools that if you don’t have at least a dozen different libraries included in your project, you’re not engaged in software engineering.

I’ve seen projects which didn’t need a single third-party library–and then, once I handed them off to someone else (or worse, in one project, when another developer started working on the code I was working on without management telling me), he immediately gutted a bunch of my working code and replaced it with a handful of third-party libraries that added nothing to the party.

Now it’s not to say there aren’t a lot of very good third party libraries. In the past I’ve used SLRevealViewController and iCarousel with great success. But in both cases I’ve simply downloaded the class which provides the implementation and included the sources directly into my application.

But I’ve also seen people include libraries which may have been useful during the iOS 4 era but which provide little value over the existing iOS 7/8 API. For example, there are plenty of network libraries which provide threaded networking access–nearly a necessity when the only API available to do HTTP requests was the event-driven NSURLConnection class, which required an intelligent implementation of NSURLConnectionDataDelegate, or rolling your own calls using Berkeley Sockets and POSIX Threads.

However, today we have Grand Central Dispatch which, when combined with the synchronous version of NSURLConnection, makes downloading a request from a remote server as simple as:

	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://www.google.com"]];
		NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:nil error:nil];
		dispatch_async(dispatch_get_main_queue(), ^{
			[self processResult:data];
		});
	});

Do we need CocoaAsyncSocket anymore? Do we need FXBlurView, given that the current iOS tools do not build for anything before iOS 7? Do we need the various other iOS networking solutions, given that it now takes six lines of code to perform an asynchronous network connection?

And really, do we need SLRevealViewController? The problem with the UI design that SLRevealViewController allows you to implement is that it makes “discoverability” hard for users: by hiding a screen in a non-obvious place, it makes it hard for users to know what they can do with the app–which is why Facebook’s mobile app, which first popularized the side bar model that SLRevealViewController implements, has moved away from that design in favor of a standard UITabBarController. (Yes, they’ve kept the same UI element with the list of users on-line hidden on a right-side reveal bar–but frankly that could have also been handled by a simple navigation controller push.)

By the way, Maven is worse, as is the entire Java ecosystem: I’ve seen projects that rely on no fewer than 50 or so third-party libraries–which creates an inherently fragile application, in that all of those libraries must be compatible with every other library in the collection. And all it takes is two of those 50 or so requiring completely different and incompatible versions of a third library–which I’ve seen, when two libraries suddenly required completely different versions of the same JSON parser library.

(Ironically, the incompatible third party library which triggered this issue was a replacement of Java’s built in logging facility. So we had a completely broken and somewhat unstable build simply because some developer working on the project wanted a few more bells and whistles with an internal logging tool that was not actually used in production.)


Most programmers out there that I’ve encountered today are “plumbers”: rather than build an application with the features requested, they immediately go searching for libraries that implement the features they want and glue them together.

While there is something to be said for that when it comes to doing something that is either tricky (such as what SLRevealViewController does) or which requires integration with a back-end system that you don’t control (such as Google Analytics), it has been my experience that programmers who reduce the problem of programming to finding and snapping together pre-fabricated blocks they barely understand, using tools they hardly understand, with only the most cursory understanding of how the components work, do not produce the best quality applications.

Today’s bitch: Source Of Truth

One of the minor side projects I’m working on has an interesting problem. It turns out that under some circumstance, when a setting within the application is changed, the setting doesn’t “take”–meaning somehow the setting isn’t stored in a location which is then picked up and used by other parts of the application.

And this gets to one of my bitches about a lot of software I’ve had to maintain, and that is the concept (or lack thereof) of Source Of Truth.

The idea behind the Source Of Truth is simple: what class, library, method, global variable, or object is the definitive owner of a piece of data, such as a setting, a preference or the text that a user is editing?

And while this seems quite simple in theory, it can be quite difficult in practice, especially when software is maintained across multiple different teams or multiple different programmers–because it is just sooooo tempting to put some bit of data somewhere that’s easy to track.


Now sometimes the source of truth for a value is actually on a remote server. This does not give you a license to just make a call from one part of your code, stash the value away, then use (and edit) the value from there–because it means someone else may come along and follow the pattern, and now suddenly you have two copies of the value, and no assurances those two values are in sync.

Better instead to create a common class which can cache the value as appropriate. And that may mean that in order to retrieve the value for the current setting you have to perform an asynchronous call with a callback to finish populating your interface.

In concrete terms, suppose to get the value we need to reach out to a remote server and call the api/setting endpoint. Because we cannot guarantee that we will have the data locally we can create an API call which, if the data is local, calls our callback method–but if not, reaches out to get the current value.

- (void)getSettingWithCallback:(void (^)(NSString *str))callback
{
	if (self.value) {
		callback(self.value);
	} else {
		void (^copyCallback)(NSString *str) = [callback copy];
		dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
			/*
			 *	GET http://example.com/api/setting for our setting
			 */

			NSMutableURLRequest *req = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://example.com/api/setting"]];
			[req setHTTPMethod:@"GET"];
			NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:nil error:nil];

			NSString *str = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
			self.value = str;

			dispatch_async(dispatch_get_main_queue(), ^{
				copyCallback(str);
			});
		});
	}
}

Note there are two things wrong; first, we’re not handling errors gracefully. Second, if we have multiple callers attempting to get the same value, we may wind up with multiple simultaneous network calls to the back end. In that case it may be useful to track if we have an API call in flight, and if so queue up the callback into an array, so that when we get the value we need we can iterate the array and call all the callbacks.
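A sketch of that “queue up the callbacks” variation (error handling still omitted; this assumes a hypothetical pending NSMutableArray property on the same object, and that this method is always called on the main thread):

	- (void)getSettingWithCallback:(void (^)(NSString *str))callback
	{
		if (self.value) {
			callback(self.value);
			return;
		}

		// Queue the callback; if a request is already in flight, we're done.
		if (self.pending == nil) self.pending = [[NSMutableArray alloc] init];
		[self.pending addObject:[callback copy]];
		if (self.pending.count > 1) return;

		dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
			NSMutableURLRequest *req = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://example.com/api/setting"]];
			[req setHTTPMethod:@"GET"];
			NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:nil error:nil];
			NSString *str = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];

			dispatch_async(dispatch_get_main_queue(), ^{
				// One network round trip satisfies every queued caller.
				self.value = str;
				NSArray *callbacks = self.pending;
				self.pending = nil;
				for (id cbObj in callbacks) {
					void (^cb)(NSString *) = cbObj;
					cb(str);
				}
			});
		});
	}
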

If it turns out that it is possible the value could change on the remote server–because the user could log into a web interface, for example–then it is easy to associate this with a timer, and invalidate the value, forcing us to get the latest and greatest after some idle time has passed.

But the general gist is here: if we want our setting, we simply call

[[APIObject shared] getSettingWithCallback:^(NSString *value) {
        ... do something interesting here...
}];

Our save method both updates the back end–remember: our back end is the source of truth–and stores the value locally so we’re not constantly hitting the back end:

- (void)setSetting:(NSString *)value withCompletion:(void (^)(void))callback
{
	self.value = value;

	void (^copyCallback)(void) = [callback copy];
	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
		/*
		 *	PUT http://example.com/api/setting to save our setting
		 */

		NSData *data = [value dataUsingEncoding:NSUTF8StringEncoding];

		NSMutableURLRequest *req = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:@"http://example.com/api/setting"]];
		[req setHTTPMethod:@"PUT"];
		[req setHTTPBody:data];
		[NSURLConnection sendSynchronousRequest:req returningResponse:nil error:nil];

		dispatch_async(dispatch_get_main_queue(), ^{
			copyCallback();
		});
	});
}

Again, the error checking is not here, but it serves to sketch the general idea: send the updated value to the back end in a separate thread, then notify when our update is complete.

Objective C subscripting operators

I keep forgetting where this link is, so I’m putting it here: Clang Reference: Objective C, Object Subscripting.

This documents the methods we need to define if we are declaring our own Objective C class and we want to implement array-style subscripting or dictionary-style subscripting.

Extracting from the text:

… Moreover, because the method names are selected by the type of the subscript, an object can be subscripted using both array and dictionary styles.

Meaning if the compiler detects the subscript is an integral type, it uses the array-style subscript method calls, and when the subscript is an Objective C pointer type, it uses a dictionary-style subscript method call.

For array subscripting, use either or both of the methods

- (id<NSObject>)objectAtIndexedSubscript:(NSInteger)index;
- (void)setObject:(id<NSObject>)value atIndexedSubscript:(NSInteger)index;

For dictionary-style subscripting, use either or both of the methods

- (id<NSObject>)objectForKeyedSubscript:(id<NSObject>)key;
- (void)setObject:(id<NSObject>)value forKeyedSubscript:(id<NSObject>)key;
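Putting the two together, here is a minimal sketch of a class supporting both subscript styles, backing the integer subscripts with an NSMutableArray and the keyed subscripts with an NSMutableDictionary. The class name and the storage choice are my own invention for illustration.

```objectivec
@interface HybridContainer : NSObject
- (id)objectAtIndexedSubscript:(NSInteger)index;
- (void)setObject:(id)value atIndexedSubscript:(NSInteger)index;
- (id)objectForKeyedSubscript:(id)key;
- (void)setObject:(id)value forKeyedSubscript:(id)key;
@end

@implementation HybridContainer
{
	NSMutableArray *array;
	NSMutableDictionary *dict;
}

- (instancetype)init
{
	if (nil != (self = [super init])) {
		array = [[NSMutableArray alloc] init];
		dict = [[NSMutableDictionary alloc] init];
	}
	return self;
}

- (id)objectAtIndexedSubscript:(NSInteger)index
{
	return array[index];
}

- (void)setObject:(id)value atIndexedSubscript:(NSInteger)index
{
	array[index] = value;	/* NSMutableArray allows index == count, which appends */
}

- (id)objectForKeyedSubscript:(id)key
{
	return dict[key];
}

- (void)setObject:(id)value forKeyedSubscript:(id)key
{
	dict[key] = value;
}
@end
```

The compiler picks the method from the subscript’s type, so both `container[0] = @"first"` (array-style) and `container[@"name"] = @"value"` (dictionary-style) work on the same object.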

This goes hand-in-hand with my earlier post Objective C declaration shortcuts, and has been a public service announcement.

Unknown Knowns.

While Donald Rumsfeld was blasted for discussing “known knowns” and the like, it is a core principle that you see in things like the CISSP security training materials and in other areas where security is a factor. It also factors into things like scientific research, where we constantly push the boundaries of what we know.

And the idea goes something like this:

There are known knowns, that is, things we know that we know. For example, I know the lock on the back door of the building is locked.

There are known unknowns, that is, things we know that we don’t know. For example, I don’t know if the lock on the back door of the building is locked–and knowing I don’t know that, I can go down and check.

And there are unknown unknowns, that is, things we don’t know that we don’t know. For example, I’ve never been in this building before, and I have no idea that there is even a back door to this building that needs to be locked.

Unknown unknowns can often be uncovered through a systematic review and through imagining hypothetical scenarios. We could, for example, on moving into this building, walk the perimeter of the building looking for and noting all the doors that go in and out of the building: we use the fact that we don’t know the building (a known unknown) to uncover facts about the building which then help us make informed decisions in the future–converting the unknown unknown about back doors into a known quantity.

Though that doesn’t help us if there is a tunnel underneath.


If we put these into a graph it may look something like this:

What we know:

Known knowledge: known knowns–things we know we know. (Example: we know the back door is locked.)

Unknown knowledge: unknown knowns–???

What we don’t know:

Known knowledge: known unknowns–things we know we don’t know. (Example: we don’t know if the back door is locked.)

Unknown knowledge: unknown unknowns–things we don’t know we don’t know. (Example: we don’t know there is a back door to lock.)

Now the forefront of scientific knowledge lives in the lower right quadrant: we’re constantly pushing the frontier of knowledge–and part of that means figuring out what are the right questions to ask.

The forefront of security disasters also lives in that lower right quadrant: this is why, for example, safety regulations for airplanes are often written only after a few hundred people die in a terrible airplane accident. Until the accident took place we didn’t even know there was an issue in the first place.

But what is the upper right quadrant? What is an “unknown known?”


The side axis–what we know and what we don’t know–is pretty straightforward. I either know if the back door is locked or I don’t.

The top axis: I would propose that this is really a discussion about our self-awareness or self-knowledge: do we even know the right question to ask? Do we even have a full enough mental model to know what we should know?

Do we even know there is a back door on the building?

Have you ever gone to a lecture and, at the end of a long and involved technical discussion, the presenter turns to the audience and asks if there are any questions–and there is nothing but silence? Afterwards you realize you weren’t asking questions not because the information presented was so obvious that you managed to absorb it all, but because you didn’t even know the right question to ask. That’s “unknown knowledge”–you don’t really know what you just learned, and of course you are unable to formulate a question, because to do so would require knowing what you knew, what you didn’t know, and what you didn’t understand.

It takes time to learn.


So I would propose that an unknown known is something we know–but we are unaware of the fact that we know it. Our self-knowledge does not permit us to know that we know something–or rather, to know that we have learned something or are aware of something that perhaps others may not be aware of.

Meaning “unknown knowns” are any bit of knowledge that, when someone asks about it, we respond with “Well, of course everyone knows that!”–which is a genuine falsehood, since clearly the person asking didn’t know.

Unknown knowns are things we are unaware that we have to communicate. Unknown knowns are things we don’t realize require documentation–or which, when we are made aware of them, we dismiss as either stupid or obvious.

Unknown knowns include things like:

Undocumented software which has no comments or documentation in them, because “of course everyone should be able to read the code.”

Undocumented corporate procedures, which is just “part of the corporate culture” that “everyone” should just understand.

Anything we think just “should be obvious.”

Nothing is obvious.

Yes, we need signs in a bathroom reminding workers at a restaurant to wash their hands–because some people may not know the effects of hygiene on food preparation and it is better to constantly remind people than to suffer the consequences. Yes, we need corporate policy manuals, ideally abstracted for ease of reading, which reminds people that sexual harassment is not welcome–and defines what sexual harassment actually is. Yes, developers need to document their code: code is not “self-documenting.”

And that arrogant feeling that rises up in most people when they respond “well, of course everyone knows that!” is a natural response to the embarrassment of discovering an “unknown known”–of discovering that you were so unaware of yourself that you couldn’t catalog a piece of vital knowledge and share it properly with someone who needed it.

And by placing the blame on someone else with your “well, of course!” statement you deflect blame from your own lack of self-awareness.

Worse: because we don’t know what we know, and because for many of us the natural reaction is to place blame on the person who genuinely didn’t know, we create barriers when training new hires or when teaching new developers or when bringing on new players onto the team.

Because by telling new people “it’s your fault that I didn’t tell you what I didn’t know I should tell you” we diminish others for what is, essentially, our own lack of self-awareness.
