Announcing a new version of SecureChat

I’ve just checked in a new version of SecureChat on the main branch at GitHub.

New features include:

  • A working Android client.
  • Various notification bug fixes.
  • Various iOS bug fixes.

Why am I doing this?

Even after all these months since the Apple v FBI fight began, I’ve been hearing way too much stupidity about encryption. The core complaint I have is with the idea that encrypted messaging is somehow the province of large corporations and large government entities, entities that must somehow cooperate in order to assure our security.

And it’s such a broken way to think about encryption.

This is a demonstration of a client for iOS, a client for Android, and a server which allows real-time encrypted chatting between clients. What makes the chatting secure is the fact that each device generates its own public/private key pair, and all communications to a device are encrypted against that device’s public key. The private key never leaves the device, and is encoded in a secure keychain with a weak checksum that would corrupt the private key if someone attempts a brute-force attack against the device’s secure keychain.

Meaning there is no way to decrypt the messages if you have access to the server. Messages are only stored on each device, encrypted against the device’s public key–meaning a data dump of the device won’t get you the decrypted messages. And a brute-force attempt to decode the device’s keychain is more likely to corrupt the keychain than it is to reveal the private key.
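For the curious, here is a rough sketch of what per-device key generation can look like on iOS using the Security framework. This is not the actual SecureChat code, and the application tag below is a made-up example; the point is simply that the key pair is generated on the device, and the private key lives in the keychain from the moment it is created.

	#import <Security/Security.h>

	NSData *tag = [@"com.example.securechat.devicekey" dataUsingEncoding:NSUTF8StringEncoding];
	NSDictionary *parameters = @{
		(id)kSecAttrKeyType: (id)kSecAttrKeyTypeRSA,
		(id)kSecAttrKeySizeInBits: @2048,
		(id)kSecPrivateKeyAttrs: @{
			(id)kSecAttrIsPermanent: @YES,		// keep the private key in the keychain
			(id)kSecAttrApplicationTag: tag		// hypothetical tag, used for later lookup
		}
	};

	SecKeyRef publicKey = NULL;
	SecKeyRef privateKey = NULL;
	OSStatus err = SecKeyGeneratePair((__bridge CFDictionaryRef)parameters, &publicKey, &privateKey);
	if (err != errSecSuccess) {
		NSLog(@"Key generation failed: %d", (int)err);
	}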


Security is a matter of architecture, not just salt that is sprinkled on top to enhance the flavor. Which is why there are so many security breaches out there: most software architects are terrible at their job; they simply do not consider the security implications of what they’re doing. Worse: many of the current “fads” in designing client/server protocols are inherently insecure.

This is an example of what one person can do in his spare time to create a secure end-to-end chat system which cannot be easily compromised. And unlike other end-to-end security systems (where a communications key is generated by the server rather than on the device), it is a protocol that cannot be subverted simply by compromising the code on the server.

Things to remember: broken singletons and XCTests

Ran into a bizarre problem this morning where a singleton (yeah, I know…) was being created twice during the execution of an XCTestCase.

That is, with code similar to:

+ (MyClass *)shared
{
	static MyClass *instance;
	static dispatch_once_t onceToken;
	dispatch_once(&onceToken, ^{
		instance = [[MyClass alloc] init];
	});
	return instance;
}

During testing, if you set a breakpoint at the alloc/init line inside the dispatch_once block, you would see instance being created twice.

Which caused me all sorts of hair pulling this morning.


The solution? Well, the unit test target was linking against the main application.

And the MyClass class was also explicitly included in our unit test target (through Target Membership, on the right-hand side when selecting the MyClass.m file).

What this means is that two copies of the MyClass class are being included. That means two sets of global variables: two ‘onceToken’ variables and two ‘instance’ variables. And two separate dispatch_once calls initializing two separate instances, causing all sorts of confusion.

The answer?

Remove the MyClass.m class from Target Membership.


Well, I guess the real solution is to design the application without singletons, but that’s an exercise for another day.

Besides, there are times when you really want a singleton: you really want only one instance of a particular class to exist because it represents a common object shared across the entire application–and the semantics create confusion if multiple objects are created. (This is also the case with NSNotificationCenter, for example.)

Things to remember: opening an SSL socket using CFStream

This was a pain in the neck, but I finally figured out the answer.

So I was attempting to open a connection to a server from iOS which may have a self-signed certificate installed.

The specific steps I used to open the connection (from my test harness) were:

1. Set up the variables

	NSString *host = @"127.0.0.1";
	NSInteger port = 12345;
	CFReadStreamRef readStream;
	CFWriteStreamRef writeStream;

2. Set debugging diagnostics. Noted here so I can remember this trick.

	setenv("CFNETWORK_DIAGNOSTICS","3",1);

3. Open the sockets.

	CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault, (__bridge CFStringRef)host, (uint32_t)port, &readStream, &writeStream);

	NSInputStream *inStream = (__bridge_transfer NSInputStream *)readStream;
	NSOutputStream *outStream = (__bridge_transfer NSOutputStream *)writeStream;

4. Set up SSL. The part that tripped me up: you must set the kCFStreamPropertySSLSettings property after setting NSStreamSocketSecurityLevelKey. It appears the NSStreamSocketSecurityLevelKey setting overwrites the kCFStreamPropertySSLSettings parameter.

	// Note: inStream and outStream are linked by an underlying object, so
	// parameters only need to be set on one of the two streams.

	NSDictionary *d = @{ (NSString *)kCFStreamSSLValidatesCertificateChain: (id)kCFBooleanFalse };
	[inStream setProperty:NSStreamSocketSecurityLevelNegotiatedSSL forKey:NSStreamSocketSecurityLevelKey];
	[inStream setProperty:d forKey:(id)kCFStreamPropertySSLSettings];

5. Open and initialize as needed. Here I’m just opening (because I don’t care if this blocks; this is test code). If using the blocking API, use threads. Otherwise use the run loop and delegate APIs; a sketch of the delegate approach follows the code below.

	[inStream open];
	[outStream open];
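
For reference, the run loop/delegate approach mentioned in step 5 looks roughly like this. (A sketch, not production code: set a delegate, schedule the streams on the current run loop, then handle events as they arrive.)

	[inStream setDelegate:self];
	[outStream setDelegate:self];
	[inStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
	[outStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
	[inStream open];
	[outStream open];

	// Then, in the NSStreamDelegate method:
	- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
	{
		if (eventCode == NSStreamEventHasBytesAvailable) {
			uint8_t buffer[4096];
			NSInteger len = [(NSInputStream *)aStream read:buffer maxLength:sizeof(buffer)];
			// process len bytes from buffer here
		}
	}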

On the Java (server) side, the way I set up my server socket for listening to incoming connections was:

1. Set up the variables. Note that my Config class is an internal class that reads properties, and is beyond the scope of this exercise.

	Properties p = Config.get();
	String keystore = p.getProperty("keystorefile");
	String password = p.getProperty("keystorepassword");

	int port = 12345;

2. Load the keystore and key manager. This can be a signed or (in my case) self-signed certificate.

	FileInputStream keyFile = new FileInputStream(keystore); 
	KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
	keyStore.load(keyFile, password.toCharArray());

	KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
	keyManagerFactory.init(keyStore, password.toCharArray());

	KeyManager keyManagers[] = keyManagerFactory.getKeyManagers();

3. Create an SSLContext. Note that you cannot use “Default” for getInstance below, because that returns an already initialized context, and we want to initialize it with our parameters above. Also note that iOS 9 prefers TLS 1.2.

	SSLContext sslContext = SSLContext.getInstance("TLSv1.2");
	sslContext.init(keyManagers, null, new SecureRandom());

4. Open a ServerSocket to listen for incoming connections. Note the constant 50 below is the listen backlog, and is arbitrary.

	SSLServerSocketFactory socketFactory = sslContext.getServerSocketFactory();

	ServerSocket socket = socketFactory.createServerSocket(port, 50);

Note that the way I loaded the keystore is sort of the “hard way” to do this; my eventual goal is to have the Java startup code generate a self-signed certificate internally if a keystore is not provided, but I haven’t figured out how to do that yet. (There are plenty of pages out there that show how, but most of them rely on internal Java APIs, and I’m sort of allergic to using undocumented stuff.)
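
In the meantime, if all you need is a self-signed certificate to test against, the stock keytool utility that ships with the JDK will generate one into a keystore file. (The alias, passwords and distinguished name below are placeholders.)

	keytool -genkeypair -alias server -keyalg RSA -keysize 2048 -validity 365 \
		-keystore server.jks -storepass changeit -keypass changeit \
		-dname "CN=localhost, OU=Test, O=Test, C=US"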

Thinking about mobile, tablets, desktops and TVs

So I got an Apple TV and the necessary cables in order to sideload software to it. It’s a very interesting product.

But it’s a product which I’m having a hard time wrapping my head around, so here are my thoughts.

Think, for a moment, about how you interact with your mobile device. You may be waiting for a bus or you may be waiting at an airport–so you pull your mobile device out and maybe kill 5 minutes surfing the web or playing a game. (Thus, games that are easy to learn and which have a short play cycle–meaning a game where you can play a level in 30 seconds or so–are quite popular. That’s what makes games like Candy Crush or Caesar III so popular.)

Now a tablet combined with a keyboard would make a good device for creating some content–and in some ways it occupies the same space as a small laptop computer, which is equally hard to pull out and set up. So a tablet with a keyboard is like a laptop computer: you’re not pulling it out of your pocket like a cell phone. You’re not pulling it out of a backpack and holding it like a paperback book. Instead, you’re pulling it out, putting it together (a tablet with a keyboard) or opening it up (a laptop), and you’re setting it on a desk.

At which point it’s time to start creating content–even if that’s just a blog post or a long response to an e-mail from work.

Desktop computers, of course, sit on your desk; they’re ideal for creating content, and since they are not mobile, they can be far more powerful, as there are fewer constraints on power consumption and size. And being the most powerful, they are ideal for high-powered games–games which require far more computational power than can run on a laptop computer. (Though today most processor manufacturers are concentrating on energy efficiency over raw performance, so the gap between desktop and laptop computers is not as wide as it used to be.)

Desktop computers are ideal for software developers, for video and photo editing, and for sophisticated music editing. (I have a MacPro with 64GB of RAM as my primary development computer, and it can compile a product like JDate’s mobile app in moments, where my 13-inch MacBook Air takes several minutes to do the same task. I also have a 21″ monitor and a 27″ monitor attached to my MacPro–which means I can easily open Xcode on one monitor, have the app I’m debugging on the second, and have documentation open while I’m debugging the code.)


The Apple TV is not something you pull out of your pocket and fiddle with for 5 minutes. It’s not something you pull out of your backpack or purse and open up like a paperback book. It’s not even a laptop or tablet with a keyboard that you pull out of your backpack and set up on a convenient desk. It isn’t even a desktop computer, since the monitor is across the room and being watched by several people rather than sitting a couple of feet from your face on your desk.

And that makes the use case of the Apple TV quite different than the device you pull out and fiddle with for 5 minutes while waiting for a train, or pull out of your backpack and read like a paperback book.


Think of how you use your TV. You may pop some popcorn, or grab something to eat (my wife and I routinely eat dinner in front of the TV), sit down and eat while watching the TV. You may play a video game in front of the TV–and the best video games for a TV can be a social experience.

But sitting down in front of the TV is not as trivial a process for many of us as even sitting down in front of a desktop computer. (You may sit down in front of a desktop computer to check your e-mail, but chances are you’re not sitting down for the long haul. So you’re not relaxing as you would on a couch, settling in and leaning back, sometimes with pillows or a blanket. Your desktop chair is probably far more utilitarian than your couch.)

That means when you do sit down in front of the TV, you’ve sat down for the long haul, and you’re seeking some degree of entertainment. Even if it is interactive–a video game–you’re not sitting down on a couch for 5 minutes to check your e-mail.


So I would contend that the Apple TV is ideal for the following types of things:

(1) Watching video content. (Duh.) And it’s clear the primary use case Apple has for the Apple TV is to help Millennials “cut the cord” by allowing individual content providers to build their own content apps.

(2) Playing interactive games with high production value and deep and involving storylines. (Think Battlefront or Fallout 4.)

On this front I’m concerned that Apple’s limits on the size of shipping apps may hinder this, since a lot of modern games are larger than the current Apple TV app size constraints allow.

People are complaining that Apple is hobbling app developers by requiring all games to also work in some limited mode with the Apple remote–but realize that the serious gamers who are your target market will quickly upgrade to a better input device, so think of the Apple remote as a sort of “demo mode” for your game. Yeah, you can play with the Apple remote, but to really enjoy the game you need a joystick controller.

(3) Browsing highly interactive content that may also work on other form factors. (I could envision, for example, a shopping app that runs on your TV that is strongly tied with video content–such as an online clothing web site with lots of video of models modeling the clothing, or a hardware store web site tied with a lot of home improvement videos. Imagine, for example, a video showing how to install a garbage disposal combined with online ordering for various garbage disposals from the site.)

I think this is a real opportunity for a company with an on-line shopping presence to provide engaging content which helps advertise their products, though it does increase the cost of reaching users in an era where margins are getting increasingly thin.

(4) Other social content which may involve multiple people watching or flipping through content. Imagine, for example, a Domino’s Pizza Ordering app for your Apple TV, or a version of Tinder that runs on the TV.

Why I hate Cocoapods.

(1) As a general rule I dislike any build system which requires a list of third party libraries to be maintained separate from the source base you’re building.

This violates the rule that you should be able to rebuild your source kit at any time so long as you have the compiler tools and the contents of your source repository, and get the exact same executable every time–because part of your source kit is stored elsewhere, in off-line databases scattered across the Internet.

(2) Because your libraries are stored elsewhere, unless you are very careful with your configuration settings you cannot be sure that, when you rebuild your project in the future, you will get the exact same application. This makes final testing impossible, because you cannot know that the code you compile when you build for shipping is exactly the same as the code you compiled when entering the final QA phase.

(3) Cocoapods in particular works by manipulating your project’s settings and project files in invisible ways. I currently have a project where Cocoapods broke my ability to view inspectable views, and fixing it required some fairly obscure settings hacking.

Unless a tool like Cocoapods is built into Xcode and is made an integral element of the Xcode ecosystem as it ships from Apple (as Gradle is a part of Android Studio), it is inevitable that future releases of Xcode will be broken by Cocoapods, and broken in ways which are nearly impossible for all but the most skilled hacker (or user of Google Search) to resolve.

(4) There is an entire mentality that has evolved around Cocoapods and Maven and other such library management tools that if you don’t have at least a dozen different libraries included in your project, you’re not engaged in software engineering.

I’ve seen projects which didn’t need a single third-party library, but once I handed them off to someone else (or worse, in one project, when another developer started working on the code without management telling me), he immediately gutted a bunch of my working code and replaced it with a handful of third-party libraries that added nothing to the party.

Now it’s not to say there aren’t a lot of very good third party libraries. In the past I’ve used SLRevealViewController and iCarousel with great success. But in both cases I’ve simply downloaded the class which provides the implementation and included the sources directly into my application.

But I’ve also seen people include libraries which may have been useful during the iOS 4 era but which provide little value over the existing iOS 7/8 API. For example, there are plenty of network libraries which provide threaded networking access–nearly a necessity when the only API available for HTTP requests was the event-driven version of the NSURLConnection class, which required an intelligent implementation of NSURLConnectionDataDelegate, or rolling your own calls using Berkeley Sockets and POSIX Threads.

However, today we have Grand Central Dispatch which, when combined with the synchronous version of NSURLConnection, can make downloading a request from a remote server as simple as:

	dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
	NSURLRequest *req = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://www.google.com"]];
		NSData *data = [NSURLConnection sendSynchronousRequest:req returningResponse:nil error:nil];
		dispatch_async(dispatch_get_main_queue(), ^{
			[self processResult:data];
		});
	});

Do we need CocoaAsyncSocket anymore? Do we need FXBlurView, given that the current iOS tools do not build for anything before iOS 7? Do we need the various other iOS networking solutions, given that it now takes 6 lines of code to perform an asynchronous network request?

And really, do we need SLRevealViewController? The problem is that the UI design SLRevealViewController lets you implement makes “discoverability” hard for users: by hiding a screen in a non-obvious place, it makes it hard for users to know what they can do with the app–which is why Facebook’s mobile app, which first promoted the side bar model that SLRevealViewController implements, has moved away from that design in favor of a standard UITabBarController. (Yes, they’ve kept the same UI element with the list of users on-line hidden on a right-side reveal bar–but frankly that could have also been handled by a simple navigation controller push.)

By the way, Maven is worse, as is the entire Java ecosystem: I’ve seen projects that rely on no less than 50 or so third-party libraries–which creates an inherently fragile application, in that all of those libraries must be compatible with every other library in the collection. And all it takes is two of those 50 requiring completely different and incompatible versions of some other library–which I’ve seen happen, when two libraries suddenly required completely different versions of the same JSON parser library.

(Ironically, the incompatible third party library which triggered this issue was a replacement of Java’s built in logging facility. So we had a completely broken and somewhat unstable build simply because some developer working on the project wanted a few more bells and whistles with an internal logging tool that was not actually used in production.)


Most programmers I’ve encountered today are “plumbers”: rather than build an application with the features requested, they immediately go searching for libraries that implement the features they want and glue them together.

While there is something to be said for that when it comes to doing something that is either tricky (such as what SLRevealViewController does), or which requires integration with a back-end system that you don’t control (such as Google Analytics), it has been my experience that programmers who reduce the problem of programming to finding and snapping together pre-fabricated blocks they barely understand, using tools they hardly understand, with only the most cursory understanding of how the components work, do not produce the best quality applications.

Things to remember: Apple Store and TestFlight Edition.

  1. The ID you use to sign in on developer.apple.com/ios and the ID you use to sign in on itunesconnect.apple.com must be the same ID. If they are different, you cannot submit through Xcode.
  2. You cannot get around this by doing an ad-hoc export of your .IPA file and uploading it with Apple’s Application Loader. For some reason your .IPA will not be signed with the correct endorsements to enable TestFlight.
  3. Until you get all the stars to correctly align:
    • Use the same Apple ID for your developer.apple.com and itunesconnect.apple.com accounts,
    • Set up a new application in itunesconnect.apple.com for the application you wish to test,
    • Correctly build your .IPA with a distribution certificate and a provisioning profile which is set up correctly,
    • Correctly submit your .IPA through Xcode (rather than exporting the .IPA and submitting with Apple’s Application loader),
    • Correctly update your build numbers as you submit incremental builds,

    the Apple TestFlight UI makes absolutely no fucking sense.

    There is no hint how to get through the happy path, or why things go wrong. You just see weird errors that make no sense, and instead of seeing a line in your pre-release software list saying why something cannot be tested, you instead just see a blank line and errors which make little sense and give zero hint as to what you did wrong.

Apple sure can suck the wind out of the fun of writing iOS software.

Update: Even if you do manage to finally work the “happy path” correctly, (a) Apple still does a review on the application before they allow it to go out to “external testing” (really?!? isn’t the point of ad-hoc distribution for your own people to test features of incomplete applications???), and (b) it’s not entirely clear why some of your team members do not have the ability to become “internal testers.”

Combine this with the fact that each Apple device generally is held by a single owner with a single Apple ID, and that makes even accepting invites (assuming invites are sent out) for a consultant who may have multiple Apple IDs a bit problematic. Especially if I want to test for a client on my own private iPhone or iPad.

Why design patterns from web development make crappy iOS or Android apps.

Increasingly I’ve been encountering software developed for iOS which was written by someone who cut their eye-teeth writing UI software for the web in Javascript. And I’ve noticed a common pattern arising from that background which makes their code overly complex and hard to maintain on iOS.

Basically, they treat UIView-based objects as passive objects.

And if you started out writing software on the web using Javascript, this makes sense. After all, the view structure represented in the DOM hierarchy is by and large a passive participant in the user interface. When you want to present a list of items in a scrolling region, for example, you would create the scrolling region, then populate the items in that region, setting the attributes of each of the <div> sections and <li> sections with the appropriate CSS styles for overflow and font information.

(In something like GWT it’s not hard to create your own custom view objects, just as you can create custom view objects in Javascript. But at the bottom of the stack, the Menu class or the Table class you built simply declares a <div> and manipulates the subset of the DOM within that <div> tag. The view hierarchy is still a passive participant; you simply wrap part of that passive content into something with some logic.)


The problem is that when you move all this onto iOS or onto Android or onto Windows or onto MacOS X, you lose one of the most powerful elements of those platforms–and that is that views are active objects rather than passive participants.

Meaning that a child class of a UITableViewCell in iOS can actively know how to present a data record passed to it; you do not need to put the code that populates the table view cell inside the UITableViewController class. You can build an image view which knows how to make a network call and load and cache an image without putting image loading and caching code into the UIViewController. You can create a single view which has complex behavior–without having to put the behavior code inside the view controller.
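
For example, a quick sketch (the Person model and the PersonCell name are made up purely for illustration):

	@interface PersonCell : UITableViewCell
	- (void)setPerson:(Person *)person;		// the cell knows how to present a Person record
	@end

	@implementation PersonCell
	- (void)setPerson:(Person *)person
	{
		self.textLabel.text = person.name;
		self.detailTextLabel.text = person.email;
	}
	@end

	// Which reduces the table view controller's involvement to:
	PersonCell *cell = [tableView dequeueReusableCellWithIdentifier:@"PersonCell" forIndexPath:indexPath];
	[cell setPerson:self.people[indexPath.row]];
	return cell;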

And this allows you to create very complex user interface elements and reuse them throughout your code.

Of course this also needs to be balanced with the available tools. For example, it makes no sense to create a custom UIButton which presents a button with a different font, when you can set the font and appearance of a default button within the NIB.

But it does indicate that in many cases the proper thing to do is to push functionality down into the view, rather than make the view controller responsible for that logic.


You can’t do this in Javascript. You cannot create a new <myTableTag> which takes special arguments and presents the contents in a unique way, which can be reused throughout your site.

And there is nothing inherently “wrong” with putting all the logic for your button’s behavior, the way table view cells are laid out, and the responsibility for loading images in the background into your view controller.

It just makes life far more complex than it has to be, because you’re leaving one of the most important tools in your arsenal on the floor.

Things to remember: resize icon script for iOS

On iOS there seem to be about a billion different icon sizes, depending on whether you’re on iOS 7 or iOS 6, on iPhone or iPad, or whether it’s for search icons or whatever.

This is a script which I created to help automatically resize all the icons that are used.

if [ $# -gt 0 ]; then
    sips "$1" -Z 29 --out icon-29.png
    sips "$1" -Z 58 --out icon-29@2x.png
    sips "$1" -Z 40 --out icon-40.png
    sips "$1" -Z 80 --out icon-40@2x.png
    sips "$1" -Z 50 --out icon-50.png
    sips "$1" -Z 100 --out icon-50@2x.png
    sips "$1" -Z 57 --out icon-57.png
    sips "$1" -Z 114 --out icon-57@2x.png
    sips "$1" -Z 60 --out icon-60.png
    sips "$1" -Z 120 --out icon-60@2x.png
    sips "$1" -Z 72 --out icon-72.png
    sips "$1" -Z 144 --out icon-72@2x.png
    sips "$1" -Z 76 --out icon-76.png
    sips "$1" -Z 152 --out icon-76@2x.png
    echo "Done."
else
    echo "You must provide the name of an image file to process."
fi

To use it, execute this script in the terminal with the name of the .png file to be resized; it will generate all the different icon sizes used.
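
For example, assuming the script above was saved as resize_icons.sh:

	sh resize_icons.sh Icon-1024.png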

How to suck down iOS memory without even trying.

If you build custom UIView or UIControl objects, it’s important to remember that all of your views are backed by a CALayer object, which is essentially a bitmap image of the rendered view object. (This is how iOS is able to very quickly composite and animate views and images and the secret behind iOS’s UI responsiveness.)

However, this has some consequences on memory usage.

For example, if you create a view which is 2048×2048 pixels in size (say you are scrolling through an image), and you don’t switch the backing layer to a CATiledLayer object, then for that view iOS will allocate 2048x2048x4 bytes per pixel = 16 megabytes for the CALayer backing store. Given that the memory budget for the backing store on a small device (such as the previous generation iPod Touch) is only about 8 megabytes, your application will die an ignoble death pretty quickly.
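
If you genuinely need a huge view (again, say you are scrolling over a large image), the usual fix is to switch the view’s backing layer to CATiledLayer, so iOS draws the content in small tiles on demand rather than allocating one giant bitmap. A minimal sketch, with a hypothetical class name:

	#import <QuartzCore/QuartzCore.h>

	@implementation BigImageView		// hypothetical UIView subclass
	+ (Class)layerClass
	{
		return [CATiledLayer class];	// content is now drawn tile-by-tile via drawRect:
	}
	@end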

Now not all views are the same. If you create a view that is 2048×2048 pixels in size and you set the background color to a flat color (like ‘gray’), but you don’t override the drawRect method to draw your view’s contents, then the backing store in CALayer is very small. (Internally I believe iOS is allocating odd-sized texture maps in the graphics processor–and if the view is a flat color, the graphics processor allocates a single-pixel texture and stretches it to fit the rectangle. Also, if you’re setting the background to one of the predefined textures, I believe it’s not allocating any memory at all, but instead setting up a texture map from a predefined texture and setting the (u,v) coordinates to make the texture fit.)

(An empty view, by the way, can be useful for laying out the contents inside that view; just respond to layoutSubviews, grab the bounding rectangle, and do some rectangle math to lay the contents out.)

So if you need to create a large container view with a border around it, you may be better off allocating two UIViews: one with the background color, inset a pixel from the other view which is the border color. Otherwise, the moment you override drawRect to draw the frame, on an iPad with a retina display your container view–if it fills the screen–will require 2048 x 1536 x 4 bytes per pixel = 12 megabytes, just to represent a bordered container.
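
Here is a sketch of that two-view approach; neither view overrides drawRect, so neither needs a full-size backing bitmap:

	UIView *border = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 1024, 768)];
	border.backgroundColor = [UIColor darkGrayColor];

	UIView *content = [[UIView alloc] initWithFrame:CGRectInset(border.bounds, 1, 1)];
	content.backgroundColor = [UIColor whiteColor];
	content.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
	[border addSubview:content];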

(Footnote: I have a feeling CAGradientLayer is doing something behind the scenes to cut down on memory usage; otherwise why have this class at all? And if that’s the case, in the event where you need to populate a bunch of controls over a background with a subtle gradient effect, you may want to use this as your view’s backing store rather than overriding the drawRect method and filling the entire view with a gradient.)