Welcome to the wonderful world of liability.

So the next project I’m taking on requires that I carry general liability insurance. And one of the requirements of the general liability insurance was that I take down and stop selling my E6B software for pilots.

Great.

Well, it’s not like I sold a lot of copies of the software to the public. And heck, the Android version wasn’t even active; the software had been taken down from the Play store because I forgot to renew some contracts with Google Play.

But it is a shame that something like that cannot be offered to the public.


I’ve got a couple of weeks; I’ll probably publish all of the code open source on my personal GitHub account instead, because why not?

Two Factor Authentication

Credential Stealing as an Attack Vector

Basically, if someone can steal your password, they can often get into your system and do a lot of damage.


A quick review: authentication “factors” involve who you are, what you know, and what you have. A one-factor authentication technique involves just one of these three things: a key to your front door involves what you have; a password involves what you know. We can bolster the strength of any one factor by requiring longer passwords or better-designed keys and locks, but there is still only one thing to bypass to get into the system.

“Two factor” authentication involves two of these three factors; routinely it involves “what you have” and “what you know.” So your ATM card is a two-factor system: you must first present your ATM card, then enter your PIN.

What makes two factor authentication more powerful is that now you have two systems (working in concert) to bypass. This also allows you to weaken one factor if the other must also be present–such as an ATM PIN, which is an easily guessed 4 to 6 digit number.


The nice thing about cell phones is that they make it easy to distribute software that lets your phone participate in the login process: making “what you have” your cell phone.

And there are a number of applications out there which can be used as part of a two-factor authentication process, such as Google Authenticator, which implements TOTP and HOTP one-time passwords. It would also be relatively easy to implement your own one-time password application–though unless you know what you’re doing, it’s best to download and drop in the libraries from someone else’s implementation.
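To give a sense of how small these algorithms are, here is a minimal TOTP sketch in Java, following the RFC 6238 construction (HMAC-SHA1, 30-second time step, six-digit codes). It is an illustration only–a production app should drop in a vetted library, as suggested above.

import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Totp
{
	// RFC 6238 TOTP: HOTP(secret, floor(unixTime / 30)), truncated to six digits.
	public static int generate(byte[] secret, long unixTimeSeconds) throws Exception
	{
		long counter = unixTimeSeconds / 30;	// 30-second time step
		byte[] message = ByteBuffer.allocate(8).putLong(counter).array();

		Mac mac = Mac.getInstance("HmacSHA1");
		mac.init(new SecretKeySpec(secret, "HmacSHA1"));
		byte[] hash = mac.doFinal(message);

		// Dynamic truncation (RFC 4226, section 5.3): four bytes at a hash-derived offset.
		int offset = hash[hash.length - 1] & 0x0F;
		int binary = ((hash[offset] & 0x7F) << 24)
				| ((hash[offset + 1] & 0xFF) << 16)
				| ((hash[offset + 2] & 0xFF) << 8)
				| (hash[offset + 3] & 0xFF);
		return binary % 1000000;
	}
}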

One way to secure your communications…

Security is, in part, about making it more expensive for a hacker to crack your system and obtain secure information.

Yesterday I noted that just because you wrap your protocol in SSL/TLS doesn’t make it secure.

Today I’ve been playing with Diffie-Hellman key exchange, using the 1024-bit MODP group from RFC 4306 as the constants G and P in the algorithm described in the Wikipedia article. I’ve implemented this in Java using BigInteger, in code that GWT can compile to JavaScript, in order to secure a conversation between a web front end and a server back end. The key generated by the Diffie-Hellman exchange is used to seed a Blowfish encryption scheme which also compiles under GWT; packets are thus encoded using Blowfish with the shared secret from the DH exchange, then sent wrapped in JSON using Base64 encoding.
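The core of the exchange itself is only a few lines of BigInteger arithmetic. Here is a sketch using the RFC 4306 group 2 parameters (generator 2 over the 1024-bit MODP prime); the variable names are illustrative, not from the production code.

import java.math.BigInteger;
import java.security.SecureRandom;

public class DiffieHellman
{
	// RFC 4306 group 2: generator 2 over the 1024-bit MODP prime.
	static final BigInteger G = BigInteger.valueOf(2);
	static final BigInteger P = new BigInteger(
		"FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1" +
		"29024E088A67CC74020BBEA63B139B22514A08798E3404DD" +
		"EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245" +
		"E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED" +
		"EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381" +
		"FFFFFFFFFFFFFFFF", 16);

	public static void main(String[] args)
	{
		SecureRandom random = new SecureRandom();

		// Each side picks a private exponent and publishes g^x mod p.
		BigInteger a = new BigInteger(1024, random);	// client secret
		BigInteger A = G.modPow(a, P);			// client sends A to the server

		BigInteger b = new BigInteger(1024, random);	// server secret
		BigInteger B = G.modPow(b, P);			// server sends B to the client

		// Both sides arrive at the same shared secret without ever transmitting it.
		BigInteger clientShared = B.modPow(a, P);
		BigInteger serverShared = A.modPow(b, P);
		System.out.println(clientShared.equals(serverShared));	// prints: true
	}
}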

And just now I got the whole thing to work: I now have secure packets between a web client and a web server back-end.

That is the sort of stuff that makes me happy.

Your protocol is not safe just because you wrap it in SSL/TLS.

A common thing I’ve encountered over the years at various companies is the implicit belief that it doesn’t matter what your protocol looks like: as long as it is wrapped in SSL/TLS (say, by making sure your endpoint is an https:// server instead of an http:// one), your protocol is secure from hackers and prying eyes.

And if your protocol is secure from prying eyes, then the actual security of the packets being sent is somewhat immaterial: since hackers cannot discover the format of your packets, they cannot hack your system.

The justification, of course, is the standard diagram of Adam talking to Beth while Charlie is in the dark as to what’s going on:

Adam and Beth with Charlie Confused.

… but …

But what if your protocol supports your app, which can be downloaded from the iTunes store for free? What if your protocol is the protocol between your web client and the back-end server? What if, for free, Charlie can get his own client?

Charlie Controls the Client

Well, there are a number of tools out there which Charlie can use to sniff the packets between a client under his control (such as your client app running on his phone, or your web front-end running on his computer) and your back-end server. I’ve used Charles, an HTTP proxy/monitor tool, to successfully sniff the contents of a packet–and Charles will even create and install a self-signed certificate so that the SSL/TLS certificate is considered valid, allowing Charles to look inside the SSL/TLS stream and read the contents of the packets.

I suspect this is how the Nissan Leaf was hacked. I’m sure there are countless other applications out there being similarly hacked.


So how do you deal with this attack?

First, realize you have a problem. Realize that simply changing to SSL/TLS will not save a poorly designed protocol. Realize that someone can easily deconstruct the packets being sent between your client and the server, and that may expose you to spoofing.

Second, REST is dead. I’m sorry for those who worship at the altar of REST, but if the server is purely stateless, there is no state representing the access controls a particular user was granted by the server when he originally connected.

It’s not to suggest we shouldn’t strive for a REST-like protocol; if the same command is sent at different times it should yield the same results, all other things being equal. But we need some server-side state in order to regulate who has access to which features or which devices. (By the way, the traditional Unix file system, after which HTTP’s ‘get’, ‘put’ and ‘delete’ verbs are modeled, and which forms the basis of the REST architectural style, also has access control lists to guarantee that someone performing a read (or ‘get’ command) has access to the file (or URI) requested. Consider your login to your desktop computer the “state”, from the file system’s perspective.)

Had Nissan incorporated access controls into their application–which could have been as simple as the user logging in and receiving a connection token used to verify ownership of the car he is controlling–the Nissan Leaf hack would never have happened. Simply hiding the protocol behind SSL/TLS doesn’t help, as we’ve seen above.
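For illustration, here is a minimal sketch of that kind of token scheme in Java–the class and method names are hypothetical, not anyone’s actual code:

import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionTokens
{
	private final Map<String, String> tokenToUser = new ConcurrentHashMap<>();
	private final SecureRandom random = new SecureRandom();

	// Called once the user's credentials have been verified at login.
	public String issueToken(String userId)
	{
		byte[] bytes = new byte[32];
		random.nextBytes(bytes);
		String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
		tokenToUser.put(token, userId);
		return token;
	}

	// Every subsequent command must present a token that maps to the car's owner.
	public boolean mayControl(String token, String ownerId)
	{
		String user = tokenToUser.get(token);
		return user != null && user.equals(ownerId);
	}
}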

Third, consider encrypting sensitive data even if it is being sent via SSL/TLS. The problem is that there are several brands of dedicated firewall which create their own valid SSL/TLS certificates so the firewall can sniff the contents of all https:// transactions–which means if you are sending your credit card number, it is possible someone’s firewall hardware has sniffed your credit card information. And in an environment with legal requirements to provide proper access controls (such as those mandated by HIPAA), this provides an added layer of security against proxy servers which sniff all the packets on your network, and which may be inadvertently logging protected information.
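As a sketch of the idea (names illustrative; key distribution elided), the sensitive field is encrypted against the back-end’s public key before it ever enters the request body, so a TLS-terminating proxy sees only ciphertext:

import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.util.Base64;
import javax.crypto.Cipher;

public class FieldEncryption
{
	// Encrypt one sensitive field; the result can be dropped into a JSON body.
	public static String protect(String cardNumber, PublicKey serverKey) throws Exception
	{
		Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
		rsa.init(Cipher.ENCRYPT_MODE, serverKey);
		byte[] ciphertext = rsa.doFinal(cardNumber.getBytes(StandardCharsets.UTF_8));
		return Base64.getEncoder().encodeToString(ciphertext);
	}
}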


Update: This also applies to the Internet of Things. In fact, it applies even more to the Internet of Things, given the number of devices running software that will seldom (if ever) be updated.

Flaws in Samsung’s “Smart” Home Let Hackers Unlock Doors and Set Off Fire Alarms

SecureChat: an update.

Once you have the infrastructure for secure chatting between clients, expanding it to handle more than just text is a matter of encoding.

And so I’ve extended the client to allow sending and receiving photographs. Securely.
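For example–a sketch with invented field names, not the actual wire format–text and photos both reduce to bytes, so the same encrypted envelope can carry either:

import java.util.Base64;

public class Envelope
{
	public String type;	// "text" or "photo"
	public String payload;	// Base64 of the encrypted payload bytes

	public static Envelope photo(byte[] encryptedImageBytes)
	{
		Envelope e = new Envelope();
		e.type = "photo";
		e.payload = Base64.getEncoder().encodeToString(encryptedImageBytes);
		return e;
	}
}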

A new version has been pushed to the master branch of GitHub.


The interesting part about all this is learning the limitations of encryption using an RSA public/private key architecture.

The upshot: unless the encryption key is generated on your device and shared using a secure mechanism (such as a public/private key architecture), your protocol is not secure. Sure, your specifications may claim that the server-generated encryption key used by the clients is not stored on the server after it is generated–but we only have your promise.

RSA public/private key encryption, however, is computationally expensive. Even on Android, where the BigInteger implementation has been seriously optimized compared to my first-pass attempt at building my own BigInteger code, encrypting and decrypting an image can be quite costly. It’s why, I suspect, most commercial applications use a symmetric encryption mechanism: most symmetric ciphers are significantly faster, as they involve bit-wise operations rather than calculating modular products of large integers.

Which is why I’ll be focusing my attention on optimizing the iOS implementation of BigInteger.

But even on Android, decrypting a message using a 1024-bit RSA key can be acceptably fast, especially if encryption security is more important to you than efficiency.


In practice I can see using something like Diffie-Hellman to mediate exchanges between two devices. Nothing in the mathematics of that protocol requires the exchange to be completed in one session; there is no reason why the server couldn’t store the first part of the exchange (the value A = g^a mod p) so that another client B could complete the exchange later. It implies each device stores the shared secret of every device it is communicating with–but with today’s modern devices, storing a thousand shared secrets doesn’t represent a lot of memory.

It may also be possible to use a variation of Diffie-Hellman in conjunction with a symmetric encryption algorithm (perhaps by using the shared secret as a key to such an encryption scheme).
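A rough sketch of that combination, assuming a DH shared secret is already in hand. AES stands in here for whatever symmetric cipher you prefer, and hashing the secret down to a key is the simplest possible derivation, not a recommendation:

import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class SharedSecretCipher
{
	public static byte[] encrypt(BigInteger dhSharedSecret, byte[] plaintext) throws Exception
	{
		// Hash the shared secret down to a 128-bit symmetric key.
		byte[] digest = MessageDigest.getInstance("SHA-256").digest(dhSharedSecret.toByteArray());
		SecretKeySpec key = new SecretKeySpec(digest, 0, 16, "AES");

		// Encrypt under a random IV, prepended so the receiver can decrypt.
		byte[] iv = new byte[16];
		new SecureRandom().nextBytes(iv);
		Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
		cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
		byte[] ciphertext = cipher.doFinal(plaintext);

		return ByteBuffer.allocate(iv.length + ciphertext.length).put(iv).put(ciphertext).array();
	}
}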

Announcing a new version of SecureChat

I’ve just checked in a new version of SecureChat on the main branch at GitHub.

New features include:

  • A working Android client.
  • Various notification bug fixes.
  • Various iOS bug fixes.

Why am I doing this?

Even after all these months, since the Apple v FBI fight began, I’ve been hearing way too much stupidity about encryption. The core complaint I have is the idea that somehow encrypted messaging is the province of large corporations and large government entities, entities that must somehow cooperate in order to assure our security.

And it’s such a broken way to think about encryption.

This is a demonstration of a client for iOS, a client for Android and a server which allows real-time encrypted chatting between clients. What makes chatting secure is the fact that each device generates its own public/private key, and all communications are encrypted against the device’s public key. The private key never leaves the device, and is encoded in a secure keychain with a weak checksum that would corrupt the private key if someone attempts a brute-force attack against the device’s secure keychain.

Meaning there is no way to decrypt the messages if you have access to the server. Messages are only stored on each device, encrypted against the device’s public key (so they can only be read with its private key)–meaning a data dump of the device won’t get you the decrypted messages. And a brute force attempt to decode the device’s keychain is more likely to corrupt the keychain than it is to reveal the private key.


Security is a matter of architecture, not just salt sprinkled on top to enhance the flavor. Which is why there are so many security breaches out there: most software architects are terrible at their job, in that they simply do not consider the security implications of what they’re doing. Worse: many of the current “fads” in designing client/server protocols are inherently insecure.

This is an example of what one person can do in his spare time to create a secure end-to-end chat system which cannot be easily compromised. And unlike other end-to-end security systems (where a communications key is generated by the server rather than on the device), it is a protocol that cannot be easily compromised by compromising the code on the server.

Things to remember: broken singletons and XCTests

Ran into a bizarre problem this morning where a singleton (yeah, I know…) was being created twice during the execution of an XCTestCase.

That is, with code similar to:

+ (MyClass *)shared
{
	static MyClass *instance;		// zero-initialized; set exactly once below
	static dispatch_once_t onceToken;	// one token per copy of this code linked into the binary
	dispatch_once(&onceToken, ^{
		instance = [[MyClass alloc] init];	// expected to run exactly once per process
	});
	return instance;
}

During testing, if you set a breakpoint at the alloc/init line inside the dispatch_once block, you would see instance being created twice.

Which caused me all sorts of hair pulling this morning.


The solution? Well, the unit test code was including the main application during linking.

And the MyClass class was also explicitly included (through Target Membership, on the right hand side when selecting the MyClass.m file) in our unit tests as well.

What this means is that two copies of the MyClass class were being linked in. That means two sets of global variables: two ‘onceToken’ tokens, two ‘instance’ variables. And two separate calls initializing two separate instances, causing all sorts of confusion.

The answer?

Remove the MyClass.m class from Target Membership.


Well, I guess the real solution is to design the application without singletons, but that’s an exercise for another day.

Besides, there are times when you really want a singleton: you really want only one instance of a particular class to exist because it represents a common object shared across the entire application–and the semantics create confusion if multiple objects are created. (This is also the case with NSNotificationCenter, for example.)

SecureChat: an open source secure chat system.

The Apple v FBI clash left a bitter taste in my mouth. Not just because the FBI wants to punch holes in Apple’s security for their own benefit; at some level this is just a natural reaction of an investigative agency whose goal is to build cases against terrorists and to stop terrorism before it happens.

What left the bitter taste in my mouth were the pundits who claimed Apple was committing treason. What left the bitter taste were the politicians and political candidates who kept saying “let’s open the hole, and deal with the consequences later”–meaning they were simply not willing to look at the issue.

But what really left the bitter taste in my mouth was the presumption that somehow encryption is the property of large corporations and large governments–and even those on the far right sounded a lot like socialists when they demanded the two cooperate to make our world a safer place.

That really bothered me–because cryptography is not the exclusive domain of large corporations and large governments.


Which is why I put together SecureChat, an open source Java server/iOS client which provides end-to-end RSA encryption of messages.

This perhaps isn’t the best way to provide end-to-end encryption; there are undoubtedly holes in this code that those who look at it will find in the next few months.

But my point was to demonstrate a couple of things:

Encryption is not the exclusive domain of a handful of large corporations and government agencies. Working from first principles, I built an RSA encryption engine from scratch–even going so far as to bypass Apple’s built-in security classes (except for their SecureRandom function–though that could also be replaced), on the presumption that a future administration might force Apple to open back doors in its built-in encryption classes.

Please note I do not believe this will come to pass, and I believe Apple has security as a primary goal. This is more of a what if? exercise.
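To give a sense of what “from first principles” means, here is textbook RSA in a few lines of Java BigInteger. This illustrates the underlying math only–it is not the actual SecureChat engine, and real use requires proper padding (such as OAEP) around the plaintext:

import java.math.BigInteger;
import java.security.SecureRandom;

public class TextbookRsa
{
	final BigInteger n, e, d;

	TextbookRsa(int bits)
	{
		SecureRandom random = new SecureRandom();
		BigInteger p = BigInteger.probablePrime(bits / 2, random);
		BigInteger q = BigInteger.probablePrime(bits / 2, random);
		n = p.multiply(q);					// public modulus
		BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
		e = BigInteger.valueOf(65537);				// public exponent
		d = e.modInverse(phi);					// private exponent
	}

	BigInteger encrypt(BigInteger m) { return m.modPow(e, n); }	// c = m^e mod n
	BigInteger decrypt(BigInteger c) { return c.modPow(d, n); }	// m = c^d mod n
}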

This is a demonstration of what one motivated developer can do in the span of a couple of months part-time work. If I can do it, undoubtedly there are others who have also done this.

The design provides complete end-to-end encryption of messages from device to device; only encrypted messages exist on the back-end server. Further, old messages are deleted as they are delivered; this prevents a record of messages from accumulating on the server. The design also keeps messages encrypted on the device: while messages are stored in SQLite (and could be easily scraped), they can only be decrypted using the RSA key kept in an encrypted keystore that requires a correct passcode to be entered in the app. And the checksum used to determine if the keystore was correctly decrypted is a deliberately weak CRC-8: since an 8-bit checksum will accept roughly one in 256 wrong passcodes as “valid” and decode the keystore into garbage, someone randomly picking 4 digit passcodes is 37 times more likely to destructively decode the keystore (and lose the private RSA key) than to find the correct passcode.


SecureChat is now hosted on GitHub, and is open sourced using the GNU GPL.