Securing Driverless Cars from Hackers Is Hard. Ask The Ex-Uber Guy Who Protects Them
Now, Miller has moved on, and he’s ready to broadcast a message to the automotive industry: Securing autonomous cars from digital attacks is a very difficult problem. It’s time to get serious about solving it.
Eye roll.
Look, securing cars or so-called IoT devices is not hard.
But it requires three things–which, in today’s Agile-driven, cowboy-hacking development culture, are fucking damned near impossible to come by:
- Planning: Security is not something you bolt on. It’s not an add-on feature like a new button in a web app that takes you to a disclaimer page. Security has to be planned for from the bottom up–and in today’s broken development culture, “planning” is just something we don’t do well.
- Understanding: You need to understand the core tools available to you in the encryption toolkit and use them correctly. You need to understand that every security system for authentication–including single sign-on, access-control authorization, and secure information storage–is based on the three encryption patterns I covered yesterday: one-way hashing, symmetric-key encryption, and asymmetric-key encryption (there’s a minimal sketch of all three just after this list). You need to understand the limits of these patterns, and understand that no matter how you wrap them in some architecture, the architecture only works because it either relies on the features of each of these patterns (and does so either well or poorly), or it relies on “security through obscurity.” And you have to remember that “security through obscurity” in today’s world is simply not enough.
- Proper testing: You need a QA team that specializes in testing security. You need a team that specializes in penetration testing: in trying to figure out how to break the existing system. You need to verify your security architecture before you build everything else on top of it–otherwise you may find yourself re-engineering major chunks of your code in response to security lapses. You need to review your architecture with your security team–because eventually hackers will also understand your architecture, and they will have a hell of a lot longer to try to break into your system than your QA team will. (You may spend six months developing the software for the online systems in your car–but your car will be out there for 10 years.)
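To make those three patterns concrete, here’s a minimal sketch in Python–assuming the pyca/cryptography package is installed; the messages and keys are purely illustrative, not anything from a real system:

```python
# A minimal illustration of the three encryption patterns, assuming the
# pyca/cryptography package (pip install cryptography) is available.
import hashlib

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# 1. One-way hashing: easy to compute, infeasible to reverse.
digest = hashlib.sha256(b"firmware image bytes").hexdigest()
print("SHA-256 digest:", digest)

# 2. Symmetric-key encryption: the same secret key encrypts and decrypts,
#    so both endpoints must already share (and protect) that key.
shared_key = Fernet.generate_key()
box = Fernet(shared_key)
token = box.encrypt(b"unlock doors")
assert box.decrypt(token) == b"unlock doors"

# 3. Asymmetric-key encryption: anyone with the public key can encrypt,
#    but only the holder of the private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"session key material", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"session key material"
```

Everything your fancy security architecture does ultimately leans on one of those three primitives–or on obscurity.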
Now if you’re the type of developer who doesn’t understand why salting passwords is important–and why a leaked salt, while undesirable, matters far less than a fundamentally insecure system that uses obscurity to hide access–then you really need to hire someone who does. That person needs to be a system architect: someone who is in charge of designing and specifying the complete architecture of your system. (Security cannot be championed by a Project Manager; they have no ability to specify how things work. And a senior developer doesn’t have the authority to force fundamental architectural changes over the objections of management.)
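For the record, salting isn’t exotic. Here’s a minimal sketch using nothing but Python’s standard library; the iteration count and layout are illustrative, not a recommendation:

```python
# Minimal salted password hashing with Python's standard library only.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; tune to your hardware and threat model

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived key). The salt is random per password, so two
    users with the same password still get different stored hashes."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, expected_key)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("hunter2", salt, stored)
```

Even if the salt leaks, an attacker still has to brute-force each password individually instead of using precomputed tables–which is exactly the point above.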
And if you think simply switching on SSL will do the trick–please step the fuck away from your IDE.
Because while SSL may help prevent eavesdroppers from sniffing traffic between two devices they don’t control (though not always; Symantec’s Endpoint Security and similar AV scanners can intercept and inspect the contents of HTTPS traffic)–I fucking guarantee you a hacker will download your application and sniff the contents of the packets on a desktop or mobile device they do control, so they can reverse engineer your API.
And if your idea of security is to put the car’s VIN as a parameter in the HTTPS GET and POST requests to the server–then you are a fucking moron.
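If you want a picture of the alternative, here’s a rough sketch: the server only honors requests signed with a per-vehicle secret provisioned at manufacture, rather than trusting whatever VIN the client sends. The names (DEVICE_KEYS, sign_request, verify_request) are made up for illustration; a real deployment would use a vetted token scheme or mutual TLS, not this toy:

```python
# Toy illustration: authenticate API requests with a per-vehicle secret
# instead of trusting a VIN passed as a plain request parameter.
# Names (DEVICE_KEYS, sign_request, verify_request) are hypothetical.
import hashlib
import hmac
import time

# Server-side table of per-vehicle keys provisioned at manufacture.
DEVICE_KEYS = {"VIN12345": b"example-per-vehicle-secret"}

def sign_request(vin: str, body: bytes, key: bytes) -> dict:
    timestamp = str(int(time.time()))
    mac = hmac.new(key, timestamp.encode() + b"." + body, hashlib.sha256)
    return {"vin": vin, "ts": timestamp, "sig": mac.hexdigest()}

def verify_request(headers: dict, body: bytes, max_skew: int = 300) -> bool:
    key = DEVICE_KEYS.get(headers.get("vin", ""))
    if key is None:
        return False
    if abs(time.time() - int(headers["ts"])) > max_skew:
        return False  # reject stale requests replayed outside the window
    expected = hmac.new(key, headers["ts"].encode() + b"." + body, hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), headers["sig"])

headers = sign_request("VIN12345", b'{"command": "unlock"}', DEVICE_KEYS["VIN12345"])
assert verify_request(headers, b'{"command": "unlock"}')
assert not verify_request({**headers, "vin": "VIN99999"}, b'{"command": "unlock"}')
```

The point isn’t this particular scheme; the point is that knowing a VIN–something printed on the windshield–should never be enough to talk to your servers.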
Security really isn’t hard. But it requires thought, dedication, and a willingness to test properly–all things we no longer do in today’s development environments.
Frankly, if I were putting together a team to create secure software for a bank or a car company or an Internet device, I’d hire a technical writer (to help keep all the documentation straight), and I’d keep at least a 1:1 ratio of developers to QA. I’d also hire a separate security QA team dedicated to penetration testing: people who know how to do a security audit, who know what “fuzzing” is, and who can also test the security of our physical systems. I’d draft a set of policies and procedures for our customer support, our back-end deployment team, and the people who maintain the software–policies covering everything from access to the source kit, to physical security of the servers, to the length of passwords used internally. And I’d let my security QA team audit that as well.
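And if “fuzzing” is a foreign word to you: the idea is to hammer your parsers with random or mutated input and treat every unhandled crash as a finding. Here’s a trivial sketch against a hypothetical parse_command function; real security QA would reach for coverage-guided tools like AFL or libFuzzer, not a twenty-line loop:

```python
# A bare-bones fuzzing loop against a hypothetical parse_command() function.
import random

def parse_command(data: bytes) -> str:
    """Stand-in for the code under test, e.g. a telematics message parser."""
    if not data:
        raise ValueError("empty message")
    return data.split(b"|")[0].decode("ascii")

def random_message(max_len: int = 64) -> bytes:
    return bytes(random.randrange(256) for _ in range(random.randrange(max_len)))

crashes = []
for _ in range(10_000):
    msg = random_message()
    try:
        parse_command(msg)
    except Exception as exc:  # any unhandled exception is a finding
        crashes.append((msg, exc))

print(f"{len(crashes)} inputs produced unhandled exceptions")
```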
And I’d use a hybrid waterfall/agile process with a very long (at least 6-month) release cycle, except for emergency hot-fixes.
Contrast that with a typical web or mobile development team today: we skimp on QA hiring, and I haven’t seen a technical writer embedded with a development team in decades. We also tend to use a short (two-week) agile development cycle in which long-term planning and architectural design take a back seat to adding the latest feature–which may be fine if you’re creating a web site, but it sucks if you need to think deeply about anything.
So really, security isn’t hard.
It just takes a fundamental change in development culture.