Security. Not “Cyber Security.”

Answering a question found on Marginal Revolution, an economics blog which I periodically follow:

Long-time MR reader here. I have a question: what is the appropriate framework to think about incentives (economic or otherwise) for electric power utilities to beef up their cybersecurity?

Here are some random thoughts, in no particular order. If you think you already understand security processes, you can skip down to “incentives.”


First, you can’t understand the incentives of a thing without understanding the thing itself. So it’s worth drilling down into what we mean by “cybersecurity.”

Second, I wish to hell we’d get away from the term “cybersecurity” and call it what it really is: security.

Talking about “cybersecurity” is like focusing on the locks of a door: how many pins it has, the type of lock, if the lock is electronic or driven by a physical key, if it’s brass plated or plated in steel–all the while the window right next to your front door is wide open, inviting anyone who wants to climb right through.

Don’t believe me? The earliest hackers, who broke into the AT&T long-distance system to make free long-distance phone calls, learned how that system worked because someone broke into AT&T’s offices and stole the specifications for it.

And that someone got in by dressing up as a befuddled new employee wandering the offices on his first day of work, trying to figure out where his desk was.

Never underestimate social engineering. It doesn’t matter if you have the best firewalls in the business if someone can simply walk through the front door dressed as a janitor and abscond with a laptop containing all your company’s secrets.


Now security itself is a complex topic. But basically:

1. Security is not ‘salt’ which can be sprinkled into the recipe to make things taste better. Security is a corporate-wide principle that must be architected into all aspects of your business, your software, your processes, your practices. Security must start with the C-suite: if your CIO or CTO or CSO (Chief Security Officer) does not understand security, cannot create policies and procedures to implement it correctly, or lacks the authority to enforce those policies and procedures, then you will never have a secure company.

2. Security does not, however, require that your corporation be “paranoid.” You don’t need armed guards following around everyone in the building. But it does mean you should start at the front door–both virtually and physically–and check to make sure everyone entering has the proper authority to be there.

And note some forms of security can actually make things worse: requiring employees to change their passwords every 3 months and never use any duplicates means the person who has been there for 10 years would have had to create 40 separate secure passwords during his tenure–which implies he’ll be writing his passwords down on a sticky note somewhere on his desk. This form of paranoia has made things worse, not better.

3. Security also requires constant training. Computer security–including social hacks, e-mail attacks and hackers trying to guess passwords–requires its own training. If we were as diligent about security training as we are about HR training for sexual harassment, most companies would be far more secure than they are now.

4. Once we get past the policies and procedures (making sure employees are properly identified at the front door–logging access as appropriate, making sure employees are properly trained not to open malicious e-mails, making sure employees leave sensitive documents at work, or at least use a separate corporate-controlled laptop when traveling abroad) — then we can talk about specific products. But even there, those products must be integrated properly throughout the system; otherwise, we’re back to discussing the best door lock for your front door while neglecting the open window right next to it.


All of this also applies to how we engineer software products: security is not a salt that gets sprinkled onto your software product in the hopes things get better. It must be architected into the system from the ground up.

Unfortunately most software developers don’t understand this. Worse, they think certain things, like certain protocols (such as SSL/TLS), are sufficient to protect confidential information, as if somehow your browser only makes a secure connection after the user has logged in.

(For example, I once worked on a mobile product with a back-end component. It turned out the back end was not checking the credentials on each transaction; it relied on the mobile app to only make API calls the user was authorized to make. And it relied on SSL/TLS to keep people from ‘sniffing’ the packets and figuring out the API calls being made. The thing is, while SSL/TLS makes it harder (but not impossible) to sniff other people’s transactions, there are a number of products you can buy that let you sniff the packets between your own device, logged into your own account, and the back end. And that allows you to easily figure out the format of the packets–and create your own application which can use the back-end API.

Long story short: a dedicated hacker with a few hours of time could easily have reverse-engineered that system enough to create his own mobile app. And because our back end was not validating the security of each transaction, he could have easily hijacked our system as a result.

No firewall, no authentication manager, no third-party product could fix that glaring oversight.)
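To make that concrete, here’s a minimal sketch of what checking the credentials on every transaction might look like on the back end. None of these names come from the actual product; they’re purely illustrative, and a real system would plug in a real session store or token validator:

```swift
struct Caller {
    let userID: String
    let canWriteOrders: Bool
}

enum APIError: Error { case unauthenticated, forbidden }

final class OrderAPI {
    // Resolves an opaque session token to the caller, or nil if the token is bogus.
    // (A real system would back this with a session store or token validator.)
    private let authenticate: (String) -> Caller?

    init(authenticate: @escaping (String) -> Caller?) {
        self.authenticate = authenticate
    }

    // Every transaction re-checks who is calling and what they may do, instead of
    // trusting the mobile app to only send API calls the user is allowed to make.
    func cancelOrder(token: String, orderID: String) throws {
        guard let caller = authenticate(token) else { throw APIError.unauthenticated }
        guard caller.canWriteOrders else { throw APIError.forbidden }
        // ... only now do we look up and cancel the order ...
    }
}
```

A forged or reverse-engineered client can send whatever calls it likes; without the right credentials and permissions, the request dies at the guard.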

So when architecting a product:

5. Engineer the product for security at every phase of processing data. Every part of the system: the front-end, the back-end, the database side–all should have an opportunity to refuse to process data because of a failed security check. Think of it in the same way as building physical security into a building: if everyone has a key card, it’s easy to put key card readers throughout different areas of your building to make sure the person is allowed to be there.

6. Separate rules apply to the processing of credit cards and other sensitive financial data. Basically anything that contains sensitive data needs to be put into its own secure enclave. Only certain people should be permitted to access that enclave for testing and development purposes–and they should be held to higher standards of trust (forced to sign separate contracts that contain specific penalties for violating that trust, such as criminal prosecution for stealing credit card data).

7. All transactions must be logged. That goes double for anything accessing secure financial data, and the logs should be easily accessible for security review. And they should be regularly reviewed by someone on the team for any financial or access irregularities.

8. And I can’t believe I have to add this, but I must: REST architecture (that is, “stateless back-end architecture”) is a fucking disaster if you don’t allow the back end to represent client security access on the back end. In other words, there are those who think that “REST” implies that all state–including client access permissions–should be represented in a state object on the client side. This is a fucking disaster–because it implies the client has the chance to change its own permissions illegally (see my story about the mobile app which was responsible for API security above)–and even if the “state object” is represented as an encrypted opaque object that you think a hacker can’t crack (*cough*), it still subjects your API to replay attacks. (A minimal sketch of doing this the right way follows below.)
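And to make point 8 concrete, here’s a minimal sketch of keeping the client’s access rights on the server, keyed by an opaque, expiring token, instead of trusting whatever state object the client hands back. Again, the names are illustrative, not a prescription:

```swift
import Foundation

struct ServerSession {
    let userID: String
    let permissions: Set<String>   // decided by the server at login, never by the client
    let expiresAt: Date
}

final class SessionStore {
    private var sessions: [String: ServerSession] = [:]

    // At login the server mints a random, opaque token and records, on its own
    // side, what this user is allowed to do.
    func createSession(userID: String, permissions: Set<String>) -> String {
        let token = UUID().uuidString
        sessions[token] = ServerSession(userID: userID,
                                        permissions: permissions,
                                        expiresAt: Date().addingTimeInterval(15 * 60))
        return token
    }

    // On every request the server looks the token up itself. The client never gets
    // to assert its own permissions, and an expired (or revoked) token is simply
    // rejected, which also limits how long a replayed request stays useful.
    func authorize(token: String, requires permission: String) -> Bool {
        guard let session = sessions[token], session.expiresAt > Date() else { return false }
        return session.permissions.contains(permission)
    }
}
```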


So after this long wind-up, here’s the pitch about “incentives.”

Because “security” is a corporate process and not a magic salt that can be sprinkled onto your bland food to make it taste better, and because we never think about our locks until the house is broken into and the brand-new TV set is stolen–customers never think about security until something breaks.

You probably have never considered for a millisecond whether the assembly plant that made the parts for your car has a lock or a gate or armed security patrolling the grounds; you only care that your car handles well. Until the moment the on-board computer goes haywire and blows up the engine–at which point you’re angry your car doesn’t work.

And you probably never considered the possibility that it blew up because some hacker, acting like a befuddled employee, swapped out the software being programmed into the car’s onboard engine management system at one of your car company’s parts suppliers.

Worse: sometimes the software fails because the company’s own software developers were idiots. Meaning it didn’t even require a hacker at all; just poor quality assurance processes during the development cycle. (Don’t believe me? Consider Boeing’s latest 737 issues caused by a software glitch.)

All of this is to say that security is “invisible” to the average consumer.

And the incentive of every corporation is to sell a product to the consumer at the cheapest price point possible while providing the consumer a decent experience. (Meaning, from a manufacturing perspective, the balance is between “how cheap can I make it,” “how do I maximize sales,” and simultaneously “how do I minimize points of contact with customer service, and how do I minimize product returns?”)

Security does not factor into any of this at all.


So, given that customers don’t actually care about security until their credit cards are stolen, what can we possibly do to incentivize companies to improve their security?

Well, first, we can get away from calling it “cybersecurity” and get away from telling stories about Russian hackers creating DDOS attacks and the “threat to our power grid.”

A lot of this is bullshit being used to sell consulting time and software products from companies like Symantec (full disclosure: I used to work for them) to companies who don’t want to go through the process of a security audit, because their C-suite hasn’t a fucking clue.

Second, we need to treat security in the same way we currently treat sexual harassment. That is, we need to educate the C-suite as to the need to understand security as a process–and just as we strive to make women secure at the workplace from abusive men, we need to strive to make the plant or office secure from unwanted intrusions.

This means security training–and the C-suite needs to take security training as seriously as it now takes sexual harassment training. That includes training that covers e-mail, phishing attacks, social engineering attacks, properly identifying employees, and securing access points throughout the building using card readers.

And that also includes a number of potential software products, depending on the line of business the company is in, including properly integrating two-factor authentication and having an IT department properly manage corporate laptops and desktop computers. That also includes putting some teeth behind these requirements: being willing to reprimand employees for lax security practices in the same way we reprimand employees for sexual harassment.

Third, we software developers need to properly segregate software and consider “enclaves” of information, challenging requests for data at every point in the system in the same way we put card readers at various points inside an office building. That includes special attention to mission-critical systems and systems containing financial data–and that “attention” includes physical security: limiting access to those systems, logging card-reader access, and conducting regular security audits and reviews.

Fourth: if the Federal Government wants to get into the business of incentivizing security at corporations such as power utilities, then it should encourage ‘security audits’ of those corporations–including encouraging corporations that lack the proper skill set in their C-suite to at least hire a ‘director of security’ who has the authority to mandate corporate change for all physical and computer security, including security audits of all software used by or developed for that corporation.


Unfortunately I don’t think we’ll get this from the Federal Government.

Instead, I suspect they’ll just throw money at the situation, subsidizing a bunch of high-priced security “experts” to recommend companies buy a bunch of security “toys”–like fancy front-door locks which look really nice, but fail to address that open window next to the front door.

A case where we need to think very deeply about security.

Have you seen this dialog when using your iPhone or iPad?

[Embedded tweet: a screenshot of the iOS password prompt in question]


Now a very simple solution to this problem (and it occurs on a number of other operating systems as well) is to extend the API so that the alert warns the user which application is asking for the password:

[Mock-up: the same prompt, modified to name the application asking for the password]

But the problem with this variation (or any other variation which explains which task is asking for the password) is that the prompt could be a lie.


My own thinking is that requests of this nature, which ask for a response, should never be handled through a pop-up alert that may appear over other applications. This destroys discoverability: it prevents you from figuring out which application asked for the prompt.

Instead, I would rather see that we use notifications for asking for a password.

A notification that an application requires a password to continue has several nice properties. (A rough sketch of this approach follows the list below.)

  • A notification does not get in the way. It does not force the user to provide a password in order to continue whatever it was he was doing.
  • A notification provides greater space for information explaining why the password is needed, and perhaps even to provide alternative actions if the user does not wish to provide the password.
  • In responding to the notification, the application actually asking for the password can be brought forward, so the user can evaluate which application is asking, and if the request is reasonable.
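As a rough illustration of what that might look like on iOS (the app name and wording below are made up; this is a sketch, not Apple’s recommended flow), the application could post a local notification instead of a modal alert:

```swift
import UserNotifications

// A rough sketch only; the app name and wording are invented. Instead of throwing
// a modal alert over whatever the user happens to be doing, the app posts a local
// notification. Tapping it brings the app itself forward, where the password can
// be entered in context.
func requestPasswordViaNotification() {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
        guard granted else { return }

        let content = UNMutableNotificationContent()
        content.title = "ExampleMail needs your password"   // hypothetical app name
        content.body = "Open ExampleMail to sign back in to your account."

        let request = UNNotificationRequest(identifier: "password-needed",
                                            content: content,
                                            trigger: nil)    // deliver immediately
        center.add(request)
    }
}
```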

Now of course I don’t mean we should do away with using the UIAlertController class to obtain the user’s password. The API behind that is far simpler than constructing a complete navigation controller, especially when the prompt occurs as part of a network request deep in the network stack of the application.

But those UIAlertController objects should never surface outside of the application requesting the alert.
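For completeness, here’s roughly what that in-app prompt looks like with UIAlertController, created and presented by one of the application’s own view controllers so it can never float over some other app. Titles and wording are illustrative:

```swift
import UIKit

// A minimal sketch: the alert belongs to the requesting application's own view
// controller hierarchy, so the user always knows which app is asking.
func promptForPassword(from viewController: UIViewController,
                       completion: @escaping (String?) -> Void) {
    let alert = UIAlertController(title: "Password Required",
                                  message: "Enter the password for your account.",
                                  preferredStyle: .alert)
    alert.addTextField { field in
        field.isSecureTextEntry = true
        field.placeholder = "Password"
    }
    alert.addAction(UIAlertAction(title: "Cancel", style: .cancel) { _ in
        completion(nil)
    })
    alert.addAction(UIAlertAction(title: "OK", style: .default) { [weak alert] _ in
        completion(alert?.textFields?.first?.text)
    })
    viewController.present(alert, animated: true)
}
```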

And this also applies to Apple’s own applications.


You know the principle that you never give your credit card to someone who calls you?

Well, the same principle applies to passwords; you never give your password to an application that alerts you. Instead, you bring up the application and you give it your password.

And given how common this operation is, I wouldn’t mind if Apple were to take the lead and provide an API to do all of this for the application developer. A standard API has a way of standardizing application behavior–and this is a place where application behavior standardization is desirable.
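To be clear about what I’m wishing for, something like the following (which does not exist today; every name in it is invented) would let applications request a credential through one system-standard, notification-based flow:

```swift
// Purely hypothetical: nothing like this ships in UIKit or Foundation today.
// Every name below is invented to illustrate what a standard, notification-based
// credential-request API might look like.
protocol CredentialPrompting {
    /// Posts a notification on the app's behalf. When the user responds, the system
    /// brings the requesting app forward and delivers the password entered there,
    /// rather than collecting it through an alert floating over some other app.
    func requestCredential(reason: String,
                           completion: @escaping (Result<String, Error>) -> Void)
}
```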