A Tale of Two Startups

Imagine two dev teams. They’re working on comparably scoped, comparably funded products in a startup environment. These products are competing for the same well-defined market niche.

Team A cranks out clean, beautiful interfaces, chock-full of functionality. They release new features at a rate of about one a month, and they get rave reviews from most of their customers. The product they produce isn’t just adopted by customers, but loved by them. The company quickly grows to absurd valuations. The engineers who don’t retire are still wealthy enough that they never need to worry about money again. The CEO gets fabulously wealthy, and the VCs get enough of a return to keep their scaly hides clad in a fresh human costume every day for years to come.

Six months after the IPO, Team A’s network is infiltrated by hackers who exfiltrate a bunch of customer data: credit card numbers, SSNs, the name of your middle school crush, etc.

The employees win, the C-levels win, the investors win; they’ve all got theirs. But the customers lose. Now everyone who cares to can apply for a mortgage in your name and knows you had the hots for Molly McMurtry in 6th grade. Unfortunately, the company’s executives can’t hear customer complaints over the sound of the workmen installing submarine bays in their yachts.

They offer public contrition and a year of credit monitoring. The stock price hit degrades a few of them from “Sickeningly Wealthy” to merely “Obscenely Wealthy,” and the world moves on as before.

Team B is carefully analyzing the safest, most reliable way to store the data they’re collecting from customers. They spend hours implementing cert pinning in their smartphone app, and insist on out-of-band authentication. But that means they also need a whole separate account recovery flow in case someone loses their token or smartphone. And since security questions don’t provide security, they end up with a manual intervention in which you send a scan of your ID to a customer service rep. But then they need a secure way to transport the scans, and they fall down a security rabbit hole.
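(For the curious: the pinning itself is only a few lines. Here’s a minimal Python sketch of the idea, checking the server’s certificate fingerprint against a hardcoded pin; the pin value below is a placeholder, not a real one.)

```python
import hashlib
import socket
import ssl

# Placeholder pin -- a real app would pin the SHA-256 of its own server's
# certificate (or better, its SPKI hash), not this dummy value.
PINNED_SHA256 = "0" * 64

def connect_with_pin(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_SHA256:
                # Pin mismatch: refuse to talk rather than trust the CA chain alone.
                raise ssl.SSLCertVerificationError(
                    f"certificate pin mismatch for {host}")
            # ...proceed with the request over `tls`...
```

The pinning isn’t the rabbit hole; it’s everything pinning implies (what happens when the cert rotates?) that eats the calendar.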

By the time they’ve mostly figured out how to authenticate their users, Team A already has happy customers tattooing the Team A logo on their asses and Team B’s VCs have run out of patience and eaten the CTO as a sign of displeasure.

So they push through, cut only a few of the least critical corners, and ship a solution with 95% of the security they wanted. They congratulate themselves on the launch and share a beer as the reviews roll in.

All of those reviews read “Meh, Team A’s product does this same thing better, prettier, and faster, with more features.”

The VCs eat the rest of the C-level execs, the engineers brush up their resumes, and everyone moves on to other work (many of them to Team A). Everyone loses, except that the VCs got a good meal and the engineers got a Team B mug that they’ll get mocked for at their next company.

Broken Windows™

Everyone get out your +3 Veil of Rawlsian Ignorance. Which of these companies do you hope you end up working for? You might be a CEO, you might be a manager, you might be an engineer. But you’re gonna end up working for one of these two companies.

You either selected A, or you’re a venture capitalist that’s feeling a bit peckish.

But of course, you might be a customer. All things being equal, as a customer you’d rather have a secure product than an insecure one, but as is usually the case, all things aren’t equal. Time and resources spent on security are time and resources not spent on building cool features or getting the design just skeuomorphic enough. Given finite resources, time spent improving security generally means that your product is less attractive to paying customers. That means your product is less likely to succeed at all, much less reach “my yacht has a sub bay” levels of success.

And as a customer you can’t accurately assess the security of the applications you use. Sure, Team B can tell you that they’re more secure, but every company claims they’re secure right up until they’re publicly humiliated by hackers. Sometimes they keep on claiming it for weeks, months, or years afterwards, despite no meaningful changes to their security posture.

So as a customer you don’t have a choice between “more features” and “secure”; you have a choice between two apps, one of which is better in form and function, both of which claim to be secure.

Noted security expert and glazier Frédéric Bastiat wrote about this kind of thing over 150 years ago. The problem is that you have visible benefits and hidden costs. Everyone can see that Team A’s product is better. No one can see that Team B’s product is more secure. And lest you think that education alone can solve this problem, remember that Bastiat wrote 150 years ago and is still being studiously ignored. If you think you can get customers to base their purchasing decisions on security arcana like the presence of perfect forward secrecy or the proper use of HSMs, then I’m afraid you’re mistaken.

So what hope do we have?

Eyemasks and Blinders

What if, at every stage of Team A’s development cycle, the technologies they used ensured at least sensible security practices? What if all of their Random() calls used only cryptographically secure PRNGs? What if the iframes they used were automatically safe from clickjacking? What if their cloud service provider sourced hardware with built-in HSMs, so that they didn’t need to manage crypto keys themselves?
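(To make that first one concrete, here’s a hedged Python sketch of what a secure-by-default Random() looks like. The standard library’s `secrets` module draws from the OS CSPRNG; the token format below is an arbitrary illustration, not a recommendation.)

```python
import secrets
import string

# What a secure-by-default Random() might look like for tokens:
# `secrets` is backed by the OS CSPRNG, unlike the Mersenne Twister
# behind the `random` module, whose output is predictable.
def session_token(length: int = 32) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(session_token())  # unguessable, and deliberately not reproducible
```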

Suddenly you have a world where Team A builds just as fast as before, their designs as gorgeous and drop-shadowy as ever, except that the security posture of all of their features is inherently stronger. So that maybe when they do get breached, their database design and use of HSMs mean the attackers can’t reach non-public data. Or the attackers can exfiltrate the DB, but can’t decrypt it. Or maybe smarter networking defaults mean they don’t have as many ready entry points as they would have otherwise.
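(A sketch of that “exfiltrate but can’t decrypt” property, using the well-known `cryptography` package’s Fernet recipe. In the scenario above the key would live in an HSM; here it’s generated locally just to show the shape of the flow.)

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt sensitive fields before they hit the database, so a stolen
# dump is just ciphertext. Key management is the hard part: in the
# story above, the key lives in an HSM, never in the app or the DB.
key = Fernet.generate_key()
box = Fernet(key)

stored = box.encrypt(b"4111 1111 1111 1111")  # what the DB actually holds
original = box.decrypt(stored)                # only possible with the key
assert original == b"4111 1111 1111 1111"
```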

Of course, this approach is by no means novel. People like Adrienne Porter Felt and Dan Kaminsky are doing an amazing job of pushing just this kind of usable security. But too often I see usability considered as just one aspect of security. And unfortunately I sometimes see it considered as an unimportant aspect of security.

In a very real sense, though, usability is the only aspect of security that really matters to most developers, and so it’s the only aspect of security that will actually help end users. Secure defaults and ease of use are the only way that we’ll ever win at the security game. As long as “being secure” and “building cool stuff fast” are in any way a tradeoff, the iron law of incentives will mean that we’ll lose far more often than we’ll win. And right up until they’re owned, our customers will have no way of knowing whether they were secure or not.

Remember: in markets, incentives rule. In choices, the Seen beats the Unseen. Security will win not when every dev team is security conscious, but when dev teams no longer need to be. So while it’s great that we have strong ciphers and HSTS and X-Frame-Options and everything, unless we streamline these things to the point where they’re the default, we’re just building tools that the Next Big App will fail to use.
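(What “streamlined to the point of default” might look like in practice: a minimal WSGI middleware sketch that staples HSTS and X-Frame-Options onto every response, so no individual endpoint has to remember them. The header values are illustrative, not a complete policy.)

```python
# Secure-by-default response headers as WSGI middleware: wrap any WSGI
# app once, and every response carries the headers with zero
# per-endpoint effort.
def secure_headers(app):
    def wrapped(environ, start_response):
        def start_with_headers(status, headers, exc_info=None):
            headers += [
                ("Strict-Transport-Security", "max-age=31536000; includeSubDomains"),
                ("X-Frame-Options", "DENY"),
                ("X-Content-Type-Options", "nosniff"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, start_with_headers)
    return wrapped
```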

So “easy” is good, but the only thing that actually wins is “just works.” Chrome occasionally gets heat for turning off insecure features. But Chrome (and its awesome security team) is doing the only thing that actually secures users: making the default experience as secure as possible, as aggressively as possible. The result is a meaningfully more secure web for anyone using Chrome.

We need to do a whole lot more of this. So the next time you’re working on a tool or a library and you find a way to make it more secure, don’t just hide it behind a config option or a --be-secure flag: sack up and make it the default. Better yet, make it the default and then rip out the broken, insecure code three months later. You will catch heat. But in the end, that heat is only coming from people who aren’t bothering to look for the unseen.
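(One hedged sketch of “default, not flag,” with made-up names: a client that refuses plaintext unless the caller opts out so loudly that no one can do it by accident.)

```python
# Hypothetical API: secure is the default; opting out requires a
# keyword argument that can't be passed by accident or in ignorance.
def connect(url: str, *, i_want_to_be_owned: bool = False) -> str:
    if url.startswith("http://") and not i_want_to_be_owned:
        raise ValueError(
            "plaintext HTTP refused; pass i_want_to_be_owned=True "
            "if you really mean it")
    # ...open the (TLS) connection here...
    return url

connect("https://example.com")   # just works
# connect("http://example.com")  # raises, unless you opt out loudly
```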

And if we are ever to win the security game, it’s up to us to force security on those who refuse to see, so that we can secure those who are unable to.