Archive for the ‘Science and Technology’ Category

MockSec Interview, Question 1 – Analysis and Errata

Link to Question 1

There are a number of salient details here that need some consideration. First of all, there are the data assets involved. From the problem description, we have some that are explicit in the question (name, medical info, emergency contact info, etc.), but there's also clearly some other data involved that the devs assumed but didn't mention. There's no mention made here, for instance, of a user authentication system. Presumably there's some kind of way for users to authenticate themselves to the app. Related to this is the interesting question of user authorization. What access should users have to their own data? Do we, for instance, really need users to be able to read the home address or medical information they've previously entered into the app? Or is there a way to mask that data in order to avoid disclosure in case the phone is compromised? A good candidate will be able to reason about the various tradeoffs here, from both a security and a product perspective.

More senior candidates should be able to talk about how to deal with the fact that you have data about people who aren't the customer (the emergency contacts). Are there any additional controls that should be put in place because the people represented in the contact details never consented to having their data stored by our system? More senior candidates should also be able to talk about the possible regulatory risk. This needn't be an in-depth legal analysis, but they should be aware of what broad regulations will apply to the application in their country. (E.g. would a US company building this app be bound by HIPAA due to the presence of health information?)

In addition to user authentication, there are also service authentication questions. How will the application authenticate the services it's communicating with, to ensure it doesn't send sensitive customer information to the wrong endpoints? TLS is one possible solution to this. Good candidates will be able to talk in detail about what the TLS configuration would look like. More advanced candidates will also be able to talk in depth about related controls like certificate pinning, including the tradeoffs that cert pinning imposes on operations and updates. These can get especially tricky in mobile applications. What do you do about the (non-zero) number of users who will install the app but never update it? Does your proposed authentication scheme work both for the information reporting (client-initiated) and for the emergency push notifications? And does that change depending on the architecture of how the push notifications are triggered?
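(Purely to make the pinning discussion concrete, here's a minimal Python sketch of fingerprint-style pinning; the pinned digest below is a made-up placeholder, and a real mobile client would normally use the platform's TLS/pinning facilities rather than a hand-rolled check like this.)

```python
import hashlib
import ssl

# Hypothetical pinned SHA-256 fingerprint of the service's certificate (placeholder, not a real value).
PINNED_SHA256 = "0" * 64

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and return the SHA-256 digest of its DER encoding."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def endpoint_is_pinned(host: str) -> bool:
    """Only talk to the endpoint if the certificate it presents matches the pin."""
    return server_cert_fingerprint(host) == PINNED_SHA256
```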

The interviewee should also be able to talk, at varying levels of depth, about the cryptographic requirements for the data. For junior applicants, a strong understanding of the cryptographic primitives might be enough, but more senior candidates should be able to talk in detail about what cryptographic controls to use (including identifying appropriate algorithms) and to help design a workable key management system for the application.
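(As one concrete shape such an answer could take, here's a minimal AES-256-GCM sketch using the `cryptography` package; in a real design the key would come from a KMS or platform keystore, not be generated next to the data as it is here.)

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(key: bytes, plaintext: bytes, record_id: bytes) -> bytes:
    """Encrypt one record with AES-256-GCM, binding the record id as associated data."""
    nonce = os.urandom(12)  # must be unique per encryption under a given key
    return nonce + AESGCM(key).encrypt(nonce, plaintext, record_id)

def decrypt_record(key: bytes, blob: bytes, record_id: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, record_id)

# For illustration only; in practice the key lives in a key management system.
key = AESGCM.generate_key(bit_length=256)
blob = encrypt_record(key, b"blood type: O-", b"user-42")
assert decrypt_record(key, blob, b"user-42") == b"blood type: O-"
```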

Candidates should be able to talk in depth about a data sanitization and validation strategy, as well as about what vulnerabilities might arise from a failure to implement it properly.
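(A minimal allow-list sketch of what that might look like; the field names and formats are assumptions for illustration, not something specified in the question.)

```python
import re

# Hypothetical allow-list rules for a couple of the fields the question mentions.
BLOOD_TYPES = {"A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"}
NAME_RE = re.compile(r"^[^\r\n<>]{1,100}$")  # bounded length, no newlines or markup characters

def validate_profile(profile: dict) -> list:
    """Return a list of validation errors; an empty list means the profile is acceptable."""
    errors = []
    if not NAME_RE.match(profile.get("name", "")):
        errors.append("name is missing or contains disallowed characters")
    if profile.get("blood_type") not in BLOOD_TYPES:
        errors.append("blood_type is not one of the recognized values")
    return errors
```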

Candidates should be able to identify reasonable, actionable risks to the system, covering not only confidentiality risks to the data but also data integrity risks and how they might impact customers. They should also be able to identify negative use cases for the application logic and design sensible tests for them. (E.g. what happens if I can insert arbitrary users into the service with spoofed GPS coordinates using an emulator? What threat scenarios does this raise in case of an emergency?)
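(A tiny sketch of how one such negative test might look; the function names and example values are illustrative assumptions.)

```python
def plausible_location(lat: float, lon: float) -> bool:
    """Reject reported coordinates that aren't even valid points on the globe."""
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0

def test_rejects_spoofed_out_of_range_coordinates():
    # A report from an emulator with nonsense GPS values should never be accepted.
    assert not plausible_location(lat=512.0, lon=999.0)
    assert plausible_location(lat=35.68, lon=139.69)  # a plausible report (Tokyo)
```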

Once we’ve identified several of the risks inherent in the system and the controls required, a candidate should be able to talk about testing methodology. Depending on the role, this might involve walking through a pen test plan, or it might just involve identifying some test cases for the dev team to include in their own QA.

More senior candidates will also have to consider a lot of areas that the devs apparently haven't, such as Disaster Recovery or Security Operations. In other words: what happens if the natural disaster also wipes out the data center hosting the servers that send the push notifications? How do you deal with disasters that also take down power or knock out the mobile data infrastructure? What do you do when a customer who hasn't paid their service bill is in a natural disaster where your application might help? What requirements will you set for the team for when their service is breached? What level of access will you recommend the devs grant themselves? What insider threats do you need to worry about, and what controls can be put in place to mitigate them?

One advantage to the interviewer is that this question, though compact, covers a lot of different areas. This allows the interviewer to dig into areas that are particular to the role being considered. For a Senior Application Security candidate, I'd probably focus a lot on analyzing or improving the design of the application to remove classes of risk, and then drill in depth into some of the trickier threats and controls. For a Junior pen test person, I'd probably focus on identifying a bunch of concrete threat scenarios and negative use cases and then ask them to detail how they would test those if they had to pen test the system. This flexibility is why I personally prefer these kinds of described-system problems. They're easy to generate (especially once you've practiced a few of them) and can be easily tweaked, refined, or refocused for the role and level under consideration.

MockSec Interview, Question 1

Skill(s): Threat modeling
Level: Entry-level to Senior Engineer

A dev team wants to develop an emergency response app. They want users to enter their personal details, including name, home address, and emergency contact details, as well as medical information (blood type, allergies, existing medical conditions, etc.) into a smartphone app. They plan to combine this data with GPS information from the user's phone and store it, via a service layer, in a centralized database. That way, when a natural disaster occurs, first responders can know who was in the affected area, as well as access up-to-date health information about them, in case they need medical attention.

The information will also be persisted locally on the device and, if the user is detected to have been in an area affected by a natural disaster, the back-end service will send a push notification so that the user's personal information will be displayed on the lock screen. This is so that anyone who finds the phone can help that person appropriately.

They expect the app to be free, but service-enabled features will cost a modest monthly fee. Users provide their credit card details via the app when they sign up, but the payment functionality is handled by a third party.

There is no web interface of any kind.

The devs have come to you to help them develop their threat model. What are some threats that the system (as described) needs to account for? What controls would you tell the developers to put in place to help mitigate those threats?



Analysis and Errata

Alright, we’re in

An Idoru dispels misconceptions about Japan

William Gibson sadly unavailable for comment.

Dan Carlin on the Great Filter

Fermi Paradox from versa on Vimeo.

“Hell below me, stars above” – On evolution and Alexander on AI Risk

“If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

Worse, the reasons we humans aren’t more intelligent are really stupid. Like, even people who find the idea abhorrent agree that selectively breeding humans for intelligence would work in some limited sense. Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren. But think about how weird this is! Breeding smart people isn’t doing work, per se. It’s not inventing complex new brain lobes. If you want to get all anthropomorphic about it, you’re just “telling” evolution that intelligence is something it should be selecting for. Heck, that’s all that the African savannah was doing too – the difference between chimps and humans isn’t some brilliant new molecular mechanism, it’s just sticking chimps in an environment where intelligence was selected for so that evolution was incentivized to pull out a few stupid hacks. The hacks seem to be things like “bigger brain size” (did you know that both among species and among individual humans, brain size correlates pretty robustly with intelligence, and that one reason we’re not smarter may be that it’s too annoying to have to squeeze a bigger brain through the birth canal?) If you believe in Greg Cochran’s Ashkenazi IQ hypothesis, just having a culture that valued intelligence on the marriage market was enough to boost IQ 15 points in a couple of centuries, and this is exactly the sort of thing you should expect in a world like ours where intelligence increases are stupidly easy to come by.”

I was reading Scott Alexander on AI risk this morning, and the above passage jumped out at me. It relates to my own personal transhumanist obsession: aging. The reason we don't live longer is predominantly that there's never been evolutionary pressure to live longer. (See Robert Heinlein's Lazarus Long for a fictionalized, overly rosy view of the implications there.) But we know that lifespan is partially genetic and that, ceteris paribus, people with longer-lived parents and grandparents will themselves live longer.

The reason we aren’t all Jeanne Calment is that most of our evolution took place in an era when, once your grandkids were hunting on the savannah, you’d done your part and were unlikely to pass on long-lived, long-health-span genes any more effectively than someone who died at 60.

But it’s interesting to see that that might be changing. People are having kids later and staying healthy longer (thanks largely to environmental factors, yes), and it could be that the ability to stay active later in life might push out our natural lifespans a few years. Even if you’re not having kids into your 50s, as an increasing number of people are, you’re still contributing to society and providing resources and support for the next generation. Population effects can sometimes be just as strong as individual genetic effects.

So it’ll be interesting to see if lifespan absent medical intervention starts to creep upwards over the next few decades.

Of course, all of that assumes that something like SENS doesn’t take off and just sort this aging thing out for good. In which case, I will be perfectly content never finding out whether I’m right about the evolution of aging.

As for Alexander’s essay, it’s an interesting take on the question. I like his point that the AI risk model really matters in the hard-start singularity scenario. Tech folks tend to default to open in open-vs-closed scenarios, but Alexander makes the most compelling argument for closed, careful, precautionary AI I’ve seen yet. I’m not entirely convinced, though, for a few reasons.

One is my doubt about a hard-start singularity (or even just AI) scenario. I just don’t find it plausible that creating a roughly-human machine intelligence somehow implies that it could easily overcome the barriers to higher levels of intelligence. The only reason we can’t see the (I suspect many, serious) barriers to drastically super-human intelligence is that we haven’t run into them yet. What if we create a six-delta-smart human-like AI and it suddenly runs into a variety of nearly insurmountable problems? This is where thinking about IQ as an undifferentiated plane can really be problematic. There are certain intellectual operations that are qualitatively different, not just a reflection of IQ, and we don’t yet know what some of those are as you start surpassing human intelligence. (Think, e.g., of a super-human intelligence that, for whatever reason, lacked object permanence.)

Second, I think there are bound to be strong anthropic effects on AI as it’s being developed. This cuts off a lot of the scenarios that particularly worry AI risk writers (e.g. paperclip optimizers and such). Simply put: if there’s no reason that an AI researcher would ever build it, we’re probably better off excluding it from our threat model.

Finally, the dichotomous structure of Alexander’s Dr. Good vs. Dr. Amoral argument misses a lot of important nuance. I work in security and see, all the time, instances where smart people overlook really easy-to-exploit vulnerabilities. AI is going to be heinously complex, and any view of it as a monolithic, perfectly operational super-brain misses the reality of complex systems. Hell, evolution is the most careful, brutally efficient design mechanism that we know of, and yet human intelligence still has a number of serious unpatched vulnerabilities.

This can be taken in a couple of ways. The hopeful one is that good people outweigh bad, and good uses of AI will outweigh bad, so we’ll probably end up with some uneasy detente as with the modern Internet security ecosystem.

The cynical one is that we’re as likely to be killed by a buffer overflow vuln in Dr. Good’s benevolent AI as we are by Dr. Amoral’s rampaging paperclip optimizer of death.

Welcome to the future!

In the art of rationality there is a discipline of closeness-to-the-issue – trying to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

The Wright Brothers say, “My plane will fly.” If you look at their authority (bicycle mechanics who happen to be excellent amateur physicists) then you will compare their authority to, say, Lord Kelvin, and you will find that Lord Kelvin is the greater authority.

If you demand to see the Wright Brothers’ calculations, and you can follow them, and you demand to see Lord Kelvin’s calculations (he probably doesn’t have any apart from his own incredulity), then authority becomes much less relevant.

If you actually watch the plane fly, the calculations themselves become moot for many purposes, and Kelvin’s authority not even worth considering.

The more directly your arguments bear on a question, without intermediate inferences – the closer the observed nodes are to the queried node, in the Great Web of Causality – the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes than from strictly closer nodes that screen off the distant ones.

Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”

Just as it is superior to argue physics than credentials, it is also superior to argue physics than rationality. Who was more rational, the Wright Brothers or Lord Kelvin? If we can check their calculations, we don’t have to care! The virtue of a rationalist cannot directly cause a plane to fly.

“Say ‘police’ if you have to”

Scene: a group of “detectives” is trying to find someone who fits the profile an identity thief would look for, someone whose identity could be stolen and whose life could be taken over.

Back in Tokyo,… Funaki took the day off work and the Isakas joined in as well, searching the printout for women in their twenties.

“Say ‘police’ if you have to,” Funaki instructed. “Ask the women listed if two years back some close relation might have met with an accident or been badly injured somehow. Get them talking, no matter what it takes.”

It was past eleven, time to call it a day, when they got a break.

Funaki cupped his hand over the receiver. “We’re in business!” he called to Honma, who was over by the window, tentatively stretching his legs. Then, speaking into the phone again, he said, “Hold on, I’ll turn you over to the officer in charge.”

Emi Kimura was twenty-four years old. The printout gave her occupation as “freelancer.” At first she spoke in a sweet, almost child-like voice. She interrupted Honma to ask, “Is this for real? This isn’t Candid Camera or something?”

“No. Look, I’m sorry to bother you like this. I don’t know if you’ll be able to help us or not, but let me explain. We traced you through some customer data provided by a company called Roseline. I believe you know the name?” Honma paused. “Ms. Kimura, I’m sorry, but these questions are important for an investigation we’re working on. You don’t come from a large family, and you live by yourself, is that correct? And both your parents have passed on.”

Emi’s voice trembled. “How do you know all that?”

So far so good, Honma nodded to Funaki. “My colleague, the person you spoke to a minute ago, asked if you had any close relatives who might have had an accident or some kind of personal tragedy in the last two years. You said you had. Could you tell me more about that?”

It took a moment for Emi to answer. “It was my sister.”

“Your sister.”

“Ye-e-es.”

Honma quietly repeated, “Yes?”

Emi was clearly getting upset. “Listen, I’m going to hang up. I mean, how do I know this isn’t some kind of crank call? How do I know you’re actually detectives?”

Honma hesitated. Funaki grabbed the phone away from him and rattled off the number of the direct line to Investigation. “Got that? Here’s what I want you to do. Ring up and say our names. Ask if there are any detectives by those names on the force. Tell whoever answers that you need to get in touch with Inspector Honma immediately. Ask them to have him call you back as soon as he can. Only give a totally made-up name and phone number. Don’t give your real ones. The officer will contact us to say you called. Then we’ll call you back at your real number and give you the false name and number he tells us. Just to make sure there’s no mistake, that we are who we say we are. Fair enough?”

Emi agreed and hung up.

“When you’re in a hurry, take a side road,” Funaki said. He reached for a cigarette and lit up. …

Emi picked up on the first ring. Honma kept his voice as neutral as possible. “Hello? Is this Akiko Sato? At 5555-4444?”

“You’ve got to wonder about that girl’s powers of imagination,” Funaki whispered.

But Emi Kimura was in no mood for flip remarks. She burst into tears.

I love this scene (from Miyuki Miyabe’s All She Was Worth) as a model for social engineering. Imagine you’re Emi Kimura. You’re being asked about an emotional topic: the death of a loved one. The callers say they are cops and give you a way to authenticate them. The authentication check succeeds. You’re talking about difficult-to-confront, emotional material with an authority figure who has authenticated themselves successfully.

Consider:

  • After the person at the Investigations precinct confirms their names and the two detectives are able to relay the fake name and number back to you, are you now convinced that they’re actually cops?
  • What’s the issue with the authentication challenge they presented? What revision to the proposed process would you suggest to get better certainty about their identities?
  • If you did start divulging personal details to them, what wouldn’t you say? Or more importantly, how would you know if you’d already said too much or to the wrong people?
  • Now, pretend you’re actually cops who need to interview Emi as a witness to a potential crime. Time is of the essence. What could you do to better convince Emi that you’re legitimate?
  • And now, as an attacker. You’re a social engineer trying to find out details so you can steal Emi’s identity. What revisions, if any, would you make to the approach above?

Brief Reflections on Five Years at Amazon

As of Sept. 7th this year, I’ve worked at Amazon for five years. I’ve heard people say that a year at Amazon is worth two at most other companies, and while I’m not sure the exchange rate is correct, the principle is. I’ve learned more in the past five years than in over a decade of programming that came before it. I’ve worked with some of the smartest people in the world, and gotten to work on some amazing projects, only a few of which actually worked out. This is my half-assed attempt to distill those five years full of work and learning into a bulleted list.

Like all such endeavors, it is doomed to failure. That’s never stopped me before.

  • Strive to be afflicted with more important problems. Always seek out problems you don’t know how to solve. Eventually you’ll end up working on problems that no one knows how to solve.
  • Seek out people who know more than you. Learn from them, but don’t be afraid to challenge them. You’ll usually be wrong, but you’ll always learn. And sometimes you’ll be right. And then they will learn.
  • You are a terrible judge of your own abilities. Instead of wondering how far you’ve come, focus instead on where you’re going next.
  • Even the best make mistakes. Hold people to high standards, but be empathetic and forgiving of fallibility. And don’t be surprised when the people you idolize turn out to be less than perfect.
  • Have guts. Any group that punishes you for fighting the good fight isn’t worth being part of.
  • Figure out what you want to work on next before you’re ready to lay down your current project.
  • Admit that your works will be there long after you leave. Don’t let them hold you back. Build them with others in mind so that they don’t hold others back after you leave.
  • Don’t fear failure. Anyone who always succeeds is either a liar or is straining to hold themselves down in the bush league.

I don’t know how many more years I’ll choose to stay at Amazon. But I am absolutely certain that those years will make me a much better hacker.



Bastiat on Security

A Tale of Two Startups

Imagine two dev teams. They’re working on comparably scoped, comparably funded products in a startup environment. These products are competing for the same well-defined market niche.

Team A cranks out clean, beautiful interfaces, chock full of functionality. They release new features at a rate of about one a month, and they get rave reviews from most of their customers. The product they produce isn’t just adopted by customers, but loved by them. The company quickly grows to absurd valuations. The engineers who don’t retire are still wealthy enough that they don’t need to worry about their finances. The CEO gets fabulously wealthy, and the VCs get enough of a return to keep their scaly hides clad in a fresh human costume every day for years to come.

Six months after the IPO, Team A’s network is infiltrated by hackers who exfiltrate a bunch of customer data, including credit cards, SSNs, the name of their middle school crush, etc.

The employees win, the C-levels win, the investors win, they’ve got theirs. But the customers lose. Now everyone who cares to can apply for a mortgage in your name and knows you had the hots for Molly McMurtry in 6th grade. Unfortunately, the company’s executives can’t hear customer complaints over the sound of the workman installing a submarine bay in their yacht.

They offer public contrition and a year of credit monitoring. The stock price hit degrades a few of them from “Sickeningly Wealthy” to merely “Obscenely Wealthy” and the world moves on as before.

Team B is carefully analyzing the safest, most reliable way to store the data they’re collecting from customers. They spend hours implementing cert pinning in their smartphone app, and insist on out-of-band authentication. But that means they also need a whole separate account recovery flow in case someone loses their token or smartphone. And since security questions don’t provide security, they end up with a manual intervention in which you send a scan of your ID to a customer service rep. But then they need a secure way to transport the scans, and they fall down a security rabbit hole.

By the time they’ve mostly figured out how to authenticate their users, Team A already has happy customers tattooing the Team A logo on their asses and Team B’s VCs have run out of patience and eaten the CTO as a sign of displeasure.

So they push through, cut only a few of the least critical corners and put up a solution with 95% of the security they wanted. They congratulate themselves on the launch, and share a beer as the reviews roll in.

All of those reviews read “Meh, Team A’s product does this same thing better, prettier, and faster, with more features.”

The VCs eat the rest of the C-level execs, the engineers brush up their resumes, and all move on to other work (many of them at Team A). Everyone loses, except that the VCs got a good meal and the engineers got a Team B mug that they’ll get mocked for at their next company.

Broken Windows™

Everyone get out your +3 Veil of Rawlsian Ignorance. Which of these companies do you hope you end up working for? You might be a CEO, you might be a manager, you might be an engineer. But you’re gonna end up working for one of these two companies.

You either selected A, or you’re a venture capitalist that’s feeling a bit peckish.

But of course, you might be a customer. All things being equal, as a customer you want a secure product more than an insecure one, but as is usually the case, all things aren’t equal. Time and resources spent on security are time and resources not spent on building cool features or getting the design just skeuomorphic enough. Given a finite number of resources, time spent improving security generally means that your product is less attractive to paying customers. That means that your product is less likely to succeed at all, much less reach “my yacht has a sub bay” levels of success.

And as a customer you can’t accurately assess the security of the applications you use. Sure, Team B can tell you that they’re more secure, but every company claims they’re secure right up until they’re publicly humiliated by hackers. Sometimes they keep on claiming it for weeks, months, or years afterwards, despite no meaningful changes to their security posture.

So as a customer you don’t have a choice between “more features” and “secure”; you have a choice between two apps, one of which is better in form and function, both of which claim to be secure.

Noted security expert and glazier Frederic Bastiat wrote about this kind of thing over 150 years ago. The problem is that you have visible benefits and hidden costs. Everyone can see that Team A’s product is better. No one can see that Team B’s product is more secure. And lest you think that education alone can solve this problem, Bastiat wrote 150 years ago and is still being studiously ignored. If you think you can get customers to base their purchasing decisions on security arcana like presence of perfect forward secrecy or the proper usage of HSMs, then I’m afraid you’re mistaken.

So what hope do we have?

Eyemasks and Blinders

What if, at every stage of Team A’s development cycle, the technologies they used ensured that they were following at least sensible security practices? What if all of their Random() calls only used cryptographically secure PRNGs? What if the iFrames they used were automatically safe from clickjacking? What if their cloud service provider sourced hardware with built-in HSMs so that they didn’t need to manage crypto keys?
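(To make the Random() point concrete, a quick Python sketch of the difference: the standard `random` module is fast but predictable, while `secrets` draws from the OS CSPRNG. “Secure by default” would mean the easy, obvious call is the second one.)

```python
import random
import secrets

# Predictable: fine for simulations, dangerous for anything an attacker might guess.
weak_token = "%032x" % random.getrandbits(128)

# Cryptographically secure: what a session id or password-reset token should use.
strong_token = secrets.token_hex(16)
```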

Suddenly you have a world where Team A builds just as fast as before, their designs as gorgeous and drop-shadowy as ever, except the security posture of all of their features is inherently heightened. So that maybe when they do get breached, their database design and use of HSMs mean that the attackers can’t get to non-public data. Or the attackers can exfiltrate the DB but can’t decrypt it. Or maybe smarter networking algorithms mean that they didn’t have as many ready entry points as they would have otherwise.

Of course, this approach is by no means novel. People like Adrienne Porter Felt and Dan Kaminsky are doing an amazing job of pushing just this kind of usable security. But too often I see usability considered as just one aspect of security. And unfortunately I sometimes see it considered as an unimportant aspect of security.

In a very real sense, though, usability is the only aspect of security that really matters to most developers, and so it’s the only aspect of security that will actually help end users. Secure defaults and ease of use are the only way that we’ll ever win at the security game. As long as “being secure” and “building cool stuff fast” are in any way a tradeoff, the iron law of incentives will mean that we’ll lose far more often than we’ll win. And right up until they’re owned, our customers will have no way of knowing whether they were secure or not.

Remember: in markets incentives rule. In choice the Seen beats the Unseen. Security will win not when every dev team is security conscious, but when dev teams no longer need to be. So while it’s great that we have strong ciphers and HSTS and X-Frame-Options and everything, unless we’re streamlining these things to the point where they’re the default, we’re just building tools that the Next Big App will fail to use.
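(One small example of what that streamlining can look like: wiring headers like HSTS and X-Frame-Options into shared middleware so individual feature teams never have to remember them. Flask is used here purely for illustration.)

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def secure_headers_by_default(response):
    # Applied to every response automatically; feature teams never have to opt in.
    response.headers.setdefault("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
    response.headers.setdefault("X-Frame-Options", "DENY")
    return response
```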

So “easy” is good, but the only thing that actually wins is “just works.” Chrome gets heat occasionally for turning off insecure features. But Chrome (and their awesome security team) are doing the only thing that actually secures users: making it so that the default experience is as secure as possible. And they’re doing so as aggressively as they can. And the result is a meaningfully more secure web for anyone using Chrome.

We need to do a whole lot more of this. So the next time you’re working on a tool or a library, and you find a way to make it more secure, don’t just hide it behind a config or a --be-secure flag, sack up and make it the default. Better yet, make it the default and then rip out the broken, insecure code three months later. You will catch heat. But in the end, that heat is only coming from people who aren’t bothering to look for the unseen.

And if we are ever to win the security game, it’s up to us to force security on those who refuse to see, so that we can secure those who are unable to.
