The State of Incident Response by Bruce Schneier

This series of articles reflects a Black Hat talk by prominent computer security expert Bruce Schneier, in which he covers the current state of incident response.

I’m going to talk about incident response. I’m going to talk about it in kind of a meandering fashion. I’m going to talk about three trends in cybersecurity. I’m going to talk about five pieces of science: four from economics, one from psychology. I’ll talk a bit about the current state of incident response and try to tie this all together with some systems theory from the U.S. Air Force. That’s the plan for the next hour.

So, trends first. First trend is that we are losing control of our IT infrastructure. And I think this is really interesting to watch, because it’s really a function of the way technology is working right now. The first thing that’s happening is that the rise of cloud computing means we have a lot less control of our data: our email, our photos, calendar, address book, messages, documents – they’re all on servers belonging to Google, Apple, Microsoft, Facebook, these different companies. Probably in this room is going to be the greatest concentration of people who actually still have their stuff on their computers. Go out of this room, out of this conference – everybody else is going to have their data on someone else’s cloud. That’s the way the world is working. It’s true for individuals, and it’s also becoming true for organizations. Organizations now are outsourcing communications, CRM systems, applications, desktops – entire IT infrastructure – into the cloud.

As we do that, we lose control over the technical details of those things. We often can't affect the security of those systems; I mean, we can, depending on virtualization, but the core security we simply have to trust. We don't actually have visibility into that security. I mean, I can't tell you what kind of operating system Facebook uses. I have no idea, and I pretty much don't care.

Less user control with portable devices

Also, we’re increasingly accessing all this data through devices where we have much less control. We’re using these things (holding up a smartphone): iPhones, iPads, Android phones, Chromebooks – devices where we don’t have as detailed a control of the configuration as we do on our computers. I cannot run arbitrary software on this machine unless I break it, which, you know, normal people aren’t going to do.

And if you look at the operating systems, you look at Windows 8, you look at Apple’s Mountain Lion, now Yosemite – both of those are moving in the same direction of more vendor control, less user control. And again, corporations are using these things just as much as individuals are, because people want them, people like them. So, again, it gets to less control of the infrastructure. Organizations are doing this pretty much for solid financial reasons. It makes a lot of sense to outsource all this, it’s cheaper, it’s better, more reliable – all the reasons you do it. And in general we in society always outsource infrastructure. IT is catching up here. But as security people, this means we have much less control. That’s the first trend.

Second trend: attacks are getting more sophisticated. There's a lot of lousy news out there. What's reported on seems to be a function of what editors find interesting, and less a function of what's real. But we are seeing increasing attacker sophistication. This spans a variety of attackers – nation state, non-nation state, hobbyists, criminals – and we are seeing increasing sophistication across all levels.

I have debates on cyber war, and people are talking about some of these major attacks as examples of cyber war. I think that’s nonsense. I think what’s really going on and the really important trend is that we’re increasingly seeing war-like tactics being used in broader cyber conflicts. This is important. Technology spreads capability, especially computing technology which can automate attacks and capabilities.

And it used to be you could tell the attacker from the weaponry. And if you walked outside on the street and you saw a tank, you knew that the U.S. Army was involved, because only armies could afford tanks. The weaponry told you who the attacker was. That shortcut doesn’t work anymore. Everyone is using the same tactics, everyone is using the same technologies; they are all across the threat spectrum. A lot of this is “advanced persistent threat”, a buzzword that I started out hating and have come round to like, because I think it describes something really important about IT security that as an industry we’ve largely missed.

So you could think of attackers along two different axes: skill and focus. A low skill, low focus attacker – that's a script kiddie, that's an opportunist, that's someone who is attacking everything and anything, what I think of as the background radiation of the Internet. High skill, low focus – think of those as identity theft attacks, zero day exploits, the kind of stuff we also see pretty regularly. Low skill, high focus is your typical targeted attack. High skill, high focus – that's APT; that's advanced persistent threat.
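As a rough illustration only, here is a minimal Python sketch of that two-axis taxonomy; the function and its name are hypothetical, not anything from the talk:

    # Hypothetical sketch of the skill/focus taxonomy described above.
    def classify_attacker(skill: str, focus: str) -> str:
        """Map an attacker's skill and focus ('low' or 'high') to a category."""
        categories = {
            ("low", "low"): "script kiddie / background radiation of the Internet",
            ("high", "low"): "broad opportunistic attacks: identity theft, zero-day exploits",
            ("low", "high"): "typical targeted attack",
            ("high", "high"): "advanced persistent threat (APT)",
        }
        return categories[(skill, focus)]

    print(classify_attacker("high", "high"))  # -> advanced persistent threat (APT)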

The reason this is an important distinction is that the way you look at your defense is different. In a normal criminal attack, what matters is relative security. If your security is better than people around you, you are safe. The typical criminal wants a database of credit card numbers – it doesn't really matter where he gets them. It will be you or somebody else. If you're better, you're fine. Against an APT, the attacker for some reason wants you. And there it doesn't matter how much better you are than your neighbors; what matters is whether you are better than the attackers. And we all know in this room that if the attacker is sufficiently motivated, skilled and funded, they will get in. You all know somebody who does pen testing for a living, and you know they never fail to get in.

The question is how we deal with it. I think these are politically motivated attacks, and I define politics very broadly here: nationalistic, religious, ethical, against institutions, against governments. So you think of companies in politically charged industries – big oil, big tobacco, big pharma. You think of companies everyone loves to hate – used to be Microsoft, now it’s Google. And these are the targets for these politically motivated attackers.

Financially motivated hacking has also gotten more sophisticated, better funded, more international. I think what’s really happened is that cybercrime has finally matured as an industry. There’s now an entire supply chain in place for cybercrime: thieves, fences, mules; all the pieces are there, anything that you can’t do you can outsource, you can chain it all together. And the process from theft to monetization has become very fast and very efficient. That’s trend two.

Government-run defense becoming common

Trend three is the increased government involvement in cyberspace. Long gone are the days when governments didn't understand the Internet and left it alone. The regulatory environment is getting much more sophisticated, both domestically and internationally. There are a lot more rules involving personal data, especially outside the U.S. On the attack side, there's a lot of nation state-sponsored espionage and attack. A few months ago we were seeing attackers from China compromising U.S. corporate targets. We are seeing nation state attacks conducted by the U.S. and by other countries. And a lot of the time, organizations are really collateral damage, sort of in the way more than anything else. There's a lot more talk about critical infrastructure, which is leading to more government-run defense. As countries realize that their power grid and their transportation infrastructure are all dependent on the Internet, they're going to start saying "Hey, we need to be in charge of its defense." We're going to see that more and more.

Also, we have a cyberwar arms race going on. There’re 27 countries now with cyber commands. They are all building cyber weapons, they are all stockpiling vulnerabilities and they’re all looking at each other with suspicion and then doubling their efforts to make sure they are stronger than before. And I think this is increasingly destabilizing.

Alright, those are the trends.

Now I want to give you some IT economics that’s relevant to security. I have four pieces of economics that matter for IT and matter for security, and I think the more we understand them the more we understand the weird stuff that happens in our industry.

Metcalfe’s law

First one is network effect. You've all heard of Moore's law. There's a lesser known law called Metcalfe's law, which says that the value of a network equals the square of the number of users. Basically, the value of a network equals the pairwise connections between the nodes. This is true for real networks – phones, email, SMS, Skype, Facebook users. And the intuition is pretty simple: one fax machine is useless, two are boring; you have a million – suddenly you have a network. Same for email, for everything else. The more people that have the system the more valuable the system is. The more people on Instagram the more you want to be on Instagram.
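To make the arithmetic concrete, here is a tiny Python sketch of the pairwise-connection counting behind Metcalfe's law; it is just an illustration of the formula above:

    # Metcalfe's law: a network's value grows roughly with the square of its users,
    # i.e. with the number of possible pairwise connections, n * (n - 1) / 2.
    def pairwise_connections(n: int) -> int:
        return n * (n - 1) // 2

    for n in (1, 2, 1_000_000):
        print(n, "users ->", pairwise_connections(n), "possible connections")
    # One fax machine is useless, two are boring; a million and suddenly you have a network.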

This is also true for virtual networks: the network of Windows versus Mac users or iOS versus Android users. The more people in the virtual network the more apps, the more user groups, the more stuff happens. This phenomenon means you tend to have a single dominant player in the marketplace. Big get bigger because being big is valuable to the new users. Think of Facebook, think of Skype, think of Windows. Most people are on Windows because most people that most people know are on Windows. People are on Mac because people they know are on Mac.

Second piece of economics that's relevant is the notion of fixed costs versus marginal costs. When you look at any product, there are two types of cost: the development cost to create the thing, and the marginal cost to make each individual unit. In IT, most of the cost is in development. So, for this wonderful hat of mine there's some design cost and there's the cost of making another one. But for this thing – it's a random disk up here – the cost of the first one might be many millions of dollars, and the cost of the second one is basically free: 10 cents, it's a DVD.

Big get bigger because being big is valuable to the new users.

This weird economics, where all the cost of the thing is front-loaded in the development, means that stealing the result of the development becomes a very powerful attack. Think of movies, think of music, think of pharmaceuticals. Being able to make a lot of these (disks) while someone else paid to make the first one can be very valuable. And this is why you see a lot of security going into making this not happen – fundamentally, mechanisms that break the market, because we have to artificially make it harder to make a lot of these so that whoever made the first can recover his money.

In other cases, the high fixed cost becomes a barrier to competition. Think of Google Maps, or actually Google Street View is a better example. Once someone drives a car around the entire planet taking pictures, it's much harder for a competitor to do the same thing. And then you have these dynamics where vendors will cut prices dramatically to drive out competition. So if the maker of this thing (disk) sees a competitor coming, they can drop the price to almost nothing, where the competitor can't compete because he hasn't recovered his fixed cost yet.
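As a purely illustrative calculation (the dollar figures here are invented, not from the talk), the front-loading shows up as soon as you compute the average cost per unit:

    # Hypothetical numbers: high fixed (development) cost, tiny marginal cost per copy.
    fixed_cost = 5_000_000     # assumed cost to produce the first copy
    marginal_cost = 0.10       # assumed cost of each additional copy

    for units in (1, 1_000, 1_000_000):
        average_cost = (fixed_cost + marginal_cost * units) / units
        print(f"{units:>9} units -> average cost per unit: ${average_cost:,.2f}")
    # Nearly all of the per-unit cost is development, which is why copying the result
    # and predatory price-cutting are such powerful moves in these markets.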

Switching costs can be high

Third piece of IT economics that matters a lot is the notion of switching costs. Switching cost is the cost to switch to a competitor’s product. In some cases switching costs are very low. If you drank a Coke today and you didn’t like it, you can drink a Pepsi tomorrow. The switching cost is zero. That means that Coke has to taste good, or you’re going to switch. Sometimes switching costs are high. If AT&T pisses me off today, I’m likely going to keep them tomorrow, because getting a different phone carrier is expensive, it’s time-consuming, it’s annoying. And in IT, switching from one product to another can be a lot of things: retraining staff, rewriting applications, converting data.

What's relevant to us is that the higher the switching costs, the more a company can piss you off before you'll switch. In industries where switching costs are high, customer service is lousy, because you have what's called lock-in. Customers are locked in. And this is why you have companies in our industry doing everything they can to keep switching costs high. That includes proprietary file formats, that includes non-compatible accessories, not letting you take your data with you when you leave. Apple wants it to be really hard for you to take your music with you when you leave iTunes. All game companies want it to be very expensive, or impossible, for you to run one set of games on someone else's console. A good decade and a half ago, the cell phone companies fought cell phone number portability really bitterly, because the whole cost of reprinting your business cards and telling people your new phone number kept the switching costs high.

All these three things tend to lead to a dominant market structure. Big get bigger. And big stay big. It’s not guaranteed, but these are trends in that direction.

When the buyer can’t tell the difference between a good product and a mediocre product, the mediocre products win.

Fourth piece of IT economics that's especially relevant to security is the notion of a "lemons market". This is work by an economist named George Akerlof. He actually won a Nobel Prize for this. What he studied was markets where there was asymmetric information between the buyer and the seller, specifically markets where the seller knew a lot more about the product than the buyer. The specific example he used was the used car market. And this is his thought experiment: suppose there is a town with 100 used cars for sale; 50 of them are good cars worth $2000, and 50 of them are lemons worth $1000. In that market, the average price for a car is $1500. And in that market, all the lemons sell and none of the good cars sell.
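The arithmetic of that thought experiment is simple enough to write out; here it is as a short sketch, using the numbers from the talk:

    # Akerlof's lemons-market thought experiment, as described above.
    good_cars, good_value = 50, 2000
    lemons, lemon_value = 50, 1000

    # A buyer who can't tell the cars apart will only pay the expected value:
    expected_price = (good_cars * good_value + lemons * lemon_value) / (good_cars + lemons)
    print(expected_price)  # 1500.0

    # At $1500, selling a $1000 lemon is a win, but selling a $2000 good car is a loss,
    # so good cars are withdrawn from the market and only lemons trade.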

'Lemons market' principles apply to IT economics as well

And so, what he proposed is that in a market where the buyer can't tell the difference between a good car and a lemon, lemons drive the good cars out of the market. When the buyer can't tell the difference between a good product and a mediocre product, the mediocre products win. Since he came up with this theory it's been verified experimentally, it's been verified observationally – this is true. This is what happens. This is a lemons market. And I think this explains quite a lot about IT security. If you think about the antivirus companies of the 1980s, the firewalls of the 90s, the IDSes of the 2000s, the companies that won weren't the ones with the best products. Because the buyers couldn't tell the difference. I can hold up two encryption products that use the same algorithms, the same protocols; that make the same security claims; one is really good, one is kind of mediocre; you can't tell the difference. What are you going to buy? You're going to buy the cheaper one.

The real problem here is that the requirements are non-functional. It's actually easy to tell if, I don't know, a word processor does italics – you hit the Italics button and see what happens. That is a functional requirement that's easy to test. Whether an encryption product is secure is much harder to test. So security, availability, reliability are what I think of as the "why" requirements. Buyers have trouble telling the difference.

This is why you see a lot of effort going into what economists call "signals". And signals are ways that sellers signal to the buyers that their products are actually not lemons. In the used car market, they tend to be warranties: take the car home, drive it for a month, if you don't like it bring it back and get your money back. In IT, signals tend to be certifications, awards, references – all the ways that our bosses buy IT products, not by knowing what they're doing but by finding someone else that they can rely on. Remember, in the 1960s no one ever got fired for buying IBM – that's what that meant: "I don't know what to buy, but they all buy IBM; I'm just gonna buy IBM." Or today's best practices: "I don't know what to do, but everyone says 'Do this,' so I'm gonna do this." And that's a lemons market.

Alright, that’s the economics.

Daniel Kahneman, Nobel Prize winner for prospect theory research

Now my one piece of psychology. I am going to try to explain security in terms of one psychological theory. And the theory is “prospect theory”. You’ll also hear this called “loss aversion”, “framing effects”. Basically, it is a way that we as humans look at risk. The quintessential experiment in prospect theory is to take a room full of subjects, you know, in the beginning it’s usually college undergrads because that’s who you got; you divide the room in half and you ask one side of the room to make a choice, and the choice is between $1000 – here’s the cash – or a coin flip at $2000. Kind of appropriate experiment for Las Vegas. And if you survey a room full of people you will find that about ¾ will take the sure thing. Despite everything Las Vegas says, more people would rather have $1000 than a coin flip chance of $2000.

The second half of the experiment is you take the other half of the room and you give them a very similar but importantly different choice. I can either take $1000 from you right now – take it from your bank account – or I will let you have a coin flip chance at me taking $2000 or nothing. And it turns out, if you ask a room full of people to make that choice, about ¾ of them will take the chance. Now, this is actually really interesting. The people who came up with this theory also won a Nobel Prize in economics, even though they were psychologists, freaking out everybody because economists said this is impossible, yet this has been proven again and again. And it's a very robust result across ages, across cultures, done with little money, with real money. This experiment has been done a lot.

As a species, we are risk-averse when it comes to gains and risk-seeking when it comes to losses.

But basically, as a species, we are risk-averse when it comes to gains and risk-seeking when it comes to losses. And it's not just us; someone figured out how to do this experiment with other primates, and we kind of all are. There have been a bunch of explanations for this. I think the best one comes from evolutionary psychology. This is the basic story: if you imagine yourself as an individual living at the edge of survival, even a small win means you'll live to see tomorrow. So, for this half of the room, if they take coin flips at $2000 or nothing, half of them will get nothing, and those die. But if they take a sure thing of $1000, they'll all live. But for the other half of the room, also living at the edge of survival, a sure loss of $1000 means you're all dead, while a coin flip at losing $2000 means half of you lose nothing and survive. So, our brains are primed to have this bias. And the really interesting part of this experiment is you can take the exact same choice, frame it in the language of gains or the language of losses – and you still see the same result. Even just a semantic difference causes the change.
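One way to see the point is that the expected values are identical in both framings; what differs is survival at the edge, as in the evolutionary story above. A minimal sketch, using the talk's numbers:

    # Both framings have the same expected value: $1000 sure vs. a 50/50 shot at $2000.
    print(1000 == 0.5 * 2000)  # True, yet the choices people make flip with the framing.

    # The evolutionary-psychology story: at the edge of survival, the survival odds differ.
    survival = {
        "sure gain of $1000":      1.0,  # everyone lives
        "coin flip to win $2000":  0.5,  # half win nothing and die
        "sure loss of $1000":      0.0,  # everyone dies
        "coin flip to lose $2000": 0.5,  # half lose nothing and survive
    }
    for choice, p in survival.items():
        print(f"{choice}: survival probability {p:.0%}")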

What this means is security is hard to sell, because security is always a small sure loss ("buy my product") versus the risk of a larger loss (what will happen if you don't have the product). And you have probably experienced this when you went to your boss and said: "Hey, we need to buy this security thing because we are at risk of this bad thing." And your boss looks at you and says: "We didn't have that product last month and we didn't have the bad thing last month; maybe we should take the chance." Gambling on losses is how we work.

Unity of protection, detection and response

Okay, so how does this affect incident response? I told you I’d get to this eventually… We all know that security is a combination of protection, detection and response – three steps. And we need response because protection isn’t perfect. We need it more and more today especially, because 1) we’ve lost control over our computing environment, there’s a lot of protection we can’t do; 2) attacks are becoming more sophisticated, we need more response; 3) we’re increasingly affected by other people’s fights; and 4) we’re living in a world where companies will naturally under-invest in protection and detection.

In the 1990s, I used to say “security is a process, not a product.” And by that I meant something very strategic. What I meant is that you can’t buy a bunch of stuff and be done; you have to continually evaluate your security and continually re-evaluate, re-purpose your stance. Tactically, security becomes a product and a process. Really, it’s people, process, and technology. What’s changing is the ratios.

‘Usability guru’ Lorrie Faith Cranor

The conventional wisdom in IT security is that people don’t generally help, that people are a liability and people need to be removed from the system. I have a quote from usability guru Lorrie Faith Cranor, she writes: “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” And we all know this: people are a problem, people are the biggest security problem. And, you know, we’ve been doing pretty well at this: entirely automated prevention systems – antivirus, patching; lots of automated and semi-automated detection systems. We’re pulling people out of the loop left and right, and we’re doing pretty well.

The problem with response is that you cannot fully automate it; you can’t remove people from the loop by definition, it’s response. And if you think about it as you move from protection, detection to response, the people-to-technology ratio goes up, you need more people and less technology, for a whole bunch of reasons. All attacks are different; all networks are different; all security environments are different; and all organizations are different; the regulatory environment of the organization it’s in is different; the political and economic considerations in organizations are different.

Those differences are often more important than the technical considerations. This affects the economics of IT security. The products and services for response are different. There are fewer network effects, there are much higher marginal costs, there are lower switching costs, and there's less of a lemons market. This will be interesting for us, because it means that unlike a lot of other areas in IT security, better products, better services, better companies will do better. There's less of a first mover advantage, there are far fewer natural monopolies, and again, this is a new thing for us in the industry, this is going to be a surprise. And I think it's a good thing. I think it's something we are all going to benefit from.

Alright, so, people, process, and technology. The key here is making it scale. Here's the follow-on sentence from Lorrie Cranor; she wrote: "However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. …" She means we don't have robots yet. "… In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully." So in places where you can't remove humans from the loop, you have to build technology to support humans in their critical tasks. Think of any emergency response system – think of police, think of fire, think of medical, think of military. That's what technology does: technology supports the humans who are critical in the response system.

In IT security, in response, we need technology that aids people, and not the other way around. And the goal here is resilience. Very strongly, the goal here is to build resilient systems. We are not going to build impenetrable systems, and we shouldn’t build fragile systems. And a lot of the response strategies echo resilience: mitigation, survivability, recoverability, adaptability. These are all ways to achieve resilience. And because response is real-time, because response is the closest thing we have in IT to dogfighting, this is all about feedback loops.

OODA loop, in a nutshell

And there is a really nice piece of systems theory coming from the U.S. Air Force that talks about this; actually it comes from dogfighting. It’s a notion of “OODA loops”. OODA loops are starting to be talked about in IT. I think there’s a danger we are going to overuse this concept, but I think it’s extraordinarily valuable and something we need to think about. OODA stands for “observe, orient, decide, act”, and it is a cycle. This was developed by Air Force military strategist John Boyd, and he developed it for thinking about dogfights, that a pilot in a dogfight is continuously going through this OODA loop in his mind – observe, orient, decide, and act. This is the process of collecting, evaluating, and then doing. And this type of process is widely applicable in any real-time adversarial situation. You’ll see articles that talk about not only airplane dogfights, but strategic military planning, business competition, anything else.

And it is, by definition, an iterative process. Someone in this kind of situation is continually going through OODA loops in their head. And what Boyd observed is that speed is essential here: if you can make your OODA loop faster than your adversary's, if you can get (the phrase he uses) inside the other person's OODA loop, then you have an enormous advantage. You can respond effectively faster than he can react to your response.

There’s some good writing about applying this to cybersecurity and incident response. There are papers – I recommend just googling the term and wandering around a bit. The reason I like this framework is it gives us a way of discussing effective tools for incident response. Really, what this talk is at this point is a plea for tools. We need good IR tools to facilitate all of these steps. And we can break them down.

First step is “observe” – knowing what’s happening on our networks in real time. That’s real-time threat detection from IDS’s, that’s log monitoring, log analysis tools, network performance analysis tools, network management tools, physical security information – pretty much everything. We need to be able to get all of that data in a place where it can be monitored in real time, both before and during an attack.

“Orient” – understanding what this information means in context. And context is critical in any response. So, in the context of the organization – what’s happening in the company at the time? In the context of the greater Internet community – what kind of malware is out there? What kind of zero-days are we just seeing? What kind of geopolitical situation is going on? Is there some new vulnerability that was just discovered or announced? Is the organization rolling out a new piece of software? Are they planning layoffs? Is there a merger? Has the organization seen attacks from this IP address before? Has the network been opened up to a partner? So, you are thinking of data feeds from the news, from intelligence feeds, from the rest of the organization, just ways to put what’s going on in context.

We need good IR tools to facilitate all of these steps.

Third is “decide” – just figuring out what to do in the moment. This is actually hard. Who has the power to make a decision? How do they make the decision? What sort of executive input is required? Is there marketing input? Is there PR input? Is there legal input? How do you justify the decision? Because after the fact you'll have to stand up in front of some investigative body, either inside your company or in some lawsuit, and justify why you did what you did. That's all part of the decision process.

And then “act” – being able to make changes quickly on the network. And again, here a lot of organizations fall down, because the people in the IR team might not be authorized to make changes all the way over there, they might not have the right authorities. And we won’t know what authorities they need until this all starts. So it’s going to require broad access, continual training. We need tools for all of these things. We need tools that are powerful, flexible, intuitive, tools that aid people. And we need a lot of them. This isn’t one thing. This is a whole ecosystem of incident response products and services that do this basket of things.
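To show how those four stages fit together as an iterative loop, here is a deliberately skeletal Python sketch; every function here is a hypothetical placeholder, not any particular product's API:

    # Hypothetical skeleton of an OODA loop for incident response.
    def observe():
        """Collect logs, IDS alerts, and network telemetry in real time."""
        return {"alerts": [], "logs": []}

    def orient(data):
        """Put the raw data in organizational and threat-intelligence context."""
        return {"data": data, "context": ["new vulnerability announced", "merger in progress"]}

    def decide(context):
        """Choose a course of action and record who approved it and why."""
        return {"plan": "isolate the affected network segment", "approved_by": "IR lead"}

    def act(plan):
        """Make the change on the network and log what was done."""
        print("executing:", plan["plan"])

    def ooda_loop(incident_still_open):
        # The faster each pass completes, the further inside the attacker's
        # own loop the response team gets.
        while incident_still_open():
            act(decide(orient(observe())))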

Incident response is getting more important. It's getting more important for a lot of reasons. Attacks are getting more sophisticated. The regulatory environment is getting more complicated. Litigation is getting much more common. Geopolitical factors are major. And again, organizations underspend on prevention. The neat thing, and the reason I'm really optimistic about this in the next few years, is that IR software is not going to be like the rest of security software; the requirements – none of that stuff I just listed – none of them are those non-functional "why" requirements. They are all stuff that products and services have to do. And this means the good stuff is going to beat out the mediocre stuff.

Co3 Systems’ vision of end-to-end incident response

And we as engineers need to start building the good stuff, because it's important. I've started doing this. I have a company, Co3 Systems, and I'm trying to build a management platform to coordinate incident response. That's just one piece of it. I think it's an important one and a cool one, but a lot of things have to feed into it, and it has to feed into a lot of things – it all has to work together. The goal here is to bring people, process, and technology together in a way that hasn't been done before, in a way that is going to mirror less IT and more things like generic crisis management. We have a lot to learn from other disciplines that have been doing this sort of thing for decades. And this is how we're going to defend against threats, this is what's going to work.

So, with that, I’m happy to take questions. Or not, but that seems odd. Alright, so, the way this works is one person has to raise their hand, and then everyone else will. So someone has to sort of be the guinea pig.

Question: What do you think needs to be done for companies to start realizing they are under-investing in incident response?

Answer: I don’t know. I mean, companies have been under-investing for a long time, and there have been lots of frameworks. I think, fundamentally, we will always under-invest. Even if you think of politics, it is much more politically effective to respond to the attack than to invest before the attack. You get credit for doing something, it’s focused, it’s targeted, people are impressed. We will under-invest always, and then after the fact try to fix things. I think it’s something very fundamental about us as people and what we are going to do. So I don’t have a lot of hope that anything will change that to any real extent.

Question: With regard to keeping people out of the loop, especially when it comes to incident response, is there any way to completely do without humans?

Incident response can be outsourced

Answer: I don't think you can automate incident response, because it's too strategic; it's not like a patch that you just install. It depends so much on human things, it depends so much on coordination. You can automate some of the inputs and the outputs. Getting all the data in – you can automate that. And when you figure out what to do, you can automate doing it. But you still need that step of analyzing it and figuring out what to do, and that requires a human brain. So, pieces can be automated, and it makes sense to do it because speed is essential here. But the actual process of IR will always require a human team. Companies might outsource that team; there are lots of companies out there that will handle your IT infrastructure, including incident response. But they are going to have to coordinate with you, because it's your business. So, no, I don't think we can ever pull humans out of the loop here. Alright, when we invent AI then maybe, but until then – no.

Question: What ways are there to strike back in the context of incident response?

Answer: I'm really not in favor of strikeback. Vigilante justice tends not to work well in our society. That being said, I think we are going to see more of it, because companies are getting fed up. But then you are going to have attackers mask their origins: "I'll attack via his network, and you strike back against him – that's great!" So, we have a lot of trouble attributing attacks. We can identify attackers sometimes, but it takes weeks and months. The great reports from Mandiant take a while to generate. You are not going to figure out who attacked you in 10 milliseconds, which is what you'd need for strikeback. So, I do not think that's going to be the answer. I think it's a dangerous trend. I'm really not in favor of it, because it's too easy to get wrong and to go after innocents.

Question: Are there any government requirements for incident response?

Answer: There are a lot of questions about government requirements for security in general. I think that is coming, especially in industries that matter. Just like there are government requirements for food safety, we are going to see government requirements for data safety. It will depend, you know, U.S. versus Europe, on politics; but that sort of thing – minimal required security – is something that's very common in other areas, and I think we will get it in IT eventually. I'm not sure it's a good thing, mind you, but I do think it's coming. I think the era of governments saying: "You know, let industry handle it, it will all work out okay," is coming to an end, and there is going to be more government involvement. Whether it's going to be tax incentives or something more like FTA- or FTC-style penalties and rules will actually depend on politics. But I do think more government involvement is coming.

Question: Have you seen effective patterns of cross-functional incident response teams?

Answer: If you go to some of the incident response conferences, you hear about companies that are doing this very well. They tend to come from industries that have had incidents since way before computers: oil and gas, or companies that have to deal with hurricanes, where incidents happen and they are good at convening teams ad hoc, figuring out what to do and doing it. The IT is just the tech add-on to a much broader type of system. So yes, there are a lot of places that we in IT can learn from, people who have been doing this forever, to get to the same level. And again, I think that's a really good thing, because we are going to see a lot of good cross-fertilization between disciplines.

Question follow-up: Any examples of references?

Answer: I don’t want to name companies, just because I don’t know if I can, and I forget, and if I get it wrong people get mad at me.

Question: Can you comment on how cloud companies see their security?

Answer: It's kind of interesting to watch. The bigger cloud companies' security model is "trust us". You know, you outsource your email to Google and you go to Google and say: "We need to audit our mail system," and they say: "No." And if you say: "Well, we're not going to use you," they say: "Okay." So, their model is "take it or leave it", and they are big enough, they can do that. Some of the smaller companies do have some ability to audit. I think what is going to happen is some kind of waterfall audit model. The way this matures is that if you host your infrastructure – I'm making this up – on Rackspace, Rackspace will have an audit that you can get a copy of and staple into your audit. And then you hand that to the people who are outsourcing their stuff to you, and they staple it into their audit. I think that's the way it will go. It's going to take a while to get there, but I see no other way to make this work. There are going to be audit requirements, but no company is going to allow someone else to come in and audit their stuff. But they will give you the piece of paper saying: "We've been audited and here are the results." You can use that to satisfy your regulators. I think it's a long time coming, but that's the way this has to end; I don't see any other way to make this work.

Question: How can an enterprise compete with a threat actor that has superior resources?

Answer: They probably can’t. The attacker is going to get in. The question is what happens afterward. And it’s not always a matter of “who has more money wins”, it has to do with risk aversion, it has to do with the legal environment. Often I can compete with a hacker that’s better than me by calling the police – that’s a way I can temporarily get some more resources. So we have a lot of mechanisms in society for dealing with attackers that are more skilled, scarier, bigger and better than us. We can actually make them apply here. But in general, I think an attacker that has more resources is going to get in. The question is “Now what?” How do we kick them out? How do we regain security? I mean, how are we resilient against that? That’s the whole notion of response. We are responding because the attacker has broken into our network. We are not responding to a potential, we are responding to an actual. And there are lots of things that we can do. It’s not a matter of saying: “Well, they’re going to get in, so we’re done.” It’s a matter of containing some of the damage.

Question: I’m curious what you imply when saying “resilience”?

Answer: By “resilience” I mean several things. How do we mitigate the threat? Survivability – how do we survive, how do we make sure that we don't just fall over dead? How do we recover? How do we kick them out and regain security? This can be really difficult. And how do we adapt? How do we continually improve ourselves to make our security better? So I think of resilience as a whole basket of, again, these different "why" requirements that keep us secure even in the face of "the bad guys are going to get in". And there are a lot of examples. As a species, we are resilient to disease. Disease will kill some of us and won't kill all of us, because we have genetic diversity. That is the security mechanism by which we are resilient to a disease. That's not actually a good strategy for IT. I mean, it's not going to be good to say: "10% of you will be hit by this virus, but the rest of us will survive," – that's kind of lousy. But that is what species do. So I sort of think of a broad basket of ways that we can build security that can deal with the fact that there are threats that will penetrate our defenses.

Question: A couple of years ago there were two bug bounty programs. Do you see that as part of this trend?

Bug bounty programs are fairly useful

Answer: Bug bounty programs, I think, are interesting and useful. I want them to help more. Unfortunately, both the military market and the black market pay more than the bug bounties. I think these programs attract different sorts of people, and they certainly help. I like seeing it; it's companies taking this stuff seriously. But we are just seeing more and more cyber weapons manufacturers finding bugs and then not turning them in, because they can make more money selling them to governments around the world. So I like the programs, I think they're good, I'm just not really holding out hope that they're going to make that much of a difference. It's frustrating.

Question: In terms of the prospect theory you mentioned, how do we help change the minds of people?

Answer: This is hard. The way you overcome psychological biases, in general, is to know about them and compensate. If you know that you are risk-averse when it comes to gains and risk-seeking when it comes to losses, you can notice the behavior in yourself and compensate for it. Just like there are a whole lot of psychological biases that a casino floor is preying on; if you know them you can walk by those tables, look at them and say: "Well, those are attacks on people who don't know math," and keep going. The way we overcome them is by knowing them, and this is hard. Fear of terrorism – there's an enormous number of psychological biases that terrorists prey on, that we as a country have fallen completely for. It's talking about them that gets us to think about them and hopefully get beyond them. It is possible. And it's not just IT. This is people trying to sell insurance; this is people trying to sell burglar alarms. If you ask anyone who sells burglar alarms, they will tell you that the only people who buy burglar alarms are people who have just been robbed, or whose neighbors have just been robbed – that's when they buy them.

Question: How do false positives tend to affect incident response systems?

Answer: False positives do kill a lot of these systems, but we're getting better at that. And that's detection. Detection still has a lot of room for good technology. Response is what you do about them; what you need are ways to clear false positives quickly. I'll give you two examples. One is airport security. You walk through the metal detector, it goes off, and the screener can very quickly figure out what's going on. If it's a false positive, they will figure that out quickly. On the other hand, when the NSA sends a tip to the FBI – they sent 10,000 over the five years after September 11th – it took hundreds of man-hours to clear every one of those false positives; they were much more expensive. So the trick, really, is a fast secondary test that lets you clear the false positives. And if you can do that, then you can deal with a much faster and noisier primary test. This is, basically, how we deal with medical tests. So, that's the sort of thing we need.
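As a purely illustrative back-of-the-envelope calculation (these numbers are invented, not from the talk), a cheap secondary test is what makes a noisy primary detector affordable:

    # Hypothetical numbers: a noisy primary detector plus a fast secondary check.
    alerts_per_day = 10_000
    false_positive_rate = 0.99           # assume 99% of alerts are noise

    fp_alerts = alerts_per_day * false_positive_rate
    minutes_manual = 60                  # assumed analyst time to clear one alert by hand
    minutes_triage = 1                   # assumed time for an automated secondary check

    print("analyst-hours per day, manual triage:", fp_alerts * minutes_manual / 60)
    print("analyst-hours per day, fast secondary test:", fp_alerts * minutes_triage / 60)
    # The same logic as airport screening or medical screening: you can tolerate a noisy
    # first test because the second test clears false positives cheaply.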

Thank you very much!
