Bruce Schneier takes questions from the Black Hat attendees about issues related to incident response such as under-investing in defense, striking back, etc.
So, with that, I’m happy to take questions. Or not, but that seems odd. Alright, so, the way this works is one person has to raise their hand, and then everyone else will. So someone has to sort of be the guinea pig.
Question: What do you think needs to be done for companies to start realizing they are under-investing in incident response?
Answer: I don’t know. I mean, companies have been under-investing for a long time, and there have been lots of frameworks. I think, fundamentally, we will always under-invest. Even if you think of politics, it is much more politically effective to respond to the attack than to invest before the attack. You get credit for doing something, it’s focused, it’s targeted, people are impressed. We will under-invest always, and then after the fact try to fix things. I think it’s something very fundamental about us as people and what we are going to do. So I don’t have a lot of hope that anything will change that to any real extent.
Question: With regard to keeping people out of the loop, especially when it comes to incident response, is there any way to completely do without humans?
Answer: I don’t think you can automate incident response, because it’s too strategic; it’s not like a patch that you just install. It depends so much on human judgment, it depends so much on coordination. You can automate some of the inputs and outputs. Gathering all the data – that you can automate. And once you figure out what to do, you can automate doing it. But you still need that step of analyzing it and figuring it out, and that requires a human brain. So pieces can be automated, and it makes sense to do so because speed is essential here. But the actual process of IR will always require a human team. Companies might outsource that team; there are lots of companies out there that will handle your IT infrastructure, including incident response. But they are going to have to coordinate with you, because it’s your business. So, no, I don’t think we can ever pull humans out of the loop here. Alright, when we invent AI then maybe, but until then – no.
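The division of labor described above – automate the data gathering and the remediation, keep a human in the decision step – can be sketched as code. This is a minimal illustration with hypothetical function names, not a real IR tool:

```python
# Minimal sketch of a human-in-the-loop incident response flow.
# All names and data here are hypothetical, for illustration only:
# the gathering and execution steps are automatable, while the
# decision in the middle is a deliberate hand-off to a human team.

def gather_indicators(alert):
    """Automated: pull logs, netflow, and endpoint data for an alert."""
    return {"alert": alert, "logs": ["..."], "hosts": ["10.0.0.5"]}

def execute_plan(plan):
    """Automated: carry out the remediation the humans decided on."""
    for action in plan:
        print(f"executing: {action}")

def respond(alert, human_decision):
    """The middle step -- deciding what to do -- stays with people."""
    evidence = gather_indicators(alert)   # fast, automated
    plan = human_decision(evidence)       # slow, strategic, human
    execute_plan(plan)                    # fast, automated

# A stand-in for the human analyst reviewing the evidence and choosing.
respond("suspicious beaconing",
        lambda ev: [f"isolate {ev['hosts'][0]}", "rotate credentials"])
```

The point of the structure is that only the first and last steps scale with machine speed; the `human_decision` callback is the irreducible part.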
Question: What ways are there to strike back in the context of incident response?
Answer: I’m really not in favor of strikeback. Vigilante justice tends not to work well in our society. That being said, I think we are going to see more of it, because companies are getting fed up. But then you are going to have attackers mask their origins: “I’ll attack via his network, and you strike back against him – that’s great!” We have a lot of trouble attributing attacks. We can identify attackers sometimes, but it takes weeks and months. The great reports from Mandiant take a while to generate. You are not going to figure out who attacked you in 10 milliseconds, which is what you would need for strikeback. So I do not think that’s going to be the answer. I think it’s a dangerous trend. I’m really not in favor of it, because it’s too easy to get it wrong and to go after innocents.
Question: Are there any government requirements for incident response?
Answer: There are a lot of questions about government requirements for security in general. I think that is coming, especially in industries that matter. Just like there are government requirements for food safety, we are going to see government requirements for data safety. It will depend, you know, U.S. versus Europe, on politics; but that sort of thing – minimal required security – is very common in other areas, and I think we will get it in IT eventually. I’m not sure it’s a good thing, mind you, but I do think it’s coming. I think the era of governments saying, “Let industry handle it, it will all work out okay,” is coming to an end, and there is going to be more government involvement. Whether it’s tax incentives or something like FDA or FTC rules and penalties will depend on politics. But I do think more government involvement is coming.
Question: Have you seen effective patterns of cross-functional incident response teams?
Answer: If you go to some of the incident response conferences, you hear about companies that are doing this very well. They tend to come from industries that had incidents long before computers: oil and gas, or companies that have to deal with hurricanes, where incidents happen and they are good at convening teams ad hoc, figuring out what to do, and doing it. IT is just the technological add-on to a much broader system. So yes, there are a lot of places that we in IT can learn from, places that have been doing this forever. And again, I think that’s a really good thing, because we are going to see a lot of good cross-fertilization between these fields.
Question follow-up: Any examples of references?
Answer: I don’t want to name companies, just because I don’t know if I can, and I forget, and if I get it wrong people get mad at me.
Question: Can you comment on how cloud companies see their security?
Answer: It’s kind of interesting to watch. The bigger cloud companies’ security model is “trust us.” You know, you outsource your email to Google, and you go to Google and say: “We need to audit our mail system,” and they say: “No.” And if you say: “Well, we’re not going to use you,” they say: “Okay.” So their model is “take it or leave it,” and they are big enough that they can do that. Some of the smaller companies do have some ability to audit. I think what is going to happen is some kind of waterfall audit model. The way this matures is that if you host your infrastructure – I’m making this up – on Rackspace, Rackspace will have an audit that you can get a copy of and staple into your own audit. And then you hand that to the people who are outsourcing their stuff to you, and they staple it into their audit. I think that’s the way it will go. It’s going to take a while to get there, but I see no other way to make this work. There are going to be audit requirements, but no company is going to allow someone else to come in and audit their stuff. They will, however, give you the piece of paper saying: “We’ve been audited, and here are the results.” You can use that to satisfy your regulators. It’s a long time coming, but that’s the way this has to end; I don’t see any other way to make it work.
Question: How can an enterprise compete with a threat actor that has superior resources?
Answer: They probably can’t. The attacker is going to get in. The question is what happens afterward. And it’s not always a matter of “who has more money wins”, it has to do with risk aversion, it has to do with the legal environment. Often I can compete with a hacker that’s better than me by calling the police – that’s a way I can temporarily get some more resources. So we have a lot of mechanisms in society for dealing with attackers that are more skilled, scarier, bigger and better than us. We can actually make them apply here. But in general, I think an attacker that has more resources is going to get in. The question is “Now what?” How do we kick them out? How do we regain security? I mean, how are we resilient against that? That’s the whole notion of response. We are responding because the attacker has broken into our network. We are not responding to a potential, we are responding to an actual. And there are lots of things that we can do. It’s not a matter of saying: “Well, they’re going to get in, so we’re done.” It’s a matter of containing some of the damage.
Question: I’m curious what you mean when you say “resilience”?
Answer: By “resilience” I mean several things. How do we mitigate the threat? Survivability – how do we survive, how do we make sure that we don’t just fall over dead? How do we recover? How do we kick them out and regain security? This can be really difficult. And how do we adapt? How do we continually improve ourselves to make our security better? So I think of resilience as a whole basket of these different requirements that keep us secure even in the face of “the bad guys are going to get in.” And there are a lot of examples. As a species, we are resilient to disease. Disease will kill some of us but won’t kill all of us, and we have genetic diversity. That is the security mechanism by which we are resilient to a disease. That’s not actually a good strategy for IT. I mean, it’s not going to be good to say: “10% of you will be hit by this virus, but the rest of us will survive” – that’s kind of lousy. But that is what species do. So I think of a broad basket of ways that we can build security that can deal with the fact that there are threats that will penetrate our defenses.
Question: A couple of years ago there were two bug bounty programs. Do you see that as part of this trend?
Answer: Bug bounty programs, I think, are interesting and useful. I want them to help more. Unfortunately, both the military market and the black market pay more than the bug bounties. I think these programs attract different sorts of people, and they certainly help. I like seeing it; it’s companies taking this stuff seriously. But we are seeing more and more cyber weapons manufacturers finding bugs and not turning them in, because they can make more money selling them to governments around the world. So I like the programs, I think they’re good, I’m just not holding out hope that they’re going to make that much of a difference. It’s frustrating.
Question: In terms of the prospect theory you mentioned, how do we help change the minds of people?
Answer: This is hard. The way you overcome psychological biases, in general, is to know about them and compensate. If you know that you are risk-averse when it comes to gains and risk-seeking when it comes to losses, you can notice the behavior in yourself and compensate for it. There’s a whole lot of psychological biases that a casino floor is preying on; if you know them, you can walk by those tables, look at them, and say: “Well, those are attacks on people who don’t know math,” and keep going. The way we overcome biases is by knowing them, and this is hard. Fear of terrorism – there’s an enormous number of psychological biases that terrorists prey on, and that we as a country have fallen for completely. Talking about them is what gets us to think about them and, hopefully, get beyond them. It is possible. And it’s not just IT. This is people trying to sell insurance; this is people trying to sell burglar alarms. Ask anyone who sells burglar alarms, and they will tell you that people only buy them after they have been robbed, or after their neighbors have been robbed – that’s when they buy them.
Question: How do false positives tend to affect incident response systems?
Answer: False positives do kill a lot of these systems, but we’re getting better at that. And that’s detection. Detection still has a lot of room for good technology. Response is what you do next, and what you need there are ways to clear false positives quickly. I’ll give you two examples. One is airport security. You walk through the metal detector, it goes off, and the screener can very quickly figure out what’s going on. If it’s a false positive, they will learn that quickly. On the other hand, when the NSA sends a tip to the FBI – they sent 10,000 over the five years after September 11th – it took hundreds of man-hours to clear every one of those false positives; they were much more expensive. So the trick, really, is a fast secondary test that lets you clear the false positives. If you can do that, then you can tolerate a much faster and noisier primary test. This is, basically, how we deal with medical tests. So that’s the sort of thing we need.
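The arithmetic behind this point is worth making concrete. The sketch below uses hypothetical numbers (10,000 alerts, a 95% false-positive rate) to show how the cost of a noisy primary test is dominated by how cheaply the secondary test clears each false alarm – minutes for an airport screener versus hundreds of hours for an investigative lead:

```python
# Illustration with made-up numbers: total analyst time spent clearing
# false positives depends almost entirely on the speed of the secondary
# test, not on the noisiness of the primary detector.

def triage_cost(alerts, false_positive_rate, clear_minutes):
    """Analyst-minutes spent clearing the false positives in a batch."""
    false_positives = alerts * false_positive_rate
    return false_positives * clear_minutes

# Airport-style: a screener resolves a false alarm in about 2 minutes.
fast = triage_cost(alerts=10_000, false_positive_rate=0.95, clear_minutes=2)

# Investigation-style: each false lead takes ~200 man-hours to clear.
slow = triage_cost(alerts=10_000, false_positive_rate=0.95,
                   clear_minutes=200 * 60)

print(f"fast secondary test: {fast / 60:,.0f} analyst-hours")
print(f"slow secondary test: {slow / 60:,.0f} analyst-hours")
```

With the same detector and the same alert volume, the two scenarios differ by a factor of 6,000 in triage cost – which is why a fast secondary test is what makes a noisy primary test tolerable.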
Thank you very much!