Securing the Campaign 4: Risk Mitigation

Having outlined the prevalent threats to the campaign, Ben Hagen now describes the tools and methods used to mitigate those perils.

The sad truth about the campaigns

Keeping in mind all the threats we faced, let’s talk about what we did to actually mitigate them at the campaign. In terms of enterprise security, social attacks and that kind of thing – at the campaign we saw spear phishing attacks aimed at very select individuals within the campaign itself. These were of a higher caliber than I’d ever seen at any other organization, specifically targeting small groups of people with very, very targeted messages.

For example, a policy analyst, or a group of policy analysts, would be emailed by their manager – of course, from a random Yahoo email account or something – and the manager would say: “Hey, look, I’ve got this new information about this event that happened yesterday. You guys need to read this and have a report for me tomorrow.” In fact, it’s a spear phishing campaign; the attachment is, of course, malware designed to infect a computer, following the typical infection cycle: a dropper downloads a rootkit, which then communicates back to a central resource. We saw this kind of thing on a weekly basis.


We had several instances of denial-of-service, or attempts at denial-of-service; most of these we could trace back to some sort of online activism. Lots of threats came from Anonymous, where you’d see people in IRC talking about how they hate Obama: “Let’s denial-of-service him!” A couple of hours later they’d have some sort of script ready for people to download, and they’d start attacking us. We never really saw a huge effect from that; it more or less came down to: if you could keep them at bay for 30 minutes or so, people would lose interest and just stop trying. And using Amazon’s infrastructure, and things like Akamai caching, it was really pretty easy to avoid any noticeable effect.
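The “keep them at bay for 30 minutes” observation can be sketched as a simple per-client rate limiter. This is a hypothetical illustration of the idea, not the campaign’s actual setup (which leaned on Amazon’s infrastructure and Akamai caching rather than custom code):

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP."""

    def __init__(self, limit=100, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over budget: serve a cached page or drop the request
        q.append(now)
        return True
```

Requests over the budget can be answered from a cache or dropped; once the attack script stops, the window empties and the client is served normally again.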

Example of the call to sabotage

We had a pretty big footprint on the Internet; we saw constant attempts to scan us, with lots of security tools being used to try and find vulnerabilities. Similarly, we had lots of attempts from opposing activists at fraud and sabotage. Here is one example (see left-hand image): this was a doctor in Florida who was a Tea Party activist; for those of you unfamiliar, the Tea Party is a very conservative, very vocal group that generally aligns with the Republicans. This guy was urging people online to log on to Obama’s call tool and either click through numbers to get them removed from the system, or actually call people and, instead of delivering a message of support for Obama, try to convince them not to vote for Obama and to vote for Romney or somebody else.

This is probably the most obvious example: we saw it online, we were able to trace it back to somebody’s account on our systems, and at that point it’s pretty easy to mitigate the effect that guy has. But it did point us in the direction of implementing further features to prevent fraud within the systems. We had some pretty sophisticated algorithms designed to detect abnormal behavior in tools such as the call tool, to highlight and mitigate that threat.
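The campaign’s actual fraud-detection algorithms aren’t public, but one minimal version of the idea – flagging accounts whose behavior in the call tool is a statistical outlier – could look like this. The account-to-count mapping and the z-score threshold are illustrative assumptions:

```python
from statistics import mean, stdev

def flag_outliers(removal_counts, threshold=3.0):
    """Flag accounts with an extreme number of "remove this number" actions.

    removal_counts: dict mapping account id -> count of numbers the account
    marked for removal in some fixed time window.
    Returns the set of accounts whose count is more than `threshold`
    standard deviations above the mean.
    """
    counts = list(removal_counts.values())
    if len(counts) < 2:
        return set()
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return set()
    return {acct for acct, n in removal_counts.items()
            if (n - mu) / sigma > threshold}
```

A flagged account would then be reviewed by a human rather than blocked automatically, since heavy legitimate users can also score high.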

Tools used for protecting the enterprise network

In terms of the enterprise network – and actually everything we’re going to talk about – we almost exclusively used free open source software (see image). For the enterprise we didn’t have a ton of money to spend on security hardware; we ended up just deploying a box with Suricata, which is a great Snort-like IDS. It operates very similarly to Snort, and the rules are very much the same; the difference is that it’s multi-core, so it’s much more performant, and it can be GPU-accelerated, which is really awesome – you can monitor a ton of traffic with very few resources.

Snorby is an event management system: it takes the logs out of an IDS and presents them in a web application that lets you monitor events, make comments, note stuff, and do investigations.

BroIDS is something else we deployed; it’s a great system. It’s not really your typical IDS – it’s more of a monitoring system that can watch network traffic and log things that are interesting to you. We had it logging things like DNS queries, and we would push those into a database and do data mining on them, looking for activity related to malware or compromised machines. You can do some really creative stuff with BroIDS. Nessus – you have to pay for it, but it’s a good vulnerability assessment tool. Nmap is one of my favorite tools ever invented, so we used that a lot.
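The DNS data mining described above can be sketched as a scan of Bro’s tab-separated dns.log for high-entropy query names, which are a common sign of malware using domain-generation algorithms. The column position of the query name and the entropy threshold are assumptions – the `#fields` header of a real dns.log gives the exact layout:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character of a string; DGA-generated labels tend to score high."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def suspicious_queries(lines, query_field=9, threshold=3.5):
    """Scan Bro dns.log lines and return queries with high-entropy names.

    Assumes tab-separated records with the query name at index `query_field`
    (the position varies by Bro version -- check the #fields header line).
    """
    hits = []
    for line in lines:
        if line.startswith("#"):          # skip header/comment lines
            continue
        fields = line.rstrip("\n").split("\t")
        if len(fields) <= query_field:
            continue
        query = fields[query_field]
        label = query.split(".")[0]       # score the leftmost label only
        if len(label) >= 8 and shannon_entropy(label) > threshold:
            hits.append(query)
    return hits
```

In practice you would also whitelist known services (CDNs and cloud hostnames can look random too) before treating a hit as a compromised machine.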

Simple Phishing Toolkit helped out a lot

In terms of social threats to the campaign, we had a lot of success training people with something called the Simple Phishing Toolkit. This is a web application designed to let you run your own spear phishing campaigns, and we would run it against our own staff.

It’s a pretty frightening thing to actually use, because once you do, you realize how effective it can be. I think in our first attempts at using it within the campaign we had a 25% click rate on attachments, and about a 12% hit rate on people entering credentials into fake websites we had put up.

We registered a random Barack Obama-affiliated web domain and had people go to a website there, fill in their email, name and password, and click Enter. We would record the people who actually entered their information and then make them take extra training – we would walk up behind them at their desk and say: “Hey, remember that email you clicked on? You’ve got to come with us and do some extra training now.”
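The click and credential rates quoted above are simple ratios over the recipient list. A toy tally of one simulation run might look like this – the tuple format is a hypothetical assumption, not the Simple Phishing Toolkit’s actual output:

```python
def phish_stats(results):
    """Summarize a phishing-simulation run.

    results: iterable of (recipient, clicked, entered_credentials) tuples.
    Returns (click_rate, credential_rate) as fractions of all recipients.
    """
    results = list(results)
    total = len(results)
    if total == 0:
        return 0.0, 0.0
    clicks = sum(1 for _, clicked, _ in results if clicked)
    creds = sum(1 for _, _, entered in results if entered)
    return clicks / total, creds / total
```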


It ended up working: by the end of the campaign we had a much lower hit rate on those campaigns. I think it worked out pretty well, and it was kind of fun to be that evil guy who’s tricking your people into clicking stuff. In our online chat channels, when we would launch something, we’d suddenly see people pop up and say: “Hey, I got this really weird email, did you guys get this?” And people would be like: “No, I didn’t get that. What is it?” And it goes to, like, “Barack Obama” with zeros instead of O’s, and it was asking for their login information. Or we would have people emailing me in a panic, saying: “I clicked it, I clicked it, I put my name and password, and I know I did something wrong. What do I do now?” And it’s pretty satisfying.

The other general thing is just keeping people in the loop: if you have a particular threat you’re worried about, I think it’s a mistake to hold that information back from people. It’s always better to inform your staff that you’re worried about something in particular happening, or to work through something that’s making the rounds, giving them all the details you can.

The campaign turned out to be an interesting experience

Campaigns are really interesting; they are a lot like corporate enterprises: they face a lot of the same threats, but they have some interesting idiosyncrasies that I haven’t seen anywhere else. Take the concept of Bring Your Own Device – basically, the campaign wants to spend as little money as possible, and if they can save money by you bringing your own laptop, they’re happy to let you do that. The same goes for phones and USB drives.

We had rapid growth: when I joined the campaign, we had about 100 people; within 6 months we had gone up to over 500 people in the headquarters alone. That doesn’t include field offices or volunteers, so that’s really rapid growth. We had a lot of volunteers coming through; these are people who had been vetted in some way – they went through background checks and that type of thing – but they don’t get the same training as staff. So it’s a lot of people coming in and out of the office; we had media coming in all the time, people doing interviews constantly, etc.

We also had a very young corporate structure. The technology team was unique in that we all had experience in our fields – we needed people who could get into the campaign and immediately start building web applications or immediately start coding. But a lot of the campaign was made up of younger people who had just graduated from college and didn’t have years of experience, and that youth in an organization is something I hadn’t seen before. People were basically playing their entire lives out online, social media was really popular, and controlling those messages became a challenge.

Read previous: Securing the Campaign 3: Types of Technological Threats Being Faced

Read next: Securing the Campaign 5: Application Assessment
