Providing deeper insight into risk mitigation, James Denaro lists and explains a few options for staying on the safe side when disclosing exploits.

Another thing you want to do is be aware of pre-existing contractual relationships that you as a security researcher might have with the target of whatever it is you're working on. These contractual agreements could come in the form of a Terms of Service, an End User License Agreement, a Non-Disclosure Agreement, or an Employment Agreement. An End User License Agreement might very well have provisions in it regarding reverse-engineering software, for example; that's something you might be doing as part of your exploration, and it could give someone leverage to try to stop you. Pretty much every piece of software you get is going to have some kind of license agreement. You came to it legitimately, right? You've agreed to a license that may prohibit you from doing certain things with that software. There's not a whole lot you can do about that, but you can at least be aware of the risk, if nothing else.
How far you need to go in trying to mitigate the risk somewhat depends on the techniques that you've used in your research. If you've done things that clearly resemble what has gotten other people prison time, that's something you need to be careful about, or you should take more aggressive mitigation techniques to perhaps hide some of the information about what you're doing. So, for example, if in the Megamos case no one had identified that it was the VW Group whose system had been compromised, VW Group would not have been able to seek a temporary restraining order against the researchers. So, perhaps there's an opportunity here for the conference-going community to create a track where people could present things that we recognize have to be kept quiet, sort of like a confidential disclosure: "This is going to be really cool, but we just can't really tell you what it is, because then you won't get to hear it." So, maybe that's one approach.

Now I'd like to talk about some of the ways that you might make a disclosure that are relatively less likely to get you in trouble. You can disclose to the responsible party. That's what the responsible disclosure paradigm is all about: you found a problem with a system, so you tell whoever is running that system. Unfortunately, this is actually relatively high risk, and that risk scales with how questionable the technique was that you used to find the vulnerability.
So, if you were connected to the Internet and you accessed a remote system – "You didn't have permission? That's how you did it?!" – it may not be a great idea to go tell them about it, because if they don't like it, they've got an action against you. If you're inconvenient, that's a problem for you. You might think you're doing them a favor; they might not agree that you're doing them a favor. If you're able to submit it anonymously to the vendor or whoever the responsible party is, that's great. It depends on how good your OPSEC is, I suppose. A lot of times you think you're anonymous, but you're not as anonymous as you thought or hoped you were. So, that's a risk in itself that you need to consider. You can submit it to a bug bounty program; maybe there you're at less risk.

You can perhaps disclose to a government authority. Maybe you never believe it will ever get to the vendor, but, again, if your techniques are questionable you might not necessarily want to be submitting it to a government authority, and you may have an interest in keeping your identity anonymous. Again, you can try to submit it anonymously to the government, but I don't know how much we can really trust that anymore. You know, this is a somewhat legal talk, and you can almost never get a legal talk where someone will actually tell you something for sure, like: "Absolutely, 100% you will not get in trouble if you do this." But fortunately we are in a case here where there is one group of people who really don't have to worry about getting in trouble with the Computer Fraud and Abuse Act when they disclose a vulnerability, and here they are (see left-hand image). It's OK to disclose if you're one of these people. We're thinking about ways that we might be able to leverage opportunities for security researchers to make disclosures while keeping the risk as low as possible.
So, we're working on creating a pilot program where attorney-client privilege can be leveraged to hide the identity of, and the techniques used by, a security researcher making a disclosure. The concept works like this: the researcher discloses a vulnerability to a trusted third party, which would be an attorney, and only to the attorney. It's critical that this be a completely confidential disclosure, so that no one on the outside can get to it. The trusted third party does not publish the vulnerability on behalf of the researcher; however, the trusted third party does disclose the vulnerability to whoever the affected party is. The researcher remains anonymous throughout the entire process.
This is possibly of use if there's no better option. It's a little bit cumbersome, and there are some side effects, chiefly that the researcher remains anonymous and doesn't get public credit for the research. But it is one possible way for the researcher to disclose and remain about as anonymous as one can possibly get. So, this is a pilot program; we're currently working on it, kicking out the bugs right now. If anyone is interested in talking to us further about this, we'll definitely welcome your input, and please see me afterwards.