The subjects touched upon by Daniel Selifonov here include securing sensitive data stored in main memory and the issue of computer integrity verification.

Can we do anything about the DMA attack angle? As it turns out, yes we can. Recently, as part of new technologies for enhancing server virtualization, people wanted, for performance reasons, to be able to attach, say, a network adapter directly to a virtual server without every access having to go through the hypervisor. So, IOMMU technology was developed so that you can actually sandbox a PCI device into its own little section of memory, where it can't arbitrarily read and write anywhere on the system. So, this is perfect: we can set up IOMMU permissions to protect our 'operating system', or whatever we're using to handle keys, from arbitrary access.
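The sandboxing idea can be sketched in a few lines. This is a toy model only; the class and method names are assumptions for illustration, not a real IOMMU interface, but it shows the core mechanism: every DMA transaction is checked against the pages granted to that device, and anything else faults instead of reaching memory.

```python
# Toy model of IOMMU-style DMA sandboxing (illustrative sketch, not a real API).
PAGE_SIZE = 4096

class Iommu:
    """Tracks which physical pages each device is allowed to touch."""
    def __init__(self):
        self.allowed = {}  # device id -> set of permitted page numbers

    def grant(self, device, page):
        self.allowed.setdefault(device, set()).add(page)

    def check_dma(self, device, address):
        # Each DMA access is translated/checked; accesses outside the
        # device's mapped pages are rejected rather than hitting memory.
        return address // PAGE_SIZE in self.allowed.get(device, set())

iommu = Iommu()
iommu.grant("nic", 5)                               # the NIC may only use page 5
print(iommu.check_dma("nic", 5 * PAGE_SIZE + 100))  # True: inside its sandbox
print(iommu.check_dma("nic", 0))                    # False: key-handling page is off-limits
```

A malicious FireWire or PCI device in this model simply cannot read the page holding the key-handling code, which is exactly the property the talk relies on.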
And, again, our friend from Germany, Tilo Muller, has implemented a version of TRESOR on a micro hypervisor called BitVisor which, basically, does this. It lets you run a single operating system and transparently does disk access encryption; the guest doesn't even have to care or know anything about it, which is great. Disk access is totally transparent to the OS, the debug registers cannot be accessed by the OS, and the IOMMU is set up so that the hypervisor itself is secure from manipulation.

But as it turns out, there are other things in memory that we might care about besides disk encryption keys (see left-hand image). There's the problem that I hinted at earlier, where we used to do container encryption and now we all do full disk encryption for the most part. We do full disk encryption because it's very, very difficult to make sure you don't get accidental writes of your sensitive data to temporary files or caches in a container encryption system. Now that we are re-evaluating main memory as a not secure, not trustworthy place for storing data, we need to treat it in much the same way. We have to encrypt everything we don't want to leak: things that are really important, like SSH keys or private keys or PGP keys, or even a password manager, or any 'top secret' documents that you're working on. I had this really, really silly notion: can we encrypt main memory? Or at least most of the main memory where we are likely to keep secrets, so that we can at least minimize how much we are going to leak. And, once again, surprisingly or perhaps not so surprisingly, the answer is yes (see right-hand image). A proof of concept in 2010 by a guy named Peter Peterson actually tried implementing a RAM encryption solution. It wouldn't encrypt all RAM; it would basically split main memory into two components: a small fixed-size "clear" region, which would be unencrypted, and then a larger sort of pseudo swap device where all the data was encrypted prior to being kept in main memory.
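The clear-plus-pseudo-swap split described above can be modeled roughly as follows. This is a hedged sketch of the general idea, not Peterson's actual implementation: the eviction policy is naive, and the hash-based keystream is a stdlib stand-in for a real cipher (do not use this construction for actual security).

```python
# Sketch of a split-memory design: a small plaintext "clear" region for hot
# pages, and an encrypted pseudo-swap for everything else. Illustrative only.
import hashlib

def keystream(key, length):
    # Counter-mode-style keystream built from SHA-256 (toy substitute for AES).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class SplitMemory:
    def __init__(self, key, clear_pages=4):
        self.key = key
        self.clear = {}            # hot pages, kept in plaintext
        self.swap = {}             # cold pages, ciphertext only
        self.clear_limit = clear_pages

    def _xor(self, page, data):
        ks = keystream(self.key + page.to_bytes(8, "big"), len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

    def write(self, page, data):
        self.clear[page] = data
        if len(self.clear) > self.clear_limit:
            # Naive eviction: push the oldest page out, encrypted.
            victim = next(iter(self.clear))
            self.swap[victim] = self._xor(victim, self.clear.pop(victim))

    def read(self, page):
        if page in self.clear:
            return self.clear[page]
        plain = self._xor(page, self.swap.pop(page))  # decrypt on "page fault"
        self.write(page, plain)                       # fault it back into the clear region
        return plain

mem = SplitMemory(b"cpu-resident-key")
for i in range(5):
    mem.write(i, f"secret{i}".encode())
# Page 0 has been evicted: only ciphertext of it remains in "RAM".
print(0 in mem.swap, mem.read(0))
```

The point of the design is that a cold boot attack sees mostly ciphertext; only the small clear region and, in the real system, the key (which Peterson's proof of concept still kept in RAM) are exposed.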
It ended up being quite a bit slower in synthetic benchmarks, obviously. But in the real world, when you ran, for instance, a web browser benchmark, it actually did pretty well: about 10% slower. I think we can live with that. The problem with this proof-of-concept implementation was that it stored the decryption key in main memory, because where else would we put it? The author considered using things like the TPM for bulk encryption operations, but the TPM is even slower than a dedicated hardware crypto system, so it would just be totally unusable.
But you know what? If we have the capability to use the CPU as a sort of pseudo hardware crypto module, it's right in the center of things, so it should actually be fast enough to do this. So, maybe we can use something like this.
Let's say we have this sort of system set up. Our keys are not in main memory, and our code responsible for manipulating the keys is protected from arbitrary read and write access by malicious hardware components. Main memory is encrypted, so most of our secrets are not going to leak even if someone executes a cold boot attack. But how do we actually get a system booted up to this state? We need to start from a turned-off system, authenticate ourselves to it, and get the system up and running. How do we do this in a trustworthy way? After all, someone could still modify the system software to trick us into thinking that we are running this "great new" system, when in reality it isn't doing anything.

So, one of the very important topics is being able to verify the integrity of our computers (see left-hand image). The user needs to be able to verify that the computer has not been tampered with before they authenticate themselves to it. There's a tool that we can use for this: the Trusted Platform Module. It's kind of got a bad rap, and we'll talk about that a little bit more, but it has the capability to measure your boot sequence in a couple of different ways, letting you control what data the TPM will reveal to the system only in particular system configuration states. So you can basically seal data to a particular software configuration that you are running on your system. There are a couple of different implementation approaches to doing this, and there's fancy cryptography to make it really hard to get around. So, maybe we can do this. What is a TPM anyway? It was originally conceived as part of a grand solution to digital rights management for media companies. Media companies would be able to remotely verify that your system was running in some "approved" configuration before they would let you run the software and unlock the key to your video files.
It ended up being really impractical, and so nobody is actually even trying to use it for this purpose anymore. I think a better way to think about it is, really, just as a smart card that's fixed on your motherboard: it can perform some cryptographic operations (RSA, SHA), has a random number generator, and has physical attack countermeasures to prevent someone from very easily getting access to the data stored in it (see right-hand image). The only real difference between it and a smart card is that it has the ability to measure the system boot state into platform configuration registers; and it's usually a separate chip on the motherboard, so there are some security implications of that.
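The measurement-and-sealing idea rests on one small primitive: a platform configuration register can never be set directly, only extended, as PCR = H(PCR || H(measurement)). A hedged stdlib sketch of the chain, assuming a simplified boot of three stages (a real TPM does this in hardware, and the `unseal` check here stands in for its internal policy comparison):

```python
# Sketch of measured boot with PCR extension and sealing. Illustrative only.
import hashlib

def extend(pcr, measurement):
    # PCRs can only be extended, never written: new = H(old || H(measurement)).
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

def measure_boot(stages):
    pcr = b"\x00" * 20                  # PCRs reset to zeros at power-on
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

good_boot = [b"bios", b"bootloader", b"kernel"]
sealed_pcr = measure_boot(good_boot)    # the state the secret is sealed to

def unseal(secret, current_pcr):
    # The TPM releases the secret only for the exact measured configuration.
    return secret if current_pcr == sealed_pcr else None

print(unseal(b"disk-key", measure_boot(good_boot)) is not None)        # untampered boot
print(unseal(b"disk-key", measure_boot([b"evil", b"kernel"])))         # tampered: None
```

Because the chain is a nested hash, a modified bootloader cannot produce the same final PCR value, and swapping the order of stages changes the result too, which is why sealing to a PCR value binds data to one exact boot sequence.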
There are some fun bits, too, like monotonic counters: numbers that you can only request the TPM to increase, and then check the value of. There's a small non-volatile memory range that you can use for, really, whatever you want; it's not very big, like a kilobyte, but it could be useful. There's a tick counter which lets you determine how long the system has been running since last startup. And there are commands that you can issue to the TPM to make it do things on your behalf, including even things like clearing itself, if you feel the need to.
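The monotonic counter is the simplest of these features to picture. A minimal toy model, with method names that are assumptions for illustration rather than the actual TPM command set: software may only request an increment or read the value, never set or rewind it, which is what makes the counter useful for detecting rollback of state.

```python
# Toy monotonic counter in the spirit of the TPM feature described above.
class MonotonicCounter:
    def __init__(self):
        self._value = 0          # persisted in the TPM's non-volatile storage

    def increment(self):
        self._value += 1
        return self._value

    def read(self):
        return self._value
    # Deliberately no setter: the value can never be rolled back from software.

ctr = MonotonicCounter()
first = ctr.increment()
second = ctr.increment()
print(second > first, ctr.read())   # the reading only ever moves forward
```

For example, stamping each saved state with the current counter value lets you later reject any state whose stamp is lower than the counter, i.e. an old copy an attacker replayed.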