Posted by david b.
on March 17, 2014
Daniel Selifonov walks through a blueprint for a hardened system architecture designed to keep disk encryption from being seriously compromised.
Runtime blueprint and basic tips
So, let’s look at a blueprint (see right-hand image) of what I think we should have for getting a system from a cold boot up to our running trustworthy configuration. There are a lot of really vulnerable legacy components in PC architecture. You can do all sorts of things in the BIOS, like hooking the interrupt vector table to modify disk reads or capture keyboard input, or masking out CPU feature registers. There are plenty of options when you want to mess with people. In my opinion, you really want to get out of BIOS-controlled real mode and into protected mode as soon as possible, and only then do your measurement stuff.
So, once you get into this ‘pre-boot’ mode, which is really just your operating system, like a Linux initial RAM disk, then you start executing your protocol and start doing these things. I mean, once you’re using operating system resources, whatever someone does at the BIOS level, as far as interrupt tables go, doesn’t really affect you anymore. You really don’t care. And you can do sanity checks on your registers: if you know you’re running on a Core i5, you know it’s going to support things like the No-eXecute bit and debug registers and other stuff that people might try to mask out in capability registers.
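The sanity check described above can be sketched as a simple mask comparison. The bit positions follow CPUID leaf 0x80000001 EDX (NX is bit 20), but the “Core i5 baseline” mask here is an illustrative assumption, not an exhaustive feature list from the talk:

```python
# Sanity-checking reported CPU feature bits against what a known CPU
# model must support. A baseline bit that comes back absent suggests
# something (e.g. a hostile BIOS/SMM handler) is masking capabilities.

NX = 1 << 20          # No-eXecute page protection (CPUID 0x80000001, EDX)
LM = 1 << 29          # long mode (64-bit support)

CORE_I5_BASELINE = NX | LM   # illustrative: features that must never be absent

def features_look_sane(reported_edx: int, baseline: int = CORE_I5_BASELINE) -> bool:
    """Return True iff every baseline feature bit is still reported."""
    return (reported_edx & baseline) == baseline

# Honest firmware reports NX and long mode (plus whatever else):
assert features_look_sane(NX | LM | (1 << 11))
# A rootkit masking out the NX bit is caught:
assert not features_look_sane(LM)
```

In a real pre-boot environment you would read these bits with the CPUID instruction rather than take them as a parameter; the comparison logic is the same.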
Preferred running configuration
So, here (see left-hand image)
is the runtime blueprint: what we actually want the system to look like once we’re in the running configuration. There was a previous project, TreVisor, which implemented many of the security aspects of doing disk encryption using CPU registers and having IOMMU protections on your main memory. The problem is that BitVisor, the hypervisor it builds on, is a very specialized, not very commonly used program. Xen is sort of the canonical open-source hypervisor: there’s a lot of security research going on around it, and people are making sure it’s not broken. In my opinion, we should use something like Xen as the bare-level hardware interface, and then use a Linux dom0 administrative domain on top of it to actually do your hardware initialization.
Again, in Xen all of your paravirtualized domains are actually running in non-privileged mode, so you don’t have direct access to things like the debug registers. So, that’s one thing that’s already done. Xen exposes hypercalls that give you access to this sort of stuff, but that’s something you can disable in software.
In my opinion, we should use something like Xen as the bare level hardware interface.
Master key approach in debug registers
So, the approach that I’m taking (see right-hand image)
is we’ll do that master key approach in debug registers; we’ll dedicate the first two debug registers to storing a 128-bit AES key, which is our master key. This key never leaves the CPU registers once it’s entered by the process that takes the user credentials. And then we use the second two registers as virtual-machine-specific ‘whatevers’ – they could serve either as ordinary debug registers or, in this case, be used to encrypt main memory. In this particular case, we still need a few devices that are directly connected to the administrative domain: the graphics processing unit, which is a PCI device, the keyboard, the TPM – all this stuff needs to be directly accessible, so you can’t really apply IOMMU protections to it.
Setting IOMMU protections on devices
But things like the network controller, the storage controller, and arbitrary devices on the PCI bus – you can set up IOMMU protections on those so that they have absolutely zero access to the administrative domain or your hypervisor memory spaces. You can get access to things like the network by putting the network controller into a dedicated virtual machine (see right-hand image). These VMs get the devices mapped in but have IOMMU protections set up, so each device can only access the memory space of its own virtual machine.
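The DMA policy just described can be sketched as a table mapping each device to the pages it may touch. The class and its names are illustrative, not a real Xen or VT-d API:

```python
# Sketch of the IOMMU policy above: a passed-through device may DMA only
# into the memory of the VM it is assigned to; everything else (dom0,
# hypervisor, other VMs) is off-limits.

class Iommu:
    def __init__(self):
        self._allowed = {}   # device -> set of page frames it may touch

    def assign(self, device: str, vm_pages: set):
        """Map a device into one VM's memory space."""
        self._allowed[device] = set(vm_pages)

    def dma_permitted(self, device: str, page: int) -> bool:
        """An unassigned device, or one reaching outside its VM, is denied."""
        return page in self._allowed.get(device, set())

iommu = Iommu()
dom0_pages   = {0, 1, 2, 3}       # admin-domain / hypervisor memory
net_vm_pages = {100, 101, 102}    # dedicated network VM

iommu.assign("nic", net_vm_pages)

assert iommu.dma_permitted("nic", 100)       # NIC can reach its own VM
assert not iommu.dma_permitted("nic", 2)     # ...but not dom0 memory
assert not iommu.dma_permitted("gpu", 0)     # unassigned device: no DMA at all
```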
Running applications in VMs
You can do the same thing with your storage controller and then actually run all of your applications in virtual machines that have absolutely zero direct hardware access (see left-hand image). So, even if someone owns your web browser or sends you a malicious PDF file, they don’t actually get anything that would let them seriously compromise your disk encryption.
Basis for the architecture
I can’t take the credit for that architecture design. It’s actually the design behind the really excellent Qubes OS project (see right-hand image). They basically describe themselves as a pragmatic formulation of Xen, Linux and a few custom tools to do a lot of the stuff I just talked about. It implements these non-privileged guests and provides a nice unified system environment, so it feels like you’re really running one system, but under the hood it’s actually a bunch of different virtual machines. I used this as the basis for my implementation.
Read previous: A Password Is Not Enough 4: Using TPM to Combat Specific Attacks
Read next: A Password Is Not Enough 6: Disk Encryption with the Phalanx Toolset