Monday, September 24, 2012

The joys and hazards of multi-process browser security

Web browsers with some form of multi-process model are becoming increasingly common. Depending on the exact setup, there can be significant consequences for security posture and exploitation methods.

Spray techniques

Probably the most significant security consequence of multi-process models is their effect on spraying. Spraying, of course, is a technique where parts of a process's heap or address space are filled with data helpful for exploitation. It's sometimes useful to spray the heap with a certain pattern of data, or to spray the address space in general with executable JIT mappings, or both.
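
To make the idea concrete, here is a minimal, hypothetical sketch of a heap spray in TypeScript; the pattern, block size, and block count are illustrative placeholders rather than values from any real exploit:

```typescript
// Illustrative heap-spray sketch: fill the heap with many copies of a
// predictable pattern so that a corrupted pointer is likely to land on
// attacker-controlled bytes. All constants are arbitrary placeholders.
const PATTERN = "\u0c0c\u0c0c";        // classic 0x0c0c0c0c-style filler
const SPRAY_BLOCKS = 256;              // number of large allocations to make
const BLOCK_CHARS = 0x10000;           // ~128 KB of UTF-16 data per block

const spray: string[] = [];
for (let i = 0; i < SPRAY_BLOCKS; i++) {
  // Appending a unique suffix makes each block a distinct string object,
  // forcing a fresh heap allocation on every iteration.
  spray.push(PATTERN.repeat(BLOCK_CHARS / PATTERN.length) + i);
}
```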

In the good ol' days, when every part of the browser and all the plug-ins were run in the same process, there were many possible attack permutations:

  • Spray Java JIT pages to exploit a browser bug.
  • Spray Java JIT pages to exploit a Flash bug.
  • Spray Flash JIT pages to exploit a browser bug.
  • Spray Java JIT pages to exploit Java.
  • You could even spray browser JS JIT pages to exploit Java if you wanted to ;-)
  • ...etc.

Since the good ol' days, various things have happened to lock all this down:

  • The Java plug-in was rearchitected so that it runs out-of-process in most browsers.
  • IE and Chromium placed page limits on JavaScript-derived JIT pages (covered a little in the famous Accuvant paper).
  • Firefox introduced its out-of-process plug-ins feature (for some plug-ins, most notably Flash), and Chromium has run all plug-ins out-of-process since its first release.

The end result is trickier exploitation, although it's worth noting that one worrisome combination remains: IE still runs Flash in-process, and this has been abused by attackers in many of the recent IE 0-days.

One-shot vs. multi-shot

The terms "one-shot" and "multi-shot" have long been used in the world of server-side exploitation. "One-shot" refers to a service that is dead after just one crash -- so your exploit had better be reliable! "Multi-shot" refers to a service that remains running even after your lousy exploit causes a crash. This could be because the service has a parent process that launches new children when they die, or simply because the service is launched by a framework that automatically restarts dead services.

Although moving to a multi-process browser is generally a very positive thing for security and stability, you do run the risk of introducing "multi-shot" attacks.

In other words, let's say your exploit isn't 100% reliable. Wouldn't it be nice if you could just use a bit of JavaScript to run the exploit over and over in a child process until it works? Perhaps you simply weren't able to defeat ASLR and you're in the situation where you have a 1/256 chance of your hard-coded address being correct. Again, this could be brute-forced in a "multi-shot" attack.
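
Here is a minimal sketch of that brute-force loop. The attemptExploit callback is an assumption for illustration: something that loads the vulnerable out-of-process component, tries the hard-coded address, and reports whether code execution was achieved.

```typescript
// Hypothetical multi-shot driver. A wrong address guess merely crashes one
// child process; the attacking page survives and simply tries again.
async function multiShot(
  attemptExploit: () => Promise<boolean>,  // assumed placeholder, not a real API
  maxAttempts: number,
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // With a 1/256 chance per attempt, ~177 attempts already give roughly a
    // 50% cumulative chance of success: 1 - (255/256)^177 ≈ 0.5.
    if (await attemptExploit()) {
      return true;
    }
  }
  return false;
}
```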

The most likely "multi-shot" attacks are against plug-ins that run out-of-process, or against browser tabs, if tabs are given their own processes.

These attacks can be defended against by limiting the rate of child process crashes or spawns. Chromium deploys some tricks in this area.
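
Purely as an illustration of the defence (this is not Chromium's actual logic), such rate limiting might track recent crashes per child-process type and stop respawning once a threshold inside a sliding window is hit:

```typescript
// Illustrative crash-rate limiter: refuse to relaunch a child process type
// that has crashed too many times within a short time window.
class CrashRateLimiter {
  private recentCrashes: number[] = [];

  constructor(
    private readonly maxCrashes = 3,     // crashes tolerated per window
    private readonly windowMs = 60_000,  // one-minute sliding window
  ) {}

  // Record a crash; returns false once automatic respawning should stop.
  onCrash(now: number = Date.now()): boolean {
    this.recentCrashes = this.recentCrashes.filter(t => now - t < this.windowMs);
    this.recentCrashes.push(now);
    return this.recentCrashes.length <= this.maxCrashes;
  }
}
```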

Broker escalation

Once an attacker has gained code execution inside a sandbox, there are various directions the attack might go next. It might target the OS kernel. Or, for the purposes of this discussion, it might target the privileged broker. The privileged broker typically runs outside of the sandbox, so any memory corruption vulnerability in the broker is a possible avenue for sandbox escape.

To exploit such a memory corruption bug, you'll likely need to defeat DEP and ASLR in the broker process. An interesting question is: how far along are you already, by virtue of having code execution in the sandboxed process? Obviously, you know the full memory map layout of the compromised sandboxed process.

The answer is that it depends on your OS and on the way the various processes relate to each other. The situation is not ideal on Windows: due to the way the OS works, certain system-critical DLLs are typically located at the same address across all processes. So ASLR in the broker process is already compromised to an extent, no matter how the sandboxed processes are created. I found this interesting.

The situation is better on Linux, where each process can have a totally different address space layout, including system libraries, executable, heap, etc. The Chromium "zygote" process model takes advantage of this for the sandboxed processes. So a compromise of a sandboxed process does not directly reveal any details of the broker process's address space layout. There may be ways to leak it, but not directly, and /proc certainly isn't mapped in the sandboxed context! All this is another reason I recommend Chrome on 64-bit Linux as a browsing platform.
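
As a quick, Linux-only illustration of per-process layout randomization (assuming Node.js is available to run the sketch), two freshly exec'ed processes report different libc load addresses:

```typescript
// Linux-only sketch: spawn two fresh Node processes and print where each one
// mapped libc. With ASLR enabled the base addresses differ, unlike the
// Windows situation described above where system DLLs share a base address.
import { execFileSync } from "child_process";

function libcMapping(): string {
  const inner =
    'const maps = require("fs").readFileSync("/proc/self/maps", "utf8");' +
    'console.log(maps.split("\\n").find(l => l.includes("libc")) ?? "libc not found");';
  return execFileSync(process.execPath, ["-e", inner], { encoding: "utf8" }).trim();
}

console.log(libcMapping());  // e.g. 7f3a2c000000-... r-xp ... libc.so.6
console.log(libcMapping());  // a different base address the second time
```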

1 comment:

HGEX said...

Great writeup. Another two relevant reasons to use Chrome on Linux:

1) You mention ASLR brute-forcing. Grsecurity/PaX have implemented measures to prevent exactly those attacks where processes get forked and brute-forced over and over.

2) You can sandbox the broker/zygote process with your favorite LSM.
