Random notes on Unikernels

December 10, 2019

This is an adaptation of a Slack chat explanation from work in 2017, slightly reworded to read better for a wider audience.

Introducing unikernels

You might have heard terms like unikernel, nanokernel, or library operating system. These terms aren't always used in exactly the same way, but they are closely related.

MirageOS is the research project that has received the most industry attention, as far as I am aware; the research team was acqui-hired by Docker a few years ago. It is language-based (they use OCaml), meaning you write unikernel applications in OCaml and it builds an image for various virtualization targets (as well as a Linux, and probably macOS/BSD, binary to run on existing OSes for development purposes). It strips out every aspect of the runtime/OS that the application doesn't need: if your application doesn't use the UDP or SCTP stack, that code isn't shipped in the "library OS" target binaries, reducing your attack surface. This is one of the things that most excites me about the idea of unikernels.

Some docs on this are here: https://mirage.io/docs/

A non-research oriented introduction can be found here: https://mirage.io/wiki/overview-of-mirage

Other language-based unikernels include:

  • IncludeOS (C/C++)

  • HaLVM (Haskell)

  • LingVM (Erlang, though I think it is dead)

I have seen unikernels referred to as nanokernels or library operating systems.

Rump kernels

Rump kernels are either a kind of unikernel or a stepping stone to unikernels, depending on whom you ask. Rump kernels work on top of existing operating systems (OSes), paring the build down to only what is needed. These are sometimes called Just-enough Operating Systems (JeOS), and what qualifies varies from context to context.

The main build-level trade-off compared to the unikernels defined above is that rump kernels leverage existing, mature build tooling ecosystems. However, those ecosystems are often riddled with legacy decisions that make builds less hermetic or reproducible (outside of the Nix/Guix worlds), and the resulting artifacts are by definition not as slim as unikernels. The comparative attack surface is therefore still greater with rump kernels than with unikernels, though it will be smaller than that of a typical system running an equivalent application on a stock distribution of the same flavor (e.g. Linux, BSD, etc.). A benefit of rump kernels over unikernels is that familiar system-level debugging tooling remains available.

Rump kernels do allow you to migrate your application more incrementally: from a typical out-of-the-box OS deployment (where everything including the kitchen sink is installed unless you manually pare it down) to a more streamlined rump kernel deployment.

The big win with language-based unikernels (and the hope with rump kernels) is that you ship only the code your application uses, absolutely nothing else, which lets the compiler optimize for the target to an extreme.

Related areas

This is related to another area that intrigues me as a bystander: whole-program optimization. Projects like ReWire, a whole-program compiler targeting VHDL from a subset of Haskell, are exciting work to me: https://github.com/mu-chaco/ReWire

Arguably, unikernels could enable more application of whole-program optimization at all levels.

We will have to wait and see what happens in this space.