Isolate Pods with gVisor
Introduction
This guide describes how to enable and use gVisor in a Switch Cloud Kubernetes (SCK) user cluster.
gVisor is a container sandbox that provides strong isolation for workloads. It intercepts syscalls and implements them in user space, reducing the attack surface compared to the default container runtime.
gVisor is not:
- A full virtual machine.
- A replacement for Kubernetes security features.
Use it when you need additional workload isolation in your cluster.
Prerequisites
Download the following templates and store them in a dedicated folder, or copy and paste them from below:
- `gvisor-runtimeclass.yaml`
- `sample-gvisor-pod.yaml`
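If you copy and paste, the templates look roughly like the following minimal sketches (reconstructions rather than the exact files shipped with SCK; in particular, the handler name `runsc` is the gVisor default for the containerd shim and is assumed here):

```yaml
# gvisor-runtimeclass.yaml
# Registers a RuntimeClass named "gvisor" that maps to the gVisor runtime handler.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # handler name registered by the gVisor containerd shim (assumed default)
```

```yaml
# sample-gvisor-pod.yaml
# A minimal Pod that runs nginx inside the gVisor sandbox.
apiVersion: v1
kind: Pod
metadata:
  name: sample-gvisor-pod
spec:
  runtimeClassName: gvisor   # run this Pod with the gVisor runtime
  containers:
    - name: sample
      image: nginx
```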
Create a Cluster with gVisor Enabled
You can enable gVisor when creating a new cluster. There are two steps to enable gVisor on cluster worker nodes:

- Select the `osp-ubuntu-with-gvisor` Operating System Profile (OSP) when configuring a MachineDeployment. This ensures that worker nodes are provisioned with gVisor support.
- Create the RuntimeClass for gVisor. You can do this by deploying the gVisor RuntimeClass application (see Applications) or manually by applying the `gvisor-runtimeclass.yaml` template provided in this guide.
If you have not installed the gVisor RuntimeClass application in your cluster, apply the `gvisor-runtimeclass.yaml` template:
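For example, from the folder where you stored the templates:

```shell
kubectl apply -f gvisor-runtimeclass.yaml
```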
Confirm the RuntimeClass resource exists and points to the gVisor handler:
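For example:

```shell
kubectl get runtimeclass gvisor
```

The HANDLER column should show the gVisor handler (typically `runsc`).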
Run Pods with gVisor
After enabling gVisor at the worker node level, you can use the `gvisor` RuntimeClass in your Pod manifest. Apply `sample-gvisor-pod.yaml` to run an `nginx` container inside the gVisor sandbox.
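For example:

```shell
kubectl apply -f sample-gvisor-pod.yaml
```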
Verify that the Pod is running:
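For example:

```shell
kubectl get pod sample-gvisor-pod
```

The STATUS column should show `Running`.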
Confirm that the Pod requested the `gvisor` RuntimeClass:
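For example, using a JSONPath query against the Pod spec:

```shell
kubectl get pod sample-gvisor-pod -o jsonpath='{.spec.runtimeClassName}'
```

This should print `gvisor`.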
You can further verify that gVisor is enabled at runtime by running `dmesg` inside an ephemeral debug container:
kubectl debug sample-gvisor-pod \
--image=busybox \
--target=sample \
--profile=baseline \
--stdin \
-- /bin/dmesg
Example output
Targeting container "sample". If you don't see processes from this container it may be because the container runtime doesn't support this feature.
Defaulting debug container name to debugger-l42rc.
[ 0.000000] Starting gVisor...
[ 0.433831] Creating bureaucratic processes...
[ 0.494110] Searching for needles in stacks...
[ 0.732697] Letting the watchdogs out...
[ 1.126578] Checking naughty and nice process list...
[ 1.431425] Digging up root...
[ 1.581335] Reticulating splines...
[ 1.789908] Synthesizing system calls...
[ 2.062756] Gathering forks...
[ 2.384966] Rewriting operating system in Javascript...
[ 2.606019] Checking naughty and nice process list...
[ 2.637958] Setting up VFS...
[ 2.744969] Setting up FUSE...
[ 3.177508] Ready!
This boot log is produced by gVisor's user-space kernel. It confirms your container is running under gVisor.
Running `dmesg` inside a standard container typically fails with:
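For example, with busybox's `dmesg` the error is usually similar to the following (the exact message depends on the `dmesg` implementation in the container image):

```
dmesg: klogctl: Operation not permitted
```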
Note

You can also define `runtimeClassName: gvisor` in the Pod specification of higher-level controllers such as Deployments, StatefulSets, or DaemonSets. The syntax is the same as in the `sample-gvisor-pod.yaml` example above: just place it under the `spec.template.spec` section of the controller manifest.
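For instance, a Deployment whose Pods run under gVisor might look like this (a minimal sketch; the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-gvisor            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-gvisor
  template:
    metadata:
      labels:
        app: nginx-gvisor
    spec:
      runtimeClassName: gvisor  # same field as in the Pod example, under spec.template.spec
      containers:
        - name: nginx
          image: nginx
```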
Enable gVisor on a Running Cluster
You can also enable gVisor on a cluster that is already running. To do so, create a new MachineDeployment within that cluster, set the OSP to `osp-ubuntu-with-gvisor`, and deploy the gVisor RuntimeClass application or apply the `gvisor-runtimeclass.yaml` template provided in this guide.

However, Pods running on existing MachineDeployments will continue to use the default runtime unless they are rescheduled onto gVisor-enabled worker nodes and explicitly set `runtimeClassName: gvisor`.
Important
gVisor-enabled worker nodes can run Pods with both runtimes. If you do not specify `runtimeClassName: gvisor` in the Pod manifest, the Pod will use the default runtime.
Conclusion
Congratulations! Your cluster now supports gVisor, and Pods using `runtimeClassName: gvisor` run inside an isolated user-space kernel.