How to Enable Kubernetes Auditing with Kubeadm

Welcome back! In this post, I want to describe how you can enable auditing in a Kubernetes cluster deployed with kubeadm.

Auditing is really important if you're actively using a Kubernetes cluster and you want to know what's really happening behind the curtains. With auditing you can answer the following questions:

- what happened?
- when did it happen?
- who initiated it?
- on what did it happen?
- from where was it initiated?
In Kubernetes, auditing is performed by kube-apiserver. Each request, at each stage of its execution, generates an event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what gets recorded, and the backend persists the records. You can save your audit logs to files on the OS, or send them to an external service via a webhook.

Each request can be recorded with an associated “stage”. The known stages are:

- RequestReceived - generated as soon as the audit handler receives the request;
- ResponseStarted - once the response headers are sent, but before the response body is sent (generated only for long-running requests, such as watch);
- ResponseComplete - the response body has been completed;
- Panic - generated when a panic occurred.
Also, before we can move further, you need to know what exactly an Audit Policy is.

Audit Policy

Audit policy tells kube-apiserver what events should be written in the audit log.

The known audit levels are:

- None - don't log events that match this rule;
- Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.), but not the request or response body;
- Request - log event metadata and the request body, but not the response body;
- RequestResponse - log event metadata, request and response bodies.
Here is an example of a minimal audit policy:

$ cat audit-policy-minimal.yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata

Basically, it logs all requests at the Metadata level.

The Kubernetes community recommends using GCE's audit profile as a reference for admins constructing their own audit profiles.
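To illustrate what a more selective policy can look like, here is a small sketch (a simplified, hypothetical example in the spirit of the GCE profile, not a copy of it): it skips the noisy RequestReceived stage, drops read-only requests to endpoints, and logs everything else at the Metadata level.

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
# Don't generate audit events for the RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# Don't log read-only requests to endpoints - they are very chatty.
- level: None
  verbs: ["get", "list", "watch"]
  resources:
  - group: ""
    resources: ["endpoints"]
# Log everything else at the Metadata level.
- level: Metadata
```

Rules are evaluated in order and the first match wins, so the catch-all Metadata rule goes last.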

Let’s take a look at how we can enable auditing.

Kubeadm Config

IMPORTANT: the required version of Kubernetes is >= v1.10.x, because previous versions of kubeadm have no Auditing feature gate.

I assume you've been playing around with kubeadm for a while now and want to use it for something more serious, so, of course, you want to enable auditing.

Kubeadm has a lot of options, and if you don’t like to mess with them all the time, you probably keep a kubeadm config file somewhere. We need to add a couple of lines to this file.

The first thing you need to do is enable Auditing in featureGates:

“featureGates is the mechanism of k8s to enable/disable new features”

featureGates:
  Auditing: true

Then you’ll need to add auditPolicy-specific options:

auditPolicy:
  logDir: "/var/log/kubernetes/"
  logMaxAge: 2
  path: "/etc/kubernetes/audit-policy.yaml"

, where:

- logDir is the directory to store audit logs in;
- logMaxAge is the maximum number of days to retain old audit log files;
- path is the path to your audit policy file.

In the end, your config file should look more or less like this:

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  bindPort: 6443
auditPolicy:
  logDir: "/var/log/kubernetes/"
  logMaxAge: 2
  path: "/etc/kubernetes/audit-policy.yaml"
apiServerCertSANs:
- localhost
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: ""
etcd:
  dataDir: /var/lib/etcd
kubernetesVersion: v1.10.4
networking:
  dnsDomain: cluster.local
  podSubnet: ""
token: foobar
tokenTTL: 0h0m0s
unifiedControlPlaneImage: ""
featureGates:
  Auditing: true
apiServerExtraArgs:
  apiserver-count: "1"

Now let’s initialize the Kubernetes master with this config file and see if we start collecting audit logs.

Kubeadm Init

$ kubeadm init --config <name_of_your_config>.yaml

Don’t forget to apply your favorite CNI plugin!

Finally, let’s take a look at the logs:



As you can see from this log, the user recorded in the "username" field requested /api/v1/namespaces/default/services/kubernetes at 2018-07-06T06:05:49.635153Z - pretty awesome, right? And you can use this log to understand what users were doing, for example, to tune up your RBAC policies and add or restrict access for the users to specific namespaces.
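To give you an idea of how such logs can be processed, here is a minimal Python sketch that extracts "who did what and when" from one audit event. The event below is hand-written for illustration - its field names follow the audit.k8s.io event format, but the username and other values are made up, not real output from my cluster.

```python
import json

# A hand-crafted, Metadata-level audit event. The field names follow the
# audit.k8s.io event format; the values are made up for this example.
raw_event = '''
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1beta1",
  "level": "Metadata",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/default/services/kubernetes",
  "verb": "get",
  "user": {"username": "kubernetes-admin", "groups": ["system:masters"]},
  "requestReceivedTimestamp": "2018-07-06T06:05:49.635153Z"
}
'''

def summarize(event: dict) -> str:
    """Return a one-line 'who did what and when' summary of an audit event."""
    return "{user} {verb} {uri} at {ts}".format(
        user=event["user"]["username"],
        verb=event["verb"],
        uri=event["requestURI"],
        ts=event["requestReceivedTimestamp"],
    )

event = json.loads(raw_event)
print(summarize(event))
```

Since each line in the audit log file is one JSON event, the same function can be applied line by line to a real log to get a quick activity overview.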

That’s all folks! In the next blog post, I will show you how to enable additional backends (fluentd, logstash) for collecting audit logs.

Take care!