Kubernetes: How To Write Admission Controllers in Golang

What are admission controllers?

Admission controllers in Kubernetes extend the Kubernetes API, allowing additional logic to run after authentication and authorization, but before schema validation and persistence to the data store. This allows extra control over processes such as deployments, and supports a stronger security posture. Examples include enforcing pod security contexts, acting on vulnerability scan results, extending RBAC controls, and more. Admission controllers currently come in two varieties: validating and mutating.

Validating Admission Controllers

A validating admission controller is a relatively simple component that extends the Kubernetes API with additional checks that must pass before an operation is persisted. The ideal design is small, handles credentials securely, and is flexible to operate. A flexible controller that supports both developer experience and programmatic configuration should accept either environment variables or a ConfigMap object as configuration. Ideally it should also support Kubernetes annotations, allowing per-namespace configuration, as well as multiple enforcement levels: an audit or permissive mode for evaluation or emergency bypass, and an enforcing mode.

All validating admission controllers require a properly configured TLS environment to function. The Kubernetes API is given a base64-encoded CA bundle, and the controller serves a matching, signed key pair to establish trust. A service is generally recommended to front the running pods, allowing simple internal DNS resolution and, since our controller is stateless, high availability if desired.

The controller itself is deployed as a container and managed by Kubernetes. It should live in a protected namespace (kube-system or a dedicated controller namespace) and be accessible only to cluster-admins or privileged service accounts for use in automation or an Operator. The business logic of the controller is entirely up to the author. Kubernetes expects a properly formatted response indicating true or false, and supports an event message. This message is collected by the Kubernetes event system and becomes viewable via kubectl get events. The response determines the eligibility of the request for admission, and can be applied to any verb against any Kubernetes object. There is no limit to the condition handling or application logic performed by the controller, and no limit to the languages or runtimes used to build it.

Mutating Admission Controllers

A mutating admission controller has the ability to dynamically rewrite request definitions at admission time via the JSON Patch spec (RFC 6902). Use cases include, but are not limited to, enforcing a custom scheduler, dynamically allocating a sidecar container at runtime, enforcing security policies, and configuring an automatic inbound proxy. As with validating admission controllers, the mutating admission controller requires a valid TLS trust internally and runs as a Kubernetes pod. A service is recommended for convenience, as is using `kube-system` and enforcing least privilege on access to the controller. As with the validating controller, the logic is determined by the author, and there are no limitations on what languages or frameworks are used in implementation.

Fundamentally, the difference is that a mutating admission controller makes in-flight changes, while a validating admission controller enforces the integrity of conditions. At the time of this writing, it is suggested that both are deployed in tandem, to avoid a situation where another mutating controller removes a desired change in flight: all mutating admission controllers run sequentially, the resulting object is checked against the schema, and then all validating controllers run to ensure object integrity.

Bringing It All Together

Now that we know what admission controllers are and why they may be desirable, let's go ahead and implement one. The following is a staged implementation of a validating admission controller suitable for integration with an external service. What that service is exactly is an exercise left to the reader, but it could, for example, bridge the gap between vulnerability scan results and which nodes in a large cluster are eligible for scheduling. In this case, I've created an extremely simple hosted web service which supports two endpoints, admit and deny, to demonstrate a use case.

The following sections describe the contents of a main.go file from top to bottom. If you’d like to follow along, append each section to the end of this file in sequence. First, some housekeeping and package imports.

package main

// Import required packages for this project
import (
    "context"
    "crypto/tls"
    "encoding/json"
    "flag"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "os"
    "os/signal"
    "syscall"

    // K8s libraries to create a structured response
    "k8s.io/api/admission/v1beta1"
    k8sapiv1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const port = "8080"

var cert, key string

This should be enough to support a basic formatted response (back to the K8s API that is). Next, we need to create a data structure to hold the response object we get from the ‘back-end’ - in this case, an incredibly simple server that just passes back Boolean values as JSON.

// Admission - base admission struct
type Admission struct {
    Admit bool `json:"bool"`
}

Now that that’s out of the way, it would make sense to create a helper function to be as DRY (don’t repeat yourself) as possible in our implementation. We know a response is going to be either true or false, so let’s create a function to take care of the rest of the JSON scaffolding.

// Helper function to construct a response. Accepts an
// admission decision and event message
func testAdmission(allow bool, msg string) v1beta1.AdmissionReview {
    testAdmit := v1beta1.AdmissionReview{
        Response: &v1beta1.AdmissionResponse{
            Allowed: allow,
            Result: &k8sapiv1.Status{
                Message: msg,
            },
        },
    }
    return testAdmit
}

Great, we now just have to dynamically pass a Boolean and event message. It would also make sense to create a helper function to avoid re-writing code to make the HTTP query twice, and instead just pass in the specific endpoint we want to talk to as arguments to a query constructor.

// Helper function to craft a query. Accepts a request
// host and endpoint string as input
func testRequest(req string, params string) (*http.Response, error) {

    // Issue an HTTPS GET against the back-end service
    resp, err := http.Get("https://" + req + "/" + params)
    if err != nil {
        log.Println(err)
        return nil, err
    }
    return resp, nil
}

Simple enough. The last common operation between an allow and deny endpoint is marshaling the JSON response back to the K8s API. We will use this function in both the case of allowing a deployment and blocking one, by passing a Boolean and message.

func jsonMarshal(allow bool, msg string) []byte {
    // Marshal JSON into objects
    validate, err := json.Marshal(testAdmission(allow, msg))
    if err != nil {
        fmt.Printf("Failed to encode response: %v", err)
    }
    return validate
}

Alright, we’re ready to tie these together in our primary logic. Let’s make something useful.

func main() {
    // Set up TLS cert/key file locations
    flag.StringVar(&cert, "tlsCertFile", "/etc/certs/cert.pem", "The certificate file.")
    flag.StringVar(&key, "tlsKeyFile", "/etc/certs/key.pem", "The key file")
    flag.Parse()
    
    // Load these into a certs object
    certs, err := tls.LoadX509KeyPair(cert, key)
    if err != nil {
        fmt.Printf("Failed to load key pair: %v\n", err)
    }

    // Define a server object
    server := &http.Server{
        Addr:      fmt.Sprintf(":%v", port),
        TLSConfig: &tls.Config{Certificates: []tls.Certificate{certs}},
    }

    // Define endpoints and start server
    mux := http.NewServeMux()
    mux.HandleFunc("/admission", getAdmission)
    server.Handler = mux

    // Start new go routine for web server
    go func() {
        if err := server.ListenAndServeTLS("", ""); err != nil {
            fmt.Printf("Failed to listen and serve webhook server: %v\n", err)
        }
    }()

    // Log status message to stdout
    fmt.Printf("Server listening on port %v\n", port)

    // Listen for shutdown
    signalChan := make(chan os.Signal, 1)
    signal.Notify(signalChan, syscall.SIGINT, syscall.SIGTERM)
    <-signalChan

    // Emit a shutdown log
    fmt.Println("Got shutdown signal, shutting down webhook server gracefully...")
    server.Shutdown(context.Background())
}

In order, what we’re doing here is:

  • Define where our certificate and key files will live. As mentioned, functional TLS is a requirement.

  • Log a message to standard out if something is amiss with loading these files

  • Define an object called ‘server’ and configure it to use the certificate and key

  • Create a handler that calls the getAdmission function on a GET request to /admission

  • Fire this up in a new goroutine, and

  • Set up some basic signal handlers to stop gracefully

Deploying To Kubernetes

Now that our controller is written, it’s time to operationalize. I will save some of the optimization details for future content, but the following section describes the minimum additional input to K8s to bring these pieces to life.

Certificate Generation

Remember those certs we require? Let’s make them with OpenSSL and bash.

#!/bin/bash

# Set up a few guardrails
set -o errexit
set -o nounset
set -o pipefail

# Usual process of CA generation
openssl genrsa -out certs/ca.key 4096
openssl req -new -x509 -key certs/ca.key -out certs/ca.crt -config certs/ca_config.txt
openssl genrsa -out certs/example-validation-key.pem 4096

# Simple generation of key pair. Don't use the default namespace in production.
openssl req -new -key certs/example-validation-key.pem -subj "/CN=admission.default.svc" -out example-validation-CSR.csr -config certs/example-validation-settings.txt
openssl x509 -req -in example-validation-CSR.csr -CA certs/ca.crt -CAkey certs/ca.key -CAcreateserial -out certs/example-validation-certificate.pem

# Make a valid K8s manifest for upload. This will ultimately extend the API to query our controller
export CA_BUNDLE=$(cat certs/ca.crt | base64 | tr -d '\n')
sed "s/CA_BUNDLE/${CA_BUNDLE}/g" manifest-generator.yaml > example-manifest.yaml

We’ll also need a configuration file to define options for the CA, as well as the CSR.

ca_config.txt

[ req ]
default_bits       = 4096
default_md         = sha512
default_keyfile    = ca.key
prompt             = no
encrypt_key        = yes

# base request
distinguished_name = req_distinguished_name

# extensions
req_extensions     = v3_req

# distinguished_name
[ req_distinguished_name ]
countryName            = "US"                           # C=
stateOrProvinceName    = "Massachusetts"                # ST=
localityName           = "Boston"                       # L=
postalCode             = "02111"                        # L/postalcode=
streetAddress          = "68 Harrison Ave"              # L/street=
organizationName       = "Consulting"                   # O=
organizationalUnitName = "Consulting"                   # OU=
commonName             = "nightshift.io"                # CN=
emailAddress           = "[email protected]" # CN/emailAddress=

# req_extensions
[ v3_req ]
# The subject alternative name extension allows various literal values to be 
# included in the configuration file
# http://www.openssl.org/docs/apps/x509v3_config.html
subjectAltName  = DNS:nightshift.io 

example-validation-settings.txt

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[ req_distinguished_name ]
[ v3_req ]
basicConstraints=CA:FALSE
subjectAltName=@alt_names
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

[ alt_names ]
DNS.1 = example-validator
DNS.2 = example-validator.default
DNS.3 = example-validator.default.svc
DNS.4 = example-validator.default.svc.cluster.local

Manifest Generator

A template is useful to dynamically create a YAML manifest with the required CA bundle. The purpose of this effort is to establish trust between the Kubernetes API and the PKI chain we’ve just created; this will not work without TLS. Our bash script above replaces the CA_BUNDLE placeholder with the base64-encoded CA generated on the fly.

manifest-generator.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: example-validator
  namespace: default
  labels:
    name: example-validator
spec:
  ports:
    - name: example-validator-webhook
      port: 443
      targetPort: 8080
  selector:
    name: example-validator
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: example-validator
  namespace: default
  labels:
    name: example-validator
spec:
  replicas: 1
  template:
    metadata:
      name: example-validator
      labels:
        name: example-validator
    spec:
      containers:
        - name: example-validator-webhook
          image: nightshift/example-validator:latest
          imagePullPolicy: Always
          resources:
            limits:
              memory: 500Mi
              cpu: 300m
            requests:
              memory: 500Mi
              cpu: 300m
          volumeMounts:
            - name: webhook-certs
              mountPath: /etc/certs
              readOnly: true
          securityContext:
            readOnlyRootFilesystem: true
      volumes:
        - name: webhook-certs
          secret:
            secretName: example-validator-secret
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validator
webhooks:
  - name: example-validator.nightshift.io
    clientConfig:
      service:
        name: example-validator
        namespace: default
        path: "/admission"
      caBundle: CA_BUNDLE
    rules:
      - operations: ["CREATE", "UPDATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    failurePolicy: Fail

The output of this operation should be a new, valid K8s manifest called `example-manifest.yaml`.

Adding The Manifest To Kubernetes

Add the Kubernetes components with `kubectl apply -f example-manifest.yaml`.

If all goes well, the result of `kubectl get all -A` after a few minutes should show the example-validator deployment running.

Does It Work?

Let’s perform a quick test to ensure what we expect to happen is indeed the outcome.

<p>Hello, World!</p>

Next Steps

This is a great start, but there is significant room for improvement, such as building the smallest possible container image for production, adding support for K8s annotations, and implementing even more functionality with mutating controllers and other extensions for deep integration.

Where is this useful? We’ve found this approach to be an invaluable addition in the product development life-cycle for those looking to create deep integrations with Kubernetes, with use cases ranging from the equivalent of cloud-native antivirus to solving security requirements at scale.

If you’ve read this far and would like to see more material, we invite you to join our mailing list, where new content is announced as published. We try to keep our content quality as high as our own delivery expectations.
