My personal services, like my feed reader and read-it-later service, have been running for a few months on Google Cloud Platform using Kubernetes. I started with self-signed certificates, which work fine on desktops, but I could not use them on my phone or tablet without configuring the Android keystore. Since Let’s Encrypt became publicly available I decided to switch to widely trusted certificates, hoping it would also simplify setting them up. My architecture is as follows: a single service of replicated webservers listens on the public network and redirects requests to internal services. HTTPS is used on the public network, but internally I fall back to HTTP, so individual services do not need to manage their own certificates.

------------          -----------------         -------------
|          |          |               |         | service 1 |
|          |          |   webserver   |         |-----------|
| browsers | -HTTPS-> |               | -HTTP-> | service 2 |
|          |          | (n instances) |         |-----------|
|          |          |               |         | service 3 |
------------          -----------------         -------------

The way this is declared is a single type: LoadBalancer service for the webservers, backed by multiple pods of a replication controller. These pods are plain Nginx containers configured through a secret containing the configuration that points them to the internal services representing the actual application servers. When a new host needs to be serviced, or an existing one modified, the configuration is changed, a new secret is generated, and a new replication controller replaces the previous one, with the only difference being the secret used.
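Each configuration change thus follows the same small cycle; a sketch of the version bump, with the kubectl steps shown as comments (file and secret names follow the scripts used later in this post):

```shell
# Current and next version suffix for the secret and controller names.
version=v1
next="v$(( ${version#v} + 1 ))"
echo "$next"   # v2
# With the bumped suffix written into the script and the manifest:
#   kubectl create -f web-configuration.yaml   # secret webconfiguration-v2
#   kubectl create -f web-controller.yaml      # controller web-v2
```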

Let’s Encrypt requires serving a file at a well-known location on the public website to verify ownership. The default workflow for Let’s Encrypt assumes direct access to the webserver so it can create this resource dynamically, but this is quite impractical in a Kubernetes environment. In my case the certificates needed to be stored in another secret linked to the webservers. My process is thus to start Let’s Encrypt’s manual flow, create the identity validation resource by hand using an Nginx rule, validate the identity, create the secret containing the certificate, create the Nginx HTTPS listener, and finally deploy the working server.

I start by executing letsencrypt-auto certonly --manual and entering the target host name. When asked to verify identity, I create a simple Nginx configuration responding only to the requested resource:

server {
  listen 80;
  server_name example.com;
  location = /.well-known/acme-challenge/aaaaaaaaaaaaaaaaaaaaaaaaaaa { 
    return 200 "bbbbbbbbbbbbbbbbbbbbbbbb"; 
  }
}
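Before letting letsencrypt-auto proceed with validation, the challenge URL can be checked by hand; a sketch using the placeholder token from the configuration above:

```shell
# The path Let's Encrypt will fetch; the token is the placeholder used above.
token=aaaaaaaaaaaaaaaaaaaaaaaaaaa
url="http://example.com/.well-known/acme-challenge/$token"
echo "$url"
# curl -s "$url"   # should print the expected response body before validating
```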

I use a script to generate a secret containing this configuration (using openssl for base64 encoding). Each time I change the configuration I need to increment the version in the script, run it, and create the secret with kubectl create -f web-configuration.yaml.

#! /bin/bash

cat > web-configuration.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: webconfiguration-v1
type: Opaque
data:
  example.conf: $(openssl enc -A -base64 -in conf/example.conf)
EOF
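The openssl invocation produces the single-line base64 that the Secret data fields expect; a quick round trip, with a throwaway file standing in for conf/example.conf, shows the encoding:

```shell
# Stand-in for conf/example.conf
printf 'server { listen 80; }\n' > /tmp/example.conf
# -A keeps the output on a single line, as required inside the yaml
encoded=$(openssl enc -A -base64 -in /tmp/example.conf)
echo "$encoded"
# Decoding restores the original configuration
echo "$encoded" | openssl enc -A -d -base64
```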

I can then start the controller replicating Nginx instances configured using the secret with kubectl create -f web-controller.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-v1
  labels:
    app: web
    version: v1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.10
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d
          readOnly: true
      volumes:
      - name: config
        secret:
          secretName: webconfiguration-v1

And the public service with kubectl create -f web-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  selector:
    app: web
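The external address assigned to this service can then be read back from kubectl; a sketch (the kubectl call needs a live cluster, so the extraction is shown on a sample line mimicking its output, with a made-up address):

```shell
# On the cluster:
#   kubectl describe service web | grep 'LoadBalancer Ingress'
# prints a line like the sample below; the last field is the public IP
# to create the DNS A record for.
sample='LoadBalancer Ingress:   104.155.0.1'
echo "$sample" | awk '{print $NF}'
```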

This creates a load balancer that I need my domain to point at. Once it has booted and replies at the URL Let’s Encrypt expects, I validate the identity. This gives me privkey.pem and fullchain.pem files that I can use to create my certificates secret, again with a script generating the resource, which I then deploy to Kubernetes with kubectl create -f web-certificates.yaml:

#! /bin/bash

cat > web-certificates.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: webcertificates-v1
type: Opaque
data:
  example.com.key: $(sudo openssl enc -A -base64 -in /etc/letsencrypt/live/example.com/privkey.pem)
  example.com.cert: $(sudo openssl enc -A -base64 -in /etc/letsencrypt/live/example.com/fullchain.pem)
EOF
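Before wiring the secret into Nginx it is worth checking that the key and certificate actually pair up; the usual modulus comparison, demonstrated here on a throwaway self-signed pair standing in for the Let’s Encrypt files:

```shell
# Throwaway pair standing in for the real files; for the actual check,
# point the -in arguments at privkey.pem and fullchain.pem instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=example.com -days 1 \
  -keyout /tmp/test.key -out /tmp/test.crt 2>/dev/null
# nginx will only accept the pair if these two digests are identical.
key_mod=$(openssl rsa -noout -modulus -in /tmp/test.key | openssl md5)
crt_mod=$(openssl x509 -noout -modulus -in /tmp/test.crt | openssl md5)
[ "$key_mod" = "$crt_mod" ] && echo 'key and certificate match'
```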

I can now add an HTTPS server to my configuration. I added the headers recommended by securityheaders.io to enable strict transport security, prevent the site from being displayed in iframes, and set a restrictive content security policy. I also enabled HTTP/2. All content is then proxied to the application server service over HTTP using proxy_pass.

server {
  listen 80;
  server_name example.com;
  location = /.well-known/acme-challenge/aaaaaaaaaaaaaaaaaaaaaaaaaaa { 
    return 200 "bbbbbbbbbbbbbbbbbbbbbbbb"; 
  }
}

server {
  listen 443 ssl http2;
  server_name example.com;
  ssl_certificate /etc/certificates/example.com.cert;
  ssl_certificate_key /etc/certificates/example.com.key;

  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
  add_header X-Frame-Options "SAMEORIGIN" always;
  add_header X-Xss-Protection "1; mode=block" always;
  add_header X-Content-Type-Options "nosniff" always;

  location / {
    proxy_pass http://backendservice:80;
  }
}
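Once deployed, the headers can be verified from outside; the live check needs the real host, so the match is shown against a sample response header line:

```shell
# Against the live site:
#   curl -sI https://example.com/ | grep -i '^strict-transport-security'
# The same match applied to a sample response header line:
sample='Strict-Transport-Security: max-age=31536000; includeSubdomains'
echo "$sample" | grep -ic '^strict-transport-security'
```

Running the deployed host through securityheaders.io again confirms the same headers end to end.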

I can then regenerate the configuration secret (incrementing the -vX in its name) and create it in Kubernetes, then update the controller to mount the additional secret (also bumping its version):

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-v2
  labels:
    app: web
    version: v2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web
        version: v2
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.10
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
        - containerPort: 443
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d
          readOnly: true
        - name: certificates
          mountPath: /etc/certificates
          readOnly: true
      volumes:
      - name: certificates
        secret:
          secretName: webcertificates-v1
      - name: config
        secret:
          secretName: webconfiguration-v1
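With the manifest above saved as web-controller.yaml, the old pods can be swapped for the new ones without downtime using kubectl’s rolling update; a sketch (the command is echoed rather than run, since it needs a live cluster):

```shell
# The controller being retired and the manifest describing its replacement.
old_controller=web-v1
manifest=web-controller.yaml
# kubectl replaces the pods one at a time; the service keeps routing
# through the app=web selector throughout, so the site stays up.
echo "kubectl rolling-update $old_controller -f $manifest"
```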

To add a new site I repeat the process: add a new configuration to validate its identity, deploy it, validate the identity, add the new certificates to the secret, add the new server configuration proxying to its backend, and deploy the updated controller.

The next step is to automate renewal. I am not planning on putting certificate renewal inside the web containers, because each new instance would use a fresh certificate, which would prevent setting up public key pinning in the future and could interfere with identity validation when multiple container instances run at the same time.