Short answer: for rolling updates!

I was surprised to see that all the official Kubernetes examples give replication controllers a version in their name (foo-vX) and two separate labels carrying the name and the version (app: foo and version: vX). I found no official explanation of this naming convention, but discovered the reason after having to update an application.

One of the strengths of Kubernetes is the ability to deploy an update smoothly with no downtime, creating pods (transient units) of the new application version one at a time while deleting pods of the previous version at the same pace. Semantically, this operation replaces one replication controller with another; there is no notion of version in Kubernetes itself. For Kubernetes to be able to perform the update, the new replication controller must therefore:

  • have a different name (Kubernetes must know which to kill and which to deploy)
  • have at least one label which differs (Kubernetes must be able to identify which replication controller a pod belongs to)

In order for services (persistent identities) to transition automatically to the new instances, their selector must match the pods of both replication controllers. Services themselves are not subject to rolling updates, so to my knowledge they do not need versioning.

The recommended pattern is the simplest one that achieves both goals:

  • the version in the replication controller name makes the name differ on each update
  • the name label (app: foo) stays identical across versions, giving the service a stable way to select pods of every version
  • the version label (version: vX) makes the label sets differ on each update
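The label-matching rule behind all of this is plain subset matching: a selector matches an object when every key/value pair of the selector appears among the object's labels. A minimal sketch of that rule (plain Python, not the Kubernetes API) shows why the service needs only the stable app label while each replication controller needs the version label too:

```python
def matches(selector, labels):
    # A selector matches when every selector key/value appears in the labels.
    return all(labels.get(k) == v for k, v in selector.items())

old_pod = {"app": "foo", "version": "v8"}
new_pod = {"app": "foo", "version": "v9"}

service_selector = {"app": "foo"}                    # stable: spans both versions
old_rc_selector = {"app": "foo", "version": "v8"}    # only its own pods
new_rc_selector = {"app": "foo", "version": "v9"}    # only its own pods

# The service sees pods of both versions during the transition...
assert matches(service_selector, old_pod) and matches(service_selector, new_pod)
# ...while each replication controller only claims its own.
assert matches(old_rc_selector, old_pod) and not matches(old_rc_selector, new_pod)
assert matches(new_rc_selector, new_pod) and not matches(new_rc_selector, old_pod)
```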

The service is thus declared as:

apiVersion: v1
kind: Service
metadata:
  name: foo
  labels:
    app: foo
spec:
  ports:
  - port: 80
  selector:
    app: foo

Before the update, the replication controller looks like:

apiVersion: v1
kind: ReplicationController
metadata:
  name: foo-v8
  labels:
    app: foo
    version: v8
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: foo
        version: v8
    spec:
      # pod spec

After the update, it becomes:

apiVersion: v1
kind: ReplicationController
metadata:
  name: foo-v9
  labels:
    app: foo
    version: v9
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: foo
        version: v9
    spec:
      # updated pod spec

The transition is started with kubectl rolling-update foo-v8 -f foo-controller.yaml, where foo-controller.yaml contains the new foo-v9 definition.
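Put together, a typical session might look like the following (assuming the old foo-v8 controller is running and foo-controller.yaml holds the foo-v9 manifest; this requires a live cluster):

```shell
# Replace foo-v8 with the controller defined in foo-controller.yaml (foo-v9),
# migrating pods one at a time.
kubectl rolling-update foo-v8 -f foo-controller.yaml

# The stable app label selects pods of both versions during the transition:
kubectl get pods -l app=foo --show-labels

# The version label distinguishes the new pods from the old ones:
kubectl get pods -l app=foo,version=v9
```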