[kubernetes] Restart pods when configmap updates in Kubernetes?

How do I automatically restart Kubernetes pods, and the pods associated with Deployments, when their ConfigMap is changed/updated?


I know there's been talk about the ability to automatically restart pods when a ConfigMap changes, but to my knowledge this is not yet available in Kubernetes 1.2.

So what (I think) I'd like to do is a "rolling restart" of the deployment resource associated with the pods consuming the config map. Is it possible, and if so how, to force a rolling restart of a deployment in Kubernetes without changing anything in the actual template? Is this currently the best way to do it or is there a better option?

Tags: kubernetes, kubernetes-pod, configmap

The best way I've found to do it is to run Reloader.

It allows you to define ConfigMaps or Secrets to watch; when they get updated, a rolling update of your deployment is performed. Here's an example:

Say you have a deployment foo and a ConfigMap called foo-configmap, and you want to roll the pods of the deployment every time the ConfigMap changes. Run Reloader with:

kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml

Then specify this annotation in your deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: "foo-configmap"
  name: foo
...
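
Reloader also supports an opt-in "auto" mode, so you don't have to list ConfigMaps by name. The annotation below is the one documented in the Reloader README; verify it against the Reloader version you deploy:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    # documented by the Reloader project; double-check for your version
    reloader.stakater.com/auto: "true"
  name: foo
...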

The current best solution to this problem (referenced deep in https://github.com/kubernetes/kubernetes/issues/22368, linked in the sibling answer) is to use Deployments and to treat your ConfigMaps as immutable.

When you want to change your config, create a new ConfigMap with the changes you want to make, and point your deployment at the new ConfigMap. If the new config is broken, the Deployment will refuse to scale down your working ReplicaSet. If the new config works, then your old ReplicaSet will be scaled to 0 replicas and deleted, and new pods will be started with the new config.

Not quite as quick as just editing the ConfigMap in place, but much safer.
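
As a minimal sketch of that workflow (the names foo-config-v2 and config are illustrative assumptions, not from the original answer), first publish the new config under a versioned name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-config-v2   # versioned name, e.g. with a content-hash suffix
data:
  app.properties: |
    option = value

Then repoint the volume in the Deployment's pod template at the new name; because the pod template changed, the Deployment starts a rolling update:

spec:
  template:
    spec:
      volumes:
        - name: config            # assumed volume name
          configMap:
            name: foo-config-v2   # bumping this reference rolls the pods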


Another way is to stick the config into the command section of the Deployment:

...
command: [ "echo", "
  option = value\n
  other_option = value\n
" ]
...

Alternatively, to make it more ConfigMap-like, use an additional Deployment that just hosts that config in its command section, run kubectl create on it while adding a unique 'version' to its name (like a hash of the content), and modify all the deployments that use that config:

...
command: [ "/usr/sbin/kubectl-apply-config.sh", "
  option = value\n
  other_option = value\n
" ]
...

I'll probably post kubectl-apply-config.sh if it ends up working.

(don't do that; it's too ugly)


We had this problem where the Deployment was in a sub-chart and the values controlling it were in the parent chart's values file. This is what we used to trigger a restart:

spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ tpl (toYaml .Values) . | sha256sum }}

Obviously this will trigger a restart on any value change, but it works for our situation. What was originally in the child chart would only work if the config.yaml in the child chart itself changed:

    checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}

I also banged my head against this problem for some time and wanted to solve it in an elegant but quick way.

Here are my 20 cents:

  • The answer using labels as mentioned here won't work if you are updating labels, but it would work if you always add labels. More details here.

  • The answer mentioned here is, in my opinion, the most elegant way to do this quickly, but it has the problem of not handling deletes. I am adding on to that answer:

Solution

I am doing this in one of my Kubernetes Operators, where only a single task is performed in one reconciliation loop.

  • Compute the hash of the ConfigMap data. Say it comes out as v2.
  • Create ConfigMap cm-v2 with labels version: v2 and product: prime if it does not exist, and RETURN. If it exists, GO BELOW.
  • Find all the Deployments which have the label product: prime but do not have version: v2. If such Deployments are found, DELETE them and RETURN. ELSE GO BELOW.
  • Delete all ConfigMaps which have the label product: prime but do not have version: v2. ELSE GO BELOW.
  • Create Deployment deployment-v2 with labels product: prime and version: v2 and with ConfigMap cm-v2 attached, and RETURN. ELSE do nothing.

That's it! It looks long, but this could be the fastest implementation, and it is in keeping with treating infrastructure as cattle (immutability).

Also, the above solution works when your Kubernetes Deployment has the Recreate update strategy. The logic may require small tweaks for other scenarios.
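
For concreteness, here is roughly what the versioned pair from the steps above could look like. cm-v2, deployment-v2, and the product: prime label come from the example; the container, image, and data fields are assumed filler:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-v2
  labels:
    product: prime
    version: v2              # hash of the ConfigMap data
data:
  option: value
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-v2
  labels:
    product: prime
    version: v2
spec:
  strategy:
    type: Recreate           # the update strategy this answer assumes
  selector:
    matchLabels:
      product: prime
      version: v2
  template:
    metadata:
      labels:
        product: prime
        version: v2
    spec:
      containers:
        - name: app                    # assumed container
          image: example/app:latest    # assumed image
          envFrom:
            - configMapRef:
                name: cm-v2            # attached as env here; a volume works too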


You can update a metadata annotation in the pod template that is not relevant to your deployment. It will trigger a rolling update.

for example:

    spec:
      template:
        metadata:
          annotations:
            configmap-version: "1"
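
If you script this, a patch file keeps the bump to a single command. A sketch, assuming a deployment named foo and a newer kubectl with the --patch-file flag (both are assumptions, not from the original answer):

    # patch.yaml -- bump the annotation to force a rolling update
    # apply with: kubectl patch deployment foo --patch-file patch.yaml
    spec:
      template:
        metadata:
          annotations:
            configmap-version: "2"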

https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md#user-content-automatically-roll-deployments-when-configmaps-or-secrets-change

Oftentimes ConfigMaps or Secrets are injected as configuration files in containers. Depending on the application, a restart may be required should those be updated with a subsequent helm upgrade; but if the deployment spec itself didn't change, the application keeps running with the old configuration, resulting in an inconsistent deployment.

The sha256sum function can be used together with the include function to ensure a deployment's template section is updated if another spec changes:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
[...]

In my case, for some reason, $.Template.BasePath didn't work but $.Chart.Name does:

spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: admin-app
      annotations:
        checksum/config: {{ include (print $.Chart.Name "/templates/" $.Chart.Name "-configmap.yaml") . | sha256sum }}