How to Upgrade K8up

Upgrade K8up from 0.x to 1.x

The upgrade is generally done with the following steps:

  1. Prepare new Helm release

  2. Uninstall K8up 0.x

  3. Install K8up 1.x

  4. Verify your backups work

Do not remove the CRDs, as you might lose your resources!


Prerequisites

  1. kubectl or oc

  2. helm version 3 (or version 2 for the uninstallation, if you’re still using Tiller)

  3. yq version 4 (alternatively, any editor works)

You might need to adapt the commands to your needs. This guide does not provide a copy-paste upgrade script, but points you in the right direction. It also assumes that you know basic usage of Helm.
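If you want to confirm the tooling is in place before starting, version checks along these lines may help (the exact output format varies between tool versions):

```shell
# Verify the required tools are available
kubectl version --client   # or: oc version
helm version --short       # should report v3.x
yq --version               # should report version 4.x
```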

Prepare new Helm release

The Helm Chart v1.0 comes with a few new and changed properties. Please consult the README.

Most notably, the Chart is targeted to recent Kubernetes versions.

Use helm upgrade --reuse-values only when you know what you’re doing. Some parameters have changed and are backwards incompatible. Make sure you have the new CRDs installed beforehand.
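Instead of reusing values blindly, it can help to export the values of the current release and adapt them by hand to the new Chart's properties. A sketch, assuming the release is named k8up and ${ns} holds its namespace:

```shell
# Save the values of the currently installed 0.x release for reference
helm -n ${ns} get values k8up > k8up-0.x-values.yaml
```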

Uninstall 0.x

# Set the namespace in which K8up is installed
ns=<k8up-namespace>
# Shut down and uninstall K8up. This does not delete the CRDs.
helm -n ${ns} uninstall k8up
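To double-check that the uninstall left your resources intact, you can verify that the 0.x CRDs (API group backup.appuio.ch) are still registered:

```shell
# The CRDs and your backup resources must survive the uninstall
kubectl get crd | grep backup.appuio.ch
```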

Install 1.x

  1. Make sure to prepare any changed Helm values before installing the release.

  2. Prepare all CRDs for a 3-way merge with kubectl apply:

    for crd in archives.backup.appuio.ch backups.backup.appuio.ch checks.backup.appuio.ch prebackuppods.backup.appuio.ch prunes.backup.appuio.ch restores.backup.appuio.ch schedules.backup.appuio.ch; do
      # Get the CRD definition in YAML
      kubectl get crd "${crd}" -o yaml > "${crd}.yaml"
      # Remove the status and all metadata properties except `name`
      yq -i eval 'del(.status) | del(.metadata) | .metadata.name = "'${crd}'"' "${crd}.yaml"
      # Apply the CRD again (this shouldn't change anything, except adding the
      # annotation "kubectl.kubernetes.io/last-applied-configuration").
      # You will also see some warnings in the output mentioning that annotation.
      # This is expected and actually required for the 3-way merge.
      kubectl apply -f "${crd}.yaml"
    done
  3. Apply the new CRDs as documented in the Chart README.

  4. Install the Helm Chart version 1.x.

For Kubernetes < 1.15 (OpenShift 3.11), please add --set k8up.enableLeaderElection=false to the helm install command.
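Assuming the APPUiO chart repository and a release named k8up, the installation could look like the following sketch (point -f at the values file you prepared earlier):

```shell
# Add the chart repository and install K8up 1.x
helm repo add appuio https://charts.appuio.ch
helm repo update
helm -n ${ns} install k8up appuio/k8up -f values.yaml
```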

Verify your backups work

# See if the K8up pod came up
kubectl -n ${ns} get pods

# Check for errors in the logs (using the label set by the Helm Chart)
kubectl -n ${ns} logs -l app.kubernetes.io/name=k8up

# Trigger a new backup by creating a new Backup object
kubectl create -f <your-backup-file-spec>
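If you don’t have a Backup spec at hand, a minimal example could look like this (all names, the endpoint, the bucket and the secrets are placeholders to adapt; the v1alpha1 API version matches the 1.x CRDs):

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Backup
metadata:
  name: backup-test
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: https://s3.example.com
      bucket: my-backups
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
```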