Getting Started Tutorial

This tutorial provides a quick introduction to K8up: how it works and how to use it.


Requirements

This section describes the minimum requirements for testing K8up on Minikube.

Before starting, please make sure that Minikube is installed and running, and that helm is installed and properly initialized in your Minikube cluster.

Install K8up

The most convenient way to install K8up is with helm. First add the appuio repository:

helm repo add appuio
helm repo update

Then install K8up itself:

kubectl create namespace k8up-operator
helm install k8up appuio/k8up --namespace k8up-operator
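As a quick sanity check, you can confirm that the operator started successfully (the `k8up-operator` namespace matches the install commands above):

```shell
# The operator pod should reach the Running state within a minute or so:
kubectl get pods --namespace k8up-operator
```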

Install MinIO

MinIO is a distributed object storage service for high performance, high scale data infrastructures. It’s a drop-in replacement for AWS S3 in your own environment. We’re going to install it to simulate a remote S3 bucket where our backups will be stored:

kubectl create -f
kubectl create -f
kubectl create -f

After a few minutes you should be able to see your MinIO installation in the browser using minikube service minio-service. The default MinIO installation uses the access key minio and the secret key minio123.

Create a PersistentVolumeClaim Resource

This will be the resource backed up by K8up:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: apvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Save the YAML above in a file named pvc.yml and use the kubectl apply -f pvc.yml command to deploy this configuration to your cluster.

Create Backup Credentials

Create the secret credentials for the backup repository:

apiVersion: v1
kind: Secret
metadata:
  name: backup-credentials
  namespace: default
type: Opaque
stringData:
  username: minio
  password: minio123


apiVersion: v1
kind: Secret
metadata:
  name: backup-repo
  namespace: default
type: Opaque
stringData:
  password: p@ssw0rd

Save the YAML above in a file named secrets.yml and use the kubectl apply -f secrets.yml command to deploy this configuration to your cluster.

The default MinIO installation uses the access key minio and secret key minio123. They’re in plain text inside the backup-credentials Secret definition and will be encoded as Base64 when the Secret is created on your cluster.
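You can reproduce that Base64 encoding yourself; these are the values you would see in the stored Secret:

```shell
# Base64-encode the MinIO credentials the same way Kubernetes does:
echo -n 'minio'    | base64   # -> bWluaW8=
echo -n 'minio123' | base64   # -> bWluaW8xMjM=

# And decode them back:
echo 'bWluaW8xMjM=' | base64 -d   # -> minio123
```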

Please store the password of the backup-repo Secret somewhere safe. This is the encryption password for Restic. Without it you will lose access to the backup permanently.

Set Up a Backup Schedule

The custom Schedule object below defines the frequency, destination and secrets required to backup items in your namespace:

apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  backend:
    s3:
      endpoint: http://minio-service:9000
      bucket: backups
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
    repoPasswordSecretRef:
      name: backup-repo
      key: password
  backup:
    schedule: '*/5 * * * *'
    keepJobs: 4
    # optional
    #promURL: https://prometheus-io-instance:8443
  check:
    schedule: '0 1 * * 1'
    # optional
    #promURL: https://prometheus-io-instance:8443
  prune:
    schedule: '0 1 * * 0'
    retention:
      keepLast: 5
      keepDaily: 14

Save the YAML above in a file named backup.yml and use the kubectl apply -f backup.yml command to deploy this configuration to your cluster.

The file above will instruct the operator to do backups every 5 minutes, plus weekly prune and check jobs for repository maintenance. A Schedule can additionally define an archive section to copy the latest snapshots to a separate archive bucket.
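As a quick reference, the cron expressions used above read as follows (standard five-field cron: minute, hour, day of month, month, day of week):

```shell
# '*/5 * * * *'  -> every 5 minutes       (backup)
# '0 1 * * 1'    -> 01:00 every Monday    (check)
# '0 1 * * 0'    -> 01:00 every Sunday    (prune)
```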

After running this demo for 5 minutes, you should be able to run the command minikube service minio-service and see the backups in a backups bucket in the MinIO web console. Remember that the default access and secret keys are minio and minio123 respectively.

minio browser
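K8up stores its backups as Restic snapshots, so if you have the restic CLI installed locally you can also inspect the repository directly. This is a sketch: it assumes `minikube service minio-service --url` returns the MinIO endpoint, and uses the credentials from the Secrets above:

```shell
# Point restic at the same S3 bucket K8up writes to:
export AWS_ACCESS_KEY_ID=minio
export AWS_SECRET_ACCESS_KEY=minio123
export RESTIC_PASSWORD=p@ssw0rd
export RESTIC_REPOSITORY="s3:$(minikube service minio-service --url)/backups"

# List the snapshots K8up has created so far:
restic snapshots
```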

Feel free to adjust the frequencies to your liking. To help with the crontab syntax, we recommend checking an online crontab reference.

  • You can always check the state and configuration of your backup by using kubectl describe schedule.

  • By default all PVCs are backed up. By adding the annotation to a PVC object, it will be excluded from backup.


The following video shows the sequence of steps explained in this tutorial.

Checking the Status of Backup Jobs

Every time a job starts, it creates a separate pod in your namespace. You can see them using kubectl get pods. You can then use the usual kubectl logs <POD NAME> command to troubleshoot a failed backup job.

Additionally, the operator exposes a :8080/metrics endpoint for Prometheus scraping. This gives you additional metrics that can be used to find failed jobs. See the Prometheus examples in our GitHub repository.
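To look at those metrics without a Prometheus instance, you can port-forward to the operator and scrape the endpoint by hand. The deployment name `k8up` here is an assumption based on the helm release name used above; check `kubectl -n k8up-operator get deploy` if it differs:

```shell
# Forward the metrics port from the operator to your machine
# (assumes the deployment is named k8up):
kubectl -n k8up-operator port-forward deploy/k8up 8080:8080 &

# Fetch the raw Prometheus metrics:
curl -s http://localhost:8080/metrics | head
```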

Application-Aware Backups

It’s possible to define annotations on pods with backup commands. These backup commands should create an application-aware backup and stream it to stdout.

Define the backup command annotation on the Pod template:

  template:
    metadata:
      labels:
        app: mariadb
      annotations:
        k8up.syn.tools/backupcommand: mysqldump -uroot -psecure --all-databases

With this annotation the operator will trigger that command inside the container and capture its stdout as the backup.

Tested with:

  • MariaDB

  • MongoDB

  • tar to stdout

But it should work with any command that has the ability to output the backup to stdout.
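The "tar to stdout" case can be simulated locally to get a feel for how such a command behaves; the paths here are illustrative, not part of the tutorial:

```shell
# Create some sample data to back up:
mkdir -p /tmp/k8up-demo
echo 'hello' > /tmp/k8up-demo/file.txt

# Stream a gzipped tarball to stdout, exactly as a backup command would,
# and list its contents on the receiving end:
tar -cz -C /tmp/k8up-demo . | tar -tz
```

Any command with this shape — data in, archive on stdout — can serve as a backup command.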

What’s Next?

For advanced configuration of the operator please see Advanced config.