# Object Specifications Reference

The K8up operator adds various CRDs to the cluster. Here we’ll explain them in more detail.

## Schedule

With the schedule CRD it’s possible to put all other CRDs on a schedule.

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  backend:
    name: backup-repo
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
  archive:
    schedule: '0 * * * *'
    restoreMethod:
      s3:
        endpoint: http://10.144.1.224:9000
        bucket: restoremini
        accessKeyIDSecretRef:
          name: backup-credentials
        secretAccessKeySecretRef:
          name: backup-credentials
  backup:
    schedule: '* * * * *'
    keepJobs: 4
    promURL: http://10.144.1.224:9000
  check:
    schedule: '*/5 * * * *'
    promURL: http://10.144.1.224:9000
  prune:
    schedule: '*/2 * * * *'
    retention:
      keepLast: 5
      keepDaily: 14
```

### Settings

* `archive`: see Archive for a further explanation
* `backend`: see Back-end for a further explanation
* `check`: see Check for a further explanation
* `prune`: see Prune for a further explanation

## Restore

It’s possible to define various restore jobs. Currently the following kinds of restores are supported:

* To a PVC
* To S3 as a `tar.gz` archive

Example for a restore to a PVC:

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Restore
metadata:
  name: restore-test
spec:
  restoreMethod:
    folder:
      claimName: restore
  backend:
    name: backup-repo
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
```

This will restore the latest snapshot from `10.144.1.224:9000` to the PVC with the name `restore`.

### Settings

* `backend`: see Back-end for a further explanation

* `restoreMethod`: either `s3` or `folder`. For `s3`, see Back-end; for `folder`, you only need to provide a valid claim name, as shown in the example above.

* `restoreFilter`: a path filter passed to the underlying Restic. Please consult the Restic docs for valid path filters.

* `snapshot`: ID of the snapshot that should get restored. If not provided, the latest snapshot is restored.

* `keepJobs`: number of jobs to keep after cleanup, i.e. how many job/pod objects are left after they have finished. Defaults to 6. Only applicable when used within a schedule.
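For the second restore kind, a restore to S3, the `restoreMethod` contains an `s3` section instead of `folder`. A minimal sketch, reusing the endpoint, buckets, and credentials from the examples above (the object name is hypothetical):

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Restore
metadata:
  name: restore-to-s3-test   # hypothetical name
spec:
  restoreMethod:
    s3:                      # target location for the tar.gz
      endpoint: http://10.144.1.224:9000
      bucket: restoremini
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
  backend:                   # source repository to restore from
    name: backup-repo
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
```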

## Archive

The archive CRD takes the latest snapshot of each namespace/project in the repository. You should therefore only run one archive schedule per repository, as otherwise there’s a chance that you’ll archive snapshots more than once.

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Archive
metadata:
  name: archive-test
spec:
  name: backup-repo
  restoreMethod:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: restoremini
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
  backend:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
```

Archive is just a wrapper around restore, intended for use within a schedule. It restores all namespaces from a given back-end to a given S3 location.

## Backup

This will trigger a single backup.

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Backup
metadata:
  name: baas-test
spec:
  keepJobs: 4
  backend:
    name: backup-repo
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
  promURL: http://10.144.1.224:9000
```

### Settings

* `backend`: see Back-end

* `keepJobs`: number of jobs to keep after cleanup, i.e. how many job/pod objects are left after they have finished. Defaults to 6. Only applicable when used within a schedule.

* `promURL`: URL of a Prometheus pushgateway; backup statistics are pushed there while the backups are running.

* `statsURL`: endpoint that receives a JSON webhook containing backup information. Can be used to gather a list of available backups.

## Check

This will trigger a single check run on the repository.

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Check
metadata:
  name: check-test
spec:
  backend:
    name: backup-repo
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
  promURL: http://10.144.1.224:9000
```

### Settings

* `statsURL`: endpoint that receives a JSON webhook containing check information.

* `backend`: see Back-end

* `keepJobs`: number of jobs to keep after cleanup, i.e. how many job/pod objects are left after they have finished. Defaults to 6. Only applicable when used within a schedule.

## Prune

This will trigger a single prune run, which deletes snapshots according to the defined retention rules. Prune needs exclusive access to the repository: no other jobs may run on the same repository while it is running. When run on a schedule, the operator ensures that the prune runs exclusively on the repository. If a prune is triggered manually, the wrestic locking kicks in and prevents it from damaging the repository; in that case the whole pod fails.

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: Prune
metadata:
  name: prune-test
spec:
  retention:
    keepLast: 5
    keepDaily: 14
  backend:
    name: backup-repo
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: baas
      accessKeyIDSecretRef:
        name: backup-credentials
      secretAccessKeySecretRef:
        name: backup-credentials
```

### Settings

* `retention`: see Retention

* `backend`: see Back-end

* `keepJobs`: number of jobs to keep after cleanup, i.e. how many job/pod objects are left after they have finished. Defaults to 6. Only applicable when used within a schedule.

### Retention

Retention is part of the prune object. It defines the retention policy for a given back-end. Most upstream Restic retention rules are supported, except for the ones working with labels. Please see the upstream Restic docs for more info.

```yaml
retention:
  keepLast: 5
  keepDaily: 14
```

List of available settings:

* `keepLast`
* `keepHourly`
* `keepDaily`
* `keepWeekly`
* `keepMonthly`
* `keepYearly`
* `keepTags`
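A sketch combining several of these settings; the concrete values are assumptions for illustration, and the semantics follow the corresponding Restic `forget` flags (see the upstream Restic docs):

```yaml
retention:
  keepLast: 5      # always keep the 5 most recent snapshots
  keepHourly: 24   # plus the last snapshot of each of the last 24 hours
  keepDaily: 14    # plus the last snapshot of each of the last 14 days
  keepWeekly: 8
  keepMonthly: 12
  keepYearly: 3
```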

## Back-end

Currently only S3 is supported as a back-end.

```yaml
backend:
  name: backup-repo
  s3:
    endpoint: http://10.144.1.224:9000
    bucket: baas
    accessKeyIDSecretRef:
      name: backup-credentials
    secretAccessKeySecretRef:
      name: backup-credentials
```

### Settings

* `repoPasswordSecretRef`: Kubernetes secret reference containing the Restic encryption key. Attention: if you lose this key, you won’t be able to access your backup data again, so keep a copy of it somewhere off the actual cluster.

* `s3`: see S3

### S3

This object is part of back-end.

Settings:

* `endpoint`: HTTP(S) endpoint of the S3 instance
* `bucket`: name of the bucket that should be used
* `accessKeyIDSecretRef`: Kubernetes secret reference containing the Access Key ID
* `secretAccessKeySecretRef`: Kubernetes secret reference containing the Secret Access Key
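The secret references in the examples point at a Kubernetes `Secret` in the same namespace. A minimal sketch of such a secret; the key names (`username`, `password`) and values here are assumptions for illustration — use whatever keys your secret references are configured to read:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: backup-credentials
type: Opaque
stringData:
  username: minio       # assumed key holding the Access Key ID
  password: miniosecret # assumed key holding the Secret Access Key
```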

## PreBackup

PreBackupPod objects live in the namespace that should be backed up. They’re completely optional. Their main goal is to provide a kind of pre-backup script, but they can be used for various other use cases as well; see PreBackup pods.

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: PreBackupPod
metadata:
  name: mysqldump
spec:
  backupCommand: mysqldump -u$USER -p$PW -h $DB_HOST --all-databases
  pod:
    spec:
      containers:
        - env:
            - name: USER
              value: dumper
            - name: PW
              value: topsecret
            - name: DB_HOST
```

### Settings

* `backupCommand`: command that gets executed within the pod. Attention: the command has to write its data to stdout so that wrestic can pick it up correctly.

* `fileExtension`: as this leverages the stdin backup capability of Restic, a virtual file is generated. By default that file is named after the PreBackupPod, but to make restores easier you can define a file extension that gets appended to the filename, for example `.sql` for a MySQL dump.

* `pod`: a default Kubernetes `podTemplateSpec`; see the Kubernetes docs.
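As another sketch of the same mechanism, a PostgreSQL dump using `fileExtension`; all names, images, and credentials here are hypothetical, and the container simply sleeps so the operator can exec the backup command in it:

```yaml
apiVersion: backup.appuio.ch/v1alpha1
kind: PreBackupPod
metadata:
  name: pgdump            # hypothetical name
spec:
  backupCommand: pg_dump -U $PGUSER -h $PGHOST mydb
  fileExtension: .sql     # appended to the virtual file name for easier restores
  pod:
    spec:
      containers:
        - name: pgdump
          image: postgres:12   # assumed image
          command: ['sleep', 'infinity']
          env:
            - name: PGUSER
              value: dumper
            - name: PGHOST
              value: postgres
```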