Logical backups

What is a logical backup?

A logical backup contains the logical structure of the database, such as tables, indexes, and data, rather than the physical storage format. It is created using mariadb-dump, which generates SQL statements that can be used to recreate the database schema and populate it with data.

Logical backups serve not just as a source of restoration, but also enable data mobility between MariaDB instances. These backups are called "logical" because they are independent of the MariaDB topology: they only contain DDL statements and INSERT statements to populate data.

Although logical backups are a great fit for data mobility and migrations, they are not as efficient as physical backups for large databases. For this reason, physical backups are the recommended method for backing up MariaDB databases, especially in production environments.

Storage types

Currently, the following storage types are supported:

  • S3 compatible storage: Store backups in an S3 compatible storage, such as AWS S3 or Minio.

  • PVCs: Use the StorageClasses available in your Kubernetes cluster to provision a PVC dedicated to storing the backup files.

  • Kubernetes volumes: Use any of the volume types supported natively by Kubernetes.

Our recommendation is to store the backups externally in an S3 compatible storage.

Backup CR

You can take a one-time backup of your MariaDB instance by declaring the following resource:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 100Mi
          accessModes:
            - ReadWriteOnce

This will use the default StorageClass to provision a PVC that holds the backup files, but ideally you should use an S3 compatible storage:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        s3:
          bucket: backups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          region: us-east-1
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: tls.crt

By providing the authentication details and the TLS configuration via references to Secret keys, this example will store the backups in a local Minio instance.

Alternatively, you can use dynamic credentials from an EKS Service Account using EKS Pod Identity or IRSA:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: mariadb-backup
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::<<account_id>>:role/my-role-irsa

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      serviceAccountName: mariadb-backup
      storage:
        s3:
          bucket: backups
          prefix: mariadb
          endpoint: s3.us-east-1.amazonaws.com
          region: us-east-1
          tls:
            enabled: true

By leaving out the accessKeyIdSecretKeyRef and secretAccessKeySecretKeyRef credentials and pointing to the correct serviceAccountName, the backup Job will use the dynamic credentials from EKS.

Scheduling

To minimize the Recovery Point Objective (RPO) and mitigate the risk of data loss, it is recommended to perform backups regularly. You can do so by providing a spec.schedule in your Backup resource:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      schedule:
        cron: "*/1 * * * *"
        suspend: false

This resource gets reconciled into a CronJob that periodically takes the backups.

It is important to note that regularly scheduled Backups complement the target recovery time feature detailed below very well.

Retention policy

Given that backups can consume a substantial amount of storage, it is crucial to define your retention policy by providing the spec.maxRetention field in your Backup resource:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      maxRetention: 720h # 30 days

Compression

You can compress backups by providing the compression algorithm you want to use in the spec.compression field:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      compression: gzip

Currently, the following compression algorithms are supported:

  • bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.

  • gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.

  • none: No compression.

compression is defaulted to none by the operator.

Server-Side Encryption with Customer-Provided Keys (SSE-C)

You can enable server-side encryption using your own encryption key (SSE-C) by providing a reference to a Secret containing a 32-byte (256-bit) key encoded in base64:

    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ssec-key
    stringData:
      # 32-byte key encoded in base64 (use: openssl rand -base64 32)
      customer-key: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        s3:
          bucket: backups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          region: us-east-1
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: tls.crt
          ssec:
            customerKeySecretKeyRef:
              name: ssec-key
              key: customer-key

Warning: When using SSE-C, you are responsible for managing and securely storing the encryption key. If you lose the key, you will not be able to decrypt your backups. Ensure you have proper key management procedures in place.

Note: When restoring from SSE-C encrypted backups, the same key must be provided in the Restore CR or bootstrapFrom configuration.

Restore CR

You can easily restore a Backup in your MariaDB instance by creating the following resource:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Restore
    metadata:
      name: restore
    spec:
      mariaDbRef:
        name: mariadb
      backupRef:
        name: backup

This will trigger a Job that mounts the same storage as the Backup and applies the dump to your MariaDB database.

Nevertheless, the Restore resource doesn't necessarily need to specify a spec.backupRef; you can point to another storage source that contains backup files, for example an S3 bucket:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Restore
    metadata:
      name: restore
    spec:
      mariaDbRef:
        name: mariadb
      s3:
        bucket: backups
        prefix: mariadb
        endpoint: minio.minio.svc.cluster.local:9000
        region: us-east-1
        accessKeyIdSecretKeyRef:
          name: minio
          key: access-key-id
        secretAccessKeySecretKeyRef:
          name: minio
          key: secret-access-key
        tls:
          enabled: true
          caSecretKeyRef:
            name: minio-ca
            key: tls.crt

Target recovery time

If you have multiple backups available, especially after configuring a scheduled Backup, the operator is able to infer which backup to restore based on the spec.targetRecoveryTime field:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Restore
    metadata:
      name: restore
    spec:
      mariaDbRef:
        name: mariadb
      backupRef:
        name: backup
      targetRecoveryTime: 2023-12-19T09:00:00Z

The operator will look for the closest backup available and use it to restore your MariaDB instance. Only backups strictly before or at targetRecoveryTime will be matched.

By default, spec.targetRecoveryTime is set to the current time, which means that the latest available backup will be used.

Bootstrap new MariaDB instances

To minimize your Recovery Time Objective (RTO) and swiftly spin up new clusters from existing Backups, you can provide a Restore source directly in the MariaDB object via the spec.bootstrapFrom field:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-from-backup
    spec:
      storage:
        size: 1Gi
      bootstrapFrom:
        backupRef:
          name: backup
        targetRecoveryTime: 2023-12-19T09:00:00Z

As in the Restore resource, you don't strictly need to specify a reference to a Backup; you can provide other storage types that contain backup files:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-from-backup
    spec:
      storage:
        size: 1Gi
      bootstrapFrom:
        s3:
          bucket: backups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: tls.crt
        targetRecoveryTime: 2023-12-19T09:00:00Z

Under the hood, the operator creates a Restore object just after the MariaDB resource becomes ready. The advantage of using spec.bootstrapFrom over a standalone Restore is that the MariaDB is bootstrap-aware, which allows the operator to hold primary switchover/failover operations until the restoration is finished.

Backup and restore specific databases

By default, all the logical databases are backed up when a Backup is created, but you may also select specific databases by providing the databases field:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      databases:
        - db1
        - db2
        - db3

When it comes to restoring, all the databases available in the backup will be restored, but you may also choose a single database to be restored via the database field available in the Restore resource:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Restore
    metadata:
      name: restore
    spec:
      mariaDbRef:
        name: mariadb
      backupRef:
        name: backup
      database: db1

There are a couple of points to consider here:

  • The referred database (db1 in the example) must previously exist for the Restore to succeed.

  • The mariadb CLI invoked by the operator under the hood only supports selecting a single database to restore via the --one-database option; restoration of multiple specific databases is not supported.

Extra options

Not all the flags supported by mariadb-dump and mariadb have a counterpart field in the Backup and Restore CRs respectively, but you may pass extra options by using the args field. For example, setting the --verbose flag can be helpful to track the progress of backup and restore operations:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      args:
        - --verbose

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Restore
    metadata:
      name: restore
    spec:
      mariaDbRef:
        name: mariadb
      backupRef:
        name: backup
      args:
        - --verbose

Refer to the mariadb-dump options and mariadb options in the reference section.

Staging area

Note: S3 is the only storage type that supports a staging area.

When using S3 storage for backups, a staging area is used for keeping the external backups while they are being processed. By default, this staging area is an emptyDir volume, which means that the backups are temporarily stored in the local storage of the node where the Backup/Restore Job is scheduled. In production environments, large backups may lead to issues if the node doesn't have sufficient space, potentially causing the backup/restore process to fail.

To overcome this limitation, you are able to define your own staging area by setting the stagingStorage field in both the Backup and Restore CRs:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      storage:
        s3:
          ...
      stagingStorage:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 10Gi
          accessModes:
            - ReadWriteOnce

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Restore
    metadata:
      name: restore
    spec:
      s3:
        ...
      stagingStorage:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 10Gi
          accessModes:
            - ReadWriteOnce

In the examples above, a PVC with the default StorageClass will be used as the staging area. Refer to the API reference for more configuration options.

Similarly, you may also use a custom staging area when bootstrapping from backup:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb
    spec:
      bootstrapFrom:
        s3:
          ...
        stagingStorage:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 10Gi
            accessModes:
              - ReadWriteOnce

Important considerations and limitations

Root credentials

When restoring a backup, the root credentials specified through the spec.rootPasswordSecretKeyRef field in the MariaDB resource must match the ones in the backup. These credentials are used by the liveness and readiness probes, and if they are invalid, the probes will fail, causing your MariaDB Pods to restart after the backup restoration.

Restore job

Restoring large backups can consume significant compute resources and may cause Restore Jobs to become stuck due to insufficient resources. To prevent this, you can define the compute resources allocated to the Job:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb
    spec:
      storage:
        size: 1Gi
      bootstrapFrom:
        restoreJob:
          args:
            - --verbose
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 1Gi

Galera backup limitations

mysql.global_priv

Galera only replicates tables that use the InnoDB engine; see the Galera docs.

That does not include mysql.global_priv, the table used to store users and grants, which uses the MyISAM engine. This means that a Galera instance with mysql.global_priv populated will not replicate this data to an empty Galera instance. However, DDL statements (CREATE USER, ALTER USER ...) will be replicated.

Taking this into account, consider a restore scenario where:

  • The Galera cluster has 3 nodes: galera-0, galera-1 and galera-2.

  • The backup is restored in galera-0.

  • The backup file includes a DROP TABLE statement for the mysql.global_priv table.

  • The backup has some INSERT statements for the mysql.global_priv table.

This is what will happen under the scenes while restoring the backup:

  • The DROP TABLE statement is a DDL, so it will be executed in galera-0, galera-1 and galera-2. This results in galera-1 and galera-2 not having the mysql.global_priv table.

  • The INSERT statements are not DDLs, so they will only be applied to galera-0.

After the backup is fully restored, the liveness and readiness probes will kick in. They will succeed in galera-0, but they will fail in galera-1 and galera-2, as they rely on the root credentials available in mysql.global_priv, resulting in galera-1 and galera-2 getting restarted.

To address this issue, when backing up MariaDB instances with Galera enabled, the mysql.global_priv table will be excluded from backups by using the --ignore-table option with mariadb-dump. This prevents the replication of the DROP TABLE statement for the mysql.global_priv table. You can opt out of this feature by setting spec.ignoreGlobalPriv=false in the Backup resource:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup
    spec:
      mariaDbRef:
        name: mariadb
      ignoreGlobalPriv: false

Also, to avoid situations where mysql.global_priv is unreplicated, all the entries in that table must be managed via DDLs. This is the recommended approach suggested in the Galera docs. There are a couple of ways to guarantee this:

  • Use the rootPasswordSecretKeyRef, username and passwordSecretKeyRef fields of the MariaDB CR to create the root and initial user respectively. These fields will be translated into DDLs by the image entrypoint.

  • Rely on the User and Grant CRs to create additional users and grants. Refer to the SQL resource documentation for further detail.

LOCK TABLES

Galera is not compatible with the LOCK TABLES statement; see the LOCK TABLES Limitations. For this reason, the operator automatically adds the --skip-add-locks option to the Backup to overcome this limitation.

Migrations using logical backups

Migrating an external MariaDB to a MariaDB running in Kubernetes

You can leverage logical backups to bring your external MariaDB data into a new MariaDB instance running in Kubernetes. Follow this runbook to do so:

  1. Take a logical backup of your external MariaDB using one of the commands below:

    mariadb-dump --user=${MARIADB_USER} --password=${MARIADB_PASSWORD} --host=${MARIADB_HOST} --single-transaction --events --routines --all-databases > backup.2024-08-26T12:24:34Z.sql

Warning: If you are using Galera or planning to migrate to a Galera instance, make sure you understand the Galera backup limitations and use the following command instead:

    mariadb-dump --user=${MARIADB_USER} --password=${MARIADB_PASSWORD} --host=${MARIADB_HOST} --single-transaction --events --routines --all-databases --skip-add-locks --ignore-table=mysql.global_priv > backup.2024-08-26T12:24:34Z.sql

  2. Ensure that your backup file is named in the following format: backup.2024-08-26T12:24:34Z.sql. If the file name does not follow this format, it will be ignored by the operator.

  3. Upload the backup file to one of the supported storage types. We recommend using S3.

  4. Create your MariaDB resource declaring that you want to bootstrap from the previous backup and providing a root password Secret that matches the backup:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      rootPasswordSecretKeyRef:
        name: mariadb
        key: root-password
      replicas: 3
      galera:
        enabled: true
      storage:
        size: 1Gi
      bootstrapFrom:
        s3:
          bucket: backups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: tls.crt
        targetRecoveryTime: 2024-08-26T12:24:34Z

  5. If you are using Galera in your new instance, migrate your previous users and grants to use the User and Grant CRs. Refer to the SQL resource documentation for further detail.

Migrating to a MariaDB with a different topology

Database mobility between MariaDB instances with different topologies is possible with logical backups. However, there are a couple of technical details that you need to be aware of in the following scenarios:

Migrating between standalone and replicated MariaDBs

This should be fully compatible; no issues have been detected.

Migrating from standalone/replicated to Galera MariaDBs

There are a couple of limitations regarding backups in Galera; please make sure you read the Galera backup limitations section before proceeding.

To overcome these limitations, the Backup in the standalone/replicated instance needs to be taken with spec.ignoreGlobalPriv=true. In the following example, we are backing up a standalone MariaDB (single instance):

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: Backup
    metadata:
      name: backup-standalone
    spec:
      mariaDbRef:
        name: mariadb-standalone
      ignoreGlobalPriv: true

Once the previous Backup is completed, we will be able to bootstrap a new Galera instance from it:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      replicas: 3
      galera:
        enabled: true
      storage:
        size: 1Gi
      bootstrapFrom:
        backupRef:
          name: backup-standalone

Reference

  • API reference

  • mariadb-dump options

  • mariadb options

Troubleshooting

Galera Pods restarting after bootstrapping from a backup

Please make sure you understand the Galera backup limitations.

After doing so, ensure that your backup does not contain a DROP TABLE mysql.global_priv; statement, as it will make your liveness and readiness probes fail after the backup restoration.

This page is: Copyright © 2025 MariaDB. All rights reserved.

    Backup and Restore

    Procedures for configuring automated and on-demand backups using MariaDB Enterprise Backup, including restoration steps to recover data.

    CSI Specific Configuration

blob-csi-driver (Azure Blob Storage)

This section outlines a recommended StorageClass configuration for the Azure Blob Storage CSI Driver that resolves common mounting and list operation issues encountered in Kubernetes environments.

The following StorageClass is recommended when working with Azure Blob Storage (ABS).

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: blob-fuse
    provisioner: blob.csi.azure.com
    parameters:
      protocol: fuse2
    reclaimPolicy: Retain
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    mountOptions:
      # Resolves the issue where non-root containers cannot access the mounted blob container.
      - -o allow_other
      # Ensures list operations (critical for backups/deletion) work immediately upon mount.
      - --cancel-list-on-mount-seconds=0

Next, when defining your PhysicalBackup resource, make sure to use the new StorageClass we created:

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      # ...
      storage:
        persistentVolumeClaim:
          # Specify your own class
          storageClassName: blob-fuse

Issue 1: Access for Non-Root Containers (-o allow_other)

The default configuration prevents non-root Kubernetes containers from accessing the mounted blob container, resulting in an inaccessible volume. By setting the mountOption -o allow_other, non-root containers are granted access to the volume, resolving this issue.

See this issue for more information.

Issue 2: Immediate List Operations and Backup Deletion (--cancel-list-on-mount-seconds=0)

When using the blob-csi-driver with its default settings, list operations (which are critical for cleaning up old backups) may not work immediately upon mount, leading to issues like old physical backups never being deleted. Setting the mountOption --cancel-list-on-mount-seconds to "0" ensures that list operations work as expected immediately after the volume is mounted.

See this issue for more information.

Warning: Setting cancel-list-on-mount-seconds to 0 forces the driver to perform an immediate list operation, which may increase both initial mount time and Azure transaction costs (depending on the number of objects in the container). Consider these performance and cost trade-offs and consult the official Azure Blob Storage documentation or an Azure representative for guidance.

    Physical backups

    What is a physical backup?

    A physical backup is a snapshot of the entire data directory (/var/lib/mysql), including all data files. This type of backup captures the exact state of the database at a specific point in time, allowing for quick restoration in case of data loss or corruption.

    Physical backups are the recommended method for backing up MariaDB databases, especially in production environments, as they are faster and more efficient than logical backups.

    Backup strategies

    Multiple strategies are available for performing physical backups, including:

• mariadb-backup: Taken using the enterprise version of mariadb-backup, specifically MariaDB Enterprise Backup, which is available in the MariaDB Enterprise images. The operator supports scheduling Jobs to perform backups using this utility.

• Kubernetes VolumeSnapshot: Leverage VolumeSnapshots to create snapshots of the persistent volumes used by the MariaDB Pods. This method relies on a compatible CSI (Container Storage Interface) driver that supports volume snapshots. See the CSI Specific Configuration section for more details.

    In order to use VolumeSnapshots, you will need to provide a VolumeSnapshotClass that is compatible with your storage provider. The operator will use this class to create snapshots of the persistent volumes:
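For illustration, a minimal sketch of a PhysicalBackup using the VolumeSnapshot strategy; the volumeSnapshot storage block, its volumeSnapshotClassName field, and the csi-hostpath-snapclass class name are assumptions for this example, not confirmed field names:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  storage:
    # Assumed field names: reference a VolumeSnapshotClass compatible with your CSI driver
    volumeSnapshot:
      volumeSnapshotClassName: csi-hostpath-snapclass # placeholder class name
```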

For the rest of the compatible storage types, the mariadb-backup CLI will be used to perform the backup. For instance, to use S3 as backup storage:
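Mirroring the S3 configuration shown for the logical Backup CR earlier in this document, a PhysicalBackup targeting a local Minio instance might look like this sketch:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  storage:
    s3:
      bucket: backups
      prefix: mariadb
      endpoint: minio.minio.svc.cluster.local:9000
      region: us-east-1
      accessKeyIdSecretKeyRef:
        name: minio
        key: access-key-id
      secretAccessKeySecretKeyRef:
        name: minio
        key: secret-access-key
      tls:
        enabled: true
        caSecretKeyRef:
          name: minio-ca
          key: tls.crt
```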

    Storage types

    Multiple storage types are supported for storing physical backups, including:

• S3 compatible storage: Store backups in an S3 compatible storage, such as AWS S3 or Minio.

• Azure Blob Storage: Store backups in an Azure Blob Storage container.

• Persistent Volume Claims (PVC): Use any of the StorageClasses available in your Kubernetes cluster to create a PersistentVolumeClaim (PVC) for storing backups.

    Scheduling

The physical backup schedule can be optionally configured using the spec.schedule field in the PhysicalBackup resource. When empty, a single backup Job is scheduled. The schedule supports the following fields:

• cron: a cron expression that defines when backups are taken.

• suspend: setting it to true prevents new backups from being scheduled.

• immediate: whether to take the first backup immediately, rather than waiting for the next scheduled time.
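Putting these fields together, a scheduled PhysicalBackup might look like the following sketch; the cron expression is just an example, and immediate is assumed to accept a boolean:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  schedule:
    cron: "0 2 * * *" # daily at 02:00
    suspend: false
    immediate: true   # assumed: take the first backup right away
```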

It is important to note that, by default, backups are only scheduled if the referred MariaDB resource is in a ready state. You can override this behavior by setting mariaDbRef.waitForIt=false, which allows backups to be scheduled even if the MariaDB resource is not ready.

    Compression

    When using physical backups based on mariadb-backup, you are able to choose the compression algorithm used to compress the backup files. The available options are:

    • bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.

    • gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.

    • none: No compression.

    To specify the compression algorithm, you can use the compression field in the PhysicalBackup resource:
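For example, mirroring the compression field shown for the logical Backup CR:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  compression: gzip
```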

    compression is defaulted to none by the operator.

Server-Side Encryption with Customer-Provided Keys (SSE-C) for S3

    You can enable server-side encryption using your own encryption key (SSE-C) by providing a reference to a Secret containing a 32-byte (256-bit) key encoded in base64:
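Assuming the ssec block mirrors the one shown for the logical Backup CR earlier in this document, this might look like:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  storage:
    s3:
      bucket: backups
      prefix: mariadb
      endpoint: s3.us-east-1.amazonaws.com
      region: us-east-1
      ssec:
        customerKeySecretKeyRef:
          name: ssec-key
          key: customer-key
```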

Warning: When using SSE-C, you are responsible for managing and securely storing the encryption key. If you lose the key, you will not be able to decrypt your backups. Ensure you have proper key management procedures in place.

Note: When restoring from SSE-C encrypted backups via bootstrapFrom, the same key must be provided in the S3 configuration.

    Retention policy

    You can define a retention policy both for backups based on mariadb-backup and for VolumeSnapshots. The retention policy allows you to specify how long backups should be retained before they are automatically deleted. This can be defined via the maxRetention field in the PhysicalBackup resource:
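For instance, mirroring the maxRetention example from the logical Backup CR:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  maxRetention: 720h # 30 days
```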

When using physical backups based on mariadb-backup, the operator will automatically delete backup files in the specified storage older than the retention period. The cleanup process will be performed after each successful backup.

    When using VolumeSnapshots, the operator will automatically delete the VolumeSnapshot resources older than the retention period using the Kubernetes API. The cleanup process will be performed after a VolumeSnapshot is successfully created.

    Target policy

    You can define a target policy both for backups based on mariadb-backup and for VolumeSnapshots. The target policy allows you to specify in which Pod the backup should be taken. This can be defined via the target field in the PhysicalBackup resource:
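A minimal sketch, assuming the target field accepts the policy names described in this section:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  target: PreferReplica
```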

    The following target policies are available:

    • Replica: The backup will be taken in a ready replica. If no ready replicas are available, the backup will not be scheduled.

    • PreferReplica: The backup will be taken in a ready replica if available, otherwise it will be taken in the primary Pod.

When using the PreferReplica target policy, you may want to schedule backups even if the MariaDB resource is not ready. In this case, you can set mariaDbRef.waitForIt=false to allow scheduling the backup even if no replicas are available.

    Restoration

    Physical backups can only be restored in brand new MariaDB instances without any existing data. This means that you cannot restore a physical backup into an existing MariaDB instance that already has data.

    To perform a restoration, you can specify a PhysicalBackup as restoration source under the spec.bootstrapFrom field in the MariaDB resource:
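A sketch of bootstrapping from a PhysicalBackup; the kind field under backupRef is an assumption used here to disambiguate from a logical Backup:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-from-physical
spec:
  storage:
    size: 1Gi
  bootstrapFrom:
    backupRef:
      name: physicalbackup
      kind: PhysicalBackup # assumed discriminator field
```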

    This will take into account the backup strategy and storage type used in the PhysicalBackup, and it will perform the restoration accordingly.

    As an alternative, you can also provide a reference to an S3 bucket that was previously used to store the physical backup files:
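For example, reusing the Minio S3 settings from earlier in this document (a sketch, with backupContentType taken from the note below):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-from-physical
spec:
  storage:
    size: 1Gi
  bootstrapFrom:
    s3:
      bucket: backups
      prefix: mariadb
      endpoint: minio.minio.svc.cluster.local:9000
      accessKeyIdSecretKeyRef:
        name: minio
        key: access-key-id
      secretAccessKeySecretKeyRef:
        name: minio
        key: secret-access-key
    backupContentType: Physical
```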

    It is important to note that the backupContentType field must be set to Physical when restoring from a physical backup. This ensures that the operator uses the correct restoration method.

    To restore a VolumeSnapshot, you can provide a reference to a specific VolumeSnapshot resource in the spec.bootstrapFrom field:
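A sketch of this; the volumeSnapshotRef field name and the snapshot name below are assumptions for illustration:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-from-snapshot
spec:
  storage:
    size: 1Gi
  bootstrapFrom:
    volumeSnapshotRef:
      name: physicalbackup-snapshot # placeholder VolumeSnapshot name
```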

    hashtag
    Target recovery time

    By default, the operator will match the closest backup available to the current time. You can specify a different target recovery time by using the targetRecoveryTime field under spec.bootstrapFrom. This lets you define the exact point in time you want to restore to:

    Only backups taken before or at targetRecoveryTime will be matched.

    hashtag
    Timeout

    By default, both backups based on mariadb-backup and VolumeSnapshots will have a timeout of 1 hour. You can change this timeout by using the timeout field in the PhysicalBackup resource:

    When timed out, the operator will delete the Job or VolumeSnapshot resources associated with the PhysicalBackup resource. The operator will create new Jobs or VolumeSnapshots to retry the backup operation if the PhysicalBackup resource is still scheduled.

    hashtag
    Log level

    When taking backups based on mariadb-backup, you can specify the log level to be used by the mariadb-enterprise-operator container using the logLevel field in the PhysicalBackup resource:

    hashtag
    Extra options

    When taking backups based on mariadb-backup, you can specify extra options to be passed to the mariadb-backup command using the args field in the PhysicalBackup resource:

    Refer to the mariadb-backup documentation for a list of available options.

    hashtag
    Azure Blob Storage Credentials

    Credentials for accessing Azure Blob Storage can be provided via the azureBlob key in the storage field of the PhysicalBackup resource. The credentials are provided as a reference to a Kubernetes Secret:

    Alternatively, you may omit the storageAccountKey and storageAccountName if you are using managed identity.

    hashtag
    S3 credentials

    Credentials for accessing an S3 compatible storage can be provided via the s3 key in the storage field of the PhysicalBackup resource. The credentials can be provided as a reference to a Kubernetes Secret:

    Alternatively, if you are running in EKS, you can use dynamic credentials from an EKS Service Account using EKS Pod Identity or IRSA:

    By leaving out the accessKeyIdSecretKeyRef and secretAccessKeySecretKeyRef credentials and pointing to the correct serviceAccountName, the backup Job will use the dynamic credentials from EKS.

    hashtag
    Staging area

    circle-info

    S3 backups based on mariadb-backup are the only scenario that requires a staging area.

    When using S3 storage for backups, a staging area is used for keeping the external backups while they are being processed. By default, this staging area is an emptyDir volume, which means that the backups are temporarily stored in the node's local storage where the PhysicalBackup Job is scheduled. In production environments, large backups may lead to issues if the node doesn't have sufficient space, potentially causing the backup/restore process to fail.

    Additionally, when restoring these backups, the operator will pull the backup files from S3, uncompress them if needed, and restore them to each of the MariaDB Pods in the cluster individually. To save network bandwidth and compute resources, a staging area is used to keep the uncompressed backup files after they have been restored to the first MariaDB Pod. This allows the operator to restore the same backup to the rest of the MariaDB Pods seamlessly, without needing to pull and uncompress the backup again.

    To configure the staging area, you can use the stagingStorage field in the PhysicalBackup resource:

    Similarly, you may also use a staging area when , in the MariaDB resource:

    In the examples above, a PVC with the default StorageClass will be provisioned to be used as staging area.

    hashtag
    VolumeSnapshots

    circle-exclamation

    Before using this feature, ensure that you meet the following prerequisites:

    • external-snapshotter and its CRs are installed in the cluster.

    • You have a compatible CSI driver that supports VolumeSnapshots.

    The operator is capable of creating VolumeSnapshots of the PVCs used by the MariaDB Pods. This allows you to create point-in-time snapshots of your data in a Kubernetes-native way, leveraging the capabilities of your storage provider.

    Most of the fields described in this documentation apply to VolumeSnapshots, including scheduling, retention policy, and compression. The main difference with the mariadb-backup based backups is that the operator will not create a Job to perform the backup, but instead it will create a VolumeSnapshot resource directly.

    In order to create consistent, point-in-time snapshots of the MariaDB data, the operator will perform the following steps:

    1. Execute a BACKUP STAGE START statement followed by BACKUP STAGE BLOCK_COMMIT in one of the secondary Pods.

    2. Create a VolumeSnapshot resource of the data PVC mounted by the MariaDB secondary Pod.

    This backup process is described in the MariaDB documentation and is designed to be non-blocking.
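    Conceptually, the statements issued around the snapshot correspond to MariaDB's BACKUP STAGE commands; a minimal sketch:

    ```sql
    BACKUP STAGE START;        -- begin the backup process
    BACKUP STAGE BLOCK_COMMIT; -- block commits so the data files are in a consistent state
    -- ... the operator creates the VolumeSnapshot of the data PVC here ...
    BACKUP STAGE END;          -- release the backup stage and resume normal operation
    ```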

    hashtag
    Non-blocking physical backups

    Both for mariadb-backup and VolumeSnapshot backup strategies, the enterprise operator performs non-blocking physical backups by leveraging the BACKUP STAGE feature. This implies that backups are taken without long read locks, enabling consistent, production-grade backups with minimal impact on running workloads, ideal for high-availability and performance-sensitive environments.

    hashtag
    Important considerations and limitations

    hashtag
    Root credentials

    When restoring a backup, the root credentials specified through the spec.rootPasswordSecretKeyRef field in the MariaDB resource must match the ones in the backup. These credentials are utilized by the liveness and readiness probes, and if they are invalid, the probes will fail, causing your MariaDB Pods to restart after the backup restoration.

    hashtag
    Restore Job

    When using backups based on mariadb-backup, restoring and uncompressing large backups can consume significant compute resources and may cause restoration Jobs to become stuck due to insufficient resources. To prevent this, you can define the compute resources allocated to the Job:

    hashtag
    ReadWriteOncePod access mode partially supported

    When using backups based on mariadb-backup, the data PVC used by the MariaDB Pod cannot use the ReadWriteOncePod access mode, as it needs to be mounted at the same time by both the MariaDB Pod and the PhysicalBackup Job. In this case, please use either the ReadWriteOnce or ReadWriteMany access modes instead.

    Alternatively, if you want to keep using the ReadWriteOncePod access mode, you must use backups based on VolumeSnapshots, which do not require creating a Job to perform the backup and therefore avoid the volume sharing limitation.

    hashtag
    PhysicalBackup Jobs scheduling

    PhysicalBackup Jobs must mount the data PVC used by one of the secondary MariaDB Pods. To avoid scheduling issues caused by the commonly used ReadWriteOnce access mode, the operator schedules backup Jobs on the same node as MariaDB by default.

    If you prefer to disable this behavior and allow Jobs to run on any node, you can set podAffinity=false:

    This configuration may be suitable when using the ReadWriteMany access mode, which allows multiple Pods across different nodes to mount the volume simultaneously.

    hashtag
    Troubleshooting

    Custom columns are used to display the status of the PhysicalBackup resource:

    To get a higher level of detail, you can also check the status field directly:

    You may also check the related events for the PhysicalBackup resource to see if there are any issues:

    hashtag
    Common errors

    hashtag
    mariadb-backup log copy incomplete: consider increasing innodb_log_file_size

    In some situations, when using the mariadb-backup strategy, you may encounter the following error in the backup Job logs:

    This can be addressed by increasing the innodb_log_file_size in the MariaDB configuration. You can do this by adding the following to your MariaDB resource:

    Refer to MDEV-36159 for further details on this issue.

    hashtag
    mariadb-backup Job fails to start because the Pod cannot mount a MariaDB PVC created with the openebs/lvm-localpv StorageClass provider

    Without the shared option explicitly enabled, the ReadWriteOnce access mode is treated as ReadWriteOncePod.

    Refer to openebs/lvm-localpv#281 for further details on this issue.

    Copyright © 2025 MariaDB. All rights reserved.

    • Kubernetes Volumes: Store backups in any of the in-tree storage providers supported by Kubernetes out of the box, such as NFS.

  • Kubernetes VolumeSnapshots: Use VolumeSnapshots to create snapshots of the persistent volumes used by the MariaDB Pods. This method relies on a compatible CSI (Container Storage Interface) driver that supports volume snapshots. See the VolumeSnapshots section for more details.

  • immediate: Setting it to true schedules a backup immediately after creating the PhysicalBackup resource.
  • onDemand: Schedule identifier for triggering an on-demand backup. If the identifier is different from the one tracked under status.lastScheduleOnDemand, a new physical backup is triggered.

  • onPrimaryChange: Setting it to true schedules a new backup after the primary Pod of the referred MariaDB instance changes. This is particularly useful for point-in-time recovery.

  • external-snapshotter installed in the cluster.
  • You have a VolumeSnapshotClass configured for your CSI driver.

  3. Wait until the VolumeSnapshot is provisioned by the storage system. When timing out, the operator will delete the VolumeSnapshot resource and retry the operation.

  4. Issue a BACKUP STAGE END statement.

  • apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        volumeSnapshot:
          volumeSnapshotClassName: csi-hostpath-snapclass
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        s3:
          bucket: physicalbackups
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
        waitForIt: true
      schedule:
        cron: "*/1 * * * *"
        suspend: false
        immediate: true
        onDemand: "1"
        onPrimaryChange: true 
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      compression: bzip2
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ssec-key
    stringData:
      # 32-byte key encoded in base64 (use: openssl rand -base64 32)
      customer-key: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        s3:
          bucket: physicalbackups
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
          ssec:
            customerKeySecretKeyRef:
              name: ssec-key
              key: customer-key
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      maxRetention: 720h # 30 days
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      target: Replica
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      bootstrapFrom:
        backupRef:
          name: physicalbackup
          kind: PhysicalBackup
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      bootstrapFrom:
        s3:
          bucket: physicalbackups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
        backupContentType: Physical
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      bootstrapFrom:
        volumeSnapshotRef:
          name: physicalbackup-20250611163352
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      bootstrapFrom:
        targetRecoveryTime: "2025-06-17T08:07:00Z"
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      timeout: 2h
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      logLevel: debug
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      args:
        - "--verbose"
    ---
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      target: Replica
      compression: bzip2
      storage:
        azureBlob:
          containerName: physicalbackup
          serviceURL: https://physicalbackup.blob.core.windows.net # Format is: `https://%s.blob.core.windows.net/` where `%s` is the containerName
          prefix: mariadb
          storageAccountName: exampleStorageAccount
          storageAccountKey:
            name: azurite-key
            key: storageAccountKey
          # Optional.
          # tls:
          #   enabled: true
          #   caSecretKeyRef:
          #     name: azurite-certs
          #     key: cert.pem
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        s3:
          bucket: physicalbackups
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: mariadb-backup
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::<<account_id>>:role/my-role-irsa
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      serviceAccountName: mariadb-backup
      storage:
        s3:
          bucket: physicalbackups
          prefix: mariadb
          endpoint: s3.us-east-1.amazonaws.com
          region:  us-east-1
          tls:
            enabled: true
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      storage:
        s3:
          bucket: physicalbackups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          region:  us-east-1
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
      stagingStorage:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 1Gi
          accessModes:
            - ReadWriteOnce
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-galera
    spec:
      mariaDbRef:
        name: mariadb
      bootstrapFrom:
        s3:
          bucket: physicalbackups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
        backupContentType: Physical
        stagingStorage:
          persistentVolumeClaim:
            resources:
              requests:
                storage: 1Gi
            accessModes:
              - ReadWriteOnce
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb
    spec:
      bootstrapFrom:
        restoreJob:
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 1Gi
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      mariaDbRef:
        name: mariadb
      podAffinity: false
    kubectl get physicalbackups
    
    NAME             COMPLETE   STATUS    MARIADB   LAST SCHEDULED   AGE
    physicalbackup   True       Success   mariadb   17s              17s
    kubectl get physicalbackups physicalbackup -o json | jq -r '.status'
    
    {
      "conditions": [
        {
          "lastTransitionTime": "2025-07-14T07:01:14Z",
          "message": "Success",
          "reason": "JobComplete",
          "status": "True",
          "type": "Complete"
        }
      ],
      "lastScheduleCheckTime": "2025-07-14T07:00:00Z",
      "lastScheduleTime": "2025-07-14T07:00:00Z",
      "nextScheduleTime": "2025-07-15T07:00:00Z"
    }
    kubectl get events --field-selector involvedObject.name=physicalbackup
    
    LAST SEEN   TYPE     REASON                  OBJECT                                 MESSAGE
    116s        Normal   WaitForFirstConsumer    persistentvolumeclaim/physicalbackup   waiting for first consumer to be created before binding
    116s        Normal   JobScheduled            physicalbackup/physicalbackup          Job physicalbackup-20250714140837 scheduled
    116s        Normal   ExternalProvisioning    persistentvolumeclaim/physicalbackup   Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
    116s        Normal   Provisioning            persistentvolumeclaim/physicalbackup   External provisioner is provisioning volume for claim "default/physicalbackup"
    113s        Normal   ProvisioningSucceeded   persistentvolumeclaim/physicalbackup   Successfully provisioned volume pvc-7b7c71f9-ea7e-4950-b612-2d41d7ab35b7
    mariadb [00] 2025-08-04 09:15:57 Was only able to copy log from 58087 to 59916, not 68968; try increasing innodb_log_file_size
    mariadb mariabackup: Stopping log copying thread.
    mariadb [00] 2025-08-04 09:15:57 Retrying read of log at LSN=59916
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb
    spec:
    ...
      myCnf: |
        [mariadb]
        innodb_log_file_size=200M

    Point-In-Time-Recovery

    Point-in-time recovery (PITR) is a feature that allows you to restore a MariaDB instance to a specific point in time. To achieve this, it combines a full base backup with the binary logs that record all changes made to the database after the backup. The operator fully automates this, covering both archival and restoration up to a specific time, ensuring business continuity and reducing RTO and RPO.

    hashtag
    Supported MariaDB versions and topologies

    The operator uses mariadb-binlog to replay binary logs. In particular, it filters binlog events by passing a GTID to mariadb-binlog via the --start-position flag. This is only supported by MariaDB Server 10.8 and later, so make sure you are using a compatible MariaDB version.

    Regarding supported MariaDB topologies, at the moment binary log archiving and point-in-time recovery are only supported by the asynchronous replication topology, which already relies on the binary logs for replication. Galera and standalone topologies will be supported in upcoming releases.

    hashtag
    Storage types

    Full base backups and binary logs can be stored in the following object storage types:

    • S3 compatible storage: Such as AWS S3 or Minio.

    • Azure Blob Storage.

    For additional details on configuring storage, refer to the storage types section in the physical backup documentation; the same settings are applicable to the PointInTimeRecovery object.

    hashtag
    Configuration

    To be able to perform a point-in-time restoration, a physical backup should be configured as the full base backup. For example, you can configure a nightly backup:

    Refer to the full base backup section for additional details on how to configure the full base backup.

    Next step is configuring common aspects of both binary log archiving and point-in-time restoration by defining a PointInTimeRecovery object:

    • physicalBackupRef: A reference to the PhysicalBackup resource used as full base backup. See the full base backup section.

    • storage: Object storage configuration for the binary logs. See the storage types section.

    • compression: Compression algorithm used for the archived binary log files. See the compression section.
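    Putting these fields together, a PointInTimeRecovery object might look like the following sketch (bucket, endpoint, and Secret names are illustrative; the storage layout is assumed to follow the same conventions as PhysicalBackup):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      # Reference to the PhysicalBackup used as full base backup.
      physicalBackupRef:
        name: physicalbackup
      # Object storage where the binary logs are archived.
      storage:
        s3:
          bucket: binlogs
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
      # Optional compression for the archived binary logs.
      compression: gzip
    ```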

    With this configuration in place, you can enable binary log archival in a MariaDB instance by setting a reference to the PointInTimeRecovery object:

    Once a full base backup has been completed and the binary logs have been archived, you can perform a point-in-time restoration. For example, you can create a new MariaDB instance with the following configuration:
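    Based on the bootstrapFrom fields covered in the point-in-time restoration section (pointInTimeRecoveryRef and targetRecoveryTime), such a manifest might look like this sketch (names and timestamp are illustrative):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-restored
    spec:
      bootstrapFrom:
        # Reference to the PointInTimeRecovery object.
        pointInTimeRecoveryRef:
          name: pitr
        # Desired point in time to restore to, in RFC3339 format.
        targetRecoveryTime: "2025-06-17T08:07:00Z"
    ```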

    Refer to the point-in-time restoration section for additional details.

    hashtag
    Full base backup

    To enable point-in-time recovery, a PhysicalBackup resource should be configured as the full base backup. The backup should be a complete snapshot of the database at a specific point in time, and it will serve as the starting point for replaying the binary logs. Any of the supported backup strategies can be used as full base backup, as all of them provide a consistent snapshot of the database and a starting GTID position.

    It is important to note that a full physical backup must be completed before a point-in-time restoration can be performed. This is something that the operator accounts for when computing the last recoverable time.

    To further expand the binlog timeline, it is recommended to take physical backups after the primary Pod has changed. This can be automated by setting schedule.onPrimaryChange, as documented in the physical backup documentation:

    Alternatively, you can schedule an on-demand physical backup or rely on the cron scheduling for doing so:

    The backup taken in the new primary will establish a baseline for a new binlog timeline, which will be expanded as new binary logs are archived.

    hashtag
    Archival

    The mariadb-enterprise-operator will periodically check for new binary logs and archive them to the configured object storage. The archival process is controlled by the archiveInterval and archiveTimeout settings in the PointInTimeRecovery configuration, which determine how often the archival process runs and how long it can take before it is considered failed.
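    For example, the following fragment sets these two fields (the values are illustrative, and the fields are assumed to live at the top level of spec):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      # How often the archival process runs.
      archiveInterval: 5m
      # How long an archival cycle may take before it is considered failed.
      archiveTimeout: 30m
    ```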

    The archival process is performed on the primary Pod of the asynchronous replication topology. You may check the logs of the agent sidecar container, the Kubernetes events, and the status of the MariaDB objects to monitor the current state of the archival process:

    There are a couple of important considerations regarding binary log archival:

    • The archival process should start from a clean state, which means that the object storage should be empty at the time of the first archival.

    • It is not recommended to set archiveInterval to a very low value (< 1m), as it can lead to increased load on the database Pod and the storage system.

    • If the archival process fails (e.g., due to network issues or storage unavailability), it will be retried in the next archive cycle.

    hashtag
    Binary log size

    The server has a default max_binlog_size of 1GB, which means that a new binary log file is created once the current one reaches that size. This is a sensible default for most cases, but it can be adjusted based on the data volume in order to enable faster archival, and therefore a reduced RPO:

    (Table: recommended max_binlog_size per environment, with rationale.)

    The smaller the binlog file size, the more frequently the files will be rotated and archived, which can lead to increased load on the database Pod and the storage system. On the other hand, setting a very high binlog file size can lead to longer archival times and increased RPO.

    Refer to the documentation for instructions on how to set the max_binlog_size server variable in the MariaDB instance.
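    Following the same myCnf pattern shown earlier on this page for innodb_log_file_size, a sketch (the value is illustrative):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb
    spec:
      myCnf: |
        [mariadb]
        # Rotate binary logs at 100MB instead of the 1GB default.
        max_binlog_size=100M
    ```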

    hashtag
    Compression

    In order to reduce storage usage and save bandwidth during archival and restoration, the operator supports compressing the binary log files. Compression is enabled by setting the compression field in the PointInTimeRecovery configuration:
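    A minimal sketch of the compression field:

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      compression: gzip
    ```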

    The supported compression algorithms are:

    • bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.

    • gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.

    • none: No compression.

    Compression is disabled by default, and there are some important considerations before enabling it:

    • Compression is immutable: once binary logs have been archived with a specific algorithm, it cannot be changed. This also applies to restoration: the same compression algorithm must be configured as the one used for archival.

    • Although it saves storage space and bandwidth, the restoration process may take longer when compression is enabled, leading to an increased RTO. This can be mitigated by enabling parallelization.

    hashtag
    Server-Side Encryption with Customer-Provided Keys (SSE-C) For S3

    When using S3-compatible storage, you can enable server-side encryption using your own encryption key (SSE-C) by providing a reference to a Secret containing a 32-byte (256-bit) key encoded in base64:
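    Mirroring the SSE-C configuration shown for PhysicalBackup earlier on this page, a sketch for PointInTimeRecovery (the Secret name, key, and storage details are illustrative):

    ```yaml
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ssec-key
    stringData:
      # 32-byte key encoded in base64 (use: openssl rand -base64 32)
      customer-key: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
    ---
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      storage:
        s3:
          bucket: binlogs
          endpoint: minio.minio.svc.cluster.local:9000
          ssec:
            customerKeySecretKeyRef:
              name: ssec-key
              key: customer-key
    ```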

    circle-exclamation

    When using SSE-C, you are responsible for managing and securely storing the encryption key. If you lose the key, you will not be able to decrypt your binary logs. Ensure you have proper key management procedures in place.

    circle-info

    When replaying SSE-C encrypted binary logs via bootstrapFrom, the same key must be provided in the S3 configuration.

    hashtag
    Parallelization

    Several tasks during both the archival and restoration processes can take a significant amount of time, especially when managing large data volumes. These tasks include compressing and uploading binary logs during archival, and downloading and decompressing binary logs during restoration. This can lead to longer archival and restoration times, which can impact the RTO.

    To mitigate this, the operator supports parallelization of these tasks by using multiple workers. The maximum number of workers can be configured via the maxParallel field in the PointInTimeRecovery configuration:
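    A sketch of the maxParallel field (the value is illustrative):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      # Up to 4 binary logs processed in parallel.
      maxParallel: 4
    ```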

    This will create up to 4 workers, each of them responsible for the operations related to a single binary log, which means that up to 4 binary logs can be processed in parallel. This can significantly reduce archival and restoration times, especially when compression is enabled.

    Parallelization is disabled by default (maxParallel: 1), and there are some important considerations to take into account when enabling it:

    • During archival, the workers are spawned in the agent sidecar container, sharing storage with the primary database Pod. Using an elevated number of workers can exhaust the IOPS and/or CPU resources of the primary Pod, which can impact the performance of the database.

    • During both archival and restoration, an elevated number of workers can saturate the network bandwidth when pulling/pushing multiple binary logs in parallel, which can also degrade the performance of the database.

    hashtag
    Retention policy

    Binary logs can grow significantly in size, especially in write-heavy environments, which can lead to increased storage costs. To mitigate this, the operator supports automatic purging of binary logs based on a retention policy defined by the maxRetention field in the PointInTimeRecovery configuration:
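    Following the same duration format used by PhysicalBackup's maxRetention field, a sketch (the value is illustrative):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      maxRetention: 168h # 7 days
    ```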

    The binary logs that exceed the defined retention will be automatically deleted from the object storage after each archival cycle.

    By default, binary logs are never purged from object storage, and there are a few considerations regarding configuring a retention policy:

    • The date of the last event in the binary logs is used to determine its age, and therefore whether it should be purged or not.

    • The maxRetention field should not be set to a value lower than the archiveInterval, as it can lead to situations where binary logs are purged before they can be archived.

    hashtag
    Binlog inventory

    The operator maintains an inventory of the archived binary logs in an index.yaml file located at the root of the configured object storage. This file contains a list of all the archived binary logs for each server, along with their GTIDs and other metadata used internally. Here is an example of the index.yaml file:

    This file is used internally by the operator to keep track of the archived binary logs, and it is updated after each successful archival. It should not be modified manually, as it can lead to inconsistencies between the actual archived binary logs and the inventory.

    When it comes to point-in-time restoration, this file serves as the source of truth to compute the binlog timeline and last recoverable time.

    hashtag
    Binlog timeline and last recoverable time

    Taking into account the GTID of the last completed physical backup and the archived binlogs in the binlog inventory, the operator computes a timeline of binary logs that can be replayed and its corresponding last recoverable time. The last recoverable time is the latest timestamp that the MariaDB instance can be restored to. This information is crucial for understanding the RPO of the system and for making informed decisions during a recovery process.

    You can easily check the last recoverable time by looking at the status of the PointInTimeRecovery object:
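    For example, assuming a PointInTimeRecovery object named pitr, the LAST RECOVERABLE TIME column surfaces this value directly:

    ```
    kubectl get pitr
    NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
    pitr   physicalbackup-daily   2026-02-27T20:10:42Z    true          43h
    ```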

    Then, you may provide exactly this timestamp, or an earlier one, as the target recovery time when bootstrapping a new MariaDB instance, as described in the point-in-time restoration section.

    hashtag
    Point-in-time restoration

    In order to perform a point-in-time restoration, you can create a new MariaDB instance with a reference to the PointInTimeRecovery object in the bootstrapFrom field, along with the targetRecoveryTime field indicating the desired point in time to restore to.

    Before setting the targetRecoveryTime, it is recommended to check the last recoverable time in the PointInTimeRecovery object:

    • pointInTimeRecoveryRef: Reference to the PointInTimeRecovery object that contains the configuration for the point-in-time recovery.

    • targetRecoveryTime: The desired point in time to restore to, in RFC3339 format. If not provided, the current time will be used as the target recovery time, which means restoring up to the last recoverable time.
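    Putting both fields together, a minimal bootstrapFrom sketch (resource names reused from the examples in this page) looks like this:

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      storage:
        size: 1Gi
      bootstrapFrom:
        pointInTimeRecoveryRef:
          name: pitr
        # RFC3339; if omitted, the current time is used
        targetRecoveryTime: 2026-02-20T18:00:04Z
    ```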

    The restoration process will match the closest physical backup before or at the targetRecoveryTime, and then it will replay the archived binary logs from the backup GTID position up until the targetRecoveryTime:

    As you can see, the restoration process includes the following steps:

    1. Perform a rolling restore of the full base backup, one Pod at a time.

    2. Configure replication in the MariaDB instance.

    3. Get the base backup GTID, to be used as the starting point for replaying the binary logs.

    After having completed the restoration process, the following status conditions will be available for you to inspect the restoration process:

    hashtag
    Strict mode

    The strict mode controls whether the target recovery time provided during the bootstrap process must be strictly met. It is configured via the strictMode field in the PointInTimeRecovery configuration, and it is disabled by default:
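    For example, strict mode can be enabled with the following snippet:

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      strictMode: true
    ```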

    When strict mode is enabled (recommended), if the target recovery time cannot be met, the initialization process will return an error early, and the MariaDB instance will not be created. This can happen, for example, if the target recovery time is later than the last recoverable time. Let's assume strict mode is enabled and the last recoverable time is:

    If we attempt to provision the following MariaDB instance:
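    For instance, a MariaDB resource requesting a target recovery time later than the last recoverable time could look like this (trimmed to the relevant fields):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      storage:
        size: 1Gi
      replicas: 3
      replication:
        enabled: true
      bootstrapFrom:
        pointInTimeRecoveryRef:
          name: pitr
        # later than the last recoverable time 2026-02-27T20:10:42Z
        targetRecoveryTime: 2026-02-28T20:10:42Z
    ```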

    The following error will be returned, as the target recovery time 2026-02-28T20:10:42Z is later than the last recoverable time 2026-02-27T20:10:42Z:
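    The failure surfaces as a Kubernetes event on the MariaDB object:

    ```
    kubectl get events --field-selector involvedObject.name=mariadb-repl
    LAST SEEN   TYPE      REASON             OBJECT                 MESSAGE
    41s         Warning   MariaDBInitError   mariadb/mariadb-repl   Unable to init MariaDB: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC
    ```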

    When strict mode is disabled (default) and the target recovery time cannot be met, the MariaDB provisioning will proceed and the last recoverable time will be used instead. In other words, the MariaDB instance will be provisioned with a recovery time of 2026-02-27T20:10:42Z, the last recoverable time:

    After setting strictMode=false, if we attempt to create the same MariaDB instance as before, it will be provisioned successfully, but a recovery time of 2026-02-27T20:10:42Z will be used instead of the requested 2026-02-28T20:10:42Z.

    It is important to note that the last recoverable time is stored in the status field of the PointInTimeRecovery object. If this object is deleted and recreated, the last recoverable time metadata will be lost, and it will not be available until it is recomputed. When it comes to restoration, this implies that the error will be returned later in the process, when computing the binary log timeline, but the strict mode behavior still applies. This is the error returned in that scenario:

    hashtag
    Staging storage

    The operator uses a staging area to temporarily store the binary logs during the restoration process. By default, the staging area is an emptyDir volume attached to the restoration job, which means that the binary logs are kept on the storage of the node where the job has been scheduled. This may not be suitable for large binary logs, as it can exhaust the node's storage, causing the restoration process to fail and potentially impacting other workloads running on the same node.

    You can configure an alternative staging area using the stagingStorage field under the bootstrapFrom section of the MariaDB resource:
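    For example, a PVC-backed staging area can be declared as follows:

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      bootstrapFrom:
        stagingStorage:
          persistentVolumeClaim:
            storageClassName: my-storage-class
            resources:
              requests:
                storage: 10Gi
            accessModes:
              - ReadWriteOnce
    ```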

    This will provision a PVC and attach it to the restoration job to be used as the staging area.

    hashtag
    Limitations

    • A PointInTimeRecovery object can only be referenced by a single MariaDB object via the pointInTimeRecoveryRef field.

    • A combination of object storage bucket and prefix can only be used by a single MariaDB instance to archive binary logs.

    hashtag
    Troubleshooting

    The operator tracks the current archival status under the MariaDB status subresource. This status is updated after each archival cycle, and it contains metadata about the binary logs that have been archived, along with other information useful for troubleshooting:
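    For example:

    ```
    kubectl get mariadb mariadb-repl -o jsonpath='{.status.pointInTimeRecovery}' | jq
    {
      "lastArchivedBinaryLog": "mariadb-repl-bin.000004",
      "lastArchivedGtid": "0-10-1559",
      "lastArchivedPosition": 268506819,
      "lastArchivedTime": "2026-02-27T16:04:15Z",
      "serverId": 10,
      "storageReadyForArchival": true
    }
    ```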

    Additionally, also under the status subresource, the operator sets status conditions whenever the binlog archival or point-in-time restoration process reaches a specific state:

    The operator also emits Kubernetes events during both the archival and restoration processes, to report noteworthy events and errors:

    hashtag
    Common errors

    Unable to start archival process

    The following error will be returned when the archival process is configured to point to a non-empty object storage, as the operator expects to start from a clean state:

    To solve this, you can update the PointInTimeRecovery configuration to point to another object storage bucket or prefix that is empty:
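    For example, switching to a fresh prefix (trimmed to the relevant fields):

    ```yaml
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      storage:
        s3:
          bucket: binlogs
          prefix: mariadb-v2 # previously "mariadb"; a new prefix starts from a clean state
    ```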

    After updating the PointInTimeRecovery configuration, the error will be cleared in the next archival cycle, and a new archival operation will be attempted.

    Alternatively, you can also consider deleting the existing binary logs and the index.yaml inventory file, but only after having double-checked that they are not needed for recovery.

    Target recovery time is after latest recoverable time

    This error is returned during the MariaDB init process, when the targetRecoveryTime provided to bootstrap is later than the last recoverable time reported by the PointInTimeRecovery status.

    For example, if you have configured the bootstrapFrom.targetRecoveryTime field with the value 2026-02-28T20:10:42Z, the following error will be returned:

    There are two ways to solve this issue:

    • Update the targetRecoveryTime in the MariaDB resource to be earlier than or equal to the last recoverable time, which in this case is 2026-02-27T20:10:42Z.

    • Disable strictMode in the PointInTimeRecovery configuration, allowing the restoration to proceed up to the latest recoverable time, in this case 2026-02-27T20:10:42Z.

    Invalid binary log timeline: error getting binlog timeline between GTID and target time: timeline did not reach target time

    This error is returned when computing the binary log timeline during the restoration process, and it means that the operator could not build a timeline that reaches the targetRecoveryTime provided in the bootstrapFrom field of the MariaDB resource.

    For example, if you have the following binary log inventory:

    And your targetRecoveryTime is 2026-02-28T20:10:42Z, the following error will be returned:

    There are two ways to solve this issue:

    • Update the targetRecoveryTime in the MariaDB resource to be earlier than or equal to the last recoverable time, which in this case is 2026-02-27T16:04:15Z.

    • Disable strictMode in the PointInTimeRecovery configuration, allowing the restoration to proceed up to the latest recoverable time, in this case 2026-02-27T16:04:15Z.

  • compression: Algorithm to be used for compressing binary logs. It is disabled by default. See compression.
  • archiveTimeout: Maximum duration for the binary log archival. If exceeded, agent will return an error and archival will be retried in the next archive cycle. Defaults to 1h.

  • archiveInterval: Interval at which the binary logs will be archived. Defaults to 10m. See archival for additional details.

  • maxParallel: Maximum number of workers that can be used for parallel binary log archival and restoration. Defaults to 1. See parallelization.

  • maxRetention: Maximum retention duration for binary logs. By default, binary logs are not automatically deleted. See retention policy.

  • strictMode: Controls the behavior when a point-in-time restoration cannot reach the exact target time. It is disabled by default. See strict mode.

  • If the binlog_expire_logs_seconds server variable is configured, it should be set to a value higher than the archiveInterval to prevent automatic deletion of binary logs before they are archived.

  • Manually executing the PURGE BINARY LOGS command on the database is not recommended, as it can lead to inconsistencies between the database and the archived binary logs.

  • Manually executing the FLUSH BINARY LOGS command on the database is compatible with the archival process: it forces the active binary log to be closed, and it will be archived by the agent in the next archive cycle.

  • High Throughput: 512MB - 1GB. Reduces the contention caused by frequent rotations in write-heavy environments.

  • restoreJob: Compute resources and metadata configuration for the restoration job. To reduce RTO, it is recommended to properly tune its compute resources.

  • logLevel: Log level for the operator container, part of the restoration job.

  • Schedule the point-in-time restoration job, which will:

    1. Build the binlog timeline based on the base backup GTID and the archived binary log inventory.

    2. Pull the binary logs in the timeline into a staging area.

    3. Replay the binary logs using mariadb-binlog from the GTID position of the base backup up to the targetRecoveryTime.


    • Low Traffic: 128MB. Keeps file size minimal for slow-growing logs.

    • Standard: 256MB. Balances rotation frequency with server overhead.

    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup-daily
    spec:
      mariaDbRef:
        name: mariadb-repl
      schedule:
        cron: "0 0 * * *"
        suspend: false
        immediate: true
      compression: bzip2
      maxRetention: 720h 
      storage:
        s3:
          bucket: physicalbackups
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          region: us-east-1
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      physicalBackupRef:
        name: physicalbackup-daily
      storage:
        s3:
          bucket: binlogs
          prefix: mariadb
          endpoint: minio.minio.svc.cluster.local:9000
          region: us-east-1
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
      compression: gzip
      archiveTimeout: 1h
      archiveInterval: 1m
      maxParallel: 4
      maxRetention: 720h # 30 days
      strictMode: false
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      storage:
        size: 1Gi
      replicas: 3
      replication:
        enabled: true
      # sidecar agent will archive binary logs to the configured storage.
      pointInTimeRecoveryRef:
        name: pitr
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      storage:
        size: 1Gi
      replicas: 3
      replication:
        enabled: true
      # bootstrap the instance from PITR: restore closest physical backup and replay binary logs up to targetRecoveryTime.
      bootstrapFrom:
        pointInTimeRecoveryRef:
          name: pitr
        targetRecoveryTime: 2026-02-20T18:00:04Z
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      schedule:
        onPrimaryChange: true 
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PhysicalBackup
    metadata:
      name: physicalbackup
    spec:
      schedule:
        cron: "0 0 * * *"
        onDemand: "1"
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      archiveTimeout: 1h
      archiveInterval: 1m
    kubectl logs -l enterprise.mariadb.com/role=primary -c agent --tail 20
    {"level":"info","ts":1772208238.0152433,"logger":"binlog-archival","msg":"Archiving binary logs"}
    {"level":"info","ts":1772208238.437027,"logger":"binlog-archival.uploader","msg":"Uploading binary log","binlog":"mariadb-repl-bin.000003","object":"server-10/mariadb-repl-bin.000003.gz","start-time":"2026-02-27T16:03:58Z"}
    {"level":"info","ts":1772208238.4371545,"logger":"binlog-archival.uploader","msg":"Compressing binary log","binlog":"mariadb-repl-bin.000003","object":"server-10/mariadb-repl-bin.000003.gz","start-time":"2026-02-27T16:03:58Z"}
    {"level":"info","ts":1772208260.8291402,"logger":"binlog-archival.uploader","msg":"Binary log uploaded","binlog":"mariadb-repl-bin.000003","object":"server-10/mariadb-repl-bin.000003.gz","start-time":"2026-02-27T16:03:58Z","total-time":"22.392211226s"}
    {"level":"info","ts":1772208260.8621385,"logger":"binlog-archival","msg":"Binary log mariadb-repl-bin.000003 archived"}
    {"level":"info","ts":1772208260.8622391,"logger":"binlog-archival","msg":"Binlog archival done"}
    {"level":"info","ts":1772208261.2485638,"logger":"binlog-archival","msg":"Purging binary logs","max-retention":"720h0m0s"}
    {"level":"info","ts":1772208261.2599053,"logger":"binlog-archival","msg":"Binary logs purged","max-retention":"720h0m0s"}
    {"level":"info","ts":1772208268.0053742,"logger":"binlog-archival","msg":"Archiving binary logs"}
    {"level":"info","ts":1772208268.0907545,"logger":"binlog-archival.uploader","msg":"Uploading binary log","binlog":"mariadb-repl-bin.000004","object":"server-10/mariadb-repl-bin.000004.gz","start-time":"2026-02-27T16:04:28Z"}
    {"level":"info","ts":1772208268.0908031,"logger":"binlog-archival.uploader","msg":"Compressing binary log","binlog":"mariadb-repl-bin.000004","object":"server-10/mariadb-repl-bin.000004.gz","start-time":"2026-02-27T16:04:28Z"}
    {"level":"info","ts":1772208279.7613757,"logger":"binlog-archival.uploader","msg":"Binary log uploaded","binlog":"mariadb-repl-bin.000004","object":"server-10/mariadb-repl-bin.000004.gz","start-time":"2026-02-27T16:04:28Z","total-time":"11.670631252s"}
    {"level":"info","ts":1772208279.7794006,"logger":"binlog-archival","msg":"Binary log mariadb-repl-bin.000004 archived"}
    {"level":"info","ts":1772208279.7794523,"logger":"binlog-archival","msg":"Binlog archival done"}
    
    kubectl get events --field-selector involvedObject.name=mariadb-repl
    LAST SEEN   TYPE     REASON           OBJECT                 MESSAGE
    4m3s        Normal   BinlogArchived   MariaDB/mariadb-repl   Binary log mariadb-repl-bin.000001 archived
    2m36s       Normal   BinlogArchived   MariaDB/mariadb-repl   Binary log mariadb-repl-bin.000002 archived
    2m11s       Normal   BinlogArchived   MariaDB/mariadb-repl   Binary log mariadb-repl-bin.000003 archived
    112s        Normal   BinlogArchived   MariaDB/mariadb-repl   Binary log mariadb-repl-bin.000004 archived
    
    kubectl get mariadb mariadb-repl -o jsonpath='{.status.pointInTimeRecovery}' | jq
    {
      "lastArchivedBinaryLog": "mariadb-repl-bin.000004",
      "lastArchivedGtid": "0-10-1559",
      "lastArchivedPosition": 268506819,
      "lastArchivedTime": "2026-02-27T16:04:15Z",
      "serverId": 10,
      "storageReadyForArchival": true
    }
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      compression: gzip
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: ssec-key
    stringData:
      # 32-byte key encoded in base64 (use: openssl rand -base64 32)
      customer-key: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXoxMjM0NTY=
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      physicalBackupRef:
        name: physicalbackup-daily
      storage:
        s3:
          bucket: binlogs
          endpoint: minio.minio.svc.cluster.local:9000
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
          ssec:
            customerKeySecretKeyRef:
              name: ssec-key
              key: customer-key
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      maxParallel: 4
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      maxRetention: 720h # 30 days
    apiVersion: v1
    binlogs:
      server-10:
      ...
      - binlogFilename: mariadb-repl-bin.000003
        binlogVersion: 4
        firstGtid: 0-10-527
        firstTime: "2026-02-27T16:03:22Z"
        lastGtid: 0-10-1041
        lastTime: "2026-02-27T16:03:50Z"
        logPosition: 268493636
        previousGtids:
        - 0-10-526
        rotateEvent: true
        serverId: 10
        serverVersion: 11.8.5-2-MariaDB-enterprise-log
        stopEvent: false
      - binlogFilename: mariadb-repl-bin.000004
        binlogVersion: 4
        firstGtid: 0-10-1042
        firstTime: "2026-02-27T16:03:50Z"
        lastGtid: 0-10-1559
        lastTime: "2026-02-27T16:04:15Z"
        logPosition: 268506819
        previousGtids:
        - 0-10-1041
        rotateEvent: true
        serverId: 10
        serverVersion: 11.8.5-2-MariaDB-enterprise-log
        stopEvent: false
    kubectl get pitr
    NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
    pitr   physicalbackup-daily   2026-02-27T20:10:42Z    true          43h
    kubectl get pitr
    NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
    pitr   physicalbackup-daily   2026-02-27T20:10:42Z    true          43h
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      rootPasswordSecretKeyRef:
        name: mariadb
        key: root-password
      storage:
        size: 1Gi
      replicas: 3
      replication:
        enabled: true
      # bootstrap the instance from PITR: restore closest physical backup and replay binary logs up to targetRecoveryTime.
      bootstrapFrom:
        pointInTimeRecoveryRef:
          name: pitr
        targetRecoveryTime: 2026-02-20T18:00:04Z
        restoreJob:
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 1Gi
        logLevel: debug
    kubectl apply -f mariadb_replication_pitr_s3.yaml
    mariadb.enterprise.mariadb.com/mariadb-repl created
    
    kubectl get mariadb
    NAME           READY   STATUS         PRIMARY          UPDATES                    AGE
    mariadb-repl   False   Initializing   mariadb-repl-0   ReplicasFirstPrimaryLast   40s
    
    kubectl get pods
    NAME                           READY   STATUS      RESTARTS       AGE
    mariadb-repl-0                 2/2     Running     0              36s
    mariadb-repl-0-pb-init-gp4gl   0/1     Completed   0              45s
    mariadb-repl-1                 1/2     Running     0              15s
    mariadb-repl-1-pb-init-z44d7   0/1     Completed   0              27s
    mariadb-repl-2-pb-init-qmkcv   0/1     Completed   0              8s
    
    kubectl get mariadb
    NAME           READY   STATUS              PRIMARY          UPDATES                    AGE
    mariadb-repl   False   Replaying binlogs   mariadb-repl-0   ReplicasFirstPrimaryLast   93s
    
    kubectl get pods
    NAME                          READY   STATUS      RESTARTS       AGE
    mariadb-repl-0                2/2     Running     0              84s
    mariadb-repl-1                2/2     Running     0              58s
    mariadb-repl-2                2/2     Running     0              38s
    mariadb-repl-pitr-pj6fr       0/1     Init:0/1    0              8s
    
    kubectl logs mariadb-repl-pitr-pj6fr -c mariadb-enterprise-operator
    {"level":"info","ts":1772294432.9904623,"msg":"Starting point-in-time recovery"}
    {"level":"info","ts":1772294432.9907954,"msg":"Getting binlog index from object storage"}
    {"level":"info","ts":1772294432.9951825,"msg":"Building binlog timeline"}
    {"level":"info","ts":1772294432.9952044,"logger":"binlog-timeline","msg":"Building binlog timeline","num-binlogs":0,"start-gtid":"0-10-4","target-time":"2026-02-27T21:10:42+01:00","strict-mode":false,"server":"server-10"}
    {"level":"info","ts":1772294432.9952517,"msg":"Got binlog timeline","path":["server-10/mariadb-repl-bin.000002","server-10/mariadb-repl-bin.000003","server-10/mariadb-repl-bin.000004","server-10/mariadb-repl-bin.000005"]}
    {"level":"info","ts":1772294432.9952574,"msg":"Pulling binlogs into staging area","staging-path":"/binlogs","compression":"gzip"}
    {"level":"info","ts":1772294432.9952772,"logger":"storage","msg":"Pulling binlog","binlog":"server-10/mariadb-repl-bin.000005","start-time":"2026-02-28T16:00:32Z"}
    {"level":"info","ts":1772294432.9967375,"logger":"storage","msg":"Decompressing binlog","binlog":"server-10/mariadb-repl-bin.000005","start-time":"2026-02-28T16:00:32Z","compressed-file":"server-10/mariadb-repl-bin.000005.gz","decompressed-file":"/binlogs/server-10/mariadb-repl-bin.000005","compression":"gzip"}
    {"level":"info","ts":1772294437.3718772,"msg":"Binlogs pulled into staging area","staging-path":"/binlogs","compression":"gzip"}
    {"level":"info","ts":1772294437.3719199,"msg":"Writing target file","file-path":"/binlogs/0-binlog-target.txt"}
    kubectl get mariadb mariadb-repl -o jsonpath='{.status.conditions}' | jq
    [
      {
        "lastTransitionTime": "2026-03-01T12:15:06Z",
        "message": "Initialized",
        "reason": "Initialized",
        "status": "True",
        "type": "Initialized"
      },
      {
        "lastTransitionTime": "2026-03-01T12:15:06Z",
        "message": "Restored physical backup",
        "reason": "RestorePhysicalBackup",
        "status": "True",
        "type": "BackupRestored"
      },
      {
        "lastTransitionTime": "2026-03-01T12:15:06Z",
        "message": "Replication configured",
        "reason": "ReplicationConfigured",
        "status": "True",
        "type": "ReplicationConfigured"
      },
      {
        "lastTransitionTime": "2026-03-01T12:16:40Z",
        "message": "Replayed binlogs",
        "reason": "ReplayBinlogs",
        "status": "True",
        "type": "BinlogsReplayed"
      }
    ]
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      strictMode: true
    kubectl get pitr
    NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
    pitr   physicalbackup-daily   2026-02-27T20:10:42Z    true          43h
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      rootPasswordSecretKeyRef:
        name: mariadb
        key: root-password
      storage:
        size: 1Gi
      replicas: 3
      replication:
        enabled: true
      bootstrapFrom:
        pointInTimeRecoveryRef:
          name: pitr
        targetRecoveryTime: 2026-02-28T20:10:42Z
    kubectl get events --field-selector involvedObject.name=mariadb-repl
    LAST SEEN   TYPE      REASON                 OBJECT                     MESSAGE
    41s         Warning   MariaDBInitError       mariadb/mariadb-repl       Unable to init MariaDB: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC
    
    kubectl get mariadb
    NAME           READY   STATUS                                                                                                                          PRIMARY          UPDATES                    AGE
    mariadb-repl   False   Init error: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC   mariadb-repl-0   ReplicasFirstPrimaryLast   65s
    kubectl get pitr
    NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
    pitr   physicalbackup-daily   2026-02-27T20:10:42Z    false         43h
    kubectl get events --field-selector involvedObject.name=mariadb-repl
    LAST SEEN   TYPE      REASON                 OBJECT                     MESSAGE
    12s         Warning   BinlogTimelineInvalid   mariadb/mariadb-repl      Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T21:10:42+01:00
    
    kubectl get mariadb
    NAME           READY   STATUS                                                                                                                                                                                                                                                               PRIMARY          UPDATES                    AGE
    mariadb-repl   False   Error replaying binlogs: Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T21:10:42+01:00   mariadb-repl-0   ReplicasFirstPrimaryLast   3m28s
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: MariaDB
    metadata:
      name: mariadb-repl
    spec:
      bootstrapFrom:
        stagingStorage:
          persistentVolumeClaim:
            storageClassName: my-storage-class
            resources:
              requests:
                storage: 10Gi
            accessModes:
              - ReadWriteOnce
    kubectl get mariadb mariadb-repl -o jsonpath='{.status.pointInTimeRecovery}' | jq
    {
      "lastArchivedBinaryLog": "mariadb-repl-bin.000001",
      "lastArchivedPosition": 358,
      "lastArchivedTime": "2026-03-02T11:14:00Z",
      "serverId": 10,
      "storageReadyForArchival": true
    }
    kubectl get mariadb mariadb-repl -o jsonpath="{.status.conditions}" | jq
    [
      {
        "lastTransitionTime": "2026-03-02T11:33:58Z",
        "message": "Archived binlogs",
        "reason": "ArchiveBinlogs",
        "status": "True",
        "type": "BinlogsArchived"
      },
      {
        "lastTransitionTime": "2026-03-01T12:16:40Z",
        "message": "Replayed binlogs",
        "reason": "ReplayBinlogs",
        "status": "True",
        "type": "BinlogsReplayed"
      }
    ]
    kubectl get events --field-selector involvedObject.name=mariadb-repl --sort-by='.lastTimestamp'
    
    24m         Warning   BinlogArchivalError    mariadb/mariadb-repl               Error archiving binary logs: 1 error occurred:...
    23m         Normal    BinlogArchived         mariadb/mariadb-repl               Binary log mariadb-repl-bin.000001 archived
    41s         Warning   MariaDBInitError       mariadb/mariadb-repl       Unable to init MariaDB: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC
    12s         Warning   BinlogTimelineInvalid   mariadb/mariadb-repl      Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T21:10:42+01:00
    kubectl get mariadb mariadb-repl -o jsonpath="{.status}" | jq
    {
      "conditions": [
        {
          "lastTransitionTime": "2026-03-02T11:14:58Z",
          "message": "Error archiving binlogs: 1 error occurred:\n\t* binary log storage is not ready for archival. Archival must start from a clean state\n\n",
          "reason": "ArchiveBinlogsError",
          "status": "False",
          "type": "Ready"
        },
        {
          "lastTransitionTime": "2026-03-02T11:14:58Z",
          "message": "Error archiving binlogs: 1 error occurred:\n\t* binary log storage is not ready for archival. Archival must start from a clean state\n\n",
          "reason": "ArchiveBinlogsError",
          "status": "False",
          "type": "BinlogsArchived"
        }
      ]
    }
    apiVersion: enterprise.mariadb.com/v1alpha1
    kind: PointInTimeRecovery
    metadata:
      name: pitr
    spec:
      physicalBackupRef:
        name: physicalbackup-daily
      storage:
        s3:
          bucket: binlogs
          prefix: mariadb-v2 # previously it was "mariadb"
          endpoint: minio.minio.svc.cluster.local:9000
          region: us-east-1
          accessKeyIdSecretKeyRef:
            name: minio
            key: access-key-id
          secretAccessKeySecretKeyRef:
            name: minio
            key: secret-access-key
          tls:
            enabled: true
            caSecretKeyRef:
              name: minio-ca
              key: ca.crt
    kubectl get pitr
    NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
    pitr   physicalbackup-daily   2026-02-27T20:10:42Z    true          43h
    
    kubectl get mariadb
    NAME           READY   STATUS                                                                                                                          PRIMARY          UPDATES                    AGE
    mariadb-repl   False   Init error: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC   mariadb-repl-0   ReplicasFirstPrimaryLast   65s
    apiVersion: v1
    binlogs:
      server-10:
      ...
      - binlogFilename: mariadb-repl-bin.000003
        binlogVersion: 4
        firstGtid: 0-10-527
        firstTime: "2026-02-27T16:03:22Z"
        lastGtid: 0-10-1041
        lastTime: "2026-02-27T16:03:50Z"
        logPosition: 268493636
        previousGtids:
        - 0-10-526
        rotateEvent: true
        serverId: 10
        serverVersion: 11.8.5-2-MariaDB-enterprise-log
        stopEvent: false
      - binlogFilename: mariadb-repl-bin.000004
        binlogVersion: 4
        firstGtid: 0-10-1042
        firstTime: "2026-02-27T16:03:50Z"
        lastGtid: 0-10-1559
        lastTime: "2026-02-27T16:04:15Z"
        logPosition: 268506819
        previousGtids:
        - 0-10-1041
        rotateEvent: true
        serverId: 10
        serverVersion: 11.8.5-2-MariaDB-enterprise-log
        stopEvent: false
    kubectl get mariadb
    NAME           READY   STATUS                                                                                                                                                                                                                                                          PRIMARY          UPDATES                    AGE
    mariadb-repl   False   Error replaying binlogs: Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T16:04:15Z   mariadb-repl-0   ReplicasFirstPrimaryLast   3m28s