Synchronous Multi-Master With Galera
MariaDB Enterprise Kubernetes Operator provides cloud-native support for provisioning and operating multi-master MariaDB clusters using Galera. This setup lets you perform writes on a single node and reads on all nodes, enhancing availability and allowing you to scale across multiple nodes.
In certain circumstances, all the nodes of your cluster may go down at the same time, a scenario that Galera cannot recover from by itself: manual action is required to bring the cluster up again, as documented in the Galera documentation. The MariaDB Enterprise Kubernetes Operator encapsulates this operational expertise in the MariaDB CR. You just need to declaratively specify spec.galera, as explained in more detail later in this guide.
To accomplish this, after the MariaDB cluster has been provisioned, the operator regularly monitors the cluster to make sure it is healthy. If any issues are detected, the operator initiates a recovery process to restore the cluster to a healthy state. During this process, the operator sets status conditions on the MariaDB resource and emits Events so you have a better understanding of the recovery progress and the underlying activities being performed. For example, you may want to know which Pods were out of sync in order to investigate infrastructure-related issues (e.g. networking, storage) on the nodes where those Pods were scheduled.
MariaDB configuration
The easiest way to get a MariaDB Galera cluster up and running is to set spec.galera.enabled = true:
```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  ...
  replicas: 3
  galera:
    enabled: true
```

This relies on sensible defaults set by the operator, which may not be suitable for your Kubernetes cluster. You can solve this by overriding the defaults, giving you fine-grained control over the Galera configuration.
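As an illustrative sketch, an overridden configuration might combine the fields discussed later in this guide. The values shown here are placeholders, not the operator's actual defaults:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  ...
  replicas: 3
  galera:
    enabled: true
    # Extra options for the wsrep provider (see "Wsrep provider").
    providerOptions:
      gcs.fc_limit: "64"
    # Tuning for the cluster recovery process (see "Galera cluster recovery").
    recovery:
      enabled: true
      minClusterSize: 50%
      clusterHealthyTimeout: 30s
```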
Refer to the API reference to better understand the purpose of each field.
Storage
By default, the operator provisions two PVCs for running Galera:
- Storage PVC: used to back the MariaDB data directory, mounted at /var/lib/mysql.
- Config PVC: where the Galera config files are located, mounted at /etc/mysql/conf.d.
However, you can also use just one PVC to keep both the data and the config files:
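A sketch of what this could look like; the exact field name under spec.galera.config is an assumption and may differ in your operator version:

```yaml
spec:
  galera:
    enabled: true
    config:
      # Assumption: reuse the storage PVC for the Galera config files
      # instead of provisioning a dedicated config PVC.
      reuseStorageVolume: true
```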
Wsrep provider
You can pass extra options to the Galera wsrep provider by using the galera.providerOptions field:
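For example, the following sketch sets two common wsrep provider options; the values are illustrative, not recommendations:

```yaml
spec:
  galera:
    enabled: true
    providerOptions:
      # Flow-control limit: number of queued write-sets before
      # replication is throttled.
      gcs.fc_limit: "64"
      # Size of the write-set cache used for incremental state transfers.
      gcache.size: "1G"
```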
Note that ist.recv_addr cannot be set by the user: the operator automatically configures it to the Pod IP, which a user cannot know beforehand.
A list of the available options can be found in the MariaDB documentation.
IPv6 support
If your Kubernetes cluster runs with IPv6, the operator automatically detects the IPv6 addresses of your Pods and configures several wsrep provider options to ensure that the Galera protocol runs smoothly over IPv6.
Galera cluster recovery
MariaDB Enterprise Kubernetes Operator monitors the Galera cluster and acts accordingly to recover it if needed. This feature is enabled by default, but you may tune it as you need:
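A sketch of the tunable recovery fields covered in this section; the values shown are illustrative, not the operator's defaults:

```yaml
spec:
  galera:
    enabled: true
    recovery:
      enabled: true
      # Minimum healthy cluster size: an absolute number of replicas
      # or a percentage.
      minClusterSize: 50%
      # How long the cluster may stay unhealthy before recovery starts.
      clusterHealthyTimeout: 30s
      # How often the operator checks the cluster health.
      clusterMonitorInterval: 10s
```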
The minClusterSize field indicates the minimum cluster size (either an absolute number of replicas or a percentage) for the operator to consider the cluster healthy. If the cluster is unhealthy for longer than the period defined in clusterHealthyTimeout (30s by default), the operator initiates a cluster recovery process. The process is explained in the Galera documentation and consists of the following steps:
1. Recover the sequence number from grastate.dat on each node.
2. Trigger a recovery Job to obtain the sequence numbers that the previous step could not recover.
3. Mark the node with the highest sequence number (the bootstrap node) as safe to bootstrap.
4. Bootstrap a new cluster on the bootstrap node.
5. Restart and wait until the bootstrap node becomes ready.
6. Restart the rest of the nodes one by one so they can join the new cluster.
The operator monitors the Galera cluster health periodically and performs the cluster recovery described above if needed. You can tune the monitoring interval via the clusterMonitorInterval field.
Refer to the API reference to better understand the purpose of each field.
Galera recovery Job
During the recovery process, a Job is triggered for each MariaDB Pod to obtain the sequence numbers. It's crucial for this Job to succeed; otherwise, the recovery process will fail. As a user, you are responsible for adjusting this Job so that it is allocated sufficient resources and has the metadata required to complete successfully.
For example, if you're using a service mesh like Istio, it's important to add the sidecar.istio.io/inject=false label. Without this label, the Job will not complete, which would prevent the recovery process from finishing successfully.
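A sketch of how the recovery Job could be adjusted, assuming it is configurable under spec.galera.recovery.job; the resource values are illustrative:

```yaml
spec:
  galera:
    enabled: true
    recovery:
      job:
        metadata:
          labels:
            # Prevent Istio from injecting a sidecar that would keep
            # the Job from completing.
            sidecar.istio.io/inject: "false"
        # Illustrative resource allocation for the recovery Job.
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```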
Force cluster bootstrap
Use this option only in exceptional circumstances. Not selecting the Pod with the highest sequence number may result in data loss.
Ensure you unset forceClusterBootstrapInPod after completing the bootstrap to allow the operator to choose the appropriate Pod to bootstrap from in the event of a cluster recovery.
You can manually select which Pod is used to bootstrap a new cluster during the recovery process by setting forceClusterBootstrapInPod:
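A sketch, assuming the field lives under spec.galera.recovery and using the Pod name from the example below:

```yaml
spec:
  galera:
    enabled: true
    recovery:
      # Force bootstrap from this Pod. Unset after the bootstrap completes.
      forceClusterBootstrapInPod: mariadb-galera-0
```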
This should only be used in exceptional circumstances:
- You are absolutely certain that the chosen Pod has the highest sequence number.
- The operator has not yet selected a Pod to bootstrap from.
You can verify this with the following command:
In this case, assuming that the mariadb-galera-2 sequence number is lower than 350454, it should be safe to bootstrap from mariadb-galera-0.
Finally, after your cluster has been bootstrapped, remember to unset forceClusterBootstrapInPod to allow the operator to select the appropriate node for bootstrapping in the event of a cluster recovery.
Bootstrap Galera cluster from existing PVCs
MariaDB Enterprise Kubernetes Operator will never delete your MariaDB PVCs. Whenever you delete a MariaDB resource, the PVCs will remain intact so you could reuse them to re-provision a new cluster.
That said, Galera is unable to form a cluster from pre-existing state: it requires a cluster recovery process to identify which Pod has the highest sequence number and bootstrap a new cluster from it. That's exactly what the operator does: whenever a new MariaDB Galera cluster is created and previously created PVCs exist, a cluster recovery process is automatically triggered.
Quickstart
Apply the following manifests to get started with Galera in Kubernetes:
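As a sketch, a minimal manifest could look like the following; the Secret reference and storage size are assumptions for illustration:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  # Assumption: a Secret named "mariadb" holds the root password.
  rootPasswordSecretKeyRef:
    name: mariadb
    key: root-password
  # Assumption: illustrative storage size for each replica's PVC.
  storage:
    size: 1Gi
  replicas: 3
  galera:
    enabled: true
```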
Next, check the MariaDB status and the resources created by the operator:
Let's now proceed with simulating a Galera cluster failure by deleting all the Pods at the same time:
After some time, we will see the MariaDB entering a non-Ready state:
Eventually, the operator will kick in and recover the Galera cluster:
Finally, the MariaDB resource will become Ready and your Galera cluster will be operational again:
Troubleshooting
The aim of this section is to show you how to diagnose your Galera cluster when something goes wrong. In these situations, observability is key to understanding the problem, so we recommend following these steps before jumping into debugging:
- Inspect the MariaDB status conditions.
- Make sure network connectivity is fine by checking that you have an Endpoint per Pod in your Galera cluster.
- Check the events associated with the MariaDB object, as they provide significant insights for diagnosis, particularly within the context of cluster recovery.
- Enable debug logs in mariadb-enterprise-operator.
- Get the logs of all the MariaDB Pod containers: not only the main mariadb container, but also the agent and init ones.
Once you are done with these steps, you will have the context required to jump ahead to the Common errors section to see if any of them matches your case.
Common errors
Galera cluster recovery not progressing
If your MariaDB Galera cluster has been in the GaleraNotReady state for a long time, the recovery process might not be progressing. You can diagnose this by checking:
- The operator logs.
- The Galera recovery status.
- The MariaDB events.
If you have Pods named <mariadb-name>-<ordinal>-recovery-<suffix> running for a long time, check their logs to understand whether something is wrong.
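As an illustration, the Galera recovery status reported in the MariaDB status might look like the following; the field names and values here are assumptions, not the operator's exact schema:

```yaml
status:
  galeraRecovery:
    # Illustrative shape: one entry per Pod with its recovered
    # sequence number.
    state:
      mariadb-galera-0:
        seqno: 350454
        safeToBootstrap: true
      mariadb-galera-1:
        seqno: 350453
        safeToBootstrap: false
```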
One of the reasons could be misconfigured Galera recovery Jobs, so make sure you read this section. If, after checking all the points above, there are still no clear symptoms of what could be wrong, continue reading.
First of all, you could attempt to forcefully bootstrap a new cluster, as described in this section. Refrain from doing so if the conditions described in the docs are not met.
Alternatively, if you can afford some downtime and your PVCs are in a healthy state, you may follow this procedure:
1. Delete your existing MariaDB; this will leave your PVCs intact.
2. Create your MariaDB again; this will trigger a Galera recovery process, as described in this section.
As a last resort, you can always delete the PVCs and bootstrap a new MariaDB from a backup, as documented here.
Permission denied writing Galera configuration
This error occurs when the user that runs the container does not have enough privileges to write in /etc/mysql/mariadb.conf.d:
To mitigate this, the operator sets the following securityContext in the MariaDB StatefulSet by default:
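A sketch of what such a Pod-level securityContext could look like; the fsGroupChangePolicy value is an assumption for illustration:

```yaml
securityContext:
  # Group 999 is the one expected by MariaDB; the kubelet recursively
  # sets the ownership of mounted volumes to this group.
  fsGroup: 999
  # Assumption: only re-chown the volume when the root ownership
  # does not already match.
  fsGroupChangePolicy: OnRootMismatch
```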
This enables the CSIDriver and the kubelet to recursively set the ownership of the /etc/mysql/mariadb.conf.d folder to the group 999, which is the one expected by MariaDB. Note that not all CSIDriver implementations support this feature; see the CSIDriver documentation for further information.
Unauthorized error disabling bootstrap
This situation occurs when the mariadb-enterprise-operator credentials passed to the agent for authentication are either invalid or cannot be verified by the agent. To confirm this, ensure that both the mariadb-enterprise-operator and the MariaDB ServiceAccounts are able to create TokenReview objects:
If that's not the case, check that the following ClusterRole and ClusterRoleBindings are available in your cluster:
mariadb-enterprise-operator:auth-delegator is the ClusterRoleBinding bound to the mariadb-enterprise-operator ServiceAccount which is created by the helm chart, so you can re-install the helm release in order to recreate it:
mariadb-galera:auth-delegator is the ClusterRoleBinding bound to the mariadb-galera ServiceAccount, which is created on the fly by the operator as part of its reconciliation logic. You may check the mariadb-enterprise-operator logs to see whether there are any issues reconciling it.
Bear in mind that ClusterRoleBindings are cluster-wide resources that are not garbage collected when the MariaDB owner object is deleted, which means that creating and deleting MariaDBs could leave leftovers in your cluster. These leftovers can lead to RBAC misconfigurations, as the ClusterRoleBinding might not be pointing to the right ServiceAccount. To overcome this, you can override the ClusterRoleBinding name by setting the spec.galera.agent.kubernetesAuth.authDelegatorRoleName field.
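For example, a sketch of overriding the role name; the value itself is illustrative:

```yaml
spec:
  galera:
    agent:
      kubernetesAuth:
        # Use a unique name per MariaDB to avoid ClusterRoleBinding
        # collisions across clusters.
        authDelegatorRoleName: mariadb-galera-auth
```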
Timeout waiting for Pod to be Synced
This error appears in the mariadb-enterprise-operator logs when a Pod remains in a non-synced state for longer than spec.galera.recovery.podRecoveryTimeout. Once this timeout is exceeded, the operator restarts the Pod.
Increase this timeout if you consider that your Pod may take longer to recover.
Galera cluster bootstrap timed out
This error is returned by mariadb-enterprise-operator after exceeding spec.galera.recovery.clusterBootstrapTimeout while recovering the cluster. At this point, the operator resets the recovered sequence numbers and starts again from a clean state.
Increase this timeout if you consider that your Galera cluster may take longer to recover.
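Both timeouts can be increased under spec.galera.recovery; a sketch with illustrative values:

```yaml
spec:
  galera:
    recovery:
      # Give each Pod more time to reach the Synced state.
      podRecoveryTimeout: 10m
      # Give the whole cluster more time to bootstrap during recovery.
      clusterBootstrapTimeout: 30m
```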
This page is: Copyright © 2025 MariaDB. All rights reserved.