Hub page covering deployment, user management, backup and restore, hostname changes, and security in Enterprise Manager.
Usage
Hub page detailing how to use Enterprise Manager's core features, specifically Monitoring and the Workspace.
Dashboards
Hub page for MariaDB Enterprise Manager's pre-packaged Grafana dashboards, which provide deep visibility into server health, database topologies, and system performance.
Monitoring
Covers the monitoring capabilities including the built-in Grafana dashboards, metrics tracking, and predefined alert rules for the database fleet.
Architecture Overview
Explains the client/server architecture, central components (Supermax, Grafana, Prometheus), and local agents (OpenTelemetry, exporters).
MariaDB Enterprise Manager is a client/server application for monitoring and managing MariaDB deployments. It provides topology-aware monitoring, visual schema management, and query editing across multiple database connections.
The architecture consists of two primary components: a central Enterprise Manager Server that aggregates data and hosts the user interface, and an Enterprise Manager Agent that is deployed on each monitored host.
Enterprise Manager Server
The Enterprise Manager Server runs on a dedicated host and acts as the central command center. It is delivered as a suite of Docker containers managed by Docker Compose.
The core components are the following:
Component
Description
Enterprise Manager Agent
The Enterprise Manager Agent is installed on each MariaDB Server and MaxScale host that you want to monitor. Its job is to collect data and forward it to the central server.
These components are installed via the mema-agent package (RPM or DEB) and include:
Prometheus Exporters: These are the primary data gatherers.
Node Exporter: Collects system-level metrics (CPU, RAM, disk usage).
Mysqld Exporter: Collects detailed metrics from the MariaDB database itself.
Networking Requirements
For the system to function correctly, the following firewall ports must be open on the Enterprise Manager Server host:
8090 (HTTP/S): The main entry point for the web UI. Nginx listens on this port and proxies requests to Supermax and Grafana.
4318 (HTTP/S): Agents on monitored nodes push telemetry data to this port.
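On Linux hosts that use firewalld (an assumption; adapt for ufw or iptables), opening these two ports can be expressed as a short script. The sketch below only prints the commands so you can review them before running them with root privileges.

```shell
# Build the firewalld commands for the two required ports (8090 for the
# web UI, 4318 for agent telemetry) and print them for review.
cmds=""
for port in 8090 4318; do
  cmds="${cmds}firewall-cmd --permanent --add-port=${port}/tcp
"
done
cmds="${cmds}firewall-cmd --reload"
printf '%s\n' "$cmds"
```

To apply the openings, run the printed commands as root (for example, pipe them to `sudo sh`).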
Guidelines and instructions for deploying MariaDB Enterprise Manager, including network and firewall requirements for successful installation.
This section provides an overview of the deployment process for MariaDB Enterprise Manager, covering installation and upgrades for both the central server and the monitoring agents.
MariaDB Enterprise Manager is designed for a streamlined deployment experience. You can launch the main server with a single-line command for a quick start, and a UI-integrated helper tool simplifies the process of installing and registering agents on your monitored databases.
Installing the Enterprise Manager Server
SSO to MaxScale (Single Sign-On)
Instructions for configuring Single Sign-On (SSO) integration to seamlessly access the MaxScale GUI directly from MariaDB Enterprise Manager.
For topologies managed by MaxScale, you can seamlessly access the MaxScale GUI directly from Enterprise Manager using Single Sign-On. SSO to MaxScale requires MaxScale 25.10.0 or higher.
1
Accessing the MaxScale GUI
Tools
This space includes documentation for clients, utilities, and applications, including AI-focused ones, designed to help you manage, monitor, back up, and interact with your MariaDB Server deployment.
MariaDB Enterprise Manager
MariaDB Enterprise Manager is a comprehensive observability and management solution designed for your entire database fleet. It provides advanced, topology-aware monitoring and a powerful suite of visual tools for query development and schema management, all from a single, centralized interface.
MariaDB Enterprise Kubernetes Operator
MariaDB Enterprise Kubernetes Operator automates provisioning, scaling, backups, and high availability, making cloud-native database operations efficient and reliable.
Installation
Detailed guide on installing the MariaDB Enterprise Kubernetes Operator using Helm charts or manual manifests within a Kubernetes environment.
Backup and Restore
Procedures for configuring automated and on-demand backups using MariaDB Enterprise Backup, including restoration steps to recover data.
Topologies
Explains supported deployment patterns such as standalone instances, Primary/Replica replication, and Galera Cluster configurations for high availability.
MariaDB Enterprise MCP Server
Plugins
Overview of available plugins and extensions that can be used to enhance the functionality of the MariaDB Enterprise Kubernetes Operator.
Migrations
Learn about migrations with MariaDB Enterprise Kubernetes Operator. This section covers strategies and procedures for smoothly migrating your MariaDB databases within Kubernetes environments.
Specific guidance on migrating database instances into the multi-tenancy Catalog structure within a Kubernetes environment.
Example
MariaDB Enterprise Operator
MariaDB Enterprise Operator provides a seamless way to run and operate containerized versions of MariaDB Enterprise Server and MaxScale on Kubernetes, allowing you to leverage Kubernetes orchestration and automation capabilities. This document outlines the features and advantages of using Kubernetes and the MariaDB Enterprise Operator to streamline the deployment and management of MariaDB and MaxScale instances.
MariaDB Enterprise MCP Server
MariaDB Enterprise MCP (Model Context Protocol) Server is a secure, enterprise-grade application designed to act as the primary interface between AI assistants and MariaDB data ecosystems. This product solves a key challenge: how to allow powerful AI agents to safely and efficiently leverage an organization's most valuable asset—its data.
MariaDB AI RAG
MariaDB AI RAG is an enterprise-grade Retrieval-Augmented Generation (RAG) solution that integrates with MariaDB to provide AI-powered document processing, semantic search, and natural language generation capabilities.
OpenTelemetry Collector: This local collector pulls data from the Prometheus exporters and pushes it to the central collector on the Enterprise Manager Server.
mema-agent CLI: A setup utility used to register the host with the Enterprise Manager Server and configure the local agent services.
Supermax
The primary backend application that serves the main web UI for management, server registration, and configuration.
Grafana
Provides powerful, pre-built dashboards for visualizing time-series performance metrics.
Prometheus
The time-series database that ingests and stores all monitoring data collected from the agents.
OpenTelemetry Collector
The central endpoint that receives telemetry data (metrics, logs, traces) from all agents.
Nginx
A web server that acts as a reverse proxy, directing browser traffic to the appropriate service (Supermax or Grafana).
The Enterprise Manager Server is a Docker-based application installed on a dedicated host. The installation is handled by the installer script, which pulls the necessary container images and starts the application.
As a first step, review the hardware, system, and network requirements:
After confirming your hardware, system, and network are compliant, proceed with the installation instructions: Installing MariaDB Enterprise Manager
Installing Enterprise Manager Agents
To monitor a MariaDB Server or MaxScale host, install the agent on it. Then, use the Enterprise Manager UI to add the database topology and generate the agent setup command. This command includes the correct metric labels for that host.
Quick start
You can quickly set up and launch MariaDB Enterprise Manager with a single-line command. This allows you to start exploring its capabilities without extensive configuration.
Enterprise Manager includes a helper tool, integrated in the UI, for adding agents. The helper prompts you to download a small (less than 50 MB) binary and then provides command-line instructions to install and register agents, enabling quick and seamless addition of new MariaDB databases to Enterprise Manager.
Explains how to monitor multiple logical databases or clusters managed by a single MaxScale deployment by adding or changing specific MaxScale monitors in the UI.
MariaDB Enterprise Manager allows you to monitor multiple logical databases or clusters that are managed by the same set of high-availability MaxScale instances. After adding your first MaxScale instance, you can easily add more monitors to track different services without re-entering the connection details.
Default Monitor Behavior
If you add a database from a MaxScale setup that has multiple monitors and do not explicitly select one, Enterprise Manager will automatically assign the first available monitor by default. To ensure you are tracking the correct service, it's best to specify the monitor manually.
Adding an Additional Monitor
Follow these steps to add another logical database that is monitored by the same MaxScale deployment.
1
Add a new monitored logical database
Navigate to your main database inventory page.
Changing the Monitor for an Existing Database
If you need to change which MaxScale monitor an existing logical database is tracking, follow these steps.
1
Open the database edit menu
Navigate to your main database inventory page and locate the logical database you wish to edit.
Step-by-step instructions for deploying the Docker-based Enterprise Manager Server, including standard online setups and air-gapped installation procedures.
Log in to the MariaDB Enterprise Docker Registry, providing your username and your Customer Download Token as the password:
2
Download the installation script
Insert your Customer Download Token into the download URL and download the installation script:
The installer generates a self-signed TLS certificate for Enterprise Manager. To change the certificate, follow instructions at .
To modify metrics retention time, see .
Enterprise Manager Server Air-Gapped Installation
Installing Enterprise Manager on a machine without an Internet connection is possible by manually copying the Docker images and related settings from an Internet-connected machine to the final target machine.
Follow these steps:
1
Install on an Internet-connected machine
First, install Enterprise Manager on an Internet-connected machine as explained in the normal installation section. When the installation script asks for the address and port that Enterprise Manager should listen at for incoming connections, enter the values for the final target machine.
2
Save images and settings
Once installation is complete, save all related Docker images and settings by running the following commands from the directory that contains the
Steps for safely modifying the hostname or IP address of the Enterprise Manager server and ensuring all monitored agents remain connected.
To set the hostname or IP address for an existing MariaDB Enterprise Manager instance, follow these instructions. Changing the hostname or IP address is useful if your server's IP has changed or if you need to switch from an IP address to a public DNS name.
1
Connect to your server
SSH into the server where your Enterprise Manager is running:
2
Navigate to the directory
Change into the enterprise-manager directory, where your Docker Compose files are located:
3
Edit the .env file
Open the environment file with a text editor (for example nano):
Find the line that begins with MEMA_HOSTNAME=
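The edit-and-restart steps can be sketched as a small script. MEMA_HOSTNAME is the variable named in this guide; mem.example.com is a hypothetical DNS name, and the demo works on a throwaway copy of the file rather than a live installation.

```shell
# Demo: update MEMA_HOSTNAME in a sample .env file.
set -eu
cd "$(mktemp -d)"
printf 'MEMA_HOSTNAME=10.0.0.5\n' > .env   # stand-in for the real file

new_host="mem.example.com"                 # hypothetical new DNS name
sed -i "s|^MEMA_HOSTNAME=.*|MEMA_HOSTNAME=${new_host}|" .env
grep '^MEMA_HOSTNAME=' .env

# In the real enterprise-manager directory, restart so the change applies:
# docker compose up -d --force-recreate
```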
4
Save the file
Save the file and exit the editor.
5
Restart the services
Restart the MEM services so the new environment variable takes effect. The --force-recreate flag ensures the containers are rebuilt using the updated environment variables:
After the restart, your Enterprise Manager will be accessible at the new hostname or IP address.
Overview of MariaDB Enterprise Manager, a centralized observability and management solution offering topology-aware monitoring, visual schema management, and query editing via an integrated workspace.
MariaDB Enterprise Manager is a comprehensive observability and management solution designed for your entire database fleet. It provides advanced, topology-aware monitoring and a powerful suite of visual tools for query development and schema management, all from a single, centralized interface.
At its core, Enterprise Manager uses lightweight agents to collect deep telemetry from your standalone databases, replicated topologies, and MaxScale clusters via the OpenTelemetry standard. This foundation powers the integrated Grafana dashboards, which come pre-packaged with production-ready visualizations and alerts. Beyond monitoring, the Workspace provides a shared environment for developers and DBAs with an advanced Query Editor and a visual ERD Designer. The entire system is secured with role-based access control, audit logging, and can integrate with your corporate identity provider (OIDC) for single sign-on.
Key Capabilities at a Glance
Advanced Monitoring
Leverage the power of a built-in Grafana instance, complete with pre-packaged dashboards and production-ready alerts. The platform provides the flexibility to create custom , define , and route notifications to a wide range of destinations.
Integration with Other Observability Solutions
Built on open standards, Enterprise Manager uses OpenTelemetry for metrics collection. Its integrated Prometheus time-series database exposes a query API, allowing you to seamlessly export metrics and integrate with your existing observability stack.
Centralized Management
Gain a topology-based, centralized view of your entire database fleet. Enterprise Manager discovers and visualizes your replication and clustering setups, providing the ability to drill down into a specific through a seamless single sign-on (SSO) experience.
Workspace
The Workspace provides a powerful suite of tools for developers and DBAs. It features a rich for running and debugging SQL and a visual for schema management and modeling across multiple database connections.
Enterprise Security
Secure your management layer with robust security features. Authenticate users with your corporate , enforce granular permissions with , and maintain compliance with a comprehensive audit log for all administrative actions.
Instructions for installing the mema-agent application using native OS package managers, including prerequisite steps for creating a local monitor user in MariaDB.
To install mema-agent, you need to set up .
The mema-agent is a small application that must be installed on every server you wish to monitor with MariaDB Enterprise Manager, including MariaDB Server nodes and MaxScale nodes.
This guide covers the recommended installation method using a package manager.
Network and Firewall Requirements
Outlines the necessary network ports and firewall configurations (such as ports 8090 and 4318) required for UI access and agent telemetry data collection.
It's recommended to run MariaDB Enterprise Manager on an internal, secured network. Direct public exposure is not recommended.
Before installing MariaDB Enterprise Manager, ensure that your firewall and network rules allow traffic on all required ports. Proper connectivity is essential for the system to function correctly.
The following table details the necessary ports and their purposes.
SMTP Server Configuration
Instructions for configuring SMTP credentials and server details in the environment file to enable email alerts from the integrated alerting engine.
This page explains how to configure email alerting for MariaDB Enterprise Manager using Grafana's integrated alerting engine. Configure SMTP credentials and server details in the main environment file so Enterprise Manager can send alert notifications via email.
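As a sketch, the variables below show the shape such a configuration usually takes. The GF_SMTP_* names are Grafana's standard environment settings; whether the Enterprise Manager .env file passes them through under these exact names is an assumption to verify against your installation.

```ini
# Hypothetical .env additions for Grafana email alerting (verify the names)
GF_SMTP_ENABLED=true
GF_SMTP_HOST=smtp.example.com:587
GF_SMTP_USER=alerts@example.com
GF_SMTP_PASSWORD=<smtp-password>
GF_SMTP_FROM_ADDRESS=alerts@example.com
```

After editing the file, restart the services so Grafana picks up the new values.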
1
Alerts and Notifications
Overview of the integrated Grafana-based alerting engine used to detect critical conditions and dispatch notifications to various destinations.
MariaDB Enterprise Manager provides a powerful and flexible alerting system, built on the capabilities of the integrated Grafana Alerting engine. It allows you to proactively monitor your entire database fleet, define custom rules for potential issues, and receive notifications through various channels to ensure you can respond quickly.
How It Works: The Alerting Flow
The alerting process in MariaDB Enterprise Manager follows a clear, four-step flow from detection to notification.
25.08 version update guide
This guide illustrates, step by step, how to update to 25.8.0 from previous versions.
Uninstall your current mariadb-enterprise-operator to prevent conflicts:
Alternatively, you may only downscale and delete the webhook configurations:
Suspend Reconciliation
Instructions on how to temporarily pause the Operator's automated management of a specific resource for maintenance or troubleshooting.
Suspended state
When a resource is suspended, all operations performed by the operator are disabled, including but not limited to:
Provisioning
26.03 version update guide
This guide illustrates, step by step, how to update to 26.3.1 from previous versions. This guide only applies if you are updating from a version prior to 26.3.x; otherwise, you may upgrade directly (see and docs).
The must be updated to the 26.3.1 version. You must set updateStrategy.autoUpdateDataPlane=true in your MariaDB resources before updating the operator. Then, once updated, the operator will also update the data-plane based on its version:
Overview
"Model Context Protocol" (MCP) is a standard or interface designed to bridge the gap between AI development tools (like copilots in your code editor) and your project's specific environment.
In simple terms, it's a way for an AI to understand the context of what you're working on.
The MariaDB Enterprise MCP (Model Context Protocol) Server is a secure, enterprise-grade application designed to act as the primary interface between AI assistants and MariaDB data ecosystems. This product solves a key challenge: how to allow powerful AI agents to safely and efficiently leverage an organization's most valuable asset—its data.
Supported Docker Images
The following is a list of images that have plugins installed and available to use.
Even though these images have plugins installed, that doesn't necessarily mean that they are enabled by default. You may need to install them. The recommended operator-native way to do so is to use:
Each supported plugin will have a section on how to install it.
Component
Image
Token Management
Token management is a critical part of the system's security, handled primarily by the RAG API.
Token Generation
The process involves two main steps:
Migrate Embedded MaxScale To MaxScale Resource
In this guide, we will be migrating a MaxScale embedded in a MariaDB resource to its own resource.
Note that if you've been using the embedded maxScale property, the operator will already have created a MaxScale resource to go along with it.
Examples Catalog
A collection of YAML manifests and configuration examples for various common deployment scenarios and resource management tasks.
The contains a number of sample manifests that aim to show the operator functionality in a practical way. Follow these instructions for getting started:
Download the :
Install the configuration shared by all the examples:
Locate the existing logical database that is associated with your MaxScale deployment.
Click the three-dot menu icon (⋮) on the right side of the database entry to open the context menu and select Add Monitor.
2
Configure the new monitor
In the dialog box that appears, provide a new Logical Database Name and select the specific MaxScale Monitor you wish to track from the dropdown list.
Click the Confirm button to add the new monitored database.
Click the three-dot menu icon (⋮) on the right side of the database entry.
Select the Edit option from the menu.
2
Select a different monitor
In the configuration window, scroll down to the Advanced section.
From the Monitor name dropdown, select the new MaxScale monitor you want this logical database to track.
Click the Confirm button to save your changes.
Copy the Customer Download Token to use as the password when logging in to the MariaDB Enterprise Docker Registry.
3
Make the installer executable
4
Run the installer
Install Enterprise Manager by running the script:
The script prompts you to enter the IP address and port number on which Enterprise Manager should listen for incoming connections. Verify the auto-detected value and correct it if it's wrong.
This address and port must be reachable from all monitored MariaDB Server and MaxScale hosts.
After you provide the details, the script launches Enterprise Manager.
5
Verify containers
Run docker compose ps in the enterprise-manager directory to check that all of the constituent Docker containers are running. The containers are:
enterprise-manager-grafana
enterprise-manager-nginx
enterprise-manager-otelcol
enterprise-manager-prometheus
enterprise-manager-supermax
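A quick sanity check can compare this expected list against what is actually running. The sketch below uses a stand-in list so it is self-contained; on a real host, feed it the container names reported by docker compose ps.

```shell
# Expected container names, taken from the list above.
expected="enterprise-manager-grafana
enterprise-manager-nginx
enterprise-manager-otelcol
enterprise-manager-prometheus
enterprise-manager-supermax"

# Stand-in for the names reported by `docker compose ps` on a real host.
running="$expected"

missing=0
for name in $expected; do
  printf '%s\n' "$running" | grep -qx "$name" || { echo "missing: $name"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected containers present"
```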
6
Access the UI
Access the Enterprise Manager UI at:
https://<Enterprise_Manager_IP>:8090
At the login screen, use the default username admin and the generated password displayed after the installation script finishes.
enterprise-manager
folder:
The resulting archive enterprise-manager.tar.gz contains all components of Enterprise Manager.
3
Transfer archive to target machine
Copy enterprise-manager.tar.gz to the target (air-gapped) machine into the directory under which you want to install Enterprise Manager.
4
Extract and load images on target machine
On the target machine, extract the archive and load the Docker images:
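The extract-and-load step can be sketched as follows. A stand-in archive is built first so the example is self-contained; on a real target you start from the transferred enterprise-manager.tar.gz, and the file names inside the archive here are illustrative only.

```shell
set -eu
cd "$(mktemp -d)"
# Build a stand-in for the transferred archive (demo only).
mkdir enterprise-manager && echo demo > enterprise-manager/images.tar
tar czf enterprise-manager.tar.gz enterprise-manager
rm -r enterprise-manager

# On the air-gapped machine: extract the archive...
tar xzf enterprise-manager.tar.gz
ls enterprise-manager
# ...then load the bundled images into Docker (not executed in this demo):
# docker load -i enterprise-manager/images.tar
```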
and update it with the new hostname or IP address. Example:
2
Restore the backup of all volumes
The backups are stored in the ~/backups/ directory.
3
Start Enterprise Manager
Go to the Enterprise Manager installation directory.
Run docker compose up -d to start Enterprise Manager.
Prerequisite: Create the Local Monitor User
Before installing the agent on a MariaDB Server host, you must create a local user that the agent will use to connect to the database and collect metrics.
Log in to your MariaDB Server and run the following:
Replace <password> with a secure password. You will need these credentials later when linking the agent in the Enterprise Manager UI.
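The exact statements are not reproduced here; the following is a hedged sketch of the kind of monitor user typically required by exporter-based agents. The user name and privilege list are assumptions, so confirm the required grants in the agent documentation before running anything.

```shell
# Write a hypothetical monitor-user script for review (demo only; the
# mem_monitor name and the privilege list are assumptions).
set -eu
cat > monitor_user.sql <<'SQL'
CREATE USER 'mem_monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'mem_monitor'@'localhost';
SQL
cat monitor_user.sql
```

The localhost host mask reflects the fact that the agent runs on the same host as the database.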
Installation via Package Manager (Recommended)
This method uses your OS's native package manager (dnf, apt, zypper) to install the agent from the MariaDB Enterprise repository.
Step 1: Configure the MariaDB Enterprise Repository
If you haven't already configured the MariaDB Enterprise repository on the server, follow these steps.
An alert rule contains a query (what to measure, e.g., disk usage), a condition (the threshold, e.g., > 90%), and labels for routing (e.g., type = server disk).
2
Instances are Evaluated
Grafana periodically runs the query against your monitored targets. It creates an Alert Instance for each distinct entity (e.g., one for Server 01, one for Server 02, etc.).
3
An Instance "Fires"
If the condition is met for a specific instance (e.g., Server 01's disk usage is over 90%), that instance enters a "firing" state.
4
Notifications are Sent
The firing alert is routed through a Notification Policy. The policy matches the alert's labels (e.g., type = server disk) and sends a notification to the configured Contact Point (such as Email, Slack, or PagerDuty).
Key Alerting Concepts
To configure alerting effectively, it's helpful to understand these core concepts from Grafana:
Term
Description
Alert Rules
The combination of a data query and a threshold condition defining what to measure and when it's a problem.
Alert Instances
Generated from an alert rule for each monitored entity, showing individual statuses.
Contact Points
Destinations for notifications, such as email, Slack, PagerDuty, or webhooks.
Notification Policies
Uses labels to route alerts to contact points, facilitating team-specific alerting.
Silences and Mute Timings
Allow temporary notification pauses without halting alerts. Silences cover single events, like maintenance, while Mute Timings are for recurring periods, such as at night or weekends.
For a deep dive into advanced topics like custom message templating, alert grouping, and more complex routing, see the official Grafana documentation.
Upgrade mariadb-enterprise-operator-crds to 25.8.0:
The Galera data-plane must be updated to the 25.8.0 version.
If you want the operator to automatically update the data-plane (i.e. init and agent containers), you can set updateStrategy.autoUpdateDataPlane=true in your MariaDB resources:
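In resource form, that setting can be sketched as follows. The apiVersion shown is the community operator's API group; confirm the group used by your Enterprise operator installation.

```yaml
apiVersion: k8s.mariadb.com/v1alpha1   # verify the API group for your operator
kind: MariaDB
metadata:
  name: mariadb
spec:
  updateStrategy:
    autoUpdateDataPlane: true
```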
Alternatively, you can also do this manually:
Upgrade mariadb-enterprise-operator to 25.8.0:
If you previously decided to downscale the operator, make sure you upscale it back:
If you previously set updateStrategy.autoUpdateDataPlane=true, you may consider reverting the change once the upgrades have finished:
More specifically, the reconciliation loop of the operator is skipped; anything that is part of it will not happen while the resource is suspended. This can be useful in maintenance scenarios where manual operations need to be performed, as it helps prevent conflicts with the operator.
Suspend a resource
Currently, only MariaDB and MaxScale resources support suspension. You can enable it by setting suspend=true:
This results in the reconciliation loop being disabled and the status being marked as Suspended:
To re-enable it, simply remove the suspend setting or set it to suspend=false.
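A minimal sketch of a suspended resource, assuming the community operator's API group (confirm the apiVersion for your installation):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1   # verify the API group for your operator
kind: MariaDB
metadata:
  name: mariadb
spec:
  suspend: true   # set to false (or remove) to resume reconciliation
```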
The MaxScale embedded definition inside the MariaDB resource has been deprecated; please refer to the migration guide to perform the migration.
At this point, you may proceed to update the operator. If you are using Helm:
Upgrade the mariadb-enterprise-operator-crds helm chart to 26.3.1:
Upgrade the mariadb-enterprise-operator helm chart to 26.3.1:
If you are on OpenShift:
If you are on the stable channel using installPlanApproval=Automatic in your Subscription object, then the operator will be automatically updated. If you use installPlanApproval=Manual, you should have a new InstallPlan which needs to be approved to update the operator:
Consider reverting updateStrategy.autoUpdateDataPlane back to false in your MariaDB object to avoid unexpected updates:
It achieves this by providing a single, hardened endpoint that offers not only standard database operations but also advanced AI workflow orchestration and integration with industry-standard authentication systems.
What is a Model Context Protocol (MCP) Server?
MCP provides a standardized, model-agnostic way for language models and other AI systems to interact with external tools and data sources. The MCP Server implements this protocol, ensuring a consistent and reliable method for AI applications to request information and perform operations. This streamlined communication layer accelerates the development and deployment of AI-integrated systems.
The Value of an MCP Server for Databases
Connecting AI directly to a production database is both risky and inefficient. An MCP server provides a critical abstraction layer that delivers three key benefits:
Security and Governance: It acts as a single, hardened chokepoint for all AI-driven data interactions. Instead of embedding credentials across numerous applications, the MCP Server manages access centrally, enabling robust auditing, permission enforcement, and integration with enterprise secret managers.
Abstraction and Simplicity: Developers building AI applications do not need to be database experts. They can interact with a simple, well-defined set of tools (e.g., list_tables, execute_sql) without writing complex connection logic or security checks, dramatically accelerating development cycles.
Standardization and Interoperability: By adhering to the MCP standard, your data infrastructure can seamlessly connect with a growing ecosystem of AI assistants and development frameworks—such as Cursor, Windsurf, and VSCode plugins—without requiring bespoke integrations for each one.
The Objective of an MCP Server
The primary goal of the MariaDB Enterprise MCP Server is to enable the secure and scalable deployment of AI agents within enterprise environments.
Key objectives include:
Enhance Security and Compliance: Integrate with centralized secret management platforms like HashiCorp Vault and 1Password to eliminate static credentials and meet stringent enterprise security policies.
Streamline Complex AI Workflows: Provide a unified endpoint for orchestrating multi-step RAG (Retrieval-Augmented Generation) pipelines, from data ingestion to final response generation.
Improve Manageability: Offer a robust, configurable, and observable server that can be reliably deployed and managed by platform engineering and DBA teams.
Accelerate AI Application Development: Provide a standardized protocol that simplifies how developers connect AI agents to MariaDB data.
Once a client has a JWT, it includes it in the Authorization header of every request to the MCP Server. The server then validates the token before processing the request.
Key Security Measures
Signature Verification: Prevents token tampering.
Expiration Check: Tokens have a limited lifetime (e.g., 30 minutes).
Database Validation: Ensures the user associated with the token still exists and is active.
Issuer/Audience Validation: Prevents a token from one system from being used on another.
Not-Before Check: Prevents a token from being used before it is valid.
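The expiration check alone can be illustrated with a short sketch. This is conceptual only: real validation also verifies the signature, issuer, audience, and user status, and the payload fields here are illustrative, not the server's actual claim set.

```shell
# Decode a demo payload and compare its exp claim against the current time.
set -eu
now=$(date +%s)
# Demo payload that expired one minute ago (base64, as in a JWT's middle part).
payload=$(printf '{"sub":"demo","exp":%s}' "$((now - 60))" | base64 | tr -d '\n')

exp=$(printf '%s' "$payload" | base64 -d | sed -n 's/.*"exp":\([0-9]*\).*/\1/p')
if [ "$now" -ge "$exp" ]; then
  verdict="token expired"
else
  verdict="token accepted"
fi
echo "$verdict"
```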
Overview of security best practices within Enterprise Manager, including securing the UI, managing audit logs, and enforcing strict access controls.
MariaDB Enterprise Manager provides security at multiple levels, including transport-layer encryption for all components, secure authentication, and a detailed audit log.
This guide covers the primary security configurations. For Users, Roles and Permissions, see User Management.
SSL/TLS Certificate Management
The Enterprise Manager installation generates a self-signed TLS certificate and key for immediate use. For production environments, you should use your own custom certificates.
1
Place custom certificates
Copy your custom certificate and private key files into the enterprise-manager/certs/ directory on the host machine.
2
Update the configuration
Open the enterprise-manager/.env file.
Enabling the Audit Log
The audit log records all REST API requests made to MariaDB Enterprise Manager, providing a clear trail of administrative actions for security and compliance.
1
Navigate to the directory
Open a terminal and change into your MariaDB Enterprise Manager installation directory.
2
Edit the .env file
Open the environment file using a text editor.
3
Configuring Secure Connections
Agent to Enterprise Manager Connections
The connection from the mema-agent to the Enterprise Manager server is secured using HTTPS.
To enable encryption: ensure the URL provided in the agent setup command uses https://.
To bypass certificate checks: if you are using a self-signed or non-trusted TLS certificate on the Enterprise Manager server, you can add the --otlp-insecure flag to the agent setup command. This is recommended only for testing environments.
Enterprise Manager to Monitored Databases
You can configure secure TLS connections from Enterprise Manager to your monitored MariaDB Servers and MaxScale instances when you first add them.
In the "Add Database" page:
Toggle the SSL/TLS option to ON.
To validate the server's certificate against your Certificate Authority (CA), provide the path to your CA file in the Certificate Authority field. The file must be located in the enterprise-manager/certs/ directory and the path must begin with /certs/.
Check Verify peer certificate to enable validation.
All certificate and key files referenced for server validation or client authentication must be placed in the enterprise-manager/certs/ directory on the host and referenced with a path beginning with /certs/.
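The /certs/ path convention above can be checked before submitting the form. The helper below is hypothetical, not part of Enterprise Manager; it simply encodes the rule stated in this section.

```shell
# Hypothetical helper: verify that a referenced path follows the /certs/ rule.
check_cert_path() {
  case "$1" in
    /certs/*) echo "ok: $1" ;;
    *)        echo "error: $1 must begin with /certs/" ;;
  esac
}
check_cert_path /certs/ca.pem
check_cert_path /home/user/ca.pem
```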
Monitors the underlying host infrastructure, providing detailed metrics for CPU utilization, memory consumption, disk I/O, and network throughput for each database node.
The Node Dashboard pane provides detailed visibility into the health and performance of individual nodes that run MariaDB Server and MaxScale. It combines uptime, system capacity, operating system details, and hardware utilization with disk and network activity. This view helps administrators ensure each node has sufficient resources and can support the workloads running on it.
Node Information
Provides a high-level, at-a-glance summary of a specific server node's status, configuration, and capacity.
Metric
Description
Node System Information
Tracks memory usage, CPU performance, system load, and resource consumption at the process level.
Metric
Description
Filesystem Section
Monitors disk performance and utilization for the node’s storage devices.
Installation and configuration steps for Enterprise Manager and agents, covering standalone and MaxScale topologies.
MariaDB Enterprise Manager is a database management and observability solution that provides advanced topology-aware monitoring coupled with visual schema management, query editing, and ERD design across multiple database connections.
This guide describes steps to install MariaDB Enterprise Manager for evaluation purposes.
Prerequisites
1
Prepare a machine for Enterprise Manager installation
The machine should have at least the following minimal hardware resources for evaluation:
CPU: 2 cores (or 2 vCPUs) with x86-64 architecture
Instructions for integrating Enterprise Manager with an OpenID Connect (OIDC) identity provider for centralized Single Sign-On (SSO) authentication.
MariaDB Enterprise Manager can be integrated with external identity providers (like Okta, Keycloak, or Azure AD) using OpenID Connect (OIDC). This allows you to centralize user authentication, enforce your organization's security policies, and enable single sign-on (SSO).
Integrating with an external Identity Provider is an optional feature. MariaDB Enterprise Manager includes a built-in user management system that works out-of-the-box.
Before You Begin
Before configuring OIDC in Enterprise Manager, you must first register Enterprise Manager as a client application within your Identity Provider's administrative console and obtain the necessary credentials.
1
Configure client settings in your identity provider
In your Identity Provider's client configuration screen, you will need to provide several URLs that point back to your MariaDB Enterprise Manager instance. These URLs tell the provider where to send the user after authentication and what origins are allowed to make requests.
OIDC Using Keycloak
Here is an example of what the filled-in fields might look like if you are using Keycloak.
Authentication URL: This is the URL to your specific Keycloak realm:
Mapping IDP Roles to Enterprise Manager Permissions
For Enterprise Manager to assign the correct permissions to a user logging in via OIDC, it expects the JWT token from your provider to contain a specific field (claim) named account.
The value of this account field must exactly match the name of a role that exists in MariaDB Enterprise Manager (for example, admin, viewer, or a custom role).
Highlights the administrative tools within the Workspace, including the Schema Inspector, Object Browser, user management, and live process list viewing.
The MariaDB Enterprise Manager Workspace includes a powerful set of integrated tools that allow DBAs and developers to perform common administrative tasks graphically, without needing to write raw SQL commands. These features are primarily accessed through the Schemas Sidebar and dedicated tabs in the main worksheet area.
Schema Inspector
The Schema Inspector provides detailed, read-only metadata views for any selected schema object. This allows you to quickly understand the structure, data types, constraints, and dependencies of your tables, views, and other objects without querying the information_schema. To use it, simply click on an object in the Object Browser.
Object Browser
The Object Browser is the hierarchical tree view located in the Schemas Sidebar on the left side of the Workspace. It is your primary tool for navigating and exploring your database instances. You can expand databases to see their tables, views, stored procedures, and triggers, and use the filter box at the top to quickly locate specific objects.
Object Editor
The Object Editor allows you to create, modify, and delete schema objects using graphical forms and dialogs. You can access these functions by right-clicking on an object (or object type) in the Object Browser. This will open a context menu with actions such as:
CREATE TABLE, CREATE VIEW
ALTER TABLE
DROP TABLE
User Management
This dedicated tab provides a grid-based interface for managing database users and their privileges directly, without writing GRANT or CREATE USER statements.
From this interface, you can:
View a list of all database users and their assigned global privileges.
Create new database users using a simple form.
Edit an existing user's password or modify their privileges.
Process List Viewer
The Processlist tab provides a real-time view of the database server's active sessions and the commands they are executing, equivalent to running SHOW FULL PROCESSLIST. This is an essential tool for diagnosing performance issues.
Using the Processlist Viewer, you can:
Monitor all active connections, their current status (e.g., Query, Sleep), and how long they have been running.
Identify long-running or problematic queries that may be impacting server performance.
Manage live sessions, which may include the ability to terminate (kill) a specific process.
Guide to modifying the default 30-day metrics data retention period by editing the PROMETHEUS_RETENTION_TIME environment variable and restarting services.
By default, MariaDB Enterprise Manager retains detailed metrics for 30 days. You can configure this data retention period to balance your need for historical data with storage costs.
This guide explains how to change the retention period and how the underlying storage system works.
How to Change the Retention Period
Changing the retention time is done by editing the environment file for Enterprise Manager and then restarting the services.
1
Locate and edit the .env file
Navigate to your Enterprise Manager installation directory and open the .env file in a text editor.
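For example, to keep 90 days of metrics instead of the default 30, set the PROMETHEUS_RETENTION_TIME variable in .env (the 90d value assumes Prometheus-style duration units, with d for days):

```shell
# .env — keep 90 days of metrics instead of the default 30 days
PROMETHEUS_RETENTION_TIME=90d
```

Restart the Enterprise Manager services afterwards for the change to take effect.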
2
Data Retention Policy
Prometheus, the time-series database used by Enterprise Manager, does not delete expired data instantly.
Block-Based Storage: Prometheus stores metrics data in blocks, which are typically two-hour chunks of time. In the background, these small blocks are compacted into larger ones.
Delayed Cleanup: Data is not deleted on a sample-by-sample basis. Instead, Prometheus removes an entire block once all the data within it has passed the retention period. This cleanup process runs in the background and may not be immediate.
Delayed metrics removal for deleted databases
After you delete a database from MariaDB Enterprise Manager, you may continue to see its historical metrics in Grafana dashboards for a period of time.
This is expected behavior. Enterprise Manager does not immediately delete a database's metric history from Prometheus. Instead, the data is removed automatically by Prometheus's own cleanup process once it passes the configured retention period.
These old metrics will no longer receive new data and will eventually disappear from the dashboards on their own.
Valid Retention Time Units
When setting PROMETHEUS_RETENTION_TIME, you can use the following units:
This section outlines a recommended StorageClass configuration for the Azure Blob Storage CSI Driver that resolves common mounting and list operation issues encountered in Kubernetes environments.
The following StorageClass is recommended when working with Azure Blob Storage (ABS).
Next, when defining your PhysicalBackup resource, make sure to use the new StorageClass we created.
Issue 1: Access for Non-Root Containers (-o allow_other)
The default configuration prevents non-root Kubernetes containers from accessing the mounted blob container, resulting in an "unaccessible" volume. By setting the mountOption -o allow_other, non-root containers are granted access to the volume, resolving this issue.
See for more information.
Issue 2: Immediate List Operations and Backup Deletion (--cancel-list-on-mount-seconds=0)
When using the blob-csi-driver with its default settings, list operations (which are critical for cleaning up old backups) may not work immediately upon mount, leading to issues like old physical backups never being deleted. Setting the mountOption --cancel-list-on-mount-seconds to "0" ensures that list operations work as expected immediately after the volume is mounted.
See for more information.
Setting cancel-list-on-mount-seconds to 0 forces the driver to perform an immediate list operation, which may increase both initial mount time and Azure transaction costs (depending on the number of objects in the container). Operators should consider these performance and financial trade-offs and consult the official Azure Blob Storage documentation or an Azure representative for guidance.
In this guide, we will be migrating existing MariaDB Galera and MaxScale instances to TLS without downtime.
1. Ensure that MariaDB has TLS enabled and not enforced. Set the following options if needed:
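A sketch of the corresponding MariaDB resource fields follows. The tls.enabled field is referenced later in this guide; tls.required is an assumption based on the community operator's naming, so verify both against your operator's CRD reference:

```yaml
spec:
  tls:
    # Issue and configure certificates for MariaDB
    enabled: true
    # Accept both TLS and non-TLS connections during the migration
    required: false
```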
By setting these options, the operator will issue and configure certificates for MariaDB, but TLS will not be enforced on connections; both TLS and non-TLS connections will be accepted. TLS enforcement can optionally be configured at the end of the migration process.
This will trigger a rolling upgrade, make sure it finishes successfully before proceeding with the next step. Refer to the for further information about update strategies.
2. If you are currently using MaxScale, it is important to note that, unlike MariaDB, it does not support TLS and non-TLS connections simultaneously (see ). For this reason, you must temporarily point your applications to MariaDB during the migration process. You can achieve this by configuring your application to use the . At the end of the MariaDB migration process, the MaxScale instance will need to be recreated in order to use TLS, and then you will be able to point your application back to MaxScale. Ensure that all applications are pointing to MariaDB before moving on to the next step.
3. MariaDB is now accepting TLS connections. The next step is to update your applications to connect to MariaDB securely. Ensure that all applications are connecting to MariaDB via TLS before proceeding to the next step.
4. If you are currently using MaxScale, and you are planning to connect via TLS through it, you should now delete your MaxScale instance. If needed, keep a copy of the MaxScale manifest, as we will need to recreate it with TLS enabled in further steps:
It is very important that you wait until your old MaxScale instance is fully terminated to make sure that the old configuration is cleaned up by the operator.
5. For enhanced security, it is recommended to enforce TLS in all MariaDB connections by setting the following options. This will trigger a rolling upgrade, make sure it finishes successfully before proceeding with the next step:
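A sketch of the relevant fields, assuming the operator exposes a tls.required option (an assumption based on the community operator's naming; verify against your operator's CRD reference):

```yaml
spec:
  tls:
    enabled: true
    # Reject non-TLS connections from now on
    required: true
```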
6. For improved security, you can optionally configure TLS for Galera SSTs by following the steps below:
Get the and grant execute permissions:
Run the migration script. Make sure you set <mariadb-name> with the name of the MariaDB resource:
Set the following option to enable TLS for Galera SSTs:
This will trigger a rolling upgrade; make sure it finishes successfully before proceeding with the next step.
7. As mentioned in step 4, recreate your MaxScale instance with tls.enabled=true if needed:
8. MaxScale is now accepting TLS connections. Next, you need to update your applications by pointing them back to MaxScale securely. You have done this previously for MariaDB; you just need to update your application configuration to use the and its CA bundle.
In this guide, we will be migrating an external MariaDB into a new MariaDB instance running in Kubernetes and managed by MariaDB Enterprise Kubernetes Operator. We will be using logical backups for achieving this migration.
If you are currently using or migrating to a Galera instance, use the following command instead:
2. Ensure that your backup file matches the following format: backup.2024-08-26T12:24:34Z.sql. If the file name does not follow this format, it will be ignored by the operator.
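For instance, assuming your existing dump is named dump.sql (a hypothetical name), you can rename it to the expected timestamped format with:

```shell
# Rename a logical dump to the format the operator expects,
# e.g. backup.2024-08-26T12:24:34Z.sql
mv dump.sql "backup.$(date -u +%Y-%m-%dT%H:%M:%SZ).sql"
```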
3. Upload the backup file to one of the supported . We recommend using S3.
4. Create your MariaDB resource declaring that you want to and providing a that matches the backup:
5. If you are using Galera in your new instance, migrate your previous users and grants to use the User and Grant CRs. Refer to the for further detail.
Where do you get MCP Server from and what are the installation requirements?
The MCP Server can be launched individually or as part of the RAG-in-a-box system. It is distributed as pre-compiled binaries that can run on various operating systems, including:
Windows
RHEL (Red Hat Enterprise Linux)
Ubuntu
Is MCP Server a command-line tool, or does it have a GUI?
The MCP Server is a network service that runs as an HTTP server; it does not have a graphical user interface (GUI) or a direct command-line interface (CLI) for tools. It's designed to be a backend service that is:
Accessed programmatically via the Model Context Protocol.
How do you configure the MCP Server and connect it to MariaDB?
The MCP Server does not include its own database. It acts as a client and requires a connection to an external, pre-existing MariaDB server.
The system components are connected as follows:
Configuration is managed through environment files where you specify the connection details for your MariaDB instance.
How are tools like list_databases executed?
Tools are not typed into a command line. Instead, they are executed programmatically by a Large Language Model (LLM) in response to a user's query in natural language.
The process works like this:
A user asks a question in an integrated client (e.g., "Can you show me what databases are available?").
What are the JSON snippets in the documentation for?
The JSON snippets shown in the documentation are examples of the "behind-the-scenes" communication between a client, the LLM, and the MCP Server. They are not meant to be copied and pasted into a CLI but serve to illustrate how the protocol functions.
This guide covers the basic configuration of the MariaDB AI RAG system. For production deployments and advanced configuration scenarios, please refer to the Deployment Documentation.
See Also:
- Production configuration for Ubuntu/Debian
- Container-based deployment configuration
- Configuration validation checklist
- System architecture and configuration details
Configuration File
MariaDB AI RAG uses a .env configuration file located in the installation directory. A template is provided at config.env.template. Copy this file to .env and modify the parameters according to your environment.
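For example, from the installation directory:

```shell
# Copy the provided template, then edit .env for your environment
cp config.env.template .env
```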
Database Initialization
MariaDB AI RAG requires a properly configured database. The system can automatically initialize the database schema during first startup, or you can manually initialize it using the provided SQL script:
Security Configuration
Authentication
MariaDB AI RAG implements JWT-based authentication. Configure the following parameters in your .env file:
For production environments, it is strongly recommended to use a properly generated secure random string for the SECRET_KEY.
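One possible way to generate such a value (an illustration, not the only option) is with openssl:

```shell
# Print a 64-character random hex string suitable for use as SECRET_KEY in .env
openssl rand -hex 32
```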
API Key Management
External service API keys should be securely stored in the .env file. In production environments, consider using a secure vault solution or environment variable management system.
Guide to resolving common installation, configuration, agent connectivity, and metrics collection issues.
Troubleshooting installation/deployment issues for Enterprise Manager and Agent
Is the MariaDB Enterprise repository configured correctly?
The agent is distributed as a native OS package that can be installed from the MariaDB Enterprise repositories. The repositories can be installed by following the .
Make sure to use the mariadb_es_repo_setup script.
MariaDB Galera Cluster
Extends standard server monitoring with Galera-specific metrics like flow control pauses, write conflicts, replication queue depth, and individual node cluster states.
The dashboard mirrors most sections from the dashboard, extending them with a Galera Metrics section and a Galera Nodes table. Use this dashboard when you need Galera-specific cluster health alongside the familiar server views.
Galera Metrics
Insights into Galera Cluster health with critical metrics and node-specific status details.
Metrics
Overview of the metrics collected by Enterprise Manager, including MariaDB Server counters, MaxScale performance data, and node-level system resource utilization.
MariaDB Server Metrics
MariaDB Server metrics are gathered with the Prometheus exporter for MySQL and stored in Enterprise Manager’s Prometheus with the mariadb prefix. The agent runs the exporter with the following collector flags:
Collector name
Description
Standalone
This guide covers configuring a standalone MariaDB Enterprise Server with minimal settings for development. Avoid using it in production due to risks such as a single point of failure and unavoidable downtime.
This operator allows you to configure standalone MariaDB Enterprise Server instances. To achieve this, you can either omit the replicas field or set it to 1:
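A minimal sketch of such a resource (the apiVersion follows the enterprise.mariadb.com/v1alpha1 group mentioned elsewhere in these docs; the metadata name is a placeholder and other fields are omitted):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-standalone
spec:
  # Omit replicas entirely, or set it explicitly to 1
  replicas: 1
```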
Whilst this can be useful for development and testing, it is not recommended for production use because of the following reasons:
Single point of failure
25.10 LTS version update guide
This guide illustrates, step by step, how to update to 25.10.4 from previous versions. This guide only applies if you are updating from a version prior to 25.10.x; otherwise, you may upgrade directly (see and docs)
The Galera data-plane must be updated to the 25.10.4 version. You must set updateStrategy.autoUpdateDataPlane=true in your MariaDB resources before updating the operator. Then, once updated, the operator will also update the data-plane based on its version:
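The relevant fields in the MariaDB resource look like this:

```yaml
spec:
  updateStrategy:
    # Allow the operator to update the data-plane containers
    autoUpdateDataPlane: true
```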
Data Plane
In order to effectively manage the full lifecycle of both and topologies, the operator relies on a set of components that run alongside the MariaDB instances and expose APIs for remote management. These components are collectively referred to as the "data-plane".
Components
The mariadb-enterprise-operator data-plane components are implemented as lightweight containers that run alongside the MariaDB instances within the same Pod. These components are available in the operator image. More precisely, they are subcommands of the CLI shipped as a binary inside the image.
Migrate Community operator to Enterprise operator
In this guide, we will be migrating from the to the without downtime. This guide assumes:
version of the MariaDB Community Operator is installed in the cluster.
MariaDB community resources will be migrated to their counterpart MariaDB enterprise resources. In this case, we will be using 11.4.4
# Make executable
chmod +x install-enterprise-manager.sh
# Run installer
./install-enterprise-manager.sh
# Extract and load images
tar -xzvf enterprise-manager.tar.gz
cd enterprise-manager
docker image load -i images.tar
cd enterprise-manager
docker compose images | awk 'p{print $2 ":" $3} {p=1}' | xargs docker image save -o images.tar
cd ..
tar -czvf enterprise-manager.tar.gz enterprise-manager
ssh user@your-server-ip
cd enterprise-manager
nano .env
docker compose up -d --force-recreate
MEMA_HOSTNAME=your.new.hostname.or.ip
Restore backup to all volumes
# Clear out any existing data first
docker run --rm --volumes-from enterprise-manager-grafana -v $(pwd)/backups/:/backups/ alpine:latest find /var/lib/grafana/ -mindepth 1 -delete
docker run --rm --volumes-from enterprise-manager-prometheus -v $(pwd)/backups/:/backups/ alpine:latest find /prometheus/ -mindepth 1 -delete
docker run --rm --volumes-from enterprise-manager-supermax -v $(pwd)/backups/:/backups/ alpine:latest find /var/lib/supermax/ -mindepth 1 -delete
# Restore the data from the backups
docker run --rm --volumes-from enterprise-manager-grafana -v $(pwd)/backups/:/backups/ alpine:latest tar -C / -xzf /backups/grafana-backup.tar.gz
docker run --rm --volumes-from enterprise-manager-prometheus -v $(pwd)/backups/:/backups/ alpine:latest tar -C / -xzf /backups/prometheus-backup.tar.gz
docker run --rm --volumes-from enterprise-manager-supermax -v $(pwd)/backups/:/backups/ alpine:latest tar -C / -xzf /backups/supermax-backup.tar.gz
Create the `backups` directory
mkdir backups
Back up all volumes
docker run --rm --volumes-from enterprise-manager-grafana -v $(pwd)/backups/:/backups/ alpine:latest tar -czf /backups/grafana-backup.tar.gz /var/lib/grafana/
docker run --rm --volumes-from enterprise-manager-prometheus -v $(pwd)/backups/:/backups/ alpine:latest tar -czf /backups/prometheus-backup.tar.gz /prometheus/
docker run --rm --volumes-from enterprise-manager-supermax -v $(pwd)/backups/:/backups/ alpine:latest tar -czf /backups/supermax-backup.tar.gz /var/lib/supermax/
sudo dnf install mema-agent
sudo apt-get install mema-agent
Create monitor user
CREATE USER 'monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT SELECT, PROCESS, REPLICATION CLIENT, RELOAD, REPLICA MONITOR, REPLICATION MASTER ADMIN ON *.* TO 'monitor'@'localhost';
Restart Grafana container
# Take down the existing Grafana container
docker compose down grafana
# Start a new Grafana container with the updated configuration
docker compose up -d grafana
cd enterprise-manager/
nano .env
# --- Grafana SMTP Email Settings ---
# Set to true to enable email alerting
GF_SMTP_ENABLED=true
# Your SMTP server hostname and port
GF_SMTP_HOST=smtp.example.com:587
# Credentials for your SMTP user
GF_SMTP_USER=my-email-user
GF_SMTP_PASSWORD=my-super-secret-password
# Set to true if your server uses a self-signed certificate
GF_SMTP_SKIP_VERIFY=false
# The "From" address that will appear on alert emails
GF_SMTP_FROM_ADDRESS=alerts@my-domain.com
# The display name for the sender
GF_SMTP_FROM_NAME=MariaDB Enterprise Manager
{
  "tool": "search_vector_store",
  "parameters": {
    "database_name": "test_db",
    "vector_store_name": "my_vectors",
    "user_query": "What is the capital of France?",
    "k": 5
  }
}
{
  "tool": "rag_generation",
  "parameters": {
    "database_name": "test_db",
    "vector_store_name": "my_vectors",
    "user_query": "What is the capital of France?",
    "k": 5,
    "temperature": 0.9
  }
}
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: blob-fuse
provisioner: blob.csi.azure.com
parameters:
protocol: fuse2
reclaimPolicy: Retain
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
# Resolves the issue where non-root containers cannot access the mounted blob container.
- -o allow_other
# Ensures list operations (critical for backups/deletion) work immediately upon mount.
- --cancel-list-on-mount-seconds=0
Current vs. maximum number of open file descriptors.
Filesystem Type
Table of filesystem types and mount points on the node.
Node Uptime
Shows the total amount of time the server node has been running since its last restart.
Topology Info
Displays the node's current role or state within its database topology (e.g., Primary, Replica).
Node Allocatable Capacity
Details the compute resources allocated to the node, such as the number of CPU cores available.
Node Disk Capacity
Shows the total size of the key mounted filesystems, such as /boot and /home.
OS Info
Provides details about the node's OS, including architecture, distribution (e.g., CentOS Stream 9), and kernel release.
Memory Usage
Percentage of physical memory in use.
CPU
Graph showing CPU usage distribution across user, system, idle, iowait, and kernel.
Memory Stack
Breakdown of memory allocation: applications, cache, buffers, swap, etc.
Network Traffic
Inbound and outbound network throughput per interface.
CPU Utilisation
Effective CPU usage and number of cores for the node.
System Load
Load averages for the last 1, 5, and 15 minutes.
Disk Throughput
Read and write throughput (bytes per second) per device.
Disk IOPS
Number of input/output operations per second for reads and writes.
Disk Utilisation
Percentage of time that disk devices are busy handling I/O requests.
Managing constraints and relationships
Renaming or copying objects
Delete users who no longer require access.
collect.binlog_size
Reports binary log files and their sizes to track binlog count and total disk usage/growth.
collect.engine_innodb_status
Parses SHOW ENGINE INNODB STATUS to expose InnoDB internals (waits, deadlocks, transaction and I/O snapshots).
collect.info_schema.innodb_metrics
Reads INFORMATION_SCHEMA.INNODB_METRICS for detailed InnoDB counters (buffer pool, I/O, log, lock, purge, recovery, etc.).
collect.info_schema.innodb_tablespaces
Exposes per-tablespace/file size and allocation details from Information Schema for space-usage monitoring.
collect.info_schema.processlist
Exposes current session/thread activity (users, hosts, commands, states, runtimes) based on the process list.
collect.info_schema.replica_host
Discovers replica hosts via Information Schema (MariaDB-friendly alternative to SHOW SLAVE HOSTS) for topology visibility.
collect.slave_hosts
Emits replica host topology using SHOW SLAVE HOSTS/SHOW REPLICA HOSTS (note: MariaDB expects the legacy SHOW SLAVE HOSTS syntax).
collect.slave_status
Exposes replication status from SHOW SLAVE/REPLICA STATUS (I/O/SQL thread states, positions/GTID, seconds behind, etc.).
MaxScale Metrics
MariaDB Enterprise Manager collects a wide range of time-series metrics from your MariaDB MaxScale instances to provide deep insight into their performance, health, and activity. Monitoring these metrics is crucial for diagnosing performance bottlenecks, ensuring high availability, and understanding how your database proxy is handling application traffic.
Here is the list of available MaxScale metrics collected by Enterprise Manager.
Node Metrics
Node metrics provide crucial information about the health and performance of the underlying hardware and operating system on each monitored host. These metrics are essential for diagnosing infrastructure bottlenecks, understanding resource utilization, and planning for future capacity.
MariaDB Enterprise Manager gathers these metrics using Prometheus Node Exporter, which includes a default set of collectors.
Key metrics collected by default include:
CPU Usage: Overall and per-core utilization, load average, and context switching.
Memory: Total, used, free, and cached memory, including swap space.
Disk I/O: Read/write operations, throughput (bytes per second), and I/O time.
Filesystem Usage: Total, used, and available space for each mounted filesystem.
Network Traffic: Data sent and received, packets, and network interface errors.
For a complete and detailed list of all metrics gathered by the default collectors, please refer to the official Prometheus Node Exporter documentation.
Example: if your files are my-host.crt and my-host.key, your configuration should be:
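For illustration only — the variable names below are hypothetical placeholders (modeled on the MEMA_ prefix used elsewhere in the .env file); check the comments in your .env file for the exact keys:

```shell
# .env — hypothetical key names; paths must start with /certs/
MEMA_TLS_CERT=/certs/my-host.crt
MEMA_TLS_KEY=/certs/my-host.key
```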
The path you provide must begin with /certs/. This is because the host's certs/ directory is mounted inside the Docker containers at the /certs path.
3
Restart Enterprise Manager
To apply the changes, restart the services:
Step: Update the audit log variable
Inside the editor, locate the line for the audit API setting.
Find this line:
Change it to:
4
Step: Save and exit
Save the changes and exit the editor.
5
Step: Restart Enterprise Manager
The change requires a restart to take effect.
(Optional) Check Verify peer host to ensure the server's hostname matches the certificate.
If the database requires client-side certificates for authentication, provide the paths to your client certificate and key in the Certificate and Key fields, respectively. These files must also be in the enterprise-manager/certs/ directory.
Integration with MariaDB vector database
System requirements and prerequisites
Embedding and LLM provider configuration
Verify installation by accessing the API health endpoint:
Once set, you may proceed to update the operator. If you are using Helm:
Upgrade the mariadb-enterprise-operator-crds helm chart to 25.10.4:
Upgrade the mariadb-enterprise-operator helm chart to 25.10.4:
As part of the 25.10 LTS release, we have introduced support for LTS versions. Refer to the Helm docs for sticking to LTS versions.
If you are on OpenShift:
If you are on the stable channel using installPlanApproval=Automatic in your Subscription object, then the operator will be automatically updated. If you use installPlanApproval=Manual, you should have a new InstallPlan which needs to be approved to update the operator:
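With manual approval, the pending InstallPlan can be listed and approved using standard OLM commands, for example (namespace and plan name are placeholders):

```shell
# List pending InstallPlans in the operator's namespace
kubectl get installplans -n <namespace>
# Approve the new InstallPlan so the operator update proceeds
kubectl patch installplan <installplan-name> -n <namespace> \
  --type merge -p '{"spec":{"approved":true}}'
```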
As part of the 25.10 LTS release, we have introduced new release channels. Consider switching to the stable-v25.10 channel if you want to stay on the 25.10.x version:
Consider reverting updateStrategy.autoUpdateDataPlane back to false in your MariaDB object to avoid unexpected updates:
The init container is responsible for dynamically generating the Pod-specific configuration files before the MariaDB container starts. It also plays a crucial role in the MariaDB container startup, enabling replica recovery for the replication topology and guaranteeing ordered deployment of Pods for the Galera topology.
Agent sidecar
The agent sidecar provides an HTTP API that enables the operator to remotely manage MariaDB instances. Through this API, the operator is able to remotely operate the data directory and handle the instance lifecycle, including operations such as replica recovery for replication and cluster recovery for the Galera topology. It supports multiple authentication methods to ensure that only the operator is able to call the agent API.
Since it has access to the data directory, it is also responsible for periodically archiving binary logs to be used for point-in-time recovery.
Agent auth methods
As previously mentioned, the agent exposes an API to remotely manage the replication and Galera clusters. The following authentication methods are supported to ensure that only the operator is able to call the agent:
ServiceAccount based authentication
The operator uses its ServiceAccount token as a means of authentication for communicating with the agent, which subsequently verifies the token by creating a TokenReview object. This is the default authentication method and will be automatically applied by setting:
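Assuming the field naming used by the community operator (an assumption; verify against the enterprise operator's CRD reference), this corresponds to:

```yaml
spec:
  galera:
    agent:
      # Default: authenticate the operator via ServiceAccount tokens and TokenReviews
      kubernetesAuth:
        enabled: true
```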
This Kubernetes-native authentication mechanism eliminates the need for the operator to manage credentials, as it relies entirely on Kubernetes for this purpose. However, the drawback is that the agent requires cluster-wide permissions: it must be bound to the system:auth-delegator ClusterRole in order to create TokenReviews, which are cluster-scoped objects.
Basic authentication
As an alternative, the agent also supports basic authentication:
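Again assuming the community operator's field naming (an assumption; verify against the enterprise operator's CRD reference), this corresponds to:

```yaml
spec:
  galera:
    agent:
      # Alternative: basic authentication with operator-generated credentials
      basicAuth:
        enabled: true
```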
Unlike the ServiceAccount based authentication, the operator needs to explicitly generate credentials to authenticate. The advantage of this approach is that it is entirely decoupled from Kubernetes and it does not require cluster-wide permissions on the Kubernetes API.
version, which is supported in both community and enterprise versions. Check the supported
and migrate to a counterpart community version first if needed.
MaxScale resources cannot be migrated in a similar way, they need to be recreated. To avoid downtime, temporarily point your applications to MariaDB directly during the migration.
3. Migrate MariaDB resources using the migration script. Make sure you set <mariadb-name> with the name of the MariaDB resource to be migrated and <operator-version> with the version of the Enterprise operator you will be installing:
4. Update the apiVersion of the rest of CRs to enterprise.mariadb.com/v1alpha1.
5. Uninstall the Community operator:
6. If your MariaDB Community had Galera enabled, delete the <mariadb-name> Role, as it will be specifying the Community CRDs:
7. Install the Enterprise operator as described in the Helm documentation. This will trigger a rolling upgrade; make sure it finishes successfully before proceeding to the next step.
8. Delete the finalizers and uninstall the Community CRDs:
9. Run mariadb-upgrade in all Pods. Make sure you replace <mariadb-name> with the name of the MariaDB resource:
for crd in $(kubectl get crds -o json | jq -r '.items[] | select(.spec.group=="k8s.mariadb.com") | .metadata.name'); do
kubectl get "$crd" -A -o json | jq -r '.items[] | "\(.metadata.namespace)/\(.metadata.name)"' | while read cr; do
ns=$(echo "$cr" | cut -d'/' -f1)
name=$(echo "$cr" | cut -d'/' -f2)
echo "Removing finalizers from $crd: $name in $ns..."
kubectl patch "$crd" "$name" -n "$ns" --type merge -p '{"metadata":{"finalizers":[]}}'
done
done
helm uninstall mariadb-operator-crds
for pod in $(kubectl get pods -l app.kubernetes.io/instance=<mariadb-name> -o jsonpath='{.items[*].metadata.name}'); do
kubectl exec "$pod" -- sh -c 'mariadb-upgrade -u root -p${MARIADB_ROOT_PASSWORD} -f'
done
While the exact field names may vary, you must configure the following endpoints, replacing <Your_Enterprise_Manager_Address> with the actual address of your instance:
Root / Home URL: https://<Your_Enterprise_Manager_Address>:8090
Valid Post Logout Redirect URI: https://<Your_Enterprise_Manager_Address>:8090/
Web Origins: https://<Your_Enterprise_Manager_Address>:8090
2
Obtain your credentials
Once the client application is saved in your Identity Provider, find and copy the following values:
Authentication URL: The provider's endpoint for authentication requests.
Client ID: The unique ID for the Enterprise Manager application.
Client Secret: The secret key for the Enterprise Manager application.
3
Configure role mapping in your identity provider
Finally, you must configure your Identity Provider to pass the user's role in the JWT token. This is explained in the "Mapping IDP Roles" section further down this page.
Configuration Steps in Enterprise Manager
4
Navigate to Identity Provider settings
From the main UI, click the Settings icon (⚙️) in the left navigation bar.
On the Settings page, click the Identity Provider card.
5
Enter your OIDC provider details
On the OpenID Connect (OIDC) configuration page, fill in the details from your provider:
Authentication URL: The full URL for your OIDC provider's authentication endpoint.
Authentication Flow: Choose the OIDC flow. auto is the default and recommended for most providers.
Client ID: The Client ID you obtained from your provider.
Client Secret: The Client Secret you obtained from your provider.
6
Save the configuration
Click the Save button to apply the settings.
http://<keycloak_ip>:<port>/realms/<your_realm>
Authentication Flow: The default auto flow is recommended for Keycloak.
Client ID: The Client ID you configured for the application within your Keycloak realm: enterprise-manager
Client Secret: This secret is generated by Keycloak and found in the 'Credentials' tab of your client configuration in the Keycloak admin console: 12345ab-c67d-89e0-f123-456789abcdef
"jti": "0780a545-bb7a-404d-a384-64d04557801d",
"sub": "admin"
}
This token's account claim value "admin" would grant the user the admin role upon login.
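The mechanics of this mapping can be sketched in a few lines of Python — a simplified illustration (not Enterprise Manager's actual implementation) of reading a role claim out of a JWT payload, assuming a claim named account. Note that real code must verify the token signature first; this sketch skips that step.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    A JWT is three base64url segments joined by dots; the middle
    segment is the JSON claims payload.
    """
    payload_b64 = token.split(".")[1]
    # base64url strips padding; restore it to a multiple of 4
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def role_from_claims(claims: dict, claim_name: str = "account") -> str:
    # Hypothetical mapping: the claim value is used as the role name,
    # falling back to a read-only role when the claim is absent.
    return claims.get(claim_name, "viewer")

# Build a toy token (header.payload.signature) for demonstration.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "admin", "account": "admin"}).encode()
).rstrip(b"=").decode()
token = f"{header}.{payload}.sig"

print(role_from_claims(decode_jwt_payload(token)))  # admin
```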
In the confirmation dialog, click Reset.
2
A success message will confirm the reset.
Integrated into AI assistants and clients like Claude Desktop, Cursor, or Windsurf.
You interact with the server by configuring a client application to communicate with it. For example, here is how you might configure a client like Windsurf:
The LLM interprets the request and determines that the list_databases tool is needed.
The LLM calls the list_databases tool by sending a JSON-RPC request to the MCP Server.
The MCP Server executes the tool against the connected MariaDB database.
The results are sent back to the LLM, which formats them into a natural language response for the user.
MCP Server (Port 8002) ---------> MariaDB Server (Port 3306)
(connects via MySQL protocol)
Download the Windows .msi installation package from:
Run the .msi installer.
Follow the installation wizard instructions.
script with your Customer Download Token.
Was the agent installed successfully?
The agent installation can be done with the native package manager for your OS.
# For Red Hat/CentOS/Rocky
sudo dnf install -y mema-agent
Did the agent setup complete successfully without errors?
The mema-agent setup command should produce no errors if it is successful. You can always run the setup again by generating the installation command from the GUI and then executing it again on the target server.
Did the setup fail on a MariaDB node?
Make sure that MariaDB is listening on the loopback adapter address. If MariaDB cannot be accessed on port 3306 on localhost, the setup command should define the port with --mariadb-port and the host with --mariadb-host. To use a UNIX domain socket, use --mariadb-socket instead.
Did the setup fail on a MaxScale node?
Make sure that the --maxscale-host uses the correct protocol. If MaxScale REST-API is configured to use HTTPS use --maxscale-host=https://127.0.0.1:8989. If the TLS certificates used in the MaxScale REST-API are self-signed, you can disable TLS certificate verification by adding the --maxscale-insecure option to the setup command.
Did the agent processes start up successfully?
The agent processes run as systemd services. Use normal systemd commands to inspect the state of the agent.
Show the agent status
Show status
sudo systemctl status mema-agent.slice
If the agent didn't start, errors will be shown in the status output. Once errors are fixed, start the agent again.
Start agent
sudo systemctl start mema-agent.target
For a more detailed analysis of errors, inspect the agent logs.
Show the agent logs
The agent uses the systemd journal for logging:
Can the agent collect MariaDB metrics?
The credentials that the agent uses to connect to MariaDB require certain grants in order to collect all metrics. Check the Quickstart Guide for the set of grants and verify that the user provided with --mariadb-user has the necessary grants.
If the MariaDB metrics agent is working correctly, the logs should not have any errors. Check the logs with:
To verify the MariaDB metrics agent is running, inspect the raw metrics output:
Raw metrics check
curl -s http://127.0.0.1:18902/metrics | wc
The output should contain about 3000 lines if everything is working.
Is MaxScale able to send metrics?
Make sure that the version of MaxScale you have installed is 25.10 or greater. Older versions do not support sending metrics.
Any errors in metrics exporting are logged on the info level in MaxScale. To enable info logging, run:
Info level logging is verbose and may cause large log volumes. Once issues are resolved, disable info logging:
Can the agent connect to the Enterprise Manager?
To check connectivity between the agent host and the Enterprise Manager, use curl. If your Enterprise Manager is at 192.168.122.16, the following commands show the expected responses:
The first command should report an HTTP-to-HTTPS error.
The second command should return 404 page not found.
If there are errors, check that port 4318 is open on the Enterprise Manager server and that network connectivity between the agent host and the Enterprise Manager is working.
If the curl commands produce the expected output and the agent status does not report errors after five minutes of startup, the agent is successfully sending metrics to the Enterprise Manager.
Are the metrics available in the Enterprise Manager?
To verify metrics are stored in the time series database, query a system OS metric. Example (assumes Enterprise Manager at 192.168.122.16 and default admin:mariadb credentials):
The result should be a JSON object with one object per node in the data.result array.
Is the time synchronized between Enterprise Manager and agents?
When agents push metrics they include the agent’s timestamp and Enterprise Manager assumes those timestamps are accurate. If Enterprise Manager and monitored instances are not time-synchronized, you can observe:
Misaligned graphs
Missed alerts
Dropped/future/old samples that create “no data” gaps
Poor alignment with logs/traces/events
Ensure clocks are synchronized (for example using NTP/chrony) to avoid these issues.
Dedicated dashboard for monitoring MaxScale proxies, detailing service status, query routing efficiency, client connections, and resource usage across the proxy layer.
This dashboard shows MaxScale’s health and load, how backend servers are seen by each MaxScale, and the traffic/query volume flowing through it—plus cache efficiency from the Query Classifier.
Topology Overview
Provides a visual representation of the entire system's architecture and connectivity.
Section
Description
System Metrics
System Metrics provide comprehensive insights into the performance and health of individual system resources.
Metric
Description
MaxScale Metrics
Query Classifier Cache Metrics help in analyzing and optimizing query routing efficiency by tracking cache hits/misses and monitoring cache size.
Metric
Description
Query Classifier Cache Metrics
Evaluate query routing efficiency by tracking and optimizing cache metrics like hits, misses, and cache size.
High-level dashboard providing an aggregated view of the entire database fleet, highlighting overall health, critical alerts, and resource consumption across multiple topologies.
The "fleet" dashboard is the central inventory for all your monitored database topologies. It provides a hierarchical, at-a-glance overview of the health, status, and configuration of your entire database environment.
Understanding the Dashboard Columns
NAME Column
This column displays the logical names of your databases and the individual server nodes within each topology. It also contains important status and quick-access icons.
Status Icons
Icon
Applies To
Meaning
Quick-Access Icons
This icon is a shortcut that takes you directly to the detailed Grafana monitoring dashboard for that specific node or topology.
TYPE Column
This column shows the role of each node as automatically detected by Enterprise Manager (e.g., Primary, Replica, MaxScale, Galera Node, Standalone Server).
If this column shows '-', it indicates an issue. For instance, in a Primary/Replica topology, a server expected to be a Replica that shows '-' is likely not replicating correctly from the primary.
LAST METRIC AGE Column
This column shows the time elapsed since the agent on that node last reported metrics.
If the age is 5 minutes or greater, it indicates a problem. Verify that the mema-agent is installed, running, and can communicate with the Enterprise Manager server on that host.
Interacting with Your Databases
You can perform actions on your databases and nodes using the three-dot menu (⋮) on the far right of each row.
1
Accessing the MaxScale GUI
Click the three-dot menu (⋮) next to a MaxScale node.
Details how to customize Kubernetes metadata, such as labels and annotations, for the resources generated and managed by the Operator.
This documentation shows how to configure metadata in the MariaDB Enterprise Kubernetes Operator CRs.
Children object metadata
MariaDB and MaxScale resources allow you to propagate metadata to all the children objects by specifying the inheritMetadata field:
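For example, a sketch of the inheritMetadata field (the label and annotation keys are illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  inheritMetadata:
    labels:
      database.myorg.io: mariadb
    annotations:
      database.myorg.io: mariadb
```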
This means that all the reconciled objects will inherit these labels and annotations. For instance, see the Services and Pods:
Pod metadata
You have the ability to provide dedicated metadata for Pods by specifying the podMetadata field in any CR that reconciles a Pod, for instance: MariaDB, MaxScale, Backup, Restore and SqlJobs:
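As a sketch, a Backup resource with dedicated Pod metadata (resource and label names are illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  podMetadata:
    labels:
      app.myorg.io/backup: "true"
```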
It is important to note that the podMetadata field supersedes the inheritMetadata field: the labels and annotations provided in the former override the ones in the latter.
Service metadata
Dedicated metadata for Services in MariaDB resources can be provisioned via the service, primaryService and secondaryService fields:
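A hedged sketch of Service metadata on a MariaDB resource (the annotation key is illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  service:
    type: LoadBalancer
    metadata:
      annotations:
        service.myorg.io/tier: database
```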
In the case of MaxScale, you can also do this via the kubernetesService field.
Refer to the relevant documentation to learn more about the Service fields and MaxScale.
PVC metadata
Both MariaDB and MaxScale allow you to define a volumeClaimTemplate to be used by the underlying StatefulSet. You may also define metadata for it:
Use cases
Being able to provide metadata allows you to integrate with other CNCF landscape projects:
Metallb
If you run on bare metal and use MetalLB for managing the LoadBalancer objects, you can declare its IPs via annotations:
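For example (a sketch; the annotation key comes from MetalLB's documentation, and the IP is illustrative):

```yaml
spec:
  service:
    type: LoadBalancer
    metadata:
      annotations:
        metallb.universe.tf/loadBalancerIPs: 172.18.0.20
```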
Istio
Istio injects the data-plane sidecar container into all Pods, but you might want to opt out of this feature in some cases:
For instance, you probably don't want to inject the Istio sidecar into Backup Pods, as it will prevent the Jobs from finishing, and therefore your backup process will hang.
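A sketch of opting a Backup's Pods out of sidecar injection (the sidecar.istio.io/inject label is Istio's standard opt-out mechanism; resource names are illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  podMetadata:
    labels:
      sidecar.istio.io/inject: "false"
```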
Instructions for customers to authenticate and gain access to the private MariaDB Enterprise Docker registry to pull protected container images.
This documentation aims to provide guidance on how to configure access to docker.mariadb.com in your MariaDB Enterprise Kubernetes Operator resources.
Customer credentials
MariaDB Corporation requires customers to authenticate when logging in to the MariaDB Enterprise Docker Registry. A Customer Download Token must be provided as the password. Customer Download Tokens are available through the MariaDB Customer Portal. To retrieve the customer download token for your account:
Navigate to the MariaDB Customer Portal.
Log in using your credentials.
Copy the Customer Download Token to use as the password when logging in to the MariaDB Enterprise Docker Registry.
Then, configure a Kubernetes registry Secret to authenticate:
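A sketch of such a Secret (the name is illustrative; the kubernetes.io/dockerconfigjson type is the standard Kubernetes registry-credential Secret type):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-enterprise
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded ~/.docker/config.json with docker.mariadb.com credentials
  .dockerconfigjson: <base64-encoded-docker-config>
```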
OpenShift
If you are running on OpenShift, it is recommended to use the global pull secret to configure registry access. The global pull secret is automatically used by all Pods in the cluster, without having to specify imagePullSecrets explicitly.
To configure the global pull secret, you can use the following commands:
Extract your global pull secret:
Log in to the MariaDB registry, providing the customer download token as the password:
Update the global pull secret:
Alternatively, you can also create a dedicated Secret for authenticating:
MariaDB
In order to configure access to docker.mariadb.com in your MariaDB resources, use the imagePullSecrets field to specify your registry Secret:
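For instance (a sketch; the Secret name is illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  imagePullSecrets:
    - name: mariadb-enterprise
```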
As a result, the Pods created as part of the reconciliation process will have the imagePullSecrets.
MaxScale
Similarly to MariaDB, you are able to configure access to docker.mariadb.com in your MaxScale resources:
Backup, Restore and SqlJob
The batch Job resources will inherit the imagePullSecrets from the referred MariaDB, as they also make use of its image. However, you are also able to provide dedicated imagePullSecrets for these resources:
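A sketch of a Backup carrying its own pull Secret (Secret names are illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  imagePullSecrets:
    - name: backup-registry
```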
When the resources from the previous examples are created, a Job with both the mariadb-enterprise and backup-registry imagePullSecrets will be reconciled.
The MariaDB Enterprise MCP Server offers a comprehensive suite of tools, categorized into standard database operations, advanced vector functionalities, and workflow orchestration.
Standard Database Operations
These tools provide fundamental control and insight into your MariaDB environment. By default, operations are read-only (MCP_READ_ONLY = true) but can be configured for write access (MCP_READ_ONLY = false).
list_databases: Discovers all accessible databases.
list_tables: Enumerates all tables within a specified database.
get_table_schema: Retrieves the detailed schema for a specific table, including column names, data types, keys, and default values.
execute_sql: Executes read-only SQL queries like SELECT, SHOW, and DESCRIBE. Supports parameterized queries for enhanced security.
create_database: Creates a new database if it does not already exist.
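As an illustration of how an MCP client invokes one of these tools, here is a hedged sketch of a JSON-RPC tools/call request. The tools/call method comes from the MCP specification; the argument names sql and parameters are assumptions for illustration, not confirmed by this page:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "execute_sql",
    "arguments": {
      "sql": "SELECT name FROM customers WHERE country = ?",
      "parameters": ["Finland"]
    }
  }
}
```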
Harnessing the Power of Vectors: Advanced AI Functionality
The server’s integrated vector functionality enables semantic search and other embedding-based operations directly within your database.
Vector Store Management
create_vector_store: Creates a new table optimized as a vector store. The schema includes columns for id, document, embedding (VECTOR type), and metadata (JSON). Users can specify the embedding model and distance function (e.g., cosine, euclidean) at creation.
list_vector_stores: Lists all available vector stores.
Embedding and Search Operations
insert_docs_vector_store: Inserts documents and associated metadata into a vector store. The server manages the generation of embeddings using a configured service.
search_vector_store: Performs semantic similarity searches by generating an embedding for a user query and finding the 'k' most similar documents in the specified vector store.
Workflow Orchestration
The server exposes powerful orchestration endpoints that allow an AI agent to execute an entire RAG pipeline through a single, secure interface.
Ingestion (/orchestrate/ingestion): Triggers the ingestion of documents into a specified vector store, including the chunking and embedding processes.
Generation (/orchestrate/generation): Executes a query against a set of documents, performing retrieval and generating a final, context-aware response from an LLM.
This operator gives you flexibility to define the storage that will back the /var/lib/mysql data directory mounted by MariaDB.
Configuration
The simplest way to configure storage for your MariaDB is:
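A minimal sketch of the storage field (the size is illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  storage:
    size: 1Gi
```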
This will make use of the default StorageClass available in your cluster, but you can also provide a different one:
Under the hood, the operator configures the StatefulSet's volumeClaimTemplate property, which you are also able to provide yourself:
Volume resize
The StorageClass used for volume resizing must define allowVolumeExpansion = true.
It is possible to resize your storage after having provisioned a MariaDB. We need to distinguish between:
PVCs already in use.
StatefulSet storage size, which will be used when provisioning new replicas.
It is important to note that, for the first case, your StorageClass must support volume expansion by declaring allowVolumeExpansion = true. In that case, it is safe to expand the storage by increasing the size and setting resizeInUseVolumes = true:
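For example (a sketch; sizes are illustrative):

```yaml
spec:
  storage:
    size: 2Gi                  # increased from the originally provisioned size
    resizeInUseVolumes: true   # also expand PVCs already in use
    waitForVolumeResize: true  # optionally block readiness until the resize completes
```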
Depending on your storage provider, this operation might take a while. You can choose to wait for it to complete before the MariaDB becomes ready by setting waitForVolumeResize = true. Some operations will not be performed if the MariaDB resource is not ready.
Ephemeral storage
Provisioning standalone MariaDB instances with ephemeral storage can be done by setting ephemeral = true:
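A minimal sketch of an ephemeral instance:

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-ephemeral
spec:
  storage:
    ephemeral: true
```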
This may be useful for multiple use cases, like provisioning ephemeral MariaDBs for the integration tests of your CI.
MariaDB AI RAG is an enterprise-grade Retrieval-Augmented Generation (RAG) solution that integrates with MariaDB to provide AI-powered document processing, semantic search, and natural language generation capabilities.
The system enables organizations to leverage their document repositories and databases for AI-powered search and generation. By combining the reliability of MariaDB with modern AI capabilities, AI RAG provides accurate, context-aware responses based on your organization's proprietary data.
System Architecture
MariaDB AI RAG follows a modular architecture with the following key components:
Explains the Role-Based Access Control (RBAC) system, including how to create custom roles, manage base permissions (admin, edit, view, sql), and add or modify users.
MariaDB Enterprise Manager uses a Role-Based Access Control (RBAC) system to manage user permissions. This guide explains how to manage users and create custom roles to fit your organization's security needs.
Accessing User Management
1
Adding Databases
Guide to registering database topologies in the UI and using the integrated helper tool to generate setup commands for installing monitoring agents.
This guide outlines the two primary methods for registering and monitoring your database topologies in MariaDB Enterprise Manager: adding a standalone server directly or adding a full topology via its MaxScale instance.
Built-in Alert Rules
Details the pre-configured rules for monitoring MariaDB Server, Galera Cluster, and system health, including sustained-duration triggers to prevent alert fatigue.
MariaDB Enterprise Manager includes a comprehensive set of pre-configured alert rules to provide production-ready monitoring for your entire database stack out-of-the-box. These alerts are built on the integrated Grafana Alerting engine and are designed to detect common issues across your MariaDB Servers, Galera Clusters, MaxScale instances, and the underlying operating systems.
A key feature of these rules is the use of a "sustained for" duration. This means a condition must remain true for a specified period (e.g., 3 minutes) before an alert will fire. This prevents alert fatigue from brief, transient spikes and ensures you are only notified of persistent, actionable problems.
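The "sustained for" behavior can be sketched in a few lines — a simplified model (not Grafana's actual evaluation engine) in which an alert fires only once the condition has been continuously true for the configured duration, and any healthy sample resets the timer:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SustainedAlert:
    """Fire only after `for_seconds` of continuous breach."""
    for_seconds: int
    breach_started: Optional[float] = None

    def evaluate(self, now: float, condition: bool) -> bool:
        if not condition:
            self.breach_started = None   # transient spike: reset the timer
            return False
        if self.breach_started is None:
            self.breach_started = now    # breach begins
        return now - self.breach_started >= self.for_seconds

# Example: a 3-minute (180 s) sustained rule, evaluated every 60 s.
alert = SustainedAlert(for_seconds=180)
states = [alert.evaluate(t, cond) for t, cond in
          [(0, True), (60, True), (120, False),   # brief spike resets the timer
           (180, True), (240, True), (300, True), (360, True)]]
print(states)  # only the final evaluation fires
```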
MariaDB Server
Export metrics
Explains two methods for exporting metrics: scraping the built-in Prometheus federation endpoint or configuring the agent to push data directly to OTLP-compatible external systems.
MariaDB Enterprise Manager provides two primary methods for exporting metrics, allowing you to integrate with external observability platforms for aggregation or long-term retention.
1
Scraping the built-in Prometheus endpoint (Server-to-Server)
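While the exact federation path exposed by Enterprise Manager is not shown here, a Prometheus server scraping a /federate endpoint is typically configured like this (the target address, port, and credentials are assumptions based on the defaults used elsewhere on this page):

```yaml
scrape_configs:
  - job_name: "em-federate"
    scheme: https
    metrics_path: "/federate"
    params:
      "match[]":
        - '{__name__=~".+"}'   # pull all series; narrow this in production
    basic_auth:
      username: admin
      password: mariadb
    static_configs:
      - targets: ["192.168.122.16:8090"]
    tls_config:
      insecure_skip_verify: true  # only if the EM certificate is self-signed
```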
Workspace
Overview of the Workspace environment, which provides collaborative tools for DBAs and developers including a Query Editor, ERD Designer, and Database Administration tools.
Workspace enhances MariaDB Enterprise Manager by adding query editing, visual schema management, and ERD design. It provides a collaborative environment for DBAs, developers, and analysts.
Query Editor
Feature
Description
External MariaDB
Describes how the Operator can manage resources or connections for MariaDB instances that reside outside the local Kubernetes cluster.
MariaDB Enterprise Kubernetes Operator supports managing resources in external MariaDB instances, i.e. instances running outside of the Kubernetes cluster where the operator runs. This feature allows you to manage users, privileges, and databases, run SQL jobs declaratively, and take backups using the same CRs that you use to manage internal MariaDB instances.
ExternalMariaDB configuration
The ExternalMariaDB
Introduction
General introduction to the Operator's capabilities, benefits for database operations, and its role in managing MariaDB within Kubernetes clusters.
MariaDB Enterprise Kubernetes Operator provides a seamless way to run and operate containerized versions of MariaDB Enterprise Server and MaxScale on Kubernetes, allowing you to leverage Kubernetes orchestration and automation capabilities. This document outlines the features and advantages of using Kubernetes and the MariaDB Enterprise Kubernetes Operator to streamline the deployment and management of MariaDB and MaxScale instances.
What is Kubernetes?
Kubernetes is more than just a container orchestrator; it is a comprehensive platform that provides APIs for managing both applications and the underlying infrastructure. It automates key aspects of container management, including deployment, scaling, and monitoring, while also handling essential infrastructure needs such as networking and storage. By unifying the management of applications and infrastructure, Kubernetes simplifies operations and improves efficiency in cloud-native environments.
Authentication
A cornerstone of the Enterprise edition is its ability to integrate with centralized secret managers, eliminating the need for static credentials stored in local or .env files. The server dynamically fetches database credentials and API keys at startup, ensuring a secure and compliant operational posture.
Key Features
ERD Designer
Explains the ERD Designer tool, a visual interface for creating entity-relationship diagrams, generating models from live databases, and modeling tables and indexes.
Enterprise Manager provides a visual interface for creating entity relationship diagrams (ERDs) and for inspecting existing database schemas, so you can quickly understand table relationships, identify dependencies, and visually assess the impact of schema changes before implementation.
This procedure outlines the steps required to access and utilize the ERD Designer within the Workspace section of Enterprise Manager UI.
From the main Workspace screen, click the "Run Queries" card.
MariaDB AI RAG
MariaDB AI RAG is an enterprise-grade Retrieval-Augmented Generation (RAG) solution that integrates with MariaDB to provide AI-powered document processing, semantic search, and natural language generation capabilities.
Documentation Contents
Deployment
This section provides comprehensive guides for deploying the MariaDB AI RAG system in various environments.
Documentation in This Section
chmod +x install-enterprise-manager.sh
./install-enterprise-manager.sh
sudo yum install -y mema-agent
CREATE USER 'monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT PROCESS, BINLOG MONITOR, REPLICA MONITOR, REPLICATION MASTER ADMIN ON *.* TO 'monitor'@'localhost';
CREATE USER 'monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT PROCESS, BINLOG MONITOR, REPLICA MONITOR, REPLICATION MASTER ADMIN ON *.* TO 'monitor'@'localhost';
Count of nodes grouped by type (e.g., server, MaxScale).
Backend Server States
Timeline of each backend server’s role and health as seen by each MaxScale. Values are color-mapped to: Read, Write, Up, Down. Use this to spot failovers, read/write role flips, or outages over time.
MaxScale Uptime by Instance
Uptime in seconds for each MaxScale instance.
CPU Utilisation
Effective CPU usage (%) per instance, excluding idle/iowait/guest time.
Memory Usage
Working memory in use (%) per instance (total minus free/buffers/cache/slab).
Network Traffic
Per-interface throughput (bits/s). Transmit is plotted below the axis (negative-Y), receive above—making direction easy to read.
MaxScale Processing Load
Percentage of total CPU time consumed by the MaxScale process over time (a direct view of router load).
Connections
Active backend connections per server as observed by MaxScale.
Operations
Active operations per backend server (ongoing requests tracked by MaxScale).
Packets Read/Writes
Per-server packet read and write rates (packets/s). Useful for spotting uneven load distribution.
QPS
Queries per second passing through MaxScale across the selected instances (overall routing throughput).
Cache Hits vs Misses
Per-second hits and misses in the Query Classifier cache. Analyze the relationship to assess effectiveness.
Cache Size
Current size of the Query Classifier cache (bytes). Monitor growth with Hits/Misses for tuning insights.
containing the user password. These are the connection details that the operator will use to connect to the external MariaDB in order to manage resources; make sure that the specified user has enough privileges:
If you need to use TLS to connect to the external MariaDB, you can provide the server CA certificate and the client certificate Secrets via the tls field:
When using TLS, if you don't want to send the client certificate during the TLS handshake, please set tls.mutual=false:
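A hedged sketch of the tls field (Secret names are illustrative; the field names are assumptions based on the operator's TLS conventions, not confirmed by this page):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: ExternalMariaDB
metadata:
  name: external-mariadb
spec:
  host: mariadb.example.com
  tls:
    enabled: true
    serverCASecretRef:
      name: mariadb-server-ca
    clientCertSecretRef:
      name: mariadb-client-cert
    mutual: false   # skip sending the client certificate during the handshake
```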
As a result, you will be able to reference the ExternalMariaDB in multiple objects, the same way you would an internal MariaDB resource.
As part of the ExternalMariaDB reconciliation, a Connection will be created whenever the connection template is specified. This could be handy to track the external connection status and declaratively create a connection string in a Secret to be consumed by applications to connect to the external MariaDB.
Supported objects
Currently, the ExternalMariaDB resource is supported by the following objects:
Connection
User
Grant
Database
Backup
SqlJob
You can use it as an internal MariaDB resource, just by setting kind to ExternalMariaDB in the mariaDBRef field:
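For example, a User pointing at an external instance (resource names are illustrative; the reference field is spelled mariaDbRef in the operator's CRDs):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: User
metadata:
  name: user
spec:
  mariaDbRef:
    kind: ExternalMariaDB
    name: external-mariadb
```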
When the previous example gets reconciled, a user will be created in the referred external MariaDB instance.
Click the Settings icon (⚙️) in the left navigation bar.
2
Open User Management
Select User management.
Permissions, Roles & Users
In MariaDB Enterprise Manager, permissions, roles, and users are organized in a clear structure:
Permissions define specific actions a user can perform (viewing data, editing settings, accessing the SQL editor).
Roles are collections of one or more permissions grouped together. They can be pre-configured (for example admin, monitoring-admin, viewer) or custom-defined.
Users are assigned one or more roles and inherit the associated permissions.
This structure allows administrators to manage access by assigning roles to users rather than setting individual permissions per user.
The Admin Permission
Access to the User Management page is restricted based on a user's assigned permissions.
✅ Only users with admin permissions (assigned via a role) can add, modify, or remove other users and roles.
❌ Non-admin users cannot access or change these settings, but they can update their own password via their Profile page.
Default Roles
Enterprise Manager ships with three pre-configured roles:
admin: Has all permissions. Can do everything, including managing other users.
monitoring-admin: Can manage databases and monitoring, but cannot manage users or roles.
viewer: Has read-only access to monitoring data and can use the Workspace.
Create custom roles instead of editing pre-configured ones
While it's possible to edit or delete the pre-configured roles (admin, viewer, etc.), the recommended best practice is to create a new custom role to fit your specific permission requirements.
Leaving the pre-configured roles unmodified ensures you always have a known, baseline configuration to reference or fall back on.
Roles (pre-configured or custom) are built from combinations of the following base permissions:
Base Permission in MariaDB Enterprise Manager
Permission
Description
admin
Can view and manage all users and roles.
edit
Can manage databases and monitoring settings. Requires the view permission to be selected as well.
view
Can view dashboards and monitoring data.
sql
Can access the Query Editor and ERD tools in the Workspace. Enabling this allows you to set a query row limit for the role.
Managing Roles
Only users with the admin permission can create or modify roles.
Creating a Custom Role
1
Roles tab
From the User Management page, select the Roles tab.
2
Add role
Click the Add button.
3
Name role
Enter a name for your new role (e.g., "Developer" or "Auditor").
4
Select base permissions
Select the checkboxes for the Base Permissions you want to grant.
5
Confirm
Click Add.
Modifying or Deleting a Role
1
Locate role
From the Roles tab, locate the custom role you wish to change.
2
Open role menu
Click the three-dot menu (⋮) on the right side of the role's row.
3
Choose action
Select one of the following options:
Managing Users
Adding a User
1
Users tab
From the User Management page, ensure you are on the Users tab.
The Users tab shows the list of users associated with your Enterprise Manager instance.
The user you are currently logged in as is shown in bold.
2
Add user
Click the Add button.
3
Enter credentials
Enter a unique Username and a secure Password.
4
Assign role
Select a Role for the user from the dropdown menu.
5
Confirm
Click Add.
Modifying or Deleting a User
1
Locate user
From the Users tab, locate the user you wish to change.
2
Open user menu
Click the three-dot menu (⋮) on the right side of the user's row.
3
Choose action
Select one of the following options:
The Default Admin User
Upon installation of MariaDB Enterprise Manager, a default admin user is created with an automatically generated password.
Option 1: Adding a Standalone Server or Topology (Without MaxScale)
Use this method for a single MariaDB Server or to manually define a Primary/Replica or Galera cluster.
1
Prepare your server(s)
First, perform these actions on each MariaDB Server you plan to add.
Install the Agent package.
Create the Enterprise Manager user (allows the Enterprise Manager server to connect remotely):
Replace <Enterprise_Manager_IP> with the IP of your Enterprise Manager server and <password> with a secure password.
Create the Local Agent user (required for the agent to collect detailed metrics from the local database instance):
Replace <password> with a secure password.
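The two user-creation steps above can be sketched as follows. This is an illustrative sketch only: the username 'monitor' matches the accounts referenced later in this guide, but the privilege lists shown are assumptions — consult your version's documentation for the exact grants required.

```sql
-- Sketch only: privilege lists are assumptions, not authoritative grants.
-- Enterprise Manager user: allows the central server to connect remotely.
CREATE USER 'monitor'@'<Enterprise_Manager_IP>' IDENTIFIED BY '<password>';
GRANT SELECT, PROCESS, REPLICATION CLIENT, SHOW DATABASES
  ON *.* TO 'monitor'@'<Enterprise_Manager_IP>';

-- Local Agent user: allows the agent to collect metrics from the local instance.
CREATE USER 'monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT SELECT, PROCESS, REPLICATION CLIENT
  ON *.* TO 'monitor'@'localhost';
```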
2
Register in the UI
Go to your MariaDB Enterprise Manager web interface (for example https://<Enterprise_Manager_IP>:8090).
3
Standalone server or a Topology
To add a Standalone Server: Click Add and proceed to the next step (4).
To create a Topology:
4
Link the Agent(s) 🔗
For each server added, link its agent:
Find the server in the inventory list, click the three-dot menu (⋮), and select Install Agent.
Option 2: Adding a Topology (With MaxScale)
Use this method to add a complete primary/replica or Galera cluster that is managed by one or more MaxScale instances.
1
Prepare all servers in the topology
Perform these actions on every server in the topology: the MaxScale instance(s) and each backend MariaDB Server attached.
Install the Agent package on all servers.
Create a Local Agent user on each backend MariaDB Server:
Replace <password> with a secure password.
2
Register the MaxScale instance in the UI 🖥️
Begin the Add Database process:
3
Link all Agents 🔗
You must link the agent on every server in the topology to Enterprise Manager. The UI will show the MaxScale instance and discovered backend servers marked as "Not Registered."
For each server in the list (start with the MaxScale instance, then each MariaDB server):
MariaDB instance down for 3 minutes (sustained for 3m). Triggers when the exporter reports the instance as down (mariadb_up = 0) or when no sample from mariadb_up has been received for more than 120 seconds.
ReplicaProcessDown
MariaDB instance has a Replica process Down (sustained for 3m). Triggers when replication is unhealthy: the I/O or SQL thread is stopped, or Seconds_Behind_Master is missing (replica not reporting progress).
ReplicaSecondsBehindPrimary
MariaDB replica is more than 600s behind primary (sustained for 3m). Triggers when replication lag exceeds 600 seconds.
HighUtilizationMaxConnections
MariaDB instance has high connection utilization (sustained for 5m). Triggers when Threads_connected exceeds ~80% of max_connections.
MariaDBInstanceRestart
MariaDB instance restarted recently (sustained for 5m). Triggers when server uptime is below 1 hour, indicating a recent restart.
MariaDBDeadlockFound
MariaDB Deadlock found in the last 15m (sustained for 5m). Triggers when the count of InnoDB deadlocks increases compared to 15 minutes ago.
Galera Cluster
Alert name
Description
GaleraClusterDown
Galera instance down for 5 minutes (sustained for 5m). Triggers when the cluster is not in Primary state (wsrep_cluster_status ≠ 1) or the node is not ready (wsrep_ready ≠ 1).
GaleraNodeNotReady
Galera node not ready (state ≠ 4) for 5m (sustained for 5m). Triggers when the node is not in Synced state and it’s not a temporary DESYNC (desync counter did not change in the last 5 minutes).
GaleraInWrongState
Galera instance is in an unexpected state (sustained for 5m). Triggers when the node’s state comment isn’t one of the normal values (Synced / Donor / Joining / Joined / Waiting for SST).
GaleraClusterDonorFallingBehind
Galera donor lagging (recv queue > 100) for 5m (sustained for 5m). Triggers when a Donor node (state=2) accumulates a large receive queue, indicating it’s falling behind replication.
GaleraClusterSizeChanged
Galera cluster size changed in last 15m (sustained for 5m). Triggers when the cluster size changes within 15 minutes.
MaxScale
Alert name
Description
MaxScaleInstanceDown
MaxScale down for 3 minutes (sustained for 3m). Triggers when no recent MaxScale metrics have been received for more than 120 seconds (e.g., MaxScale down or exporter/scrape pipeline issue).
MaxScaleNoPrimary
MaxScale has no primary for 3 minutes (sustained for 3m). Triggers when MaxScale reports zero servers with role = Primary/Master.
Node/OS
Alert name
Description
NodeFilesystemSpaceUsage
Filesystem disk space is above 90% (sustained for 1h). Triggers when disk space used exceeds 90% on a writable filesystem.
NodeFilesystemSpaceFillingUp
Filesystem predicted to run out of space within ~24h (sustained for 1h). Triggers when usage is above 80% and the trend (predictive model) indicates free space will reach zero within ~24 hours; excludes read-only filesystems.
NodeMemoryHighUtilization
Instance is running out of memory > 95% (sustained for 15m). Triggers when memory utilization exceeds 95%.
NodeCPUHighUtilization
Instance is running out of CPU > 90% (sustained for 15m). Triggers when CPU utilization exceeds 90% over a 5-minute window.
NodeFilesystemAlmostOutOfFiles
Filesystem has less than 3% inodes left (sustained for 1h). Triggers when available inodes drop below 3% on a writable filesystem.
NodeNetworkReceiveErrs
Network interface has a high receive-error rate (sustained for 1h). Triggers when receive errors exceed 1% of total received packets over a 2-minute rate window.
The Prometheus server integrated within MariaDB Enterprise Manager exposes its metrics via a standard federation endpoint. You can configure your own external Prometheus server (or any Prometheus-compatible system) to "scrape" these metrics.
Identify the Federation Endpoint
The endpoint is located on your MariaDB Enterprise Manager server at the /prometheus/federate path. The full URL will be:
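Substituting the host and port used elsewhere in this guide (HTTPS on port 8090), the URL takes this shape:

```
https://<Enterprise_Manager_IP>:8090/prometheus/federate
```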
In your external Prometheus server's configuration file (prometheus.yml), add a new scrape job to target the Enterprise Manager endpoint.
After adding this configuration and restarting your external Prometheus, it will begin scraping and storing all metrics from your MariaDB Enterprise Manager instance.
2
Pushing metrics with the OpenTelemetry agent (Agent-to-External)
The mema-agent can be configured to push metrics directly to a third-party monitoring system that supports the OpenTelemetry Protocol (OTLP). This method sends data straight from the agent to your external endpoint, bypassing the built-in Prometheus server.
To configure this, run the mema-agent setup command on your MariaDB Server or MaxScale host with the appropriate flags.
Command examples
For a MariaDB Server host:
For a MaxScale host:
Flag descriptions
Flag
Description
For a full list of all available flags and their descriptions, run mema-agent help setup on the host where the agent is installed.
Kubernetes brings several key benefits to the table when managing applications in a containerized environment:
Standardization: Kubernetes relies on standard APIs for managing applications and infrastructure, making it easier to ensure uniformity across various environments. It acts as a common denominator across cloud providers and on-premises.
Automation: Kubernetes APIs encapsulate operational best practices, minimizing the need for manual intervention and improving the efficiency of operations.
Cost Effectiveness: With a standardized way to manage infrastructure across cloud providers and automation to streamline operations, Kubernetes helps reduce infrastructure and operational costs.
What is a Kubernetes Operator?
Kubernetes has been designed with flexibility in mind, allowing developers to extend its capabilities through custom resources and operators.
In particular, the MariaDB Enterprise Kubernetes Operator watches the desired state defined by users via MariaDB and MaxScale resources and takes actions to ensure that the actual state of the system matches the desired state. This includes managing compute, storage, and network resources, as well as the full lifecycle of the MariaDB and MaxScale instances. Whenever the desired state changes or the underlying infrastructure is modified, the Operator takes the necessary actions to reconcile the actual state with the desired state.
Operational expertise is baked into the MariaDB and MaxScale APIs and seamlessly managed by the Operator. This includes automated backups, restores, upgrades, monitoring, and other critical lifecycle tasks, ensuring reliability in Day 2 operations.
MariaDB Enterprise Kubernetes Operator Features
Provision and Configure MariaDB and MaxScale Declaratively: Define MariaDB Enterprise Server and MaxScale clusters in YAML manifests and deploy them with ease in Kubernetes.
Use MaxScale as a database proxy to load balance requests and perform failover/switchover operations.
Cluster-Aware Rolling Updates: Perform rolling updates on MariaDB and MaxScale clusters, ensuring zero-downtime upgrades with no disruptions to your applications.
Flexible Storage Configuration and Volume Expansion: Easily configure storage for MariaDB instances, including the ability to expand volumes as needed.
Physical Backups based on mariadb-backup and VolumeSnapshots. By leveraging the BACKUP STAGE feature, backups are taken without long read locks or service interruptions.
Logical Backups based on mariadb-dump.
Backup Management: Take, restore, and schedule backups with multiple storage types supported: S3, Azure Blob Storage, PVCs, Kubernetes volumes, and VolumeSnapshots.
Policy-Driven Backup Retention: Implement backup retention policies with bzip2 and gzip compression.
Bootstrap New Instances: Initialize new MariaDB instances from backups, S3, Azure Blob Storage, PVCs or VolumeSnapshots to quickly spin up new clusters.
Point-In-Time-Recovery: Archive binary logs to enable point-in-time restoration and significantly reduce RPO.
TLS Certificate Management: Issue, configure, and rotate TLS certificates and Certificate Authorities (CAs) for secure connections.
Native Integration with cert-manager: Leverage cert-manager, the de-facto standard for managing certificates in Kubernetes, to enable issuance with private CAs, public CAs, and HashiCorp Vault.
Prometheus Metrics: Expose metrics using the MariaDB and MaxScale Prometheus exporters.
Native Integration with prometheus-operator: Leverage prometheus-operator to scrape metrics from MariaDB and MaxScale instances.
Declarative User and Database Management: Manage users, grants, and logical databases in a declarative manner using Kubernetes resources.
Secure, immutable, and lightweight images based on Red Hat UBI, available for multiple architectures (amd64, arm64, and ppc64le).
In the "Connect to..." dialog, select your target server, enter your credentials, and click Connect.
Upon successful connection, the main ERD worksheet will appear.
Creating ERD diagram
1
Initiate generation
From the ERD Worksheet
On the ERD Designer worksheet, click Generate ERD.
From the Query Editor
In the Query Editor, right-click on a schema name in the Schemas Sidebar and select the "Generate ERD" option.
2
Select schema and tables
A dialog will appear. Choose the specific schema you want to visualize. You may select which tables within that schema to include in the diagram.
3
Visualize
Click the Visualize button to generate and display the ERD on the worksheet canvas.
ERD Worksheet Features
The core of the designer is a visual canvas where you can build and manage your database structures.
Model Tables, Indexes, and Relationships
You can graphically manage all core MariaDB schema objects.
Create New Tables
Use the toolbar or right-click on the canvas to add new table entities to your diagram.
Edit Entities
Double-click any table to open the Entity Editor at the bottom of the screen.
Here, you can define and modify columns (including data types and NOT NULL constraints), indexes, and foreign keys through an intuitive interface.
Draw Foreign Keys
To create a new relationship, simply click the connection point on a column in one table and drag it to the column it references in another table.
Auto Layout
For large or complex schemas, the diagram can become cluttered. Use the Auto Arrange Entities feature, typically found in the top toolbar, to automatically rearrange the tables and relationships into a clean, organized, and easily navigable diagram.
Working with the ERD Worksheet
The ERD worksheet provides several tools and shortcuts to streamline your workflow.
Managing Foreign Keys
Right-click on a relationship link between two tables to open a context menu with quick actions, such as editing or removing the foreign key, toggling the relationship type (e.g., one-to-one vs. one-to-many), and changing NOT NULL constraints.
Exporting Your Model
Once your design is complete, you can export it for documentation or deployment. The export options, found in the toolbar or by right-clicking the canvas, include the following:
Export as SQL Script: Generates the CREATE TABLE and ALTER TABLE statements for your entire diagram.
Export as JPEG: Creates an image of your diagram for use in presentations or other documents.
Copy script to clipboard: A quick way to get the SQL for pasting elsewhere.
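As an illustration, an exported script for a simple two-table diagram might look like the following (table and column names are hypothetical; actual output depends on your diagram):

```sql
CREATE TABLE customer (
  id INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(100) NOT NULL,
  PRIMARY KEY (id)
);

CREATE TABLE orders (
  id INT NOT NULL AUTO_INCREMENT,
  customer_id INT NOT NULL,
  PRIMARY KEY (id)
);

ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customer (id);
```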
Applying Changes to a Database
Click the "Apply Script" button (▶) in the toolbar to execute the generated SQL against your connected database. This allows you to deploy your new or modified schema directly from the designer.
MariaDB AI RAG enables organizations to leverage their document repositories and databases for AI-powered search and generation. By combining the reliability of MariaDB with modern AI capabilities, AI RAG provides accurate, context-aware responses based on your organization's proprietary data.
Key Features
Document ingestion and processing
Semantic chunking and embedding
Vector-based similarity search
AI-powered response generation
Database integration
Fine-grained access control
Comprehensive REST API
For detailed information on each component, please refer to the specific documentation sections.
Details the Query Editor feature, providing a comprehensive multi-tabbed environment for writing and debugging SQL, formatting code, and analyzing data results.
The Query Editor is a powerful, integrated environment for database developers and administrators. It provides a comprehensive set of tools for writing and debugging SQL and analyzing query results, all from a single interface.
This procedure outlines the steps required to access and utilize the Query Editor within the Workspace section of Enterprise Manager UI.
From the main Workspace screen, click the "Run Queries" card.
In the "Connect to..." dialog, select your target server, enter your credentials, and click Connect.
Upon successful connection, the main Query Editor worksheet will appear, ready for you to begin.
Query Editor Worksheet
The Query Editor Workspace is organized around a flexible, multi-tabbed interface designed for parallel work. At the top level, Worksheet tabs represent your connections to different database servers. Within each worksheet, you can open multiple Query Tabs, allowing you to write and execute several independent SQL statements without losing your context.
SQL Code Management Features
These features are designed to make writing and managing SQL code efficient and intuitive.
SQL Editor
Write, run, and debug SQL statements. The editor supports executing queries in parallel across multiple Query Tabs, allowing you to work on different tasks or connect to different servers simultaneously within isolated sessions.
SQL Code Completion
Speed up query authoring and minimize syntax errors with context-sensitive suggestions. As you type, the editor offers relevant SQL keywords, functions, and objects (like tables and columns) from the currently selected database schema.
SQL Code Formatter
Improve readability and maintain consistent coding standards by automatically formatting your SQL code. Access this feature via the editor's context menu or command palette (F1).
SQL Syntax Highlighting
Enhance code clarity with color syntax highlighting. Different parts of your SQL statements (keywords, strings, comments) are displayed in distinct colors, making queries easier to scan and debug.
SQL Snippets
Save frequently used SQL code blocks for quick reuse across sessions. Press CTRL+D (or CMD+D on Mac) to save the current content of the editor as a snippet.
SQL History
Keep track of every query executed within the Workspace. The History tab provides a running log, allowing you to quickly find, review, and re-execute previous commands.
Multiple Connections
Define and manage connections to various database servers (e.g., development, testing, production). Each connection opens in its own top-level Worksheet tab, within which you can open multiple Query Tabs.
Open/Edit/Save SQL Files
Load existing SQL scripts from your local machine into the editor, make changes, and save them back without leaving the workspace.
Data Management and Analysis Features
These features help you interact with and understand the results of your queries.
Export Result Sets
Easily share or archive query results. You can export data grids directly into common formats like CSV, JSON, or as SQL INSERT statements.
1
From results tab, click Export Results
2
Display multiple Result Sets
When executing a script with multiple SELECT statements, view each result set in its own dedicated grid within the Results panel for easy comparison.
Vertical Results Mode
Improve readability for tables with many columns by displaying results in a vertical, record-by-record format.
Result Set Limits
Control the number of rows returned by SELECT statements (default: 10,000). This safety feature keeps queries responsive and can be adjusted per role.
Result Visualizations
Gain quick insights from your data by visualizing query results directly within the Workspace as simple line, bar, or scatter charts.
Grid Operations
Interact directly with the data displayed in the Results grid. Perform actions like searching for specific values, filtering rows, grouping data, and customizing column visibility without writing additional SQL.
A fast-track guide to deploying your first MariaDB Enterprise instance using the Operator, from initial configuration to a running database.
This guide aims to provide a quick way to get started with the MariaDB Enterprise Kubernetes Operator. It will walk you through the process of deploying a MariaDB Enterprise Cluster and MaxScale via the MariaDB and MaxScale CRs (Custom Resources), respectively.
Before you begin, ensure you meet the following prerequisites:
The first step will be configuring a Secret with the credentials used by the MariaDB CR:
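A minimal sketch of such a Secret follows. The Secret name and keys are illustrative assumptions; align them with the references used in your MariaDB CR:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-credentials   # hypothetical name, referenced by the MariaDB CR
stringData:
  root-password: <secure-root-password>
  password: <secure-user-password>
```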
Next, we will deploy a MariaDB Enterprise Cluster (Galera) using the following CR:
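A minimal sketch of such a CR, assuming the k8s.mariadb.com/v1alpha1 API group used by the community operator — verify the group and fields against the CRDs installed in your cluster:

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  rootPasswordSecretKeyRef:
    name: mariadb-credentials     # hypothetical Secret name
    key: root-password
  replicas: 3
  galera:
    enabled: true
  imagePullSecrets:
    - name: mariadb-enterprise-pull   # Secret with customer registry credentials
  storage:
    size: 10Gi
```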
Let's break it down:
rootPasswordSecretKeyRef: A reference to a Secret containing the root password.
imagePullSecrets: The name of the Secret containing the customer credentials to pull the MariaDB Enterprise Server image.
After applying the CR, we can observe the MariaDB Pods being created:
Now, let's deploy a MaxScale CR:
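A sketch of the MaxScale CR, under the same API-group assumption as above (names are illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MaxScale
metadata:
  name: maxscale-galera
spec:
  replicas: 2
  mariaDbRef:
    name: mariadb-galera            # the MariaDB CR deployed earlier
  imagePullSecrets:
    - name: mariadb-enterprise-pull
```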
Again, let's break it down:
imagePullSecrets: The name of the Secret containing the customer credentials to pull the MaxScale image.
mariaDbRef: A reference to the MariaDB CR that we want to connect to.
After applying the CR, we can observe the MaxScale Pods being created, and that both the MariaDB and MaxScale CRs will become ready eventually:
To conclude, let's connect to the MariaDB Enterprise Cluster through MaxScale using the user and database we initially defined in the MariaDB CR:
You have successfully deployed a MariaDB Enterprise Cluster with MaxScale in Kubernetes using the MariaDB Enterprise Kubernetes Operator!
The MariaDB MCP (Model Context Protocol) Server is a modular, multi-layered system designed to provide secure, scalable, and extensible AI-powered tools and services. Its architecture is centered around a primary gateway (MCP Server), an optional specialized microservice for Retrieval-Augmented Generation (RAG API), and a Shared MariaDB Database that serves as the single source of truth for all components.
This design prioritizes security through multi-layered token validation and promotes flexibility with an adaptive tool registration system, allowing services to be enabled or disabled dynamically.
Architectural Diagram
The following diagram illustrates the flow of a request from a client application through the various components of the MCP ecosystem.
Component Breakdown
Client Applications
These are the consumers of the MCP Server's services. They are responsible for acquiring a JWT Bearer Token and including it in the Authorization header of every request.
Examples: AI assistants, custom applications using the REST API, and dedicated MCP clients.
MCP Server (Port 8002)
The MCP Server acts as the primary gateway and orchestrator. All client requests must pass through it. It performs two critical functions:
Token Extraction & Validation
This is the first layer of security. The MCP Server validates the identity and legitimacy of every incoming request through a three-step process:
Extract Token: It retrieves the JWT from the Authorization header.
Verify Signature: It cryptographically verifies the token's signature to ensure it hasn't been tampered with.
Validate User: It queries the Users table in the shared database to confirm the user exists and is active.
Adaptive Tool Registration
A key feature of the MCP Server is its ability to dynamically adjust the tools it offers based on the availability of dependent services.
Core, Database, & Vector Tools: These are foundational toolsets and are always registered and available.
RAG Tools: These tools, which rely on the RAG API, are only registered if the MCP Server can successfully connect to the RAG API. This makes the RAG component an optional, plug-in extension.
RAG API (Port 8000)
This is a specialized microservice designed for complex, knowledge-based tasks using the Retrieval-Augmented Generation pattern. It operates as a distinct service that the MCP Server communicates with.
Authentication & Authorization
The RAG API implements a second, more granular layer of security. After receiving a forwarded request from the MCP Server, it re-verifies the JWT and performs deeper authorization checks:
Verify JWT Token: Ensures the token is still valid.
Check User Roles: Examines the user's roles and permissions to determine if they are authorized to perform the requested RAG operation.
Enforce Permissions: Applies access control rules, for example, restricting document access based on ownership or group membership.
RAG Pipeline
This is the core logic of the RAG API. It transforms a user's query into a knowledge-rich response.
Document Ingestion: The process of adding new documents to the knowledge base.
Vector Embedding: Documents are converted into numerical representations (vectors) and stored in the Vector Store within the MariaDB database.
Retrieval: When a query is received, the API searches the Vector Store for the most similar document chunks, which provide the context for response generation.
Shared MariaDB Database
The database is the foundation of the entire architecture, providing a single, consistent source of data for all services.
Users: Stores user credentials, roles, and metadata required for authentication and authorization across both the MCP Server and RAG API.
Documents: Contains the raw content (e.g., text, metadata) that the RAG pipeline uses for retrieval.
Vector Store: A dedicated table or set of tables within MariaDB that stores the vector embeddings of the documents, enabling efficient similarity searches.
Request and Data Flow
Request Initiation: A client application sends a request to the MCP Server (:8002) with a JWT in the Authorization header.
MCP Server Authentication: The MCP Server validates the JWT against the shared database. If invalid, the request is rejected with a 401 Unauthorized error.
This architecture ensures a clear separation of concerns, enhances security with multiple checkpoints, and provides a highly extensible platform for building advanced AI tools.
Best practices and procedures for performing rolling updates and version upgrades for MariaDB Enterprise Server and MaxScale without downtime.
By leveraging the automation provided by the MariaDB Enterprise Kubernetes Operator, you can declaratively manage large fleets of databases using CRs. This includes day-two operations such as upgrades, which can be risky when rolled out to thousands of instances simultaneously.
To mitigate this risk, and to give you full control over the upgrade process, you can choose between the update strategies described in the following sections.
Update strategies
In order to provide you with flexibility for updating MariaDB reliably, this operator supports multiple update strategies:
ReplicasFirstPrimaryLast: Roll out replica Pods one by one, wait for each of them to become ready, and then proceed with the primary Pod.
RollingUpdate: Utilize the rolling update strategy from Kubernetes StatefulSets.
OnDelete: Updates are performed manually by deleting Pods.
Never: The operator does not perform any updates.
Configuration
The update strategy can be configured in the updateStrategy field of the MariaDB resource:
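A sketch of the field in context, assuming the k8s.mariadb.com/v1alpha1 API group — check it against your installed CRDs:

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  updateStrategy:
    type: ReplicasFirstPrimaryLast
```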
It defaults to ReplicasFirstPrimaryLast if not provided.
Trigger updates
Updates are not limited to updating the image field in the MariaDB resource, an update will be triggered whenever any field of the Pod template is changed. This translates into making changes to MariaDB fields that map directly or indirectly to the Pod template, for instance, the CPU and memory resources:
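Such a change can be sketched as the following fragment of the MariaDB spec (values are illustrative); applying it is enough to trigger an update, since it modifies the Pod template:

```yaml
spec:
  resources:
    requests:
      cpu: "1"      # changing requests/limits changes the Pod template
      memory: 2Gi
```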
Once the update is triggered, the operator manages it differently based on the selected update strategy.
ReplicasFirstPrimaryLast
This role-aware update strategy consists of rolling out the replica Pods one by one first, waiting for each of them to become ready (i.e., readiness probe passed), and then proceeding with the primary Pod. This is the default update strategy, as it can potentially meet various reliability requirements and minimize the risks associated with updates:
Write operations won't be affected until all the replica Pods have been rolled out. If something goes wrong in the update, such as an update to an incompatible MariaDB version, this is detected early when the replicas are being rolled out and the update operation will be paused at that point.
Read operations impact is minimized by only rolling one replica Pod at a time.
Waiting for each Pod to become ready before proceeding ensures that a faulty update is detected and paused before it spreads to the rest of the cluster.
RollingUpdate
This strategy leverages the rolling update strategy from the Kubernetes StatefulSet, which, unlike ReplicasFirstPrimaryLast, does not take into account the role of the Pods (primary or replica). Instead, it rolls out the Pods one by one, from the highest to the lowest StatefulSet index.
You are able to pass extra parameters to this strategy via the rollingUpdate object:
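For example, the standard StatefulSet partition parameter can be sketched as follows (the value is illustrative):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 1   # only Pods with an ordinal >= 1 are updated
```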
OnDelete
This strategy aims to provide a method to update MariaDB resources manually by allowing the user to restart the Pods individually. This way, the user has full control over the update process and can decide which Pods are rolled out at any given time.
Whenever an update is triggered, the MariaDB resource will be marked as pending to update:
From this point, you are able to delete the Pods to trigger the update, which will result in the MariaDB resource being marked as updating:
Once all the Pods have been rolled out, the MariaDB resource will be back to a ready state:
Never
The operator will not perform updates on the StatefulSet whenever this update strategy is configured. This could be useful in multiple scenarios:
Progressive fleet upgrades: If you're managing large fleets of databases, you likely prefer to roll out updates progressively rather than simultaneously across all instances.
Operator upgrades: When upgrading the operator, changes to the StatefulSet or the Pod template may occur from one version to another, which could trigger a rolling update of your MariaDB instances.
Data-plane updates
Highly available topologies rely on agent containers that run alongside MariaDB to enable the remote management of the database instances. These containers use the mariadb-enterprise-operator image, which can be automatically updated by the operator based on its image version:
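The opt-in field named in this section can be sketched as:

```yaml
spec:
  updateStrategy:
    type: ReplicasFirstPrimaryLast
    autoUpdateDataPlane: true   # defaults to false (no automatic data-plane upgrades)
```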
By default, updateStrategy.autoUpdateDataPlane is false, which means that no automatic upgrades will be performed, but you can opt-in/opt-out from this feature at any point in time by updating this field. For instance, you may want to selectively enable updateStrategy.autoUpdateDataPlane in a subset of your MariaDB instances after the operator has been upgraded to a newer version, and then disable it once the upgrades are completed.
It is important to note that this feature is fully compatible with the Never strategy: no upgrades will happen when updateStrategy.autoUpdateDataPlane=true and updateStrategy.type=Never.
This guide details installing the MariaDB Enterprise Kubernetes Operator on OpenShift, leveraging the Operator Lifecycle Manager, and configuring image pull credentials.
This documentation provides guidance on installing the MariaDB Enterprise Kubernetes Operator in OpenShift. This operator has been certified and is available in the OpenShift console.
Operators are deployed into OpenShift with the Operator Lifecycle Manager (OLM), which facilitates the installation, updates, and overall management of their lifecycle.
Prerequisites
Configure your image pull credentials to be able to pull images.
SQL Resources
Explains how to manage database objects like users, databases, and privileges natively through Kubernetes Custom Resources (CRDs).
The MariaDB Enterprise Kubernetes Operator enables you to manage SQL resources declaratively through CRs. By SQL resources, we refer to the users, grants, and databases that are typically created using SQL statements.
The key advantage of this approach is that, unlike executing SQL statements manually, which is a one-time operation, declaring a SQL resource via a CR ensures that the resource is periodically reconciled by the operator. This provides a guarantee that the resource will be recreated if it gets manually deleted. Additionally, it prevents state drifts, as the operator will regularly update the resource according to the CR specification.
User CR
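A sketch of a User resource follows, with field names as in the community operator's CRD (k8s.mariadb.com/v1alpha1) — verify them against the CRDs installed in your cluster; all names and values are illustrative:

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: User
metadata:
  name: app-user
spec:
  mariaDbRef:
    name: mariadb-galera        # the MariaDB CR this user belongs to
  passwordSecretKeyRef:
    name: mariadb-credentials   # hypothetical Secret holding the password
    key: password
  host: "%"
  maxUserConnections: 20
```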
Configuration
This documentation aims to provide guidance on various configuration aspects shared across many MariaDB Enterprise Kubernetes Operator CRs.
my.cnf
An inline my.cnf can be provisioned in the MariaDB resource via the myCnf field:
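For example (sketch; the option values are illustrative):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  myCnf: |
    [mariadb]
    max_connections=500
    innodb_buffer_pool_size=1G
```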
scrape_configs:
  - job_name: 'mem-federation'
    scrape_interval: 60s
    honor_labels: true
    metrics_path: '/prometheus/federate'
    params:
      'match[]':
        - '{job=~".+"}' # This parameter tells the endpoint to return all series.
    static_configs:
      - targets: ['<Enterprise_Manager_IP>:8090']
    scheme: https
    basic_auth:
      username: admin   # default username for Enterprise Manager
      password: mariadb # default password for admin user
    # You may need to add TLS and authentication configurations
    # depending on your network setup and security requirements.
    # tls_config:
    #   insecure_skip_verify: true
# Direct values in config file
DB_HOST=localhost
DB_PASSWORD=your_password
SECRET_KEY=your_secret_key
JWT_SECRET_KEY=your_jwt_secret
GEMINI_API_KEY=your_api_key
# RAG API
rag-api.exe --config=config.env.secure.local
# MCP Server
$env:MCP_CONFIG="config.env.secure.local"
mcp-server.exe
# RAG API
op run --env-file=config.env.1password.employee -- rag-api.exe
# MCP Server
op run --env-file=config.env.1password.employee -- mcp-server.exe
If you select the sql permission, a "Query editor row limit" dropdown will appear. You can adjust this value as needed.
When creating a role, selecting the edit permission requires you to also select the view permission.
Update: Opens the "Edit Role" dialog where you can change the role's name or its assigned permissions.
Delete: Permanently removes the custom role. A confirmation dialog will appear.
Roles that are currently assigned to any user cannot be deleted.
Update: Opens the "Edit User" dialog where you can change the user's assigned role or update their password.
Delete: Permanently removes the user from MariaDB Enterprise Manager.
You cannot delete the user account that you are currently logged in with. To delete an administrator account, you must first log in with a different administrator account.
Log in with a user who has the edit permission.
Begin the Add Database process:
If this is your first time and no databases are present, you'll be on the "Add Database" screen automatically.
If you already have other databases, click the + Add Database button.
Ensure the Database without MaxScale option is selected.
Fill in the connection details for your first server using the Enterprise Manager User ('monitor'@'<Enterprise_Manager_IP>').
Click the Plus icon (+) to add another server.
Fill in the connection details for the second server in your topology and click Confirm. Repeat for all nodes in your topology.
Once all nodes are added, select the Topology Type (e.g., Primary/Replica — default — or Galera Cluster) and click Confirm.
To convert an existing standalone server into a topology of multiple servers: click the three-dot menu (⋮) next to the server, choose Edit, and click the Plus icon (+). Then follow the same steps to add nodes.
Install Agent
Enter the credentials for the Local Agent User ('monitor'@'localhost') to generate a setup command.
Copy the command and run it in that server's terminal to link the agent.
If this is your first time and no databases are present, you'll be on the "Add Database" screen to begin with.
If you already have other databases, click the + Add Database button.
Select the Database with MaxScale option.
Provide the connection details for your MaxScale instance (IP address, API port 8989, and its admin credentials).
Click Add. Enterprise Manager will connect to MaxScale and automatically discover all backend MariaDB servers it manages.
Click the three-dot menu (⋮) and select Install Agent.
The UI will generate a unique setup command for that specific server with the username and password you provide. Copy the command.
On that server, paste and run the command in the terminal.
Repeat this process for every server in the topology. Once all agents are linked, the dashboard will begin showing the health of the entire topology.
From the Export results window, configure the following settings.
| Setting | Description |
| --- | --- |
| File name | The name for the downloaded export file. A default name with the current date is usually suggested. |
| Fields to export | Allows you to select which columns from the query result set to include in the export. |
| File format | Choose the output format: CSV, SQL, or JSON. |

CSV Options

| Setting | Description |
| --- | --- |
| Fields terminated by | The character used to separate values (e.g., , or \t). |
| Lines terminated by | The character indicating the end of a row (e.g., \n). |
to find the most semantically relevant document chunks.
Generation: The retrieved chunks are combined with the original query and fed to a language model to generate a comprehensive, context-aware answer.
Tool Dispatching: The server identifies that the request requires a RAG tool. It checks whether the RAG API is available.
Request Forwarding: The MCP Server forwards the original request, including the JWT, to the RAG API (:8000).
RAG API Authorization: The RAG API performs its own validation of the JWT and checks the user's permissions for the requested action. If unauthorized, it returns an error.
RAG Pipeline Execution: The RAG API executes its pipeline, querying the Documents and Vector Store tables in the MariaDB database to retrieve relevant context.
Response Generation: The RAG API generates a final response.
Response Relay: The response is sent back to the MCP Server, which in turn relays it to the client application.
The recommended way to configure credentials is to use the global pull secret provided by OpenShift, as described in this section. Alternatively, the operator bundle has a mariadb-enterprise imagePullSecret configured by default. This means that you can create a Secret named mariadb-enterprise in the same namespace where the operator will be installed in order to pull images from the MariaDB Enterprise registry.
PackageManifest
You can install the certified operator in OpenShift clusters that have the mariadb-enterprise-operator PackageManifest available. To check this, run the following command:
SecurityContextConstraints
Both the operator and the operand Pods run with the restricted-v2 SecurityContextConstraint, the most restrictive SCC in OpenShift in terms of container permissions. This implies that OpenShift automatically assigns a SecurityContext with minimum permissions to the Pods, for example:
OpenShift does not assign SecurityContexts in the default and kube-system namespaces. Please refrain from deploying operands on them, as it will result in permission errors when trying to write to the filesystem.
To install the operator watching resources on all namespaces, you need to create a Subscription object for mariadb-enterprise-operator using the stable channel in the openshift-operators namespace:
This will use the global-operators OperatorGroup that is created by default in the openshift-operators namespace. This OperatorGroup will watch all namespaces in the cluster, and the operator will be able to manage resources across all namespaces.
In order to define which namespaces the operator will be watching, you need to create an OperatorGroup in the namespace where the operator will be installed:
This OperatorGroup will watch the namespaces defined in the targetNamespaces field. The operator will be able to manage resources only in these namespaces.
Then, the operator can be installed by creating a Subscription object in the same namespace as the OperatorGroup:
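As a sketch, the OperatorGroup and Subscription for a single watched namespace might look like this (the mariadb namespace is an example; the catalog source name should match your cluster's certified catalog):

```yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: mariadb-enterprise-operator
  namespace: mariadb
spec:
  targetNamespaces:
    - mariadb            # namespaces the operator will watch
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mariadb-enterprise-operator
  namespace: mariadb     # same namespace as the OperatorGroup
spec:
  channel: stable
  name: mariadb-enterprise-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
```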
Release channels
We maintain support across a variety of OpenShift channels to ensure compatibility with different release schedules and stability requirements. Below, you will find an overview of the specific OpenShift channels we support.
| Channel | Supported OpenShift Versions | Description |
| --- | --- | --- |
| stable | 4.18, 4.16 | Points to the latest stable version of the operator. This channel may span multiple major versions. |
| stable-v25.10 | 4.18, 4.16 | v25.10.x is an LTS release. This channel points to the latest patch release of 25.10. Use this if you require version pinning to a stable version of the operator without necessarily looking for newer features. |
An example Subscription would look like this:
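For instance, a Subscription pinned to the LTS channel might look like the following sketch (namespace and catalog source names are assumptions; verify them in your cluster):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mariadb-enterprise-operator
  namespace: openshift-operators
spec:
  channel: stable-v25.10      # pin to the LTS channel
  name: mariadb-enterprise-operator
  source: certified-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```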
Updates
Updates are fully managed by OLM and controlled by the installPlanApproval field in the Subscription object. The default value is Automatic, which means that OLM will automatically update the operator to the latest version available in the channel. If you want to control the updates, you can set this field to Manual, and OLM will only update the operator when you approve the update.
Uninstalling
The first step for uninstalling the operator is to delete the Subscription object. This will not remove the operator, but it will stop OLM from managing the operator:
After that, you can uninstall the ClusterServiceVersion (CSV) object that was created by OLM. This will remove the operator from the cluster:
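The two uninstall steps above might translate to commands like these. The CSV name includes the installed version, so the one shown here is an example; list the CSVs first to find the actual name:

```
# Stop OLM from managing the operator
oc delete subscription mariadb-enterprise-operator -n openshift-operators

# Find the installed CSV, then delete it to remove the operator
oc get csv -n openshift-operators
oc delete csv mariadb-enterprise-operator.v25.10.0 -n openshift-operators   # example version
```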
OpenShift console
As an alternative to creating Subscription objects via the command line, you can install operators using the OpenShift console. Go to the Operators > OperatorHub section and search for mariadb enterprise:
Select MariaDB Enterprise Kubernetes Operator, click on install, and you will be able to create a Subscription object via the UI.
Once deployed, the operator comes with example resources that can be deployed from the console directly. For instance, to create a MariaDB:
As you can see in the previous screenshot, the form view offered by the OpenShift console is limited, so we recommend using the YAML view:
By creating this resource, you are declaring an intent to create a user in the referred MariaDB instance, just like a CREATE USER statement would do:
In the example above, a user named bob identified by the password available in the bob-password Secret will be created in the mariadb instance.
Refer to the API reference for more detailed information about every field.
Custom name
By default, the CR name is used to create the user in the database, but you can specify a different one by providing the name field under spec:
Grant CR
By creating this resource, you are declaring an intent to grant permissions to a given user in the referred MariaDB instance, just like a GRANT statement would do.
You may provide any set of .
Refer to the API reference for more detailed information about every field.
Database CR
By creating this resource, you are declaring an intent to create a logical database in the referred MariaDB instance, just like a CREATE DATABASE statement would do:
Refer to the API reference for more detailed information about every field.
Custom name
By default, the CR name is used to create the database, but you can specify a different one by providing the name field under spec:
Initial User, Grant and Database
If you only need one user to interact with a single logical database, you can configure it directly in the MariaDB resource, instead of creating the User, Grant and Database resources separately:
Behind the scenes, the operator will create a User resource with ALL PRIVILEGES on the initial Database.
Authentication plugins
This feature requires the skip-strict-password-validation option to be set. See: .
Passwords can be supplied using the passwordSecretKeyRef field in the User CR. This is a reference to a Secret that contains a password in plain text.
Alternatively, you can use authentication plugins to avoid passing passwords in plain text and provide the password in a hashed format instead. This doesn't affect the end user experience, as they will still need to provide the password in plain text to authenticate.
Password hash
Provide the password hashed using the PASSWORD() function:
The password hash can be obtained by executing SELECT PASSWORD('<password>'); in an existing MariaDB installation.
Password plugin
Provide the password hashed using any of the available authentication plugins, for example mysql_native_password:
The plugin name should be available in a Secret referenced by pluginNameSecretKeyRef, and the argument passed to it in pluginArgSecretKeyRef. In most cases, the argument is the hashed password; refer to the for further detail.
Configure reconciliation
As we previously mentioned, SQL resources are periodically reconciled by the operator into SQL statements. You are able to configure the reconciliation interval using the following fields:
If the SQL statement executed by the operator is successful, it will schedule the next reconciliation cycle using the requeueInterval. If the statement encounters an error, the operator will use the retryInterval instead.
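As a sketch, these intervals can be set on any SQL resource; for example, on a User CR (field placement and values are illustrative; check the API reference):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: User
metadata:
  name: bob
spec:
  mariaDbRef:
    name: mariadb
  passwordSecretKeyRef:
    name: bob-password
    key: password
  requeueInterval: 10h   # next reconciliation after a successful statement
  retryInterval: 30s     # retry interval after a failed statement
```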
Cleanup policy
Whenever you delete a SQL resource, the operator will also delete the associated object in the database. This is the default behaviour, which can also be set explicitly via cleanupPolicy=Delete:
You can opt out of this cleanup process by using cleanupPolicy=Skip. Note that these resources will remain in the database.
Under the hood, the operator automatically creates a ConfigMap with the contents of the myCnf field, which is mounted in the MariaDB instance. Alternatively, you can manage your own configuration using a pre-existing ConfigMap by linking it via myCnfConfigMapKeyRef. It is important to note that the key in this ConfigMap (i.e. the config file name) must have a .cnf extension in order to be detected by MariaDB:
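A minimal sketch of this linkage, assuming a ConfigMap named mariadb-config (names and settings are examples):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
  labels:
    enterprise.mariadb.com/watch: ""   # lets the operator watch for changes
data:
  my.cnf: |                            # key must have a .cnf extension
    [mariadb]
    max_connections=500
---
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  myCnfConfigMapKeyRef:
    name: mariadb-config
    key: my.cnf
```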
To ensure your configuration changes take effect, the operator triggers a MariaDB update whenever the myCnf field or the ConfigMap is updated. For the operator to detect changes in a ConfigMap, it must be labeled with enterprise.mariadb.com/watch. Refer to the external resources section for further detail.
Compute resources
CPU and memory resources can be configured via the resources field in both the MariaDB and MaxScale CRs:
In the case of MariaDB, it is recommended to set the innodb_buffer_pool_size system variable to a value that is 70-80% of the available memory. This can be done via the myCnf field:
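For example, assuming a 4Gi memory limit, the buffer pool could be sized to roughly 75% of it (all values are illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
    limits:
      memory: 4Gi
  myCnf: |
    [mariadb]
    innodb_buffer_pool_size=3072M   # ~75% of the 4Gi limit
```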
Timezones
By default, MariaDB does not load timezone data on startup for performance reasons and defaults the timezone to SYSTEM, obtaining the timezone information from the environment where it runs. See the for further information.
You can explicitly configure a timezone in your MariaDB instance by setting the timeZone field:
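For instance, a MariaDB pinned to an explicit timezone might look like this (the timezone value is an example):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  timeZone: "Europe/Madrid"
```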
This setting is immutable and implies loading the timezone data on startup.
For Backup and SqlJob resources, which are reconciled into CronJobs, you can also define a timeZone associated with their cron expression:
If timeZone is not provided, the local timezone will be used, as described in the Kubernetes docs.
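A Backup with a timezone-aware cron schedule might be sketched as follows (schedule, timezone, and field placement are illustrative; check the API reference):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb
  schedule:
    cron: "0 0 * * *"        # daily at midnight
  timeZone: "Europe/Madrid"  # applied to the cron expression
```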
Passwords
Some CRs require passwords provided as Secret references to function properly. For instance, the root password for a MariaDB resource:
By default, fields like rootPasswordSecretKeyRef are optional and defaulted by the operator, resulting in random password generation if not provided:
You may choose to explicitly provide a Secret reference via rootPasswordSecretKeyRef and opt-out from random password generation by either not providing the generate field or setting it to false:
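A sketch of both variants (the Secret name and the generate field placement are illustrative):

```yaml
spec:
  # Opt in to random generation: the operator creates the Secret for you
  rootPasswordSecretKeyRef:
    name: mariadb-root
    key: password
    generate: true
---
spec:
  # Expect the Secret to be provided externally (e.g. via GitOps)
  rootPasswordSecretKeyRef:
    name: mariadb-root
    key: password
    generate: false
```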
This way, we are telling the operator that we are expecting a Secret to be available eventually, enabling the use of GitOps tools to seed the password:
sealed-secrets: The Secret is reconciled from a SealedSecret, which is decrypted by the sealed-secrets controller.
external-secrets: The Secret is reconciled from an ExternalSecret, which is read by the external-secrets controller from an external secrets source (Vault, AWS Secrets Manager ...).
External resources
Many CRs have references to external resources (i.e. ConfigMap, Secret) not managed by the operator.
These external resources should be labeled with enterprise.mariadb.com/watch so the operator can watch them and perform reconciliations based on their changes. For example, see the my.cnf ConfigMap:
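For example, a labeled ConfigMap might look like this (name and contents are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
  labels:
    enterprise.mariadb.com/watch: ""   # watched by the operator
data:
  my.cnf: |
    [mariadb]
    max_allowed_packet=256M
```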
Probes
Kubernetes probes serve as an inversion of control mechanism, enabling the application to communicate its health status to Kubernetes. This allows Kubernetes to take appropriate actions when the application is unhealthy, such as restarting it or no longer sending traffic to its Pods.
Fine-tuning probes for databases running in Kubernetes is critical. You can do so by tweaking the following fields:
There isn't a universally correct default value for these thresholds, so we recommend determining your own based on factors like the compute resources, network, storage, and other aspects of the environment where your MariaDB and MaxScale instances are running.
Comprehensive dashboard for monitoring MariaDB Server instances, covering topology overviews, replication health, InnoDB metrics, query performance, and active connections.
This dashboard provides a unified view of a database topology. It combines topology information, system health, replication or cluster metrics, and query performance in one place. Administrators can use it to monitor availability, troubleshoot issues, and optimize performance.
Topology Overview
Summarizes the overall topology, showing which servers are active, their versions, and how they are organized.
Name — Displays the name of the MariaDB topology currently being monitored.
Project — Shows the associated project or environment label.
Primary/Replica — A table with:
Instance: Server hostname.
Type: Instance role.
Topology Info — Count of nodes grouped by type (e.g., server, MaxScale).
MariaDB Server Uptime by Instance — Uptime in seconds for each server instance.
System Metrics
Monitors server resource usage to detect bottlenecks in CPU, memory, network, and storage.
Feature
Description
Replication / Cluster Metrics
Provides insight into replication and cluster-related activity, including binary log usage, commit rates, and delay measurements.
Metric
Description
Replication Status Table
This table provides a consolidated view of the health status of replication across instances.
Field Name
Description
Query Metrics
Focuses on query execution and workload behavior, highlighting concurrency, throughput, and inefficiencies.
Metric
Description
Connections
This section provides visibility into how clients connect to the server and whether connection limits or failures are occurring.
Metric
Description
Range Metrics
Highlights query access patterns where range operations or scans are used.
Metric
Description
InnoDB Metrics
Shows activity within the InnoDB storage engine.
Metric
Description
Processlist
Shows information about active sessions and thread states collected from information_schema.processlist.
Details the hardware sizing, system prerequisites (x86-64 Linux, Docker), and supported OS versions for deploying the central server and monitoring agents.
This guide outlines the system and hardware requirements for deploying the Enterprise Manager Server and the Enterprise Manager Agent.
Enterprise Manager Server 🖥️
The Enterprise Manager Server is the central component that hosts the UI and stores monitoring data.
Hardware Sizing Guide
Monitored Servers
CPU
Memory (RAM)
Storage (SSD)
Tip: Adjust storage size depending on your requirements for metrics retention.
System Requirements
CPU Architecture: x86-64
Operating System: 64-bit Linux with Docker support.
Software: Docker Engine and Docker Compose must be installed.
Enterprise Manager Agent 🕵
The agent must be installed on each MariaDB Server and MaxScale instance you wish to monitor. Below are the supported operating systems.
Supported Platforms for MariaDB Server
MariaDB Server Version
Supported OS (x86_64, ARM64)
Supported Platforms for MariaDB MaxScale
MaxScale Version
Supported OS (x86_64, ARM64)
* Monitoring and Single Sign-On (SSO) are only supported for MaxScale versions 25.10 and above.
# For Red Hat/CentOS/Rocky
sudo dnf install -y mema-agent
# For Debian/Ubuntu
sudo apt install -y mema-agent
CREATE USER 'monitor'@'<Enterprise_Manager_IP>' IDENTIFIED BY '<password>';
GRANT REPLICA MONITOR ON *.* TO 'monitor'@'<Enterprise_Manager_IP>';
CREATE USER 'monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT PROCESS, BINLOG MONITOR, REPLICA MONITOR, REPLICATION MASTER ADMIN ON *.* TO 'monitor'@'localhost';
# For Red Hat/CentOS/Rocky
sudo dnf install -y mema-agent
# For Debian/Ubuntu
sudo apt install -y mema-agent
CREATE USER 'monitor'@'localhost' IDENTIFIED BY '<password>';
GRANT PROCESS, BINLOG MONITOR, REPLICA MONITOR, REPLICATION MASTER ADMIN ON *.* TO 'monitor'@'localhost';
oc get packagemanifests -n openshift-marketplace mariadb-enterprise-operator
NAME CATALOG AGE
mariadb-enterprise-operator Certified Operators 21h
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb-galera
spec:
# Tune your liveness probe accordingly to avoid Pod restarts.
livenessProbe:
periodSeconds: 10
timeoutSeconds: 5
# Tune your readiness probe accordingly to prevent disruptions in network traffic.
readinessProbe:
periodSeconds: 10
timeoutSeconds: 5
# Tune your startup probe accordingly to ensure that the SST completes with a large amount of data.
# failureThreshold × periodSeconds = 30 × 10 = 300s = 5m until the container gets restarted if unhealthy
startupProbe:
failureThreshold: 30
periodSeconds: 10
timeoutSeconds: 5
| Setting | Description |
| --- | --- |
| NULL replaced by | How NULL values should be represented (e.g., \N). |
| With Headers | Checkbox to include column names as the first row. |

SQL Options

| Setting | Description |
| --- | --- |
| Export option | Choose whether to export Both structure and data, Data only (INSERT statements), or Structure only (CREATE TABLE). |

JSON Options

None
Seconds behind primary: Replication delay value.
Status: Availability of the node.
Last_SQL_Errno
Most recent numeric error code reported by the SQL thread.
Read_Master_Log_Pos
Current read position in the source’s binary log.
Relay_Log_Pos
Last executed position in the local relay log.
Deadlocks
Number of detected deadlocks, where transactions block each other and require one to be rolled back.
Value: Number of processes/threads from that client.
CPU Utilisation
Line graph of CPU usage percentage per instance.
Memory Usage
Percentage of used memory per instance (excluding cache/buffers).
Network Traffic
Time-series of receive and transmit throughput per instance (bits per second).
Filesystems Info
Table with filesystem type, mount point, capacity, and instance.
Disk Used Space Utilisation
Graph of percentage disk space used per mount point.
Disk IOPS
Reads and writes per second per storage device.
Binlog Size
Current binary log size per instance.
Binlog Throughput
Bytes written to binary logs per second.
Binlog Commits
Rate of commit operations recorded in binary logs.
Replication Lag
Replication delay value reported in seconds.
Slave_connections
Number of replication I/O connections to the upstream source.
Retried_transactions
Total replicated transactions retried due to transient errors.
Slave_IO_Running
Status flag indicating if the I/O thread is fetching events.
Slave_SQL_Running
Status flag indicating if the SQL thread is applying events.
Last_Errno
Most recent numeric error code for replication issues overall.
Last_IO_Errno
Most recent numeric error code reported by the I/O thread.
Current Threads Running
Number of threads actively executing queries.
Questions (QPS)
Queries per second executed on each instance.
Slow Queries
Rate of queries exceeding long_query_time.
Created Tmp Disk Tables
On-disk temporary tables created per second.
Number of Connections
Current number of active client connections (Threads_connected).
Connection Utilization
Share of connections in use compared to the configured maximum (Threads_connected / max_connections).
% of Aborted Connections
Percentage of connection attempts that failed or were aborted (aborted_connects / connections).
Select Range Scan
Number of SELECT operations performing range scans.
Select Full Range Join
Number of queries that performed a full range join. Indicates potential suboptimal indexing or join conditions.
Select Range Check
Number of SELECT operations requiring range checks.
InnoDB Read/Writes
Rate of physical read and write operations by InnoDB per second. Reads are disk fetches, writes are disk flushes.
InnoDB Buffer Pool Reads
Logical reads from the buffer pool vs. evicted or read-ahead pages, indicating buffer pool efficiency.
InnoDB Row Lock
Number of row lock waits in InnoDB, with high values indicating contention or poor indexing.
InnoDB Checkpoint Age
Size of uncheckpointed redo log data in bytes, with large sizes signaling risk of long crash recovery times.
InnoDB Log Writes
Number of write operations to the InnoDB redo log per second, reflecting redo logging activity.
InnoDB History List Length
Length of the undo log history list, with growth indicating long-running transactions preventing purge.
This section provides guidance on how to configure high availability in MariaDB and MaxScale instances. If you are looking for an HA setup for the operator, please refer to the Helm documentation.
Synchronous multi-master with at least 3 nodes. Always an odd number of nodes, as it is quorum-based.
Leverage MaxScale as a database proxy to load balance requests and perform failover/switchover operations. Configure 2 replicas to enable MaxScale upgrades without downtime.
Use dedicated Nodes to avoid noisy neighbours.
Define PodDisruptionBudgets.
Highly Available Topologies
Replication: The primary node allows both reads and writes, while secondary nodes only serve reads. The primary has a binary log and the replicas asynchronously replicate the binary log events.
Galera: All nodes support reads and writes, but writes are only sent to one node to avoid contention. The fact that Galera is synchronous and that all nodes are equally configured makes the primary failover/switchover operation seamless and usually instantaneous.
Kubernetes Services
In order to address nodes, MariaDB Enterprise Kubernetes Operator provides you with the following Kubernetes Services:
<mariadb-name>: This is the default Service, only intended for the .
<mariadb-name>-primary: To be used for write requests. It will point to the primary node.
Whenever the primary changes, either by the user or by the operator, both the <mariadb-name>-primary and <mariadb-name>-secondary Services will be automatically updated by the operator to address the right nodes.
The primary may be manually changed by the user at any point by updating the spec.[replication|galera].primary.podIndex field. Alternatively, automatic primary failover can be enabled by setting spec.[replication|galera].primary.autoFailover, which makes the operator switch the primary whenever the primary Pod goes down.
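Sketched on a replication topology (the galera variant is analogous; field values are examples):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-repl
spec:
  replicas: 3
  replication:
    enabled: true
    primary:
      podIndex: 0        # manually select the primary Pod
      autoFailover: true # let the operator switch primary on failure
```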
MaxScale
While Kubernetes Services can be used for addressing primary and secondary instances, we recommend utilizing MaxScale as a database proxy for doing so, as it comes with additional advantages:
Enhanced failover/switchover operations for both replication and Galera
Single entrypoint for both reads and writes
Multiple router modules available to define how to route requests
The full lifecycle of the MaxScale proxy is covered by this operator. Please refer to for further detail.
Pod Anti-Affinity
Bear in mind that, when enabling this, you need to have at least as many Nodes available as the replicas specified. Otherwise, your Pods will be left unscheduled and the cluster won't bootstrap.
To achieve real high availability, we need to run each MariaDB Pod on a different Kubernetes Node. This practice, known as anti-affinity, helps reduce the blast radius of a Node becoming unavailable.
By default, anti-affinity is disabled, which means that multiple Pods may be scheduled on the same Node, something not desired in HA scenarios.
You can selectively enable anti-affinity in all the different Pods managed by the MariaDB resource:
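A sketch, assuming the operator exposes an antiAffinityEnabled toggle under affinity (as in the upstream mariadb-operator API; verify against the API reference):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  replicas: 3
  galera:
    enabled: true
  affinity:
    antiAffinityEnabled: true   # one Pod per Node
```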
Anti-affinity may also be enabled in the resources that have a reference to MariaDB, resulting in their Pods being scheduled on Nodes where MariaDB is not running. For instance, the Backup and Restore processes can run on different Nodes:
In the case of MaxScale, the Pods will also be placed on Nodes isolated in terms of compute, ensuring isolation not only among themselves but also from the MariaDB Pods. For example, if you run MariaDB and MaxScale with 3 replicas each, you will need 6 Nodes in total:
The default anti-affinity rules generated by the operator might not satisfy your needs, but you can always define your own. For example, if you want the MaxScale Pods to be on different Nodes but allow them to share Nodes with MariaDB:
Dedicated Nodes
If you want to avoid noisy neighbours running in the same Kubernetes Nodes as your MariaDB, you may consider using dedicated Nodes. For achieving this, you will need:
Taint your Nodes and add the counterpart toleration in your Pods.
Tainting your Nodes is not covered by this operator; it is something you need to do yourself beforehand. You may take a look at the to understand how to achieve this.
Select the Nodes where Pods will be scheduled in via a nodeSelector.
Although you can use the default Node labels, consider adding more meaningful labels to your Nodes, as you will have to reference them in your Pod's nodeSelector. Refer to the .
Add podAntiAffinity to your Pods as described in the Pod Anti-Affinity section.
The previous steps can be achieved by setting these fields in the MariaDB resource:
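Putting the three steps together (the taint key, Node label, and their values are examples you must align with your own Nodes; antiAffinityEnabled is assumed per the upstream API):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  # 1. Tolerate the taint you applied to the dedicated Nodes
  tolerations:
    - key: "enterprise.mariadb.com/ha"
      operator: "Exists"
      effect: "NoSchedule"
  # 2. Schedule only on Nodes carrying your label
  nodeSelector:
    "enterprise.mariadb.com/node": "ha"
  # 3. Spread Pods across those Nodes
  affinity:
    antiAffinityEnabled: true
```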
Pod Disruption Budgets
Take a look at the if you are unfamiliar with PodDisruptionBudgets.
By defining a PodDisruptionBudget, you are telling Kubernetes how many Pods your database can tolerate being down. This is quite important for planned maintenance operations such as Node upgrades.
MariaDB Enterprise Kubernetes Operator creates a default PodDisruptionBudget if you are running in HA, but you are able to define your own by setting:
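A sketch of a custom PodDisruptionBudget definition (field placement and values are illustrative; check the API reference):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  replicas: 3
  podDisruptionBudget:
    maxUnavailable: 33%   # tolerate one of three Pods being down
```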
For more information about configuring the plugin as well as different capabilities, please check the documentation. This guide will cover a minimal example for configuring the plugin with the operator.
Configuring TDE in MariaDB Using Hashicorp Key Management Plugin
Transparent Data Encryption (TDE) can be configured in MariaDB leveraging the Hashicorp Key Management Plugin.
Requirements
Running and accessible Vault KMS setup with a valid SSL certificate.
Vault is unsealed and you've logged in to it with vault login $AUTH_TOKEN, where $AUTH_TOKEN is an authentication token given to you by an administrator.
openssl for generating secrets
Steps
Creating A New Key-Value Store In Vault. Create a new key-value store and take note of the path. In our example we will use mariadb.
Adding necessary secrets. We will put 2 secrets with ids 1 and 2. 2 will be used for temporary files, while 1
Day-2 Operations
Rotating Secrets
Put A New Secret In Vault. After logging in to vault, you can run again:
This will start re-encrypting data.
Monitor Re-Encryption.
If you check the encryption status again:
You should see the CURRENT_KEY_VERSION column being updated to point to the new key version.
Rotating Token
Make sure to rotate the token well in advance of its expiry.
Acquire a new token and update the secret.
Restart MariaDB Pods. MariaDB will continue using the old token until the Pods are restarted. You can add the following annotation to the Pods to trigger an update; see the for further detail:
Known Issues/Limitations
Vault Not Being Accessible Will Result In MariaDB Not Working
As MariaDB uses Vault to fetch its decryption key, if Vault becomes unavailable, MariaDB will not be able to fetch the decryption key and will stop working. The Hashicorp plugin has a configurable cache that, when set, allows MariaDB to keep working for a few seconds to minutes (depending on configuration), but the cache is not reliable, as it is ephemeral and short-lived.
Deleting The Decryption Key Will Make Your Data Inaccessible.
It is recommended to back up the decryption key so accidental deletions will not result in issues.
Decryption Key Must Be Hexadecimal
Use the following to generate correct decryption keys.
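For example, a 32-byte key encoded as 64 hexadecimal characters can be generated with openssl:

```shell
# Generate a random 32-byte key, hex-encoded (64 characters)
openssl rand -hex 32
```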
Rotating The Decryption Key Before A Previous Re-Encryption Has Finished Will Result In Data Corruption.
To check the re-encryption progress, you can run:
Look for the CURRENT_KEY_VERSION and make sure they are in sync with the latest version you have in Vault.
Docker Images
Lists and describes the specific Docker images used by the Operator, including MariaDB Enterprise Server, MaxScale, and supporting sidecars.
Certified images
All the Docker images used by this operator are based on Red Hat Universal Base Images (UBI) and have been certified by Red Hat. The advantages of using UBI based images are:
<mariadb-name>-secondary
: To be used for read requests. It will load balance requests to all nodes except the primary.
Replay pending transaction when primary goes down
Ability to choose whether the old primary rejoins as a replica
Connection pooling
will be used for everything else. It is not necessary to create 2 of them; in that case, temporary files will use 1.
Note: Here you should use the path we chose in the previous step.
(Optional) Create An Authentication Token With Policy. This step can be skipped if you want to use your own token. Consult with a Vault administrator regarding this. Policies are Vault's way to restrict access to what you are allowed to do. The following is a policy that should be used by the token following the least permission principle.
After which, we can create a new token with the given policy.
You will see output similar to:
Your new token is: EXAMPLE_TOKEN.
Create A Secret For the vault token. Now that you've either created a new token, or are using an existing one, we need to create a secret with it.
Create a Secret for the Certificate Authority (CA) used to issue the Vault certificate. For further information, consult the docs. If you have the certificate locally in a file called ca.crt, you can run:
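Assuming the Secret is to be named vault-ca (the name is an example; reference it consistently from your MariaDB resource):

```
kubectl create secret generic vault-ca \
  --from-file=ca.crt=ca.crt
```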
Create A MariaDB Custom Resource. The final step is creating a new MariaDB instance.
mariadb-vault.yaml
kubectl apply -f mariadb-vault.yaml
Verify Encryption Works.
You should see something along the lines of:
At this point, you can check the encryption status:
If you create a new database and then table, the above query should return additional information about them. Something like:
Note: The above query is truncated. In reality, you will see a few more columns.
If you don't see a command prompt, try pressing enter.
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 95
Server version: 11.4.7-4-MariaDB-enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT * from information_schema.INNODB_TABLESPACES_ENCRYPTION;
Immutability: UBI images are built to be secure and stable, reducing the risk of unintended changes or vulnerabilities due to mutable base layers.
Small size: The UBI minimal and micro variants used by this operator are designed to be lightweight, containing only the essential packages. This can lead to smaller container image sizes, resulting in faster build times, reduced storage requirements, and quicker image pulls.
Security and compliance: Regular CVE scanning and vulnerability patching help maintain compliance with industry standards and security best practices.
Enterprise-grade support: UBI images are maintained and supported by Red Hat, ensuring timely security updates and long-term stability.
List of compatible images
MariaDB Enterprise Kubernetes Operator is compatible with the following Docker images:
This section outlines several methods for pulling official MariaDB container images from docker.mariadb.com and making them available in your private container registry. This is often necessary for air-gapped, offline, or secure environments.
Option 1: Direct Pull, Tag, and Push
This method is ideal for a "bastion" or "jump" host that has network access to both the public internet (specifically docker.mariadb.com) and your internal private registry.
Log in to both registries. You will need a MariaDB token for the public registry and your credentials for the private one. Refer to the official documentation.
Pull the required image. Pull the official MariaDB Enterprise Kubernetes Operator image from its public registry.
Tag the image for your private registry. Create a new tag for the image that points to your private registry's URL and desired repository path.
Push the re-tagged image. Push the newly tagged image to your private registry.
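The steps above could look like the following, assuming registry.example.com/mariadb as the private registry path and the operator image used throughout this guide:

```bash
# Log in to both registries
docker login docker.mariadb.com
docker login registry.example.com

# Pull the official image
docker pull docker.mariadb.com/mariadb-enterprise-operator:25.8.0

# Tag it for the private registry
docker tag docker.mariadb.com/mariadb-enterprise-operator:25.8.0 \
  registry.example.com/mariadb/mariadb-enterprise-operator:25.8.0

# Push the re-tagged image
docker push registry.example.com/mariadb/mariadb-enterprise-operator:25.8.0
```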
Option 2: Using a Proxy or Caching Registry
Many modern container registries can be configured to function as a pull-through cache or proxy for public registries. When an internal client requests an image, your registry pulls it from the public source, stores a local copy, and then serves it. This automates the process after initial setup.
You can use Harbor as a pull-through cache (Harbor calls this Replication Rules).
Option 3: Offline Transfer using docker save and docker push
This method is designed for fully air-gapped environments where no single machine has simultaneous access to the internet and the private registry.
On the Internet-Connected Machine
Log in and pull the image.
Save the image to a tar archive. This command packages the image into a single, portable file.
Use a tool like scp or sftp or a USB drive to copy the generated .tar archives from the internet-connected machine to your isolated systems.
On the Machine with Private Registry Access
Load the image from the archive.
Log in to your private registry.
Tag the loaded image. The image loaded from the tar file will retain its original tag. You must re-tag it for your private registry.
Push the image to your private registry.
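The full offline sequence might be sketched as follows (registry URL and archive name are illustrative):

```bash
# On the internet-connected machine
docker login docker.mariadb.com
docker pull docker.mariadb.com/mariadb-enterprise-operator:25.8.0
docker save docker.mariadb.com/mariadb-enterprise-operator:25.8.0 \
  -o mariadb-enterprise-operator-25.8.0.tar

# ... transfer the .tar archive to the isolated machine ...

# On the machine with private registry access
docker load -i mariadb-enterprise-operator-25.8.0.tar
docker login registry.example.com
docker tag docker.mariadb.com/mariadb-enterprise-operator:25.8.0 \
  registry.example.com/mariadb/mariadb-enterprise-operator:25.8.0
docker push registry.example.com/mariadb/mariadb-enterprise-operator:25.8.0
```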
Option 4: For OpenShift, you can use OpenShift Disconnected Installation Mirroring
Option 5: Offline Transfer for containerd Environments
This method is for air-gapped environments that use containerd as the container runtime (common in Kubernetes) and do not have the Docker daemon. It uses the ctr command-line tool to import, tag, and push images.
1. On the Bastion Host (with Internet)
First, on a machine with internet access, you'll pull the images and export them to portable archive files.
Pull the Container Image Use the ctr image pull command to download the required image from its public registry.
Note: If your bastion host uses Docker, you can use docker pull instead as we did in Option 3.
Export the Image to an Archive Next, export the pulled image to a .tar file using ctr image export. The format is ctr image export <output-filename> <image-name>.
Note: To find the exact image name as containerd sees it, run ctr image ls. The Docker equivalent for this step is docker save <image-name> -o <output-filename>.
Repeat this process for all the container images you need to transfer.
2. Transfer the Archives
Use a tool like scp or sftp or a USB drive to copy the generated .tar archives from the bastion host to your isolated systems.
3. On the Isolated Host
Finally, on the isolated system, you will import the archives into containerd. Official Docs
Importing for Kubernetes (important!): if the images need to be available to Kubernetes, you must import them into the k8s.io namespace by adding the -n=k8s.io flag.
Verify the Image Check that containerd recognizes the newly imported image.
You can also verify that the Container Runtime Interface (CRI) sees it by running:
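A sketch of the import and verification steps (the archive name is illustrative):

```bash
# Import into the k8s.io namespace so Kubernetes can use the image
ctr -n=k8s.io image import mariadb-enterprise-operator-25.8.0.tar

# Verify that containerd recognizes it
ctr -n=k8s.io image ls | grep mariadb-enterprise-operator

# Verify that the CRI sees it as well
crictl images | grep mariadb-enterprise-operator
```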
Important Note
The examples above use the mariadb-enterprise-operator:25.8.0 image. You must repeat the chosen process for all required container images. A complete list is available here
Developing Applications with MariaDB & Containers via Docker
Connections
Explains how application clients connect to databases managed by the Operator, including the use of Kubernetes Services and MaxScale proxies.
MariaDB Enterprise Kubernetes Operator provides the Connection resource to configure connection strings for applications connecting to MariaDB. This resource creates and maintains a Kubernetes Secret containing the credentials and connection details needed by your applications.
Connection CR
A Connection resource declares an intent to create a connection string for applications to connect to a MariaDB instance. When reconciled, it creates a Secret containing the DSN and optionally, individual connection parameters:
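A minimal sketch of a Connection resource (field values are illustrative; consult the API reference for the full schema):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection
spec:
  mariaDbRef:
    name: mariadb
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password
  database: mariadb
  secretName: connection
```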
The operator creates a Secret named connection containing a DSN and individual fields like username, password, host, port, and database. Applications can mount this Secret to obtain the connection details.
Service selection
By default, the host in the generated Secret points to the Service named after the referenced MariaDB or MaxScale resource (the same as metadata.name). For HA MariaDB, the Service <mariadb-name>-primary is used instead, so only the primary Pod is targeted.
Alternatively, you may override the default behaviour by setting serviceName and connect to another Service.
Please refer to the to identify which Services are available.
Credential generation
The operator can automatically generate credentials for users via the GeneratedSecretKeyRef type with the generate: true field. This feature is available in the MariaDB, MaxScale, and User resources.
For example, when creating a MariaDB resource with an initial user:
The operator will automatically generate a random password and store it in a Secret named app-password. You can then reference this Secret in your Connection resource:
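Sketched together, the two resources might look like this (names such as app and app-password are illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  storage:
    size: 1Gi
  username: app
  passwordSecretKeyRef:
    name: app-password
    key: password
    generate: true
---
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection-app
spec:
  mariaDbRef:
    name: mariadb
  username: app
  passwordSecretKeyRef:
    name: app-password
    key: password
```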
If you prefer to provide your own password, you can opt-out from random password generation by either not providing the generate field or setting it to false. This enables the use of GitOps tools like or to seed the password.
Secret template
The secretTemplate field allows you to customize the output Secret, allowing you to include individual connection parameters:
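For example (the key names are illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection
spec:
  mariaDbRef:
    name: mariadb
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password
  secretTemplate:
    key: dsn
    usernameKey: username
    passwordKey: password
```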
The resulting Secret will contain:
dsn: The full connection string
username: The database username
password: The database password
Custom DSN format
You can customize the DSN format using Go templates via the format field:
Available template variables:
{{ .Username }}: The database username
{{ .Password }}: The database password
{{ .Host }}: The database host
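A sketch of a custom format (this example assumes the format field lives under secretTemplate, and that {{ .Port }} and {{ .Database }} follow the same pattern as the variables above; verify both against the API reference):

```yaml
spec:
  secretTemplate:
    key: dsn
    format: mysql://{{ .Username }}:{{ .Password }}@{{ .Host }}:{{ .Port }}/{{ .Database }}
```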
Refer to the for additional details about the template syntax.
TLS authentication
Connection supports TLS client certificate authentication as an alternative to password authentication:
When using TLS authentication, provide tlsClientCertSecretRef instead of passwordSecretKeyRef. The referenced Secret must be a Kubernetes TLS Secret containing the client certificate and key.
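A sketch of such a Connection (the Secret name is illustrative; the Secret must be of type kubernetes.io/tls):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection-tls
spec:
  mariaDbRef:
    name: mariadb
  username: mariadb
  tlsClientCertSecretRef:
    name: mariadb-client-cert
```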
Cross-namespace connections
Connection resources can reference MariaDB instances in different namespaces:
This creates a Connection in the app namespace that references a MariaDB in the mariadb namespace.
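For example, using the app and mariadb namespaces described above:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection
  namespace: app
spec:
  mariaDbRef:
    name: mariadb
    namespace: mariadb
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password
```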
MaxScale connections
Connection resources can reference MaxScale instances using maxScaleRef:
When referencing a MaxScale, the operator uses the MaxScale Service and its listener port. The health check will consume connections from the MaxScale connection pool.
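A sketch of such a reference (names are illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection-maxscale
spec:
  maxScaleRef:
    name: maxscale
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password
```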
External MariaDB connections
Connection resources can reference ExternalMariaDB instances by specifying kind: ExternalMariaDB in the mariaDbRef:
This is useful for generating connection strings to external MariaDB instances running outside of Kubernetes.
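A sketch of such a reference (the resource name is illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection-external
spec:
  mariaDbRef:
    name: external-mariadb
    kind: ExternalMariaDB
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password
```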
Health checking
The healthCheck field configures periodic health checks to verify database connectivity:
interval: How often to perform health checks (default: 30s)
retryInterval: How often to retry after a failed health check (default: 3s)
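A sketch of a Connection with explicit health check settings (matching the defaults listed above):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection
spec:
  mariaDbRef:
    name: mariadb
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb
    key: password
  healthCheck:
    interval: 30s
    retryInterval: 3s
```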
The Connection status reflects the health check results, allowing you to monitor connectivity issues through Kubernetes.
The MariaDB pam plugin facilitates user authentication by interfacing with the Pluggable Authentication Modules (PAM) framework, enabling diverse and centralized authentication schemes.
Currently the enterprise operator utilizes this plugin to provide support for:
LDAP based authentication
LDAP
This guide outlines the process of configuring MariaDB to authenticate users against an LDAP or Active Directory service. The integration is achieved by using MariaDB's Pluggable Authentication Module (PAM) plugin, which delegates authentication requests to the underlying Linux PAM framework.
How Does It Work?
To enable LDAP authentication for MariaDB through PAM, several components work in tandem:
PAM (Pluggable Authentication Modules): A framework used by Linux and other UNIX-like systems to consolidate authentication tasks. Applications like MariaDB can use PAM to authenticate users without needing to understand the underlying authentication mechanism. Operations such as system login, screen unlocking, and sudo access commonly use PAM.
nss-pam-ldapd: This is the software package that provides the necessary bridge between PAM and an LDAP server. It includes the core components required for authentication.
pam_ldap.so: A specific PAM module, provided by the nss-pam-ldapd package. This module is the "plug-in" that the PAM framework loads to handle authentication requests destined for an LDAP server.
The nslcd daemon runs as a sidecar container, and communication happens through a shared Unix socket, following the container best practice of keeping a single process per container.
What is needed for LDAP Auth?
nslcd is configured with two files: nslcd.conf, which tells the daemon about the LDAP server, and nsswitch.conf, which determines the sources from which to obtain name-service information.
nslcd can be configured to run as a specific user based on the uid and gid properties specified in the config file. That user must have sufficient permissions to read and write /var/run/nslcd, must own both nslcd.conf and nsswitch.conf, and the files' permissions should not be too open (0600).
Both of these configuration files will be attached later on in the example given.
nslcd.conf
The /etc/nslcd.conf is the configuration file for LDAP nameservice daemon.
In a production environment it is recommended to use LDAPS (LDAP secure), which uses traditional TLS encryption to secure data in transit. To do so, you need to add the following to your nslcd.conf file:
nsswitch.conf
The Name Service Switch (NSS) configuration file is located at /etc/nsswitch.conf. It is used by the GNU C Library and certain other applications to determine the sources from which to obtain name-service information in a range of categories, and in what order. Each category of information is identified by a database name.
Installing The PAM Plugin
The pam plugin is not enabled by default (even though it is installed). To enable it, you should add the following lines to your MariaDB Custom Resource:
See below for a complete example.
Combining It All Together
First, we need to create the ConfigMaps and Secrets that will store nsswitch.conf, nslcd.conf, and the MariaDB PAM module.
Make sure to adapt nslcd.conf to match your LDAP server configuration.
mariadb-nss-config.yaml:
kubectl apply -f mariadb-nss-config.yaml
Now that our configuration is done, we need to create the MariaDB custom resource along with needed configurations.
mariadb.yaml:
kubectl apply -f mariadb.yaml
Finally, we need to create the user in the database; it must have the same name as a user in the LDAP server. In the example below that's ldap-user. We also create the mariadb-ldap Secret, which holds the name of the plugin we are using as well as the PAM module to load.
mariadb-user.yaml:
kubectl apply -f mariadb-user.yaml
After a few seconds, the operator should have created the user. To verify that everything works as expected, modify the <password> field below and run:
You should see something along the lines of:
LDAPS
If you followed the instructions for setting up a basic MariaDB instance with LDAP, you need to fetch the public certificate that your LDAP server is set up with and add it to a Secret called mariadb-ldap-tls.
If you have the certificate locally in a file called tls.crt you can run:
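A sketch of that command, using the Secret name mentioned above:

```bash
kubectl create secret generic mariadb-ldap-tls --from-file=tls.crt=tls.crt
```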
With MaxScale
To put MaxScale in front of your PAM-enabled MariaDB cluster, configure MaxScale to skip verifying the passwords of incoming clients and instead assume they are correct. Authentication failures still occur, but only when MaxScale authenticates to the backend servers.
maxscale-ldap.yaml:
kubectl apply -f maxscale-ldap.yaml
Ref:
Known Issues
Slow Start On KIND
This may be a problem with the maximum number of file handles a process can allocate. Some systems set this value very high, which causes an issue. To remedy this, delete your kind cluster and run:
nslcd (Name Service Lookup Daemon): This daemon acts as an intermediary service. The pam_ldap.so module does not communicate directly with the LDAP server. Instead, it forwards authentication requests to the nslcd daemon, which manages the connection and communication with the LDAP directory. This design allows for connection caching and a more robust separation of concerns.
# /etc/nslcd.conf: Configuration file for nslcd(8)
# The user/group nslcd will run as. Note that these should not be LDAP users.
# required to be `mysql`
uid mysql
# required to be `mysql`
gid mysql
# The location of the LDAP server.
uri ldap://openldap-service.default.svc.cluster.local:389
# The search base that will be used for all queries.
base dc=openldap-service,dc=default,dc=svc,dc=cluster,dc=local
# The distinguished name with which to bind to the directory server for lookups.
# This is a service account used by the daemon.
binddn cn=admin,dc=openldap-service,dc=default,dc=svc,dc=cluster,dc=local
bindpw PASSWORD_REPLACE-ME
# Change the protocol to `ldaps`
+uri ldaps://openldap-service.default.svc.cluster.local:636
-uri ldap://openldap-service.default.svc.cluster.local:389
# ...
+tls_reqcert demand # Look at: https://linux.die.net/man/5/ldap.conf then search for TLS_REQCERT
+tls_cacertfile /etc/openldap/certs/tls.crt # You will need to mount this certificate (from a secret) later
---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: mariadb-nslcd-secret
stringData:
nslcd.conf: |
# /etc/nslcd.conf: Configuration file for nslcd(8)
# The user/group nslcd will run as. Note that these should not be LDAP users.
uid mysql # required to be `mysql`
gid mysql # required to be `mysql`
# The location of the LDAP server.
uri ldap://openldap-service.default.svc.cluster.local:389
# The search base that will be used for all queries.
base dc=openldap-service,dc=default,dc=svc,dc=cluster,dc=local
# The distinguished name with which to bind to the directory server for lookups.
# This is a service account used by the daemon.
binddn cn=admin,dc=openldap-service,dc=default,dc=svc,dc=cluster,dc=local
bindpw PASSWORD_REPLACE-ME
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mariadb-nsswitch-configmap
labels:
enterprise.mariadb.com/watch: ""
data:
nsswitch.conf: |
passwd: files ldap
group: files ldap
shadow: files ldap
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mariadb-pam-configmap
labels:
enterprise.mariadb.com/watch: ""
data:
mariadb: |
# This is needed to tell PAM to use pam_ldap.so
auth required pam_ldap.so
account required pam_ldap.so
---
apiVersion: v1
kind: Secret
metadata:
name: mariadb # Used to hold the mariadb and root user passwords
labels:
enterprise.mariadb.com/watch: ""
stringData:
password: MariaDB11!
root-password: MariaDB11!
---
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: mariadb
spec:
rootPasswordSecretKeyRef:
name: mariadb
key: root-password
username: mariadb
passwordSecretKeyRef:
name: mariadb
key: password
generate: true
database: mariadb
port: 3306
storage:
size: 1Gi
service:
type: LoadBalancer
metadata:
annotations:
metallb.universe.tf/loadBalancerIPs: 172.18.0.20
myCnf: |
[mariadb]
bind-address=*
default_storage_engine=InnoDB
binlog_format=row
innodb_autoinc_lock_mode=2
innodb_buffer_pool_size=800M
max_allowed_packet=256M
plugin_load_add = auth_pam # Load auth plugin
resources:
requests:
cpu: 1
memory: 128Mi
limits:
memory: 1Gi
metrics:
enabled: true
volumes: # Attach `nslcd.conf`, `nsswitch.conf` and `mariadb` (pam). Also add an emptyDir volume for `nslcd` socket
- name: nslcd
secret:
secretName: mariadb-nslcd-secret
defaultMode: 0600
- name: nsswitch
configMap:
name: mariadb-nsswitch-configmap
defaultMode: 0600
- name: mariadb-pam
configMap:
name: mariadb-pam-configmap
defaultMode: 0600
- name: nslcd-run
emptyDir: {}
sidecarContainers:
# The `nslcd` daemon runs as a sidecar container
- name: nslcd
image: docker.mariadb.com/nslcd:0.9.10-13
volumeMounts:
- name: nslcd
mountPath: /etc/nslcd.conf
subPath: nslcd.conf
- name: nsswitch
mountPath: /etc/nsswitch.conf
subPath: nsswitch.conf
# nslcd-run is missing because volumeMounts from main container are shared with sidecar
volumeMounts:
- name: mariadb-pam
mountPath: /etc/pam.d/mariadb
subPath: mariadb
- name: nslcd-run
mountPath: /var/run/nslcd
---
apiVersion: v1
kind: Secret
metadata:
name: mariadb-ldap
stringData:
plugin: pam # name of the plugin, must be `pam`
pamModule: mariadb # This is the name of the pam config file placed in `/etc/pam.d/`
---
apiVersion: enterprise.mariadb.com/v1alpha1
kind: User
metadata:
name: ldap-user # This user must exist already in your ldap server.
spec:
mariaDbRef:
name: mariadb
host: "%" # Don't specify the ldap host here. Keep this as is
passwordPlugin:
pluginNameSecretKeyRef:
name: mariadb-ldap
key: plugin
pluginArgSecretKeyRef:
name: mariadb-ldap
key: pamModule
cleanupPolicy: Delete
requeueInterval: 10h
retryInterval: 30s
If you don't see a command prompt, try pressing enter.
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 95
Server version: 11.4.7-4-MariaDB-enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
docker --version
docker-compose --version
# Test Docker
docker run hello-world
# Verify ports are free
netstat -ano | findstr :8000
netstat -ano | findstr :8002
netstat -ano | findstr :3306
# No output = ports are free ✓
# Navigate to your MariaDB AI RAG deployment directory
cd "<path-to-your-mariadb-ai-rag-directory>"
# Verify required files exist
Get-ChildItem | Select-Object Name
# Required files:
# ✓ ai-nexus.deb
# ✓ Dockerfile
# ✓ docker-compose.yml
# ✓ start-services.sh
# ✓ config.env.secure.local
# Edit configuration file
notepad config.env.secure.local
# Update this line with your actual API key:
# GEMINI_API_KEY=YOUR_ACTUAL_API_KEY_HERE
# Save and close
# Ensure you're in the MariaDB AI RAG directory
docker build -t ai-nexus-image .
docker-compose up -d
[+] Running 3/3
✔ Network ai-nexus-network Created
✔ Container mysql-db Started
✔ Container ai-nexus Started
docker logs ai-nexus -f
✓ RAG API is ready! (took ~30 seconds)
Starting MCP server...
Adaptive MCP Server ready on 0.0.0.0:8002
docker-compose ps
NAME STATUS PORTS
ai-nexus Up X minutes 0.0.0.0:8000->8000/tcp, 0.0.0.0:8002->8002/tcp
mysql-db Up X minutes (healthy) 0.0.0.0:3306->3306/tcp
# Test RAG API
Invoke-RestMethod -Uri "http://localhost:8000/health"
# Open Swagger UI
Start-Process "http://localhost:8000/docs"
# Ensure you're in the MariaDB AI RAG directory
docker build -t ai-nexus-image .
# Check logs
docker logs ai-nexus --tail 100
docker logs mysql-db --tail 50
# Rebuild and restart
docker build -t ai-nexus-image .
docker-compose down
docker-compose up -d
# Check MariaDB status
docker logs mysql-db --tail 20
# Wait for healthy status
docker-compose ps
# Look for "(healthy)" next to mysql-db
# Verify DB_HOST in config
# Should be: DB_HOST=mysql-db
# Find process using port
netstat -ano | findstr :8000
# Stop process (replace <PID>)
Stop-Process -Id <PID> -Force
# Or change port in docker-compose.yml
# Verify secret keys are identical
docker exec ai-nexus env | Select-String "SECRET"
# All three must match:
# SECRET_KEY
# JWT_SECRET_KEY
# MCP_AUTH_SECRET_KEY
# If different, edit config and restart
docker-compose down
docker-compose up -d
# Test Gemini API key
$apiKey = "YOUR_API_KEY"
$uri = "https://generativelanguage.googleapis.com/v1beta/models?key=$apiKey"
Invoke-RestMethod -Uri $uri
# If error: Get new key from https://makersuite.google.com/app/apikey
# Update in config.env.secure.local or Vault
# Restart: docker restart ai-nexus
# Increase timeout in start-services.sh
# Edit: MAX_WAIT=300 # 5 minutes
# Rebuild
docker build -t ai-nexus-image .
docker-compose down
docker-compose up -d
docker-compose ps
# All services
docker-compose logs -f
# Specific service
docker logs ai-nexus -f
docker logs mysql-db -f
# Last N lines
docker logs ai-nexus --tail 100
# Stop MariaDB AI RAG
docker-compose down
# Stop Vault (if using Vault mode)
docker-compose -f "Localvault/docker-compose.vault.yml" down
# Standalone mode
docker-compose up -d
# Vault mode
docker-compose --env-file config.env.vault.local up -d
# Restart all
docker-compose restart
# Restart specific service
docker restart ai-nexus
docker-compose down -v
docker exec -it ai-nexus /bin/bash
docker stats ai-nexus mysql-db
# Build
docker build -t ai-nexus-image .
# Start
docker-compose up -d
# Stop
docker-compose down
# Setup Vault (one-time)
.\Localvault\setup_vault_local.ps1
# Start
docker-compose --env-file config.env.vault.local up -d
# Stop
docker-compose down
docker-compose -f "Localvault/docker-compose.vault.yml" down
# Stop current mode
docker-compose down
# Start different mode
docker-compose up -d # Standalone
docker-compose --env-file config.env.vault.local up -d # Vault
# Check Ubuntu version
lsb_release -a
# Check disk space
df -h /
# Check ports are free
sudo netstat -tuln | grep -E ':(8000|8002|3306)'
# No output = ports available
# Start RAG API
/opt/rag-in-a-box/bin/rag-api --config /path/to/config.env
# Start MCP Server
CONFIG_FILE=/path/to/config.env /opt/rag-in-a-box/bin/mcp-server
sudo netstat -tuln | grep -E ':(8000|8002)'
# Test RAG API
curl http://localhost:8000/health
# Expected: {"status":"healthy","database":"connected"}
# Test MCP Server
curl http://localhost:8002/health
# Expected: {"status":"healthy"}
# Test API info
curl http://localhost:8000/
INFO: Started server process
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000
# Generate token
curl -X POST "http://localhost:8000/token" \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"your_password"}'
# Save token for next commands
export TOKEN="eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
# Create test document
echo "This is a test document for the MariaDB AI RAG system. It contains sample text for testing." > test_document.txt
# Upload document
curl -X POST "http://localhost:8000/documents/ingest" \
-H "Authorization: Bearer $TOKEN" \
-F "file=@test_document.txt"
# Expected output:
# {"document_id":1,"filename":"test_document.txt","chunks_created":1,"status":"success"}
# Query the document
curl -X POST "http://localhost:8000/orchestrate/generation" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"query":"What is this document about?"}'
# Expected: AI-generated response with sources
# Login to MariaDB
mariadb -u root -p kb_chunks
# Enter password: [your_password]
-- Show tables
SHOW TABLES;
-- Check documents
SELECT id, filename, created_at FROM documents_DEMO_gemini;
-- Check embeddings
SELECT COUNT(*) FROM vdb_tbl_DEMO_gemini;
-- Exit
EXIT;
hostname -I
sudo systemctl status mariadb
sudo systemctl start mariadb
nano /path/to/config.env
# Check for typos, missing values
sudo lsof -i :8000
sudo lsof -i :8002
# Stop conflicting service or kill process
# Verify all three secret keys are identical
sudo grep SECRET_KEY /path/to/config.env
# Should show same value for:
# SECRET_KEY=...
# JWT_SECRET_KEY=...
# MCP_AUTH_SECRET_KEY=...
# If different, fix and restart
nano /path/to/config.env
# Test Gemini API key
API_KEY="YOUR_KEY"
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$API_KEY"
# If invalid, update config
nano /path/to/config.env
# Update: GEMINI_API_KEY=...
# Restart services
/opt/rag-in-a-box/bin/rag-api --config /path/to/config.env
CONFIG_FILE=/path/to/config.env /opt/rag-in-a-box/bin/mcp-server
# Find process using port
sudo lsof -i :8000
sudo lsof -i :8002
# Kill process (if safe)
sudo kill <PID>
# Check memory
free -h
top
# Add swap if needed (4GB example)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Check service status
sudo systemctl status mariadb
# Test RAG API
curl http://localhost:8000/health
# Expected: {"status":"healthy","database":"connected"}
# Test MCP Server
curl http://localhost:8002/health
# Expected: {"status":"healthy"}
# Test API info
curl http://localhost:8000/
# Monitor disk space
df -h
# Install new version
sudo apt install -y ./ai-nexus-new-version.deb
# Start services in their own terminals
/opt/rag-in-a-box/bin/rag-api --config /path/to/config.env
CONFIG_FILE=/path/to/config.env /opt/rag-in-a-box/bin/mcp-server
# Verify
curl http://localhost:8000/health
# Generate secure key
python3 -c "import secrets; print(secrets.token_urlsafe(64))"
# Use same value for all three keys in config
nano /path/to/config.env
# Create dedicated database user
sudo mariadb -u root -p
CREATE USER 'rag_user'@'localhost' IDENTIFIED BY 'your_secure_password';
GRANT ALL PRIVILEGES ON kb_chunks.* TO 'rag_user'@'localhost';
FLUSH PRIVILEGES;
EXIT;
/opt/rag-in-a-box/bin/rag-api # RAG API binary
/opt/rag-in-a-box/bin/mcp-server # MCP Server binary
/opt/rag-in-a-box/config/config.env.template # Configuration file
/var/log/mysql/error.log # MariaDB logs
MariaDB (Port 3306)
↓
RAG API (Port 8000)
↓
MCP Server (Port 8002)
Ubuntu System (Native)
├── MariaDB Service (systemd)
│ └── Database: kb_chunks (Port 3306)
├── RAG API Service (systemd)
│ └── FastAPI Server (Port 8000)
└── MCP Server Service (systemd)
└── FastAPI Server (Port 8002)
sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf
[mysqld]
# Adjust based on available RAM
innodb_buffer_pool_size = 4G # 50-70% of RAM
max_connections = 200
innodb_log_file_size = 512M
query_cache_size = 0
query_cache_type = 0
# Monitor resources
htop
# Or
top
# Check disk I/O
iostat -x 1
# Check network
iftop
Synchronous Multi-Master With Galera
MariaDB Enterprise Kubernetes Operator provides cloud native support for provisioning and operating multi-master MariaDB clusters using Galera. This setup enables writes on a single node and reads on all nodes, enhancing availability and allowing scalability across multiple nodes.
In certain circumstances, all the nodes of your cluster may go down at the same time, something Galera is not able to recover from by itself; manual action is required to bring the cluster up again, as documented in the Galera documentation. The MariaDB Enterprise Kubernetes Operator encapsulates this operational expertise in the MariaDB CR: you just need to declaratively specify spec.galera, as explained in more detail later in this guide.
To accomplish this, after the MariaDB cluster has been provisioned, the operator will regularly monitor the cluster's status to make sure it is healthy. If any issues are detected, the operator will initiate the recovery process to restore the cluster to a healthy state. During this process, the operator will set status conditions in the MariaDB and emit Events so you have a better understanding of the recovery progress and the underlying activities being performed. For example, you may want to know which Pods were out of sync to further investigate infrastructure-related issues (i.e. networking, storage...) on the nodes where these Pods were scheduled.
MariaDB configuration
The easiest way to get a MariaDB Galera cluster up and running is setting spec.galera.enabled = true:
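For example, a minimal sketch (metadata names and sizing are illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  replicas: 3
  storage:
    size: 1Gi
  galera:
    enabled: true
```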
This relies on sensible defaults set by the operator, which may not be suitable for your Kubernetes cluster. This can be solved by overriding the defaults, so you have fine-grained control over the Galera configuration.
Refer to the to better understand the purpose of each field.
Storage
By default, the operator provisions two PVCs for running Galera:
Storage PVC: Used to back the MariaDB data directory, mounted at /var/lib/mysql.
Config PVC: Where the Galera config files are located, mounted at /etc/mysql/conf.d.
However, you are also able to use just one PVC for keeping both the data and the config files:
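A sketch of this single-PVC layout (the reuseStorageVolume field name is an assumption based on the operator's Galera config section; verify it against the API reference):

```yaml
spec:
  storage:
    size: 1Gi
  galera:
    enabled: true
    config:
      reuseStorageVolume: true
```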
Wsrep provider
You are able to pass extra options to the Galera wsrep provider by using the galera.providerOptions field:
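For example (the gcs.fc_limit value is purely illustrative):

```yaml
spec:
  galera:
    enabled: true
    providerOptions:
      gcs.fc_limit: "64"
```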
Note that ist.recv_addr cannot be set by the user: the operator automatically configures it to the Pod IP, which a user cannot know beforehand.
A list of the available options can be found in the .
IPv6 support
If your Kubernetes cluster runs with IPv6, the operator automatically detects the IPv6 addresses of your Pods and configures several options to ensure that the Galera protocol runs smoothly over IPv6.
Galera cluster recovery
MariaDB Enterprise Kubernetes Operator monitors the Galera cluster and acts accordingly to recover it if needed. This feature is enabled by default, but you may tune it as you need:
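A sketch of the recovery tunables (values are illustrative; field names follow the recovery section of the Galera spec):

```yaml
spec:
  galera:
    enabled: true
    recovery:
      enabled: true
      minClusterSize: "50%"
      clusterHealthyTimeout: 30s
      clusterMonitorInterval: 10s
```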
The minClusterSize field indicates the minimum cluster size (either an absolute number of replicas or a percentage) for the operator to consider the cluster healthy. If the cluster is unhealthy for longer than the period defined in clusterHealthyTimeout (30s by default), the operator initiates a cluster recovery process. The process is explained in the and consists of the following steps:
Recover the sequence number from the grastate.dat on each node.
Trigger a recovery Job to obtain the sequence numbers in case the previous step didn't manage to.
Mark the node with the highest sequence number (the bootstrap node) as safe to bootstrap.
The operator monitors the Galera cluster health periodically and performs the cluster recovery described above if needed. You are able to tune the monitoring interval via the clusterMonitorInterval field.
Refer to the to better understand the purpose of each field.
Galera recovery Job
During the recovery process, a Job is triggered for each MariaDB Pod to obtain the sequence numbers. It's crucial for this Job to succeed; otherwise, the recovery process will fail. As a user, you are responsible for adjusting this Job to allocate sufficient resources and provide the necessary metadata to ensure its successful completion.
For example, if you're using a service mesh like Istio, it's important to add the sidecar.istio.io/inject=false label. Without this label, the Job will not complete, which would prevent the recovery process from finishing successfully.
Force cluster bootstrap
Use this option only in exceptional circumstances. Not selecting the Pod with the highest sequence number may result in data loss.
Ensure you unset forceClusterBootstrapInPod after completing the bootstrap so that the operator can choose the appropriate Pod to bootstrap from in the event of a cluster recovery.
You have the ability to manually select which Pod is used to bootstrap a new cluster during the recovery process by setting forceClusterBootstrapInPod:
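For instance, a sketch of the field (the Pod name matches the example discussed below; the spec path is an assumption):

```yaml
spec:
  galera:
    recovery:
      # force the operator to bootstrap the new cluster from this Pod
      forceClusterBootstrapInPod: "mariadb-galera-0"
```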
This should only be used in exceptional circumstances:
You are absolutely certain that the chosen Pod has the highest sequence number.
The operator has not yet selected a Pod to bootstrap from.
You can verify this with the following command:
In this case, assuming that mariadb-galera-2 sequence is lower than 350454, it should be safe to bootstrap from mariadb-galera-0.
Finally, after your cluster has been bootstrapped, remember to unset forceClusterBootstrapInPod to allow the operator to select the appropriate node for bootstrapping in the event of a cluster recovery.
Bootstrap Galera cluster from existing PVCs
MariaDB Enterprise Kubernetes Operator will never delete your MariaDB PVCs. Whenever you delete a MariaDB resource, the PVCs remain intact, so you can reuse them to re-provision a new cluster.
That said, Galera is unable to form a cluster from pre-existing state on its own; it requires a process to identify which Pod has the highest sequence number to bootstrap a new cluster. That's exactly what the operator does: whenever a new MariaDB Galera cluster is created and previously created PVCs exist, a cluster recovery process is automatically triggered.
Quickstart
Apply the following manifests to get started with Galera in Kubernetes:
Next, check the MariaDB status and the resources created by the operator:
Let's now proceed with simulating a Galera cluster failure by deleting all the Pods at the same time:
After some time, we will see the MariaDB enter a non-Ready state:
Eventually, the operator will kick in and recover the Galera cluster:
Finally, the MariaDB resource will become Ready and your Galera cluster will be operational again:
Troubleshooting
This section shows you how to diagnose your Galera cluster when something goes wrong. In these situations, observability is key to understanding the problem, so we recommend following these steps before jumping into debugging:
Inspect MariaDB status conditions.
Make sure network connectivity is fine by checking that you have an Endpoint per Pod in your Galera cluster.
Check the events associated with the MariaDB object, as they provide significant insights for diagnosis, particularly within the context of cluster recovery.
Enable debug logs in mariadb-enterprise-operator.
Get the logs of all the MariaDB Pod containers: not only the main mariadb container, but also the agent and init ones.
Once you are done with these steps, you will have the context required to jump ahead to the section to see if any of them matches your case.
Common errors
Galera cluster recovery not progressing
If your MariaDB Galera cluster has been in GaleraNotReady state for a long time, the recovery process might not be progressing. You can diagnose this by checking:
Operator logs.
Galera recovery status:
MariaDB events:
If you have Pods named <mariadb-name>-<ordinal>-recovery-<suffix> running for a long time, check their logs to see whether something is wrong.
One of the reasons could be misconfigured Galera recovery Jobs, so please make sure you read . If, after checking all the points above, there are still no clear symptoms of what could be wrong, continue reading.
First of all, you could attempt to forcefully bootstrap a new cluster as described in . Refrain from doing so if the conditions described in the docs are not met.
Alternatively, if you can afford some downtime and your PVCs are in healthy state, you may follow this procedure:
Delete your existing MariaDB; this will leave your PVCs intact.
Create your MariaDB again; this will trigger a Galera recovery process as described in .
As a last resort, you can always delete the PVCs and bootstrap a new MariaDB from a backup as documented .
Permission denied writing Galera configuration
This error occurs when the user that runs the container does not have enough privileges to write in /etc/mysql/mariadb.conf.d:
To mitigate this, by default, the operator sets the following securityContext in the MariaDB's StatefulSet:
This enables the CSIDriver and the kubelet to recursively set the ownership of the /etc/mysql/mariadb.conf.d folder to the group 999, which is the one expected by MariaDB. It is important to note that not all CSIDriver implementations support this feature; see the for further information.
Unauthorized error disabling bootstrap
This situation occurs when the mariadb-enterprise-operator credentials passed to the agent for authentication are either invalid or the agent is unable to verify them. To confirm this, ensure that both the mariadb-enterprise-operator and the MariaDB ServiceAccounts are able to create TokenReview objects:
If that's not the case, check that the following ClusterRole and ClusterRoleBindings are available in your cluster:
mariadb-enterprise-operator:auth-delegator is the ClusterRoleBinding bound to the mariadb-enterprise-operator ServiceAccount, which is created by the helm chart, so you can re-install the helm release in order to recreate it:
mariadb-galera:auth-delegator is the ClusterRoleBinding bound to the mariadb-galera ServiceAccount, which is created on the fly by the operator as part of the reconciliation logic. You may check the mariadb-enterprise-operator logs to see if there are any issues reconciling it.
Bear in mind that ClusterRoleBindings are cluster-wide resources that are not garbage collected when the MariaDB owner object is deleted, which means that creating and deleting MariaDBs could leave leftovers in your cluster. These leftovers can lead to RBAC misconfigurations, as the ClusterRoleBinding might not be pointing to the right ServiceAccount. To overcome this, you can override the ClusterRoleBinding name by setting the spec.galera.agent.kubernetesAuth.authDelegatorRoleName field.
Timeout waiting for Pod to be Synced
This error appears in the mariadb-enterprise-operator logs when a Pod remains in a non-Synced state for longer than spec.galera.recovery.podRecoveryTimeout. Right after, the operator restarts the Pod.
Increase this timeout if you consider that your Pod may take longer to recover.
Galera cluster bootstrap timed out
This error is returned by the mariadb-enterprise-operator after exceeding spec.galera.recovery.clusterBootstrapTimeout while recovering the cluster. At this point, the operator resets the recovered sequence numbers and starts again from a clean state.
Increase this timeout if you consider that your Galera cluster may take longer to recover.
A logical backup is a backup that contains the logical structure of the database, such as tables, indexes, and data, rather than the physical storage format. It is created using mariadb-dump, which generates SQL statements that can be used to recreate the database schema and populate it with data.
Logical backups serve not just as a source of restoration, but also enable data mobility between MariaDB instances. These backups are called "logical" because they are independent from the MariaDB topology, as they only contain DDLs and INSERT statements to populate data.
Although logical backups are a great fit for data mobility and migrations, they are not as efficient as physical backups for large databases. For this reason, physical backups are the recommended method for backing up MariaDB databases, especially in production environments.
Storage types
Currently, the following storage types are supported:
S3 compatible storage: Store backups in an S3 compatible storage, such as or .
PVCs: Use the available in your Kubernetes cluster to provision a PVC dedicated to store the backup files.
Kubernetes volumes: Use any of the supported natively by Kubernetes.
Our recommendation is to store the backups externally in an S3 compatible storage.
Backup CR
You can take a one-time backup of your MariaDB instance by declaring the following resource:
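A minimal sketch of such a resource (the apiVersion, storage size, and names are illustrative assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb-galera
  storage:
    # a PVC provisioned with the default StorageClass holds the dump files
    persistentVolumeClaim:
      resources:
        requests:
          storage: 1Gi
      accessModes:
        - ReadWriteOnce
```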
This will use the default StorageClass to provision a PVC that will hold the backup files, but ideally you should use an S3 compatible storage:
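A sketch storing backups in a local Minio instance, with credentials and TLS configured via Secret references as described below (bucket, endpoint, and Secret names are illustrative):

```yaml
spec:
  mariaDbRef:
    name: mariadb-galera
  storage:
    s3:
      bucket: backups
      endpoint: minio.minio.svc.cluster.local:9000
      # static credentials read from a Secret
      accessKeyIdSecretKeyRef:
        name: minio
        key: access-key-id
      secretAccessKeySecretKeyRef:
        name: minio
        key: secret-access-key
      tls:
        enabled: true
        caSecretKeyRef:
          name: minio-ca
          key: ca.crt
```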
By providing the authentication details and the TLS configuration via references to Secret keys, this example will store the backups in a local Minio instance.
Alternatively, you can use dynamic credentials from an EKS ServiceAccount using EKS Pod Identity or IRSA:
If you leave out the accessKeyIdSecretKeyRef and secretAccessKeySecretKeyRef credentials and point to the correct serviceAccountName, the backup Job will use the dynamic credentials from EKS.
Scheduling
To minimize the Recovery Point Objective (RPO) and mitigate the risk of data loss, it is recommended to perform backups regularly. You can do so by providing a spec.schedule in your Backup resource:
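A sketch of a scheduled Backup (the cron expression is illustrative; the schedule.cron path is an assumption):

```yaml
spec:
  mariaDbRef:
    name: mariadb-galera
  schedule:
    # take a backup every day at midnight
    cron: "0 0 * * *"
```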
This resource gets reconciled into a CronJob that periodically takes the backups.
It is important to note that regularly scheduled Backups complement the feature detailed below very well.
Retention policy
Given that the backups can consume a substantial amount of storage, it is crucial to define your retention policy by providing the spec.maxRetention field in your Backup resource:
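For example (the retention period shown is illustrative):

```yaml
spec:
  # backup files older than 30 days are cleaned up
  maxRetention: 720h
```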
Compression
You are able to compress backups by providing the compression algorithm you want to use in the spec.compression field:
Currently the following compression algorithms are supported:
bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.
gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.
none: No compression.
The operator defaults compression to none.
Server-Side Encryption with Customer-Provided Keys (SSE-C)
You can enable server-side encryption using your own encryption key (SSE-C) by providing a reference to a Secret containing a 32-byte (256-bit) key encoded in base64:
When using SSE-C, you are responsible for managing and securely storing the encryption key. If you lose the key, you will not be able to decrypt your backups. Ensure you have proper key management procedures in place.
When restoring from SSE-C encrypted backups, the same key must be provided in the Restore CR or bootstrapFrom configuration.
Restore CR
You can easily restore a Backup in your MariaDB instance by creating the following resource:
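A minimal sketch referencing a previously created Backup (the apiVersion and names are illustrative assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
  name: restore
spec:
  mariaDbRef:
    name: mariadb-galera
  # reuse the storage configured in the referenced Backup
  backupRef:
    name: backup
```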
This will trigger a Job that will mount the same storage as the Backup and apply the dump to your MariaDB database.
Nevertheless, the Restore resource doesn't necessarily need to specify a spec.backupRef; you can point to another storage source that contains backup files, for example an S3 bucket:
Target recovery time
If you have multiple backups available, especially after configuring a , the operator is able to infer which backup to restore based on the spec.targetRecoveryTime field.
The operator will look for the closest backup available and use it to restore your MariaDB instance. Only backups taken at or before targetRecoveryTime will be matched.
By default, spec.targetRecoveryTime will be set to the current time, which means that the latest available backup will be used.
Bootstrap new MariaDB instances
To minimize your Recovery Time Objective (RTO) and to swiftly spin up new clusters from existing Backups, you can provide a Restore source directly in the MariaDB object via the spec.bootstrapFrom field:
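A minimal sketch (the apiVersion and names are illustrative assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  # restore this Backup right after the MariaDB becomes ready
  bootstrapFrom:
    backupRef:
      name: backup
```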
As in the Restore resource, you don't strictly need to specify a reference to a Backup, you can provide other storage types that contain backup files:
Under the hood, the operator creates a Restore object just after the MariaDB resource becomes ready. The advantage of using spec.bootstrapFrom over a standalone Restore is that the MariaDB is bootstrap-aware, which allows the operator to hold primary switchover/failover operations until the restoration has finished.
Backup and restore specific databases
By default, all the logical databases are backed up when a Backup is created, but you may also select specific databases by providing the databases field:
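For example, to back up only a subset of databases (the database names are illustrative):

```yaml
spec:
  # only these logical databases are included in the dump
  databases:
    - db1
    - db2
```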
When it comes to restoring, all the databases available in the backup will be restored, but you may also choose a single database to restore via the database field available in the Restore resource:
There are a couple of points to consider here:
The referred database (db1 in the example) must previously exist for the Restore to succeed.
The mariadb CLI invoked by the operator under the hood only supports selecting a single database to restore via the option; restoring multiple specific databases is not supported.
Extra options
Not all the flags supported by mariadb-dump and mariadb have their counterpart field in the Backup and Restore CRs respectively, but you may pass extra options by using the args field. For example, setting the --verbose flag can be helpful to track the progress of backup and restore operations:
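For instance, a sketch of passing the --verbose flag to a Backup:

```yaml
spec:
  # extra flags forwarded to mariadb-dump
  args:
    - --verbose
```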
Refer to the mariadb-dump and mariadb CLI options in the section.
Staging area
S3 is the only storage type that supports a staging area.
When using S3 storage for backups, a staging area is used to keep the external backups while they are being processed. By default, this staging area is an emptyDir volume, which means that the backups are temporarily stored in the local storage of the node where the Backup/Restore Job is scheduled. In production environments, large backups may lead to issues if the node doesn't have sufficient space, potentially causing the backup/restore process to fail.
To overcome this limitation, you are able to define your own staging area by setting the stagingStorage field in both the Backup and Restore CRs:
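A sketch using a PVC with the default StorageClass as staging area (the storage size is illustrative):

```yaml
spec:
  # staging area for in-flight S3 backups instead of the default emptyDir
  stagingStorage:
    persistentVolumeClaim:
      resources:
        requests:
          storage: 10Gi
      accessModes:
        - ReadWriteOnce
```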
In the examples above, a PVC with the default StorageClass will be used as staging area. Refer to the for more configuration options.
Similarly, you may also use a custom staging area when :
Important considerations and limitations
Root credentials
When restoring a backup, the root credentials specified via the spec.rootPasswordSecretKeyRef field in the MariaDB resource must match the ones in the backup. These credentials are used by the liveness and readiness probes, and if they are invalid, the probes will fail, causing your MariaDB Pods to restart after the backup restoration.
Restore job
Restoring large backups can consume significant compute resources and may cause Restore Jobs to become stuck due to insufficient resources. To prevent this, you can define the compute resources allocated to the Job:
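For example, a sketch of the resources field on a Restore (the requests and limits shown are illustrative):

```yaml
spec:
  backupRef:
    name: backup
  # compute resources for the restore Job
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      memory: 1Gi
```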
Galera backup limitations
mysql.global_priv
Galera only replicates tables that use the InnoDB engine, see the .
This does not include mysql.global_priv, the table used to store users and grants, which uses the MyISAM engine. In practice, this means that a Galera instance with mysql.global_priv populated will not replicate this data to an empty Galera instance. However, DDL statements (CREATE USER, ALTER USER ...) will be replicated.
Taking this into account, consider a restore scenario where:
The backup file includes a DROP TABLE statement for the mysql.global_priv table.
The backup has some INSERT statements for the mysql.global_priv table.
This is what will happen under the hood while restoring the backup:
The DROP TABLE statement is a DDL, so it will be executed in galera-0, galera-1 and galera-2.
The INSERT statements are not DDLs, so they will only be applied to galera-0.
After the backup is fully restored, the liveness and readiness probes will kick in. They will succeed in galera-0, but fail in galera-1 and galera-2, as they rely on the root credentials stored in mysql.global_priv, resulting in galera-1 and galera-2 getting restarted.
To address this issue, when backing up MariaDB instances with Galera enabled, the mysql.global_priv table will be excluded from backups by using the --ignore-table option with mariadb-dump. This prevents the replication of the DROP TABLE statement for the mysql.global_priv table. You can opt-out from this feature by setting spec.ignoreGlobalPriv=false in the Backup resource.
Also, to avoid situations where mysql.global_priv is left unreplicated, all the entries in that table must be managed via DDLs. This is the approach recommended in the . There are a couple of ways to guarantee this:
Use the rootPasswordSecretKeyRef, username and passwordSecretKeyRef fields of the MariaDB CR to create the root and initial users respectively. These fields are translated into DDLs by the image entrypoint.
Rely on the User and Grant CRs to create additional users and grants. Refer to the for further detail.
LOCK TABLES
Galera is not compatible with the LOCK TABLES statement:
For this reason, the operator automatically adds the --skip-add-locks option to the Backup to overcome this limitation.
Migrations using logical backups
Migrating an external MariaDB to a MariaDB running in Kubernetes
You can leverage logical backups to bring your external MariaDB data into a new MariaDB instance running in Kubernetes. Follow this runbook to do so:
Take a logical backup of your external MariaDB using one of the commands below:
If you are using Galera or planning to migrate to a Galera instance, make sure you understand the and use the following command instead:
Ensure that your backup file is named in the following format: backup.2024-08-26T12:24:34Z.sql. If the file name does not follow this format, it will be ignored by the operator.
Upload the backup file to one of the supported . We recommend using S3.
Create your MariaDB resource declaring that you want to and providing a
If you are using Galera in your new instance, migrate your previous users and grants to use the User and Grant CRs. Refer to the for further detail.
Migrating to a MariaDB with different topology
Database mobility between MariaDB instances with different topologies is possible with logical backups. However, there are some technical details that you need to be aware of in the following scenarios:
Migrating between standalone and replicated MariaDBs
This should be fully compatible; no issues have been detected.
Migrating from standalone/replicated to Galera MariaDBs
There are a couple of limitations regarding backups in Galera; please make sure you read the section before proceeding.
To overcome these limitations, the Backup of the standalone/replicated instance needs to be taken with spec.ignoreGlobalPriv=true. In the following example, we are backing up a standalone MariaDB (single instance):
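A sketch of such a Backup (the apiVersion, names, and S3 details are illustrative assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup-standalone
spec:
  mariaDbRef:
    name: mariadb-standalone
  # exclude mysql.global_priv so the dump is safe to restore into Galera
  ignoreGlobalPriv: true
  storage:
    s3:
      bucket: backups
      endpoint: minio.minio.svc.cluster.local:9000
```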
Once the previous Backup has completed, we will be able to bootstrap a new Galera instance from it:
Reference
Troubleshooting
Galera Pods restarting after bootstrapping from a backup
Please make sure you understand the .
After doing so, ensure that your backup does not contain a DROP TABLE mysql.global_priv; statement, as it will cause your liveness and readiness probes to fail after the backup restoration.
MaxScale is a sophisticated database proxy, router, and load balancer designed specifically for and by MariaDB. It provides a range of features that ensure optimal high availability:
Query-based routing: Transparently route write queries to the primary nodes and read queries to the replica nodes.
Connection-based routing: Load balance connections between multiple servers.
Bootstrap a new cluster in the bootstrap node.
Restart and wait until the bootstrap node becomes ready.
Restart the rest of the nodes one by one so they can join the new cluster.
kubectl get mariadbs
NAME READY STATUS PRIMARY POD AGE
mariadb-galera True Running mariadb-galera-0 48m
kubectl get events --field-selector involvedObject.name=mariadb-galera --sort-by='.lastTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
...
45m Normal GaleraClusterHealthy mariadb/mariadb-galera Galera cluster is healthy
kubectl get mariadb mariadb-galera -o jsonpath="{.status.conditions[?(@.type=='GaleraReady')]}" | jq
{
"lastTransitionTime": "2023-07-13T18:22:31Z",
"message": "Galera ready",
"reason": "GaleraReady",
"status": "True",
"type": "GaleraReady"
}
kubectl get mariadb mariadb-galera -o jsonpath="{.status.conditions[?(@.type=='GaleraConfigured')]}" | jq
{
"lastTransitionTime": "2023-07-13T18:22:31Z",
"message": "Galera configured",
"reason": "GaleraConfigured",
"status": "True",
"type": "GaleraConfigured"
}
kubectl get statefulsets
NAME READY AGE
mariadb-galera 3/3 58m
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mariadb-galera-0 2/2 Running 0 58m 10.244.2.4 mdb-worker3 <none> <none>
mariadb-galera-1 2/2 Running 0 58m 10.244.1.9 mdb-worker2 <none> <none>
mariadb-galera-2 2/2 Running 0 58m 10.244.5.4 mdb-worker4 <none> <none>
kubectl delete pods -l app.kubernetes.io/instance=mariadb-galera
pod "mariadb-galera-0" deleted
pod "mariadb-galera-1" deleted
pod "mariadb-galera-2" deleted
kubectl get mariadb mariadb-galera
NAME READY STATUS PRIMARY POD AGE
mariadb-galera False Galera not ready mariadb-galera-0 67m
kubectl get events --field-selector involvedObject.name=mariadb-galera --sort-by='.lastTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
...
48s Warning GaleraClusterNotHealthy mariadb/mariadb-galera Galera cluster is not healthy
kubectl get mariadb mariadb-galera -o jsonpath="{.status.conditions[?(@.type=='GaleraReady')]}" | jq
{
"lastTransitionTime": "2023-07-13T19:25:17Z",
"message": "Galera not ready",
"reason": "GaleraNotReady",
"status": "False",
"type": "GaleraReady"
}
kubectl get events --field-selector involvedObject.name=mariadb-galera --sort-by='.lastTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
...
16m Warning GaleraClusterNotHealthy mariadb/mariadb-galera Galera cluster is not healthy
16m Normal GaleraPodStateFetched mariadb/mariadb-galera Galera state fetched in Pod 'mariadb-galera-2'
16m Normal GaleraPodStateFetched mariadb/mariadb-galera Galera state fetched in Pod 'mariadb-galera-1'
16m Normal GaleraPodStateFetched mariadb/mariadb-galera Galera state fetched in Pod 'mariadb-galera-0'
16m Normal GaleraPodRecovered mariadb/mariadb-galera Recovered Galera sequence in Pod 'mariadb-galera-1'
16m Normal GaleraPodRecovered mariadb/mariadb-galera Recovered Galera sequence in Pod 'mariadb-galera-2'
17m Normal GaleraPodRecovered mariadb/mariadb-galera Recovered Galera sequence in Pod 'mariadb-galera-0'
17m Normal GaleraClusterBootstrap mariadb/mariadb-galera Bootstrapping Galera cluster in Pod 'mariadb-galera-2'
20m Normal GaleraClusterHealthy mariadb/mariadb-galera Galera cluster is healthy
kubectl get mariadb mariadb-galera -o jsonpath="{.status.galeraRecovery}" | jq
{
"bootstrap": {
"pod": "mariadb-galera-2",
"time": "2023-07-13T19:25:28Z"
},
"recovered": {
"mariadb-galera-0": {
"seqno": 3,
"uuid": "bf00b9c3-21a9-11ee-984f-9ba9ff0e9285"
},
"mariadb-galera-1": {
"seqno": 3,
"uuid": "bf00b9c3-21a9-11ee-984f-9ba9ff0e9285"
},
"mariadb-galera-2": {
"seqno": 3,
"uuid": "bf00b9c3-21a9-11ee-984f-9ba9ff0e9285"
}
},
"state": {
"mariadb-galera-0": {
"safeToBootstrap": false,
"seqno": -1,
"uuid": "bf00b9c3-21a9-11ee-984f-9ba9ff0e9285",
"version": "2.1"
},
"mariadb-galera-1": {
"safeToBootstrap": false,
"seqno": -1,
"uuid": "bf00b9c3-21a9-11ee-984f-9ba9ff0e9285",
"version": "2.1"
},
"mariadb-galera-2": {
"safeToBootstrap": false,
"seqno": -1,
"uuid": "bf00b9c3-21a9-11ee-984f-9ba9ff0e9285",
"version": "2.1"
}
}
}
kubectl get mariadb mariadb-galera -o jsonpath="{.status.conditions[?(@.type=='GaleraReady')]}" | jq
{
"lastTransitionTime": "2023-07-13T19:27:51Z",
"message": "Galera ready",
"reason": "GaleraReady",
"status": "True",
"type": "GaleraReady"
}
kubectl get mariadb mariadb-galera
NAME READY STATUS PRIMARY POD AGE
mariadb-galera True Running mariadb-galera-0 82m
kubectl get events --field-selector involvedObject.name=mariadb-galera --sort-by='.lastTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
...
16m Warning GaleraClusterNotHealthy mariadb/mariadb-galera Galera cluster is not healthy
16m Normal GaleraPodStateFetched mariadb/mariadb-galera Galera state fetched in Pod 'mariadb-galera-2'
16m Normal GaleraPodStateFetched mariadb/mariadb-galera Galera state fetched in Pod 'mariadb-galera-1'
16m Normal GaleraPodStateFetched mariadb/mariadb-galera Galera state fetched in Pod 'mariadb-galera-0'
16m Normal GaleraPodRecovered mariadb/mariadb-galera Recovered Galera sequence in Pod 'mariadb-galera-1'
16m Normal GaleraPodRecovered mariadb/mariadb-galera Recovered Galera sequence in Pod 'mariadb-galera-2'
17m Normal GaleraPodRecovered mariadb/mariadb-galera Recovered Galera sequence in Pod 'mariadb-galera-0'
17m Normal GaleraClusterBootstrap mariadb/mariadb-galera Bootstrapping Galera cluster in Pod 'mariadb-galera-2'
20m Normal GaleraClusterHealthy mariadb/mariadb-galera Galera cluster is healthy
kubectl get clusterrole system:auth-delegator
NAME CREATED AT
system:auth-delegator 2023-08-03T19:12:37Z
kubectl get clusterrolebinding | grep mariadb | grep auth-delegator
mariadb-galera:auth-delegator ClusterRole/system:auth-delegator 108m
mariadb-enterprise-operator:auth-delegator ClusterRole/system:auth-delegator 112m
Automatic primary failover based on MariaDB internals.
Replay pending transactions when a server goes down.
Support for Galera and Replication.
To better understand what MaxScale is capable of, you may check the product page and the documentation.
MaxScale resources
Prior to configuring MaxScale within Kubernetes, it's essential to have a basic understanding of the resources managed through its API.
Servers
A server defines the backend database servers that MaxScale forwards traffic to. For more detailed information, please consult the .
Monitors
A monitor is an agent that queries the state of the servers and makes it available to the services in order to route traffic based on it. For more detailed information, please consult the monitor reference.
Depending on which highly available configuration your servers have, you will need to choose between the following modules:
Galera Monitor: Detects whether servers are part of the cluster, ensuring synchronization among them, and assigning primary and replica roles as needed.
MariaDB Monitor: Probes the state of the cluster, assigns roles to the servers, and executes failover, switchover, and rejoin operations as necessary.
Services
A service defines how traffic is routed to the servers based on a routing algorithm that takes into account the state of the servers and their roles. For more detailed information, please consult the .
Depending on your requirements to route traffic, you may choose between the following routers:
Readwritesplit: Route write queries to the primary server and read queries to the replica servers.
Readconnroute: Load balance connections between multiple servers.
Listeners
A listener specifies a port where MaxScale listens for incoming connections. It is associated with a service that handles the requests received on that port. For more detailed information, please consult the .
MaxScale CR
The minimal spec you need to provision a MaxScale instance is just a reference to a MariaDB resource:
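A minimal sketch (the apiVersion and names are illustrative assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MaxScale
metadata:
  name: maxscale-galera
spec:
  # servers, monitor and services are inferred from this MariaDB
  mariaDbRef:
    name: mariadb-galera
```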
This will provision a new StatefulSet for running MaxScale and configure the servers specified by the MariaDB resource. Refer to the Server configuration section if you want to manually configure the MariaDB servers.
The rest of the configuration uses reasonable defaults set automatically by the operator. If you need more fine-grained configuration, you can provide these values yourself:
As you can see, the MaxScale resources we previously mentioned have a counterpart resource in the MaxScale CR.
The previous example configured a MaxScale for a Galera cluster, but you may also configure MaxScale with a MariaDB that uses replication. It is important to note that the monitor module is automatically inferred by the operator based on the MariaDB reference you provide; however, its parameters are specific to each monitor module:
You also need to set a reference in the MariaDB resource to make it MaxScale-aware. This is explained in the MariaDB CR section.
You can set a spec.maxScaleRef in your MariaDB resource to make it MaxScale-aware. By doing so, the primary server reported by MaxScale will be used in MariaDB, and high availability tasks such as primary failover will be delegated to MaxScale:
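A sketch of the reference (the apiVersion and names are illustrative assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  galera:
    enabled: true
  # delegate primary failover/switchover to this MaxScale
  maxScaleRef:
    name: maxscale-galera
```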
MariaDB Enterprise Kubernetes Operator aims to provide highly configurable CRs, but at the same time maximize its usability by providing reasonable defaults. In the case of MaxScale, the following defaulting logic is applied:
spec.servers are inferred from spec.mariaDbRef.
spec.monitor.module is inferred from the spec.mariaDbRef.
spec.monitor.cooperativeMonitoring is set if is enabled.
If spec.services is not provided, a readwritesplit service is configured on port 3306 by default.
Server configuration
As an alternative to providing a reference to a MariaDB via spec.mariaDbRef, you can also specify the servers manually:
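A sketch referring to in-cluster Pods by DNS name (the server names and the `<mariadb>-internal` headless Service addresses are illustrative assumptions):

```yaml
spec:
  servers:
    - name: mariadb-0
      address: mariadb-galera-0.mariadb-galera-internal.default.svc.cluster.local
    - name: mariadb-1
      address: mariadb-galera-1.mariadb-galera-internal.default.svc.cluster.local
```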
As you can see, you can refer to in-cluster MariaDB servers by providing the DNS names of the MariaDB Pods as server addresses. In addition, you can also refer to external MariaDB instances running outside of the Kubernetes cluster where the operator is deployed:
Pointing to external MariaDBs has some limitations: since the operator doesn't have a reference to a MariaDB resource (spec.mariaDbRef), it will be unable to perform the following actions:
Infer the monitor module (spec.monitor.module), so it will need to be provided by the user.
Autogenerate authentication credentials (spec.auth), so they will need to be provided by the user. See Authentication section.
Primary server switchover
Only the MariaDB Monitor, to be used with MariaDB replication, supports the primary switchover operation.
You can declaratively select the primary server by setting spec.primaryServer=<server>:
This will trigger a switchover operation and MaxScale will promote the specified server to be the new primary server.
Server maintenance
You can put servers in maintenance mode by setting the server field maintenance=true:
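For example (server name and address are illustrative):

```yaml
spec:
  servers:
    - name: mariadb-0
      address: mariadb-galera-0.mariadb-galera-internal.default.svc.cluster.local
      # take this server out of rotation
      maintenance: true
```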
Configuration
Similar to MariaDB, MaxScale allows you to provide global configuration parameters in a maxscale.conf file. You don't need to provide this config file directly; instead, you can use spec.config.params to instruct the operator to create maxscale.conf:
Both this global configuration and the resources created by the operator via the MaxScale API are stored in a volume provisioned by spec.config.volumeClaimTemplate. Refer to the troubleshooting if you are getting errors writing to this volume.
Refer to the for more details about the supported parameters.
Authentication
MaxScale requires authentication with different levels of permissions for the following components/actors:
MaxScale API consumed by MariaDB Enterprise Kubernetes Operator.
Clients connecting to MaxScale.
MaxScale connecting to MariaDB servers.
MaxScale monitor connecting to MariaDB servers.
MaxScale configuration syncer to connect to MariaDB servers. See section.
By default, the operator generates these credentials when spec.mariaDbRef is set and spec.auth.generate = true, but you are still able to provide your own:
As you can see, you are also able to limit the number of connections for each component/actor. Bear in mind that, when running in high availability, you may need to increase this number, as more MaxScale instances imply more connections.
Kubernetes Services
To enable your applications to communicate with MaxScale, a Kubernetes Service is provisioned with all the ports specified in the MaxScale listeners. You have the flexibility to provide a template to customize this Service:
This results in the reconciliation of the following Service:
There is also another Kubernetes Service to access the GUI, please refer to the MaxScale GUI section for further detail.
Connection
You can leverage the Connection resource to automatically configure connection strings as Secret resources that your applications can mount:
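A minimal Connection sketch (resource names and Secret keys illustrative) that points at the MaxScale resource and writes the connection string into a Secret:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Connection
metadata:
  name: connection-maxscale
spec:
  maxScaleRef:
    name: maxscale
  username: maxscale-client
  passwordSecretKeyRef:
    name: maxscale-client-password
    key: password
  secretName: conn-maxscale # Secret where the connection string is written
```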
Alternatively, you can also provide a connection template to your MaxScale resource:
Note that the Connection uses the Service described in the Kubernetes Services section, and you can specify which MaxScale service to connect to by providing the port (spec.port) of the corresponding MaxScale listener.
High availability
To synchronize the configuration state across multiple replicas, MaxScale stores the configuration externally in a MariaDB table and conducts periodic polling across all replicas. By default, the table mysql.maxscale_config is used, but this can be configured by the user as well as the synchronization interval.
Another crucial aspect of HA is that only one monitor can be running at any given time to avoid conflicts. This is achieved via cooperative locking, which can be configured by the user. Refer to the MaxScale documentation for more information.
Multiple MaxScale replicas can be specified via the spec.replicas field. Note that MaxScale exposes the scale subresource, so you can scale it up or down by running the following command:
Alternatively, you can configure a HorizontalPodAutoscaler to do the job automatically.
Suspend resources
In order to enable this feature, you must set the --feature-maxscale-suspend feature flag:
Then you will be able to suspend any MaxScale resources, for instance, you can suspend a monitor:
MaxScale GUI
MaxScale offers a user interface that provides useful information about the MaxScale resources. You can enable it by providing the following configuration:
The GUI is exposed via a dedicated Kubernetes Service on the same port as the MaxScale API. Once you access it, you will need to enter the MaxScale API credentials configured by the operator in a Secret. See the Authentication section for more details.
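A sketch of enabling the GUI via the admin settings (field names follow the patterns used by the MariaDB operator's spec.admin block and may differ in your version):

```yaml
spec:
  admin:
    port: 8989
    guiEnabled: true
```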
MaxScale API
MariaDB Enterprise Kubernetes Operator interacts with the MaxScale API to reconcile the specification provided by the user, considering both the MaxScale status retrieved from the API and the provided spec.
Troubleshooting
The operator tracks both the status of the Kubernetes resources and the status of the MaxScale API resources. This information is available in the status field of the MaxScale resource and can be very useful for debugging purposes:
Kubernetes events emitted by mariadb-enterprise-operator may also be very relevant for debugging. For instance, an event is emitted whenever the primary server changes:
```bash
kubectl get events --field-selector involvedObject.name=mariadb-repl-maxscale --sort-by='.lastTimestamp'
LAST SEEN   TYPE     REASON                         OBJECT                           MESSAGE
24s         Normal   MaxScalePrimaryServerChanged   maxscale/mariadb-repl-maxscale   MaxScale primary server changed from 'mariadb-repl-0' to 'mariadb-repl-1'
```
The operator logs can also be a good source of information for troubleshooting. You can increase its verbosity and enable MaxScale API request logs by running:
Common errors
Permission denied writing /var/lib/maxscale
This error occurs when the user that runs the container does not have enough privileges to write in /var/lib/maxscale:
To mitigate this, by default, the operator sets the following securityContext in the MaxScale's StatefulSet:
This enables the CSI driver and the kubelet to recursively set the ownership of the /var/lib/maxscale folder to group 999, which is the one expected by MaxScale. It is important to note that not all CSI driver implementations support this feature; see the CSIDriver documentation for further information.
New innovations in MaxScale 25.01 and Enterprise Platform
Physical backups
What is a physical backup?
A physical backup is a snapshot of the entire data directory (/var/lib/mysql), including all data files. This type of backup captures the exact state of the database at a specific point in time, allowing for quick restoration in case of data loss or corruption.
Physical backups are the recommended method for backing up MariaDB databases, especially in production environments, as they are faster and more efficient than logical backups.
Backup strategies
Multiple strategies are available for performing physical backups, including:
mariadb-backup: Taken using the enterprise version of mariadb-backup, which is available in the MariaDB Enterprise images. The operator supports scheduling Jobs to perform backups using this utility.
Kubernetes VolumeSnapshot: Leverage Kubernetes VolumeSnapshots to create snapshots of the persistent volumes used by the MariaDB Pods. This method relies on a compatible CSI (Container Storage Interface) driver that supports volume snapshots. See the VolumeSnapshots section for more details.
In order to use VolumeSnapshots, you will need to provide a VolumeSnapshotClass that is compatible with your storage provider. The operator will use this class to create snapshots of the persistent volumes:
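For example, a PhysicalBackup sketch (class name illustrative) selecting the VolumeSnapshot strategy via its storage:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup
spec:
  mariaDbRef:
    name: mariadb
  storage:
    volumeSnapshot:
      volumeSnapshotClassName: csi-hostpath-snapclass # must match your CSI driver
```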
For the rest of the compatible storage types, the mariadb-backup CLI will be used to perform the backup. For instance, to use S3 as backup storage:
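A sketch of the S3 storage configuration (bucket, endpoint and Secret names illustrative):

```yaml
spec:
  storage:
    s3:
      bucket: physicalbackups
      endpoint: minio.minio.svc.cluster.local:9000
      region: us-east-1
      accessKeyIdSecretKeyRef:
        name: minio
        key: access-key-id
      secretAccessKeySecretKeyRef:
        name: minio
        key: secret-access-key
```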
Storage types
Multiple storage types are supported for storing physical backups, including:
S3 compatible storage: Store backups in an S3 compatible storage, such as AWS S3 or MinIO.
Azure Blob Storage: Store backups in an Azure Blob Storage container.
Persistent Volume Claims (PVC): Use any of the StorageClasses available in your Kubernetes cluster to create a PersistentVolumeClaim (PVC) for storing backups.
Kubernetes Volumes: Store backups in any of the in-tree storage providers supported by Kubernetes out of the box, such as NFS.
Scheduling
The physical backup schedule can be optionally configured using the spec.schedule field in the PhysicalBackup resource. When empty, a single backup Job is scheduled:
cron: A cron expression that defines the backup schedule.
suspend: When set to true, prevents new backups from being scheduled.
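A sketch of the schedule block (cron expression illustrative):

```yaml
spec:
  schedule:
    cron: "0 0 * * *" # daily at midnight
    suspend: false
    immediate: true
```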
immediate: When set to true, schedules a backup immediately after creating the PhysicalBackup resource.
onDemand: Schedule identifier for triggering an on-demand backup. If the identifier is different from the one tracked under status.lastScheduleOnDemand, a new physical backup is triggered.
onPrimaryChange: When set to true, schedules a new backup after the primary Pod in the referenced MariaDB instance changes. This is particularly useful for point-in-time recovery.
It is very important to note that, by default, backups are only scheduled if the referenced MariaDB resource is in a ready state. You can override this behavior by setting mariaDbRef.waitForIt=false, which allows backups to be scheduled even if the MariaDB resource is not ready.
Compression
When using physical backups based on mariadb-backup, you are able to choose the compression algorithm used to compress the backup files. The available options are:
bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.
gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.
none: No compression.
To specify the compression algorithm, you can use the compression field in the PhysicalBackup resource:
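For instance, a sketch selecting gzip:

```yaml
spec:
  compression: gzip
```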
compression is defaulted to none by the operator.
Server-Side Encryption with Customer-Provided Keys (SSE-C) For S3
You can enable server-side encryption using your own encryption key (SSE-C) by providing a reference to a Secret containing a 32-byte (256-bit) key encoded in base64:
When using SSE-C, you are responsible for managing and securely storing the encryption key. If you lose the key, you will not be able to decrypt your backups. Ensure you have proper key management procedures in place.
When restoring from SSE-C encrypted backups via bootstrapFrom, the same key must be provided in the S3 configuration.
Retention policy
You can define a retention policy both for backups based on mariadb-backup and for VolumeSnapshots. The retention policy allows you to specify how long backups should be retained before they are automatically deleted. This can be defined via the maxRetention field in the PhysicalBackup resource:
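A sketch keeping backups for 30 days (the duration format is assumed to follow Go-style durations):

```yaml
spec:
  maxRetention: 720h # 30 days
```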
When using physical backups based on mariadb-backup, the operator will automatically delete backup files in the specified storage older than the retention period. The cleanup process is performed after each successful backup.
When using VolumeSnapshots, the operator will automatically delete the VolumeSnapshot resources older than the retention period using the Kubernetes API. The cleanup process will be performed after a VolumeSnapshot is successfully created.
Target policy
You can define a target policy both for backups based on mariadb-backup and for VolumeSnapshots. The target policy allows you to specify in which Pod the backup should be taken. This can be defined via the target field in the PhysicalBackup resource:
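A sketch of the target field (the field name is assumed; the policy values are described next):

```yaml
spec:
  target: PreferReplica
```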
The following target policies are available:
Replica: The backup will be taken in a ready replica. If no ready replicas are available, the backup will not be scheduled.
PreferReplica: The backup will be taken in a ready replica if available, otherwise it will be taken in the primary Pod.
When using the PreferReplica target policy, you may want to schedule backups even if the MariaDB resource is not ready. In this case, you can set mariaDbRef.waitForIt=false to allow scheduling the backup even if no replicas are available.
Restoration
Physical backups can only be restored in brand new MariaDB instances without any existing data. This means that you cannot restore a physical backup into an existing MariaDB instance that already has data.
To perform a restoration, you can specify a PhysicalBackup as restoration source under the spec.bootstrapFrom field in the MariaDB resource:
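A minimal sketch of bootstrapping a new MariaDB from a PhysicalBackup (resource names illustrative; the backupRef.kind field is assumed):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-restored
spec:
  bootstrapFrom:
    backupRef:
      name: physicalbackup
      kind: PhysicalBackup
```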
This will take into account the backup strategy and storage type used in the PhysicalBackup, and it will perform the restoration accordingly.
As an alternative, you can also provide a reference to an S3 bucket that was previously used to store the physical backup files:
It is important to note that the backupContentType field must be set to Physical when restoring from a physical backup. This ensures that the operator uses the correct restoration method.
To restore a VolumeSnapshot, you can provide a reference to a specific VolumeSnapshot resource in the spec.bootstrapFrom field:
Target recovery time
By default, the operator will match the closest backup available to the current time. You can specify a different target recovery time by using the targetRecoveryTime field in the PhysicalBackup resource. This lets you define the exact point in time you want to restore to:
Only backups strictly before or at targetRecoveryTime will be matched.
Timeout
By default, both backups based on mariadb-backup and VolumeSnapshots will have a timeout of 1 hour. You can change this timeout by using the timeout field in the PhysicalBackup resource:
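A sketch raising the timeout to two hours:

```yaml
spec:
  timeout: 2h
```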
When the timeout is exceeded, the operator will delete the Job or VolumeSnapshot resources associated with the PhysicalBackup resource, and will create new ones to retry the backup operation if the PhysicalBackup resource is still scheduled.
Log level
When taking backups based on mariadb-backup, you can specify the log level to be used by the mariadb-enterprise-operator container using the logLevel field in the PhysicalBackup resource:
Extra options
When taking backups based on mariadb-backup, you can specify extra options to be passed to the mariadb-backup command using the args field in the PhysicalBackup resource:
Refer to the mariadb-backup documentation for a list of available options.
Azure Blob Storage Credentials
Credentials for accessing Azure Blob Storage can be provided via the azureBlob key in the storage field of the PhysicalBackup resource. The credentials are provided as a reference to a Kubernetes Secret:
Alternatively, you may choose to omit the storageAccountKey and storageAccountName if you are using an alternative authentication mechanism, such as a workload identity.
S3 credentials
Credentials for accessing an S3 compatible storage can be provided via the s3 key in the storage field of the PhysicalBackup resource. The credentials can be provided as a reference to a Kubernetes Secret:
Alternatively, if you are running in EKS, you can use dynamic credentials from an EKS Service Account using EKS Pod Identity or IRSA:
By leaving out the accessKeyIdSecretKeyRef and secretAccessKeySecretKeyRef credentials and pointing to the correct serviceAccountName, the backup Job will use the dynamic credentials from EKS.
Staging area
S3 backups based on mariadb-backup are the only scenario that requires a staging area.
When using S3 storage for backups, a staging area is used to keep the external backups while they are being processed. By default, this staging area is an emptyDir volume, which means that the backups are temporarily stored in the local storage of the node where the PhysicalBackup Job is scheduled. In production environments, large backups may lead to issues if the node doesn't have sufficient space, potentially causing the backup/restore process to fail.
Additionally, when restoring these backups, the operator pulls the backup files from S3, uncompresses them if needed, and restores them to each of the MariaDB Pods in the cluster individually. To save network bandwidth and compute resources, the staging area keeps the uncompressed backup files after they have been restored to the first MariaDB Pod. This allows the operator to restore the same backup to the rest of the MariaDB Pods seamlessly, without needing to pull and uncompress the backup again.
To configure the staging area, you can use the stagingStorage field in the PhysicalBackup resource:
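A sketch provisioning a PVC-backed staging area (size illustrative):

```yaml
spec:
  stagingStorage:
    persistentVolumeClaim:
      resources:
        requests:
          storage: 10Gi
      accessModes:
        - ReadWriteOnce
```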
Similarly, you may also use a staging area when bootstrapping from a backup, via the spec.bootstrapFrom field in the MariaDB resource:
In the examples above, a PVC with the default StorageClass will be provisioned to be used as staging area.
VolumeSnapshots
Before using this feature, ensure that you meet the following prerequisites:
The external-snapshotter and its CRs are installed in the cluster.
A compatible CSI driver that supports VolumeSnapshots is installed in the cluster.
A VolumeSnapshotClass is configured for your CSI driver.
The operator is capable of creating VolumeSnapshots of the PVCs used by the MariaDB Pods. This allows you to create point-in-time snapshots of your data in a Kubernetes-native way, leveraging the capabilities of your storage provider.
Most of the fields described in this documentation apply to VolumeSnapshots, including scheduling and retention policy. The main difference with the mariadb-backup based backups is that the operator will not create a Job to perform the backup, but instead it will create a VolumeSnapshot resource directly.
In order to create consistent, point-in-time snapshots of the MariaDB data, the operator will perform the following steps:
Execute a BACKUP STAGE START statement followed by BACKUP STAGE BLOCK_COMMIT in one of the secondary Pods.
Create a VolumeSnapshot resource of the data PVC mounted by the MariaDB secondary Pod.
Wait until the VolumeSnapshot is provisioned by the storage system. When timing out, the operator will delete the VolumeSnapshot resource and retry the operation.
This backup process is described in the MariaDB documentation and is designed to be non-blocking.
Non-blocking physical backups
Both for the mariadb-backup and VolumeSnapshot strategies, the enterprise operator performs non-blocking physical backups by leveraging BACKUP STAGE commands. This implies that backups are taken without long read locks, enabling consistent, production-grade backups with minimal impact on running workloads, ideal for high-availability and performance-sensitive environments.
Important considerations and limitations
Root credentials
When restoring a backup, the root credentials specified through the spec.rootPasswordSecretKeyRef field in the MariaDB resource must match the ones in the backup. These credentials are utilized by the liveness and readiness probes, and if they are invalid, the probes will fail, causing your MariaDB Pods to restart after the backup restoration.
Restore Job
When using backups based on mariadb-backup, restoring and uncompressing large backups can consume significant compute resources and may cause restoration Jobs to become stuck due to insufficient resources. To prevent this, you can define the compute resources allocated to the Job:
ReadWriteOncePod access mode partially supported
When using backups based on mariadb-backup, the data PVC used by the MariaDB Pod cannot use the ReadWriteOncePod access mode, as it needs to be mounted at the same time by both the MariaDB Pod and the PhysicalBackup Job. In this case, please use either the ReadWriteOnce or ReadWriteMany access modes instead.
Alternatively, if you want to keep using the ReadWriteOncePod access mode, you must use backups based on VolumeSnapshots, which do not require creating a Job to perform the backup and therefore avoid the volume sharing limitation.
PhysicalBackup Jobs scheduling
PhysicalBackup Jobs must mount the data PVC used by one of the secondary MariaDB Pods. To avoid scheduling issues caused by the commonly used ReadWriteOnce access mode, the operator schedules backup Jobs on the same node as MariaDB by default.
If you prefer to disable this behavior and allow Jobs to run on any node, you can set podAffinity=false:
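A sketch disabling the default co-scheduling:

```yaml
spec:
  podAffinity: false
```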
This configuration may be suitable when using the ReadWriteMany access mode, which allows multiple Pods across different nodes to mount the volume simultaneously.
Troubleshooting
Custom columns are used to display the status of the PhysicalBackup resource:
To get a higher level of detail, you can also check the status field directly:
You may also check the related events for the PhysicalBackup resource to see if there are any issues:
```bash
kubectl get events --field-selector involvedObject.name=physicalbackup
LAST SEEN   TYPE     REASON                  OBJECT                                 MESSAGE
116s        Normal   WaitForFirstConsumer    persistentvolumeclaim/physicalbackup   waiting for first consumer to be created before binding
116s        Normal   JobScheduled            physicalbackup/physicalbackup          Job physicalbackup-20250714140837 scheduled
116s        Normal   ExternalProvisioning    persistentvolumeclaim/physicalbackup   Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
116s        Normal   Provisioning            persistentvolumeclaim/physicalbackup   External provisioner is provisioning volume for claim "default/physicalbackup"
113s        Normal   ProvisioningSucceeded   persistentvolumeclaim/physicalbackup   Successfully provisioned volume pvc-7b7c71f9-ea7e-4950-b612-2d41d7ab35b7
```
In some situations, when using the mariadb-backup strategy, you may encounter the following error in the backup Job logs:
```
mariadb [00] 2025-08-04 09:15:57 Was only able to copy log from 58087 to 59916, not 68968; try increasing innodb_log_file_size
mariadb mariabackup: Stopping log copying thread.[00] 2025-08-04 09:15:57 Retrying read of log at LSN=59916
```
This can be addressed by increasing the innodb_log_file_size in the MariaDB configuration. You can do this by adding the following to your MariaDB resource:
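A sketch of raising innodb_log_file_size via the myCnf field (value illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  myCnf: |
    [mariadb]
    innodb_log_file_size=1G
```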
Refer to the MariaDB documentation for further details on this issue.
mariadb-backup Job fails to start because the Pod cannot mount the MariaDB PVC
With some StorageClass providers, unless explicitly enabled, the ReadWriteOnce access mode is treated as ReadWriteOncePod, which prevents the backup Job from mounting the PVC at the same time as the MariaDB Pod.
The operator supports provisioning and operating MariaDB clusters with replication as a highly available topology. The following sections cover how to manage the full lifecycle of a replication cluster.
In a replication setup, one primary server handles all write operations, while one or more replica servers replicate data from the primary and can serve read operations. More precisely, the primary writes a binary log, and the replicas asynchronously replicate the binary log events over the network.
Please refer to the MariaDB documentation for more details about replication.
In order to provision a replication cluster, you need to configure a number of replicas greater than 1 and set the replication.enabled=true in the MariaDB CR:
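A minimal sketch of such a MariaDB CR (storage size illustrative):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-repl
spec:
  replicas: 3
  replication:
    enabled: true
  storage:
    size: 1Gi
```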
After applying the previous CR, the operator will provision a replication cluster with one primary and two replicas. The operator will take care of setting up replication, configuring the replication user and monitoring the replication status:
As you can see, the primary can be identified in the PRIMARY column of the kubectl get mariadb output. You may also inspect the current replication status by checking the MariaDB CR status:
The operator continuously monitors the replication status via SHOW SLAVE STATUS, taking it into account for internal operations and updating the CR status accordingly.
Asynchronous vs semi-synchronous replication
By default, semi-synchronous replication is configured, which requires an acknowledgement from at least one replica before committing the transaction back to the client. This trades off performance for better consistency and facilitates failover and switchover operations.
If you are aiming for better performance, you can disable semi-synchronous replication and go fully asynchronous; please refer to the configuration section for doing so.
Configuration
The replication settings can be customized under the replication section of the MariaDB CR. The following options are available:
gtidStrictMode: Enables GTID strict mode. It is recommended and enabled by default. See MariaDB documentation.
semiSyncEnabled: Determines whether semi-synchronous replication should be enabled. It is enabled by default. See MariaDB documentation.
semiSyncAckTimeout: ACK timeout for the replicas to acknowledge transactions to the primary. It requires semi-synchronous replication. See the MariaDB documentation.
semiSyncWaitPoint: Determines whether the transaction should wait for an ACK after having synced the binlog (AfterSync) or after having committed to the storage engine (AfterCommit, the default). It requires semi-synchronous replication. See the MariaDB documentation.
syncBinlog: Number of events after which the binary log is synchronized to disk. See the MariaDB documentation.
standaloneProbes: Determines whether to use regular non-HA startup and liveness probes. It is disabled by default.
These options are used by the operator to create a replication configuration file that is applied to all nodes in the cluster. When updating any of these options, an update of the cluster will be triggered in order to apply the new configuration.
For replica-specific configuration options, please refer to the replica configuration section. Additional system variables may be configured via the myCnf configuration field. Refer to the configuration documentation for more details.
Replica configuration
The following options are replica-specific and can be configured under the replication.replica section of the MariaDB CR:
replPasswordSecretKeyRef: Reference to the Secret key containing the password for the replication user, used by the replicas to connect to the primary. By default, a Secret with a random password will be created.
gtid: GTID position mode to be used (CurrentPos and SlavePos allowed). It defaults to CurrentPos. See the MariaDB documentation.
connectionRetrySeconds: Number of seconds that the replica will wait between connection retries. See the MariaDB documentation.
maxLagSeconds: Maximum acceptable lag in seconds between the replica and the primary. If the lag exceeds this value, the readiness probe will fail and the replica will be marked as not ready. It defaults to 0, meaning that no lag is allowed. See the lagged replicas section for more details.
syncTimeout: Timeout for the replicas to be synced during switchover and failover operations. It defaults to 10s. See the switchover and failover sections for more details.
Probes
Kubernetes probes are resolved by the agent (see the data-plane documentation) in the replication topology, taking into account both the MariaDB and the replication status. Additionally, as described in the configuration documentation, probe thresholds may be tuned for better reliability based on your environment.
The following subsections cover specifics of the replication topology.
Liveness probe
As part of the liveness probe, the agent checks that the MariaDB server is running and that the replication threads (Slave_IO_Running and Slave_SQL_Running) are both running on replicas. If any of these checks fail, the liveness probe will fail.
If such a behaviour is undesirable, it is possible to opt in for regular standalone startup/liveness probes (default SELECT 1 query). See standaloneProbes in the configuration section.
Readiness probe
The readiness probe checks that the MariaDB server is running and that the Seconds_Behind_Master value is within the acceptable lag range defined by the spec.replication.replica.maxLagSeconds configuration option. If the lag exceeds this value, the readiness probe will fail and the replica will be marked as not ready.
Lagged replicas
A replica is considered to be lagging behind the primary when the Seconds_Behind_Master value reported by SHOW SLAVE STATUS exceeds the spec.replication.replica.maxLagSeconds configuration option. This results in the readiness probe failing for that replica, and it has the following implications:
When taking a physical backup, lagged replicas will not be considered as a target for taking the backup.
During a primary switchover managed by the operator, lagged replicas will block switchover operations, as all the replicas must be in sync before promoting the new primary. This doesn't affect MaxScale switchover operation.
During a primary failover managed by the operator, lagged replicas will not be considered as candidates to be promoted as the new primary. MaxScale failover will not consider lagged replicas either.
During updates, lagged replicas will block the update operation, as each replica must pass the readiness probe before proceeding to the update of the next one.
Backing up and restoring
In order to back up and restore a replication cluster, all the concepts and procedures described in the physical backup documentation apply.
Additionally, for the replication topology, the operator tracks the GTID position at the time of taking the backup, and sets this position based on the gtid_current_pos system variable when restoring the backup, as described in the MariaDB documentation.
Depending on the PhysicalBackup strategy used, the operator will track the GTID position accordingly:
mariadb-backup: When using PhysicalBackup with the mariadb-backup strategy, the GTID is stored in a mariadb-enterprise-operator.info file in the data directory, which the agent exposes to the operator via HTTP.
VolumeSnapshot: When using PhysicalBackup with the VolumeSnapshot strategy, the GTID position is kept in an enterprise.mariadb.com/gtid annotation on the VolumeSnapshot object, which the operator later reads when restoring the backup.
It is important to note that, by default, physical backups are only taken from ready replicas when the MariaDB resource is in a ready state. If you are running with a single replica, it is recommended to set mariaDbRef.waitForIt=false and target=PreferReplica in the PhysicalBackup CR to allow taking backups from the primary when the replica is not ready. Please refer to the physical backup documentation for configuring this behaviour.
VolumeSnapshot
Refrain from removing the enterprise.mariadb.com/gtid annotation from the VolumeSnapshot object, as it is required for configuring the replica when restoring the backup.
You can declaratively trigger a primary switchover by updating the spec.replication.primary.podIndex field in the MariaDB CR to the index of the replica you want to promote as the new primary. For example, to promote the replica at index 1:
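For example, a sketch of the relevant fragment of the MariaDB CR:

```yaml
spec:
  replication:
    primary:
      podIndex: 1 # promote the replica at index 1
```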
You can also do this imperatively using kubectl:
This will result in the MariaDB object reporting the following status:
The steps involved in the switchover operation are:
Lock the current primary using FLUSH TABLES WITH READ LOCK to ensure no new transactions are being processed.
Set the read_only system variable on the current primary to prevent any write operations.
Wait until all the replicas are in sync with the current primary. The timeout for this step can be configured via the spec.replication.replica.syncTimeout option. If the timeout is reached, the switchover operation will be retried from the beginning.
Promote the selected replica to be the new primary.
Connect replicas to the new primary.
Change the current primary to be a replica of the new primary.
If the switchover operation is stuck waiting for replicas to be in sync, you can check the MariaDB status to identify which replicas are causing the issue. Furthermore, while still in this step, you can cancel the switchover operation by setting the spec.replication.primary.podIndex field back to the previous primary index.
Primary failover
Our recommendation for production environments is to rely on MaxScale for the failover process, as it provides several advantages.
You can configure the operator to automatically perform a primary failover whenever the current primary becomes unavailable:
Optionally, you may also specify an autoFailoverDelay, which adds a delay before triggering the failover operation. By default, the failover is immediate, but introducing a delay may be useful to avoid failovers due to transient issues. Note that the delay should be lower than the readiness probe failure threshold (e.g. a 20 second delay when the readiness threshold is 30 seconds); otherwise all the replicas will be marked as not ready and the automatic failover will not be able to proceed.
Whenever the primary becomes unavailable, the following status will be reported in the MariaDB CR:
The criteria for choosing a new primary is:
The Pod should be in a Ready state, therefore excluding unavailable or lagged replicas (see the readiness probe and lagged replicas sections).
Both the IO (Slave_IO_Running) and SQL (Slave_SQL_Running) threads should be running.
The replica should not have relay log events.
Among the candidates, the one with the highest gtid_current_pos will be selected.
Once the new primary is selected, the failover process will be performed, consisting of the following steps:
Wait for the new primary to apply all relay log events.
Promote the selected replica to be the new primary.
Connect replicas to the new primary.
Updates
When updating a replication cluster, all the considerations and procedures described in the updates documentation apply.
Furthermore, for the replication topology, the operator will trigger an additional switchover operation once all the replicas have been updated, just before updating the primary. This ensures that the primary is always updated last, minimizing the impact on write operations.
The steps involved in updating a replication cluster are:
Update each replica one by one, waiting for each replica to be ready before proceeding to the next one (see readiness probe section).
Once all replicas are up to date and synced, perform a primary switchover to promote one of the replicas as the new primary. If MariaDB CR has a MaxScale configured using the spec.maxScaleRef field, the operator will trigger the primary switchover in MaxScale instead.
Update the previous primary, now running as a replica.
Scaling out
Scaling out a replication cluster means adding new replicas to the cluster, i.e. scaling horizontally. The process involves taking a physical backup from a ready replica to set up the new replica's PVC, and upscaling the replication cluster afterwards.
The first step is to define the PhysicalBackup strategy to be used for taking the backup. To do so, define a PhysicalBackup CR that the operator will use as a template for creating the actual PhysicalBackup object during scale-out events. For instance, to use the mariadb-backup strategy, you can define the following PhysicalBackup:
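A minimal sketch of such a template (the API group, storage layout, and bucket are assumptions; mariaDbRef.waitForIt, target, and schedule.suspend are the settings this guide relies on):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1   # API group assumed; adjust to your installed CRDs
kind: PhysicalBackup
metadata:
  name: physicalbackup-scale-out
spec:
  mariaDbRef:
    name: mariadb-repl
    waitForIt: false        # do not block waiting for MariaDB readiness
  target: PreferReplica     # prefer a replica, fall back to the primary
  schedule:
    suspend: true           # template only: never scheduled on its own
  storage:
    s3:
      bucket: backups                # hypothetical bucket
      endpoint: s3.amazonaws.com     # hypothetical endpoint
```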
Note that we set spec.schedule.suspend=true to prevent this backup from being scheduled, as it will only be used as a template.
Alternatively, you may also use a VolumeSnapshot strategy for taking the backup:
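A sketch of the VolumeSnapshot variant (API group, storage field layout, and snapshot class name are assumptions; check the PhysicalBackup reference for the exact schema):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup-snapshot
spec:
  mariaDbRef:
    name: mariadb-repl
    waitForIt: false
  target: PreferReplica
  schedule:
    suspend: true
  storage:
    volumeSnapshot:
      volumeSnapshotClassName: csi-snapclass   # hypothetical VolumeSnapshotClass
```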
Once the PhysicalBackup template is created, you need to set a reference to it in the spec.replication.replica.bootstrapFrom, indicating that this will be the source for creating new replicas:
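For example (the reference key under bootstrapFrom is an assumption; check the CRD reference for the exact field name):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-repl
spec:
  replication:
    replica:
      bootstrapFrom:
        physicalBackupRef:            # reference key assumed
          name: physicalbackup-scale-out
```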
At this point, you can proceed to scale out the cluster by increasing the spec.replicas field in the MariaDB CR. For example, to scale out from 3 to 4 replicas:
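Declaratively, this is just a change to the replicas field:

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-repl
spec:
  replicas: 4   # scaled out from 3
```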
You can also do this imperatively using kubectl:
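A command sketch (resource name is hypothetical):

```bash
kubectl patch mariadb mariadb-repl --type merge -p '{"spec": {"replicas": 4}}'
```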
This will trigger a scale-out operation, resulting in:
A PhysicalBackup being created based on the template.
A new PVC being created for the new replica based on the PhysicalBackup.
The StatefulSet being upscaled, adding a Pod that mounts the newly created PVC.
The Pod being configured as a replica, connected to the primary by starting replication at the GTID position stored in the backup.
If there are no ready replicas available at the time of the scale-out operation, the behavior depends on the PhysicalBackup template. Since we set mariaDbRef.waitForIt=false and target=PreferReplica, the operator will take the backup from the primary instead; without these settings, the PhysicalBackup will not become ready and the scale-out operation will be stuck until a replica becomes ready. Please refer to the physical backup documentation for configuring this behaviour.
You can cancel the scale-out operation by setting spec.replicas back to its previous value.
Replica recovery
The operator can automatically recover replicas that become unavailable and report a specific error code in the replication status. To do so, the operator continuously monitors the replication status of each replica; whenever a replica reports an error code listed in the table below, the operator triggers an automated recovery process for that replica:
Error Code
Thread
Description
Documentation
1236
IO
Error 1236: Got fatal error from master when reading data from binary log.
To perform the recovery, the operator will take a physical backup from a ready replica, restore it to the failed replica PVC, and reconfigure the replica to connect to the primary from the GTID position stored in the backup.
Similarly to the scale-out operation, you need to define a PhysicalBackup template and reference it in the spec.replication.replica.bootstrapFrom field of the MariaDB CR. Additionally, you need to explicitly enable replica recovery, as it is disabled by default:
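A sketch of the relevant MariaDB fields (spec.replication.replica.recovery.enabled is documented in this guide; the bootstrapFrom reference key and the errorDurationThreshold location are assumptions):

```yaml
spec:
  replication:
    replica:
      bootstrapFrom:
        physicalBackupRef:            # reference key assumed
          name: physicalbackup-scale-out
      recovery:
        enabled: true                 # disabled by default
        errorDurationThreshold: 5m    # default; see below
```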
The errorDurationThreshold option defines the duration after which a replica reporting an unknown error code will be considered for recovery. This is useful to avoid recovering replicas due to transient issues. It defaults to 5m.
We will simulate a 1236 error in a replica to demonstrate how the recovery process works:
Do not perform the following steps in a production environment.
Purge the binary logs in the primary:
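A command sketch (Pod name and credentials are hypothetical; RESET MASTER deletes all binary logs on the primary, so a replica requesting purged events will fail with error 1236):

```bash
# Illustrative only — do NOT run in production.
kubectl exec mariadb-repl-0 -c mariadb -- \
  mariadb -u root -p"${MARIADB_ROOT_PASSWORD}" -e "RESET MASTER"
```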
Delete the PVC and restart one of the replicas:
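For example (PVC naming follows the StatefulSet convention `<claim>-<pod>`; the names below are hypothetical):

```bash
kubectl delete pvc storage-mariadb-repl-1 --wait=false
kubectl delete pod mariadb-repl-1
```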
This will trigger a replica recovery operation, resulting in:
A PhysicalBackup being created based on the template.
The backup being restored to the failed replica's PVC.
The replica being reconfigured to connect to the primary from the GTID position stored in the backup.
Since we set mariaDbRef.waitForIt=false and target=PreferReplica in the PhysicalBackup template, if there are no ready replicas available at the time of the replica recovery operation, the operator will take the backup from the primary instead. Please refer to the physical backup documentation for configuring this behaviour.
You can cancel the recovery operation by setting spec.replication.replica.recovery.enabled=false.
Troubleshooting
The operator tracks the current replication status under the MariaDB status subresource. This status is updated every time the operator reconciles the MariaDB resource, and it is the first place to look when troubleshooting replication issues:
Additionally, also under the status subresource, the operator sets status conditions whenever a specific state of the MariaDB lifecycle is reached:
The operator also emits Kubernetes events during failover/switchover operations. You may check them to see how these operations progress:
Common errors
Primary has purged binary logs, unable to configure replica
The primary may purge binary log events at some point. After that, if a replica requests events from before that point, it will fail with the following error:
This is something the operator is able to recover from; please refer to the replica recovery section.
Scaling out/recovery operation stuck
These operations rely on a PhysicalBackup for setting up the new replicas. If this PhysicalBackup does not become ready, the operation will not progress. To debug this, please refer to the PhysicalBackup troubleshooting section.
One possible reason is that you have no ready replicas for taking the backup and your PhysicalBackup CR does not allow taking the backup from the primary. You may set mariaDbRef.waitForIt=false and target=PreferReplica in the PhysicalBackup template to allow taking the backup from the primary when there are no ready replicas available. Verify that this is the case by checking the status of your MariaDB resource and your Pods, and refer to the physical backup documentation for configuring the backup behaviour.
MaxScale switchover stuck during update
When using MaxScale, after having updated all the replica Pods, it could happen that MaxScale refuses to perform the switchover, as it considers the Pod chosen by the operator to be unsafe:
In this case, you can manually update the primaryServer field in the MaxScale resource to a safe Pod and restart the operator. If the new primary server is the right Pod, MaxScale will start the switchover and the update will continue after it completes.
Scale out/replica recovery job names too long
This error happens when the name of the physical backup Job created for the scale-out or replica recovery operation exceeds the Kubernetes hard limit of 63 characters. Job names are already truncated to significantly mitigate this problem, but it may still occur if your MariaDB resource name is too long.
kubectl get mariadb
NAME READY STATUS PRIMARY UPDATES AGE
mariadb-repl False Switching primary to 'mariadb-repl-1' mariadb-repl-0 ReplicasFirstPrimaryLast 3d2h
kubectl get mariadb
NAME READY STATUS PRIMARY UPDATES AGE
mariadb-repl True Running mariadb-repl-0 ReplicasFirstPrimaryLast 3d2h
kubectl delete pod mariadb-repl-0
pod "mariadb-repl-0" deleted
kubectl get mariadb
NAME READY STATUS PRIMARY UPDATES AGE
mariadb-repl False Switching primary to 'mariadb-repl-1' mariadb-repl-0 ReplicasFirstPrimaryLast 3d2h
kubectl get mariadb
NAME READY STATUS PRIMARY UPDATES AGE
mariadb-repl True Running mariadb-repl-1 ReplicasFirstPrimaryLast 3d2h
kubectl get events --field-selector involvedObject.name=mariadb-repl --sort-by='.lastTimestamp'
LAST SEEN TYPE REASON OBJECT MESSAGE
17s Normal PrimaryLock mariadb/mariadb-repl Locking primary with read lock
17s Normal PrimaryReadonly mariadb/mariadb-repl Enabling readonly mode in primary
17s Normal ReplicaSync mariadb/mariadb-repl Waiting for replicas to be synced with primary
17s Normal PrimaryNew mariadb/mariadb-repl Configuring new primary at index '0'
7s Normal ReplicaConn mariadb/mariadb-repl Connecting replicas to new primary at '0'
7s Normal PrimaryToReplica mariadb/mariadb-repl Unlocking primary '1' and configuring it to be a replica. New primary at '0'
7s Normal PrimaryLock mariadb/mariadb-repl Unlocking primary
7s Normal PrimarySwitched mariadb/mariadb-repl Primary switched from index '1' to index '0'
Error 1236: Got fatal error from master when reading data from binary log.
2025-10-27 15:17:11 error : [mariadbmon] 'mariadb-repl-1' is not a valid demotion target for switchover: it does not have a 'gtid_binlog_pos'.
error creating Job: Job.batch \"mariadb-repl-operator-test-new-physicalbackup-scale-out-20251208221943\"
is invalid: spec.template.labels:
Invalid value: \"mariadb-repl-operator-test-new-physicalbackup-scale-out-20251208221943\":
must be no more than 63 characters
Helm is the preferred way to install MariaDB Enterprise Kubernetes Operator in Kubernetes clusters. This documentation provides guidance on managing the installation and upgrades of both the CRDs and the operator via Helm charts.
MariaDB Enterprise Kubernetes Operator is split into two Helm charts for convenience:
mariadb-enterprise-operator-crds: Bundles the CRDs required by the operator.
mariadb-enterprise-operator: Contains all the template manifests required to install the operator. Refer to the helm chart values section for detailed information about the supported values.
Control-plane
The operator extends the Kubernetes control plane and consists of the following components deployed via Helm:
operator: The mariadb-enterprise-operator itself that performs the CRD reconciliation.
webhook: The Kubernetes control-plane delegates CRD validations to this HTTP server. Kubernetes requires TLS to communicate with the webhook server.
Installing CRDs
Helm has certain limitations when it comes to managing CRDs. To address this, we provide the CRDs in a separate chart, mariadb-enterprise-operator-crds. This allows the installation and updates of the CRDs to be managed independently from the operator. For example, you can uninstall the operator without impacting your existing MariaDB CRDs.
CRDs can be installed in your cluster by running the following commands:
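A command sketch (the chart repository URL and alias are assumptions; use the registry provided with your MariaDB subscription):

```bash
helm repo add mariadb-enterprise https://helm.mariadb.com/mariadb-enterprise-operator
helm repo update
helm install mariadb-enterprise-operator-crds \
  mariadb-enterprise/mariadb-enterprise-operator-crds
```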
Installing the operator
The first step is to prepare a values.yaml file with your previously configured settings:
Then, you can proceed to install the operator:
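For example (repository alias as assumed above):

```bash
helm install mariadb-enterprise-operator \
  mariadb-enterprise/mariadb-enterprise-operator \
  -f values.yaml
```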
If you have the Prometheus operator and cert-manager already installed in your cluster, it is recommended to leverage them to scrape the operator metrics and provision the webhook certificate, respectively:
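The corresponding values (taken from the helm chart values reference) would be:

```yaml
metrics:
  enabled: true          # requires Prometheus in the cluster
webhook:
  cert:
    certManager:
      enabled: true      # issue the webhook certificate with cert-manager
```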
Refer to the helm chart values section for detailed information about the supported values.
Long-Term Support Versions
MariaDB Enterprise Kubernetes Operator provides stable Long-Term Support (LTS) versions.
Version
Supported Kubernetes Versions
Description
If you instead wish to install a specific LTS release, you can do:
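For example (repository alias as assumed above):

```bash
helm install mariadb-enterprise-operator \
  mariadb-enterprise/mariadb-enterprise-operator \
  --version "25.10.*"
```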
Where: --version "25.10.*" installs the most recent available release within the 25.10 series.
Deployment modes
The following deployment modes are supported:
Cluster-wide
The operator watches CRDs in all namespaces and requires cluster-wide RBAC permissions to operate. This is the default deployment mode, enabled through the default configuration values:
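In values.yaml terms, this is simply the default:

```yaml
# Watch all namespaces with cluster-wide RBAC (default).
currentNamespaceOnly: false
```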
Single namespace
By setting currentNamespaceOnly=true, the operator will only watch CRDs within the namespace it is deployed in, and the RBAC permissions will be restricted to that namespace as well:
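For example:

```yaml
# Watch only the operator's own namespace; RBAC restricted accordingly.
currentNamespaceOnly: true
```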
Updates
Make sure you read and understand the update documentation before proceeding to update the operator.
To install a specific LTS version instead, replace <new-version> with your desired LTS release. For example, --version "25.10.*" will automatically install the latest available patch within that LTS series.
The first step is upgrading the CRDs that the operator depends on:
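A command sketch (repository alias as assumed above; <new-version> stays as your target release):

```bash
helm upgrade mariadb-enterprise-operator-crds \
  mariadb-enterprise/mariadb-enterprise-operator-crds \
  --version <new-version>
```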
Once updated, you may proceed to upgrade the operator:
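For example:

```bash
helm upgrade mariadb-enterprise-operator \
  mariadb-enterprise/mariadb-enterprise-operator \
  --version <new-version> \
  -f values.yaml
```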
Whenever a new version of the operator is released, an upgrade guide is linked in the release notes if additional upgrade steps are required. Be sure to review the release notes and follow the version-specific upgrade guides accordingly.
Operator high availability
The operator can run in high availability mode to prevent downtime during updates and ensure continuous reconciliation of your CRs, even if the node where the operator runs goes down. To achieve this, you need:
Multiple replicas
Configure Pod anti-affinity
Configure PodDisruptionBudgets
You can achieve this by providing the following values to the helm chart:
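A sketch of such values (ha.* and pdb.* keys are from the helm chart values reference; the anti-affinity label selector is an assumption):

```yaml
ha:
  enabled: true
  replicas: 3
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: mariadb-enterprise-operator  # label assumed
        topologyKey: kubernetes.io/hostname
pdb:
  enabled: true
  maxUnavailable: 1
```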
You may similarly configure the webhook and cert-controller components to run in high availability mode by providing the same values to their respective sections. Refer to the helm chart values section for detailed information.
Uninstalling
Uninstalling the mariadb-enterprise-operator-crds Helm chart will remove the CRDs and their associated resources, resulting in downtime.
First, uninstall the mariadb-enterprise-operator Helm chart. This action will not delete your CRDs, so your operands (i.e. MariaDB and MaxScale) will continue to run without the operator's reconciliation.
At this point, if you also want to delete CRDs and the operands running in your cluster, you may proceed to uninstall the mariadb-enterprise-operator-crds Helm chart:
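A command sketch of the two steps, in order:

```bash
# Step 1: remove the operator; CRDs and operands keep running.
helm uninstall mariadb-enterprise-operator
# Step 2 (optional, destructive): removes CRDs and ALL operands (MariaDB, MaxScale).
helm uninstall mariadb-enterprise-operator-crds
```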
Point-in-time recovery (PITR) allows you to restore a MariaDB instance to a specific point in time. To achieve this, it combines a full base backup with the binary logs that record all changes made to the database after the backup. This is fully automated by the operator, covering archival and restoration up to a specific time, ensuring business continuity and reduced RTO and RPO.
Supported MariaDB versions and topologies
The operator uses mariadb-binlog to replay binary logs; in particular, it filters binlog events by passing a GTID to mariadb-binlog via a command-line flag. This is only supported by MariaDB Server 10.8 and later, so make sure you are using a compatible MariaDB version.
cert-controller: Provisions TLS certificates for the webhook. You can see it as a minimal cert-manager that is intended to work only with the webhook. It is optional and can be replaced by cert-manager.
certController.certLifetime
string
"2160h"
Certificate lifetime.
certController.enabled
bool
true
Specifies whether the cert-controller should be created.
certController.extraArgs
list
[]
Extra arguments to be passed to the cert-controller entrypoint
certController.extraVolumeMounts
list
[]
Extra volumes to mount to cert-controller container
certController.extraVolumes
list
[]
Extra volumes to pass to cert-controller Pod
certController.ha.enabled
bool
false
Enable high availability
certController.ha.replicas
int
3
Number of replicas
certController.image.pullPolicy
string
"IfNotPresent"
certController.image.repository
string
"docker.mariadb.com/mariadb-enterprise-operator"
certController.image.tag
string
""
Image tag to use. By default the chart appVersion is used
certController.imagePullSecrets
list
[]
certController.nodeSelector
object
{}
Node selectors to add to cert-controller container
certController.pdb.enabled
bool
false
Enable PodDisruptionBudget for the cert-controller.
certController.pdb.maxUnavailable
int
1
Maximum number of unavailable Pods. You may also give a percentage, like 50%
certController.podAnnotations
object
{}
Annotations to add to cert-controller Pod
certController.podSecurityContext
object
{}
Security context to add to cert-controller Pod
certController.priorityClassName
string
""
priorityClassName to add to cert-controller container
certController.privateKeyAlgorithm
string
"ECDSA"
Private key algorithm to be used for the CA and leaf certificate private keys. One of: ECDSA or RSA.
certController.privateKeySize
int
256
Private key size to be used for the CA and leaf certificate private keys. Supported values: ECDSA(256, 384, 521), RSA(2048, 3072, 4096)
certController.renewBeforePercentage
int
33
How long before the certificate expiration should the renewal process be triggered. For example, if a certificate is valid for 60 minutes, and renewBeforePercentage=25, cert-controller will begin to attempt to renew the certificate 45 minutes after it was issued (i.e. when there are 15 minutes (25%) remaining until the certificate is no longer valid).
certController.requeueDuration
string
"5m"
Requeue duration to ensure that certificate gets renewed.
certController.resources
object
{}
Resources to add to cert-controller container
certController.securityContext
object
{}
Security context to add to cert-controller Pod
certController.serviceAccount.annotations
object
{}
Annotations to add to the service account
certController.serviceAccount.automount
bool
true
Automounts the service account token in all containers of the Pod
certController.serviceAccount.enabled
bool
true
Specifies whether a service account should be created
certController.serviceAccount.extraLabels
object
{}
Extra Labels to add to the service account
certController.serviceAccount.name
string
""
The name of the service account to use. If not set and enabled is true, a name is generated using the fullname template
certController.serviceMonitor.additionalLabels
object
{}
Labels to be added to the cert-controller ServiceMonitor
certController.serviceMonitor.enabled
bool
true
Enable cert-controller ServiceMonitor. Metrics must be enabled
certController.serviceMonitor.interval
string
"30s"
Interval to scrape metrics
certController.serviceMonitor.metricRelabelings
list
[]
certController.serviceMonitor.relabelings
list
[]
certController.serviceMonitor.scrapeTimeout
string
"25s"
Timeout if metrics can't be retrieved in given time interval
certController.tolerations
list
[]
Tolerations to add to cert-controller container
certController.topologySpreadConstraints
list
[]
topologySpreadConstraints to add to cert-controller container
clusterName
string
"cluster.local"
Cluster DNS name
config.exporterImage
string
"mariadb/mariadb-prometheus-exporter-ubi:1.1.1"
Default MariaDB exporter image
config.exporterMaxscaleImage
string
"mariadb/maxscale-prometheus-exporter-ubi:1.1.1"
Default MaxScale exporter image
config.galeraLibPath
string
"/usr/lib64/galera/libgalera_enterprise_smm.so"
Galera Enterprise library path to be used with Galera
config.mariadbDefaultVersion
string
"11.8"
Default MariaDB Enterprise version to be used when unable to infer it via image tag
config.mariadbImage
string
"docker.mariadb.com/enterprise-server:11.8.5-2"
Default MariaDB Enterprise image
config.mariadbImageName
string
"docker.mariadb.com/enterprise-server"
Default MariaDB Enterprise image name
config.maxscaleImage
string
"docker.mariadb.com/maxscale:25.10.1"
Default MaxScale Enterprise image
crds
object
{"enabled":false}
CRDs
crds.enabled
bool
false
Whether the helm chart should create and update the CRDs. It is false by default, which implies that the CRDs must be managed independently with the mariadb-enterprise-operator-crds helm chart. WARNING This should only be set to true during the initial deployment. If this chart manages the CRDs and is later uninstalled, all MariaDB instances will be DELETED.
currentNamespaceOnly
bool
false
Whether the operator should watch CRDs only in its own namespace or not.
extraArgs
list
[]
Extra arguments to be passed to the controller entrypoint
extraEnv
list
[]
Extra environment variables to be passed to the controller
extraEnvFrom
list
[]
Extra environment variables from preexisting ConfigMap / Secret objects used by the controller using envFrom
extraVolumeMounts
list
[]
Extra volumes to mount to the container.
extraVolumes
list
[]
Extra volumes to pass to pod.
fullnameOverride
string
""
ha.enabled
bool
false
Enable high availability of the controller. If you enable it, we recommend setting affinity and pdb
ha.replicas
int
3
Number of replicas
image.pullPolicy
string
"IfNotPresent"
image.repository
string
"docker.mariadb.com/mariadb-enterprise-operator"
image.tag
string
""
Image tag to use. By default the chart appVersion is used
imagePullSecrets
list
[]
logLevel
string
"INFO"
Controller log level
metrics.enabled
bool
false
Enable operator internal metrics. Prometheus must be installed in the cluster
metrics.serviceMonitor.additionalLabels
object
{}
Labels to be added to the controller ServiceMonitor
metrics.serviceMonitor.enabled
bool
true
Enable controller ServiceMonitor
metrics.serviceMonitor.interval
string
"30s"
Interval to scrape metrics
metrics.serviceMonitor.metricRelabelings
list
[]
metrics.serviceMonitor.relabelings
list
[]
metrics.serviceMonitor.scrapeTimeout
string
"25s"
Timeout if metrics can't be retrieved in given time interval
nameOverride
string
""
nodeSelector
object
{}
Node selectors to add to controller Pod
pdb.enabled
bool
false
Enable PodDisruptionBudget for the controller.
pdb.maxUnavailable
int
1
Maximum number of unavailable Pods. You may also give a percentage, like 50%
podAnnotations
object
{}
Annotations to add to controller Pod
podSecurityContext
object
{}
Security context to add to controller Pod
pprof.enabled
bool
false
Enable the pprof HTTP server.
pprof.port
int
6060
The port where the pprof HTTP server listens.
priorityClassName
string
""
priorityClassName to add to controller Pod
rbac.aggregation.enabled
bool
true
Specifies whether the cluster roles aggregate to the view and edit predefined roles
rbac.enabled
bool
true
Specifies whether RBAC resources should be created
resources
object
{}
Resources to add to controller container
securityContext
object
{}
Security context to add to controller container
serviceAccount.annotations
object
{}
Annotations to add to the service account
serviceAccount.automount
bool
true
Automounts the service account token in all containers of the Pod
serviceAccount.enabled
bool
true
Specifies whether a service account should be created
serviceAccount.extraLabels
object
{}
Extra Labels to add to the service account
serviceAccount.name
string
""
The name of the service account to use. If not set and enabled is true, a name is generated using the fullname template
tolerations
list
[]
Tolerations to add to controller Pod
topologySpreadConstraints
list
[]
topologySpreadConstraints to add to controller Pod
webhook.affinity
object
{}
Affinity to add to webhook Pod
webhook.annotations
object
{}
Annotations for webhook configurations.
webhook.cert.ca.key
string
""
File under 'ca.path' that contains the full CA trust chain.
webhook.cert.ca.path
string
""
Path that contains the full CA trust chain.
webhook.cert.certManager.duration
string
""
Duration to be used in the Certificate resource.
webhook.cert.certManager.enabled
bool
false
Whether to use cert-manager to issue and rotate the certificate. If set to false, mariadb-enterprise-operator's cert-controller will be used instead.
webhook.cert.certManager.issuerRef
object
{}
Issuer reference to be used in the Certificate resource. If not provided, a self-signed issuer will be used.
webhook.cert.certManager.privateKeyAlgorithm
string
"ECDSA"
Private key algorithm to be used for the CA and leaf certificate private keys. One of: ECDSA or RSA.
webhook.cert.certManager.privateKeySize
int
256
Private key size to be used for the CA and leaf certificate private keys. Supported values: ECDSA(256, 384, 521), RSA(2048, 3072, 4096)
webhook.cert.certManager.renewBefore
string
""
Renew before duration to be used in the Certificate resource.
webhook.cert.certManager.revisionHistoryLimit
int
3
The maximum number of CertificateRequest revisions that are maintained in the Certificate’s history.
webhook.cert.path
string
"/tmp/k8s-webhook-server/serving-certs"
Path where the certificate will be mounted. 'tls.crt' and 'tls.key' certificates files should be under this path.
webhook.cert.secretAnnotations
object
{}
Annotations to be added to webhook TLS secret.
webhook.cert.secretLabels
object
{}
Labels to be added to webhook TLS secret.
webhook.enabled
bool
true
Specifies whether the webhook should be created.
webhook.extraArgs
list
[]
Extra arguments to be passed to the webhook entrypoint
webhook.extraVolumeMounts
list
[]
Extra volumes to mount to webhook container
webhook.extraVolumes
list
[]
Extra volumes to pass to webhook Pod
webhook.ha.enabled
bool
false
Enable high availability
webhook.ha.replicas
int
3
Number of replicas
webhook.hostNetwork
bool
false
Expose the webhook server in the host network
webhook.image.pullPolicy
string
"IfNotPresent"
webhook.image.repository
string
"docker.mariadb.com/mariadb-enterprise-operator"
webhook.image.tag
string
""
Image tag to use. By default the chart appVersion is used
webhook.imagePullSecrets
list
[]
webhook.nodeSelector
object
{}
Node selectors to add to webhook Pod
webhook.pdb.enabled
bool
false
Enable PodDisruptionBudget for the webhook.
webhook.pdb.maxUnavailable
int
1
Maximum number of unavailable Pods. You may also give a percentage, like 50%
webhook.podAnnotations
object
{}
Annotations to add to webhook Pod
webhook.podSecurityContext
object
{}
Security context to add to webhook Pod
webhook.port
int
9443
Port to be used by the webhook server
webhook.priorityClassName
string
""
priorityClassName to add to webhook Pod
webhook.resources
object
{}
Resources to add to webhook container
webhook.securityContext
object
{}
Security context to add to webhook container
webhook.serviceAccount.annotations
object
{}
Annotations to add to the service account
webhook.serviceAccount.automount
bool
true
Automounts the service account token in all containers of the Pod
webhook.serviceAccount.enabled
bool
true
Specifies whether a service account should be created
webhook.serviceAccount.extraLabels
object
{}
Extra Labels to add to the service account
webhook.serviceAccount.name
string
""
The name of the service account to use. If not set and enabled is true, a name is generated using the fullname template
webhook.serviceMonitor.additionalLabels
object
{}
Labels to be added to the webhook ServiceMonitor
webhook.serviceMonitor.enabled
bool
true
Enable webhook ServiceMonitor. Metrics must be enabled
webhook.serviceMonitor.interval
string
"30s"
Interval to scrape metrics
webhook.serviceMonitor.metricRelabelings
list
[]
webhook.serviceMonitor.relabelings
list
[]
webhook.serviceMonitor.scrapeTimeout
string
"25s"
Timeout if metrics can't be retrieved in given time interval
webhook.tolerations
list
[]
Tolerations to add to webhook Pod
webhook.topologySpreadConstraints
list
[]
topologySpreadConstraints to add to webhook Pod
25.10
>=1.32.0-0 <= 1.34.0-0
LTS 25.10. Tested to work up to Kubernetes v1.34.
CA certificate lifetime. It must be greater than certLifetime.
Regarding supported MariaDB topologies, at the moment binary log archiving and point-in-time recovery are only supported by the asynchronous replication topology, which already relies on the binary logs for replication. Galera and standalone topologies will be supported in upcoming releases.
Storage types
Full base backups and binary logs can be stored in the following object storage types:
For additional details on configuring storage, please refer to the storage types section in the physical backup documentation; the same settings are applicable to the PointInTimeRecovery object.
Configuration
To be able to perform a point-in-time restoration, a physical backup should be configured as the full base backup. For example, you can configure a nightly backup:
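A sketch of such a nightly PhysicalBackup (API group, storage layout, and bucket are assumptions):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: physicalbackup-nightly
spec:
  mariaDbRef:
    name: mariadb
  schedule:
    cron: "0 2 * * *"          # every night at 02:00
  storage:
    s3:
      bucket: backups           # hypothetical bucket
      endpoint: s3.amazonaws.com
```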
Refer to the full base backup section for additional details on how to configure the full base backup.
The next step is configuring common aspects of both binary log archiving and point-in-time restoration by defining a PointInTimeRecovery object:
physicalBackupRef: It is a reference to the PhysicalBackup resource used as full base backup. See full base backup.
storage: Object storage configuration for binary logs. See storage types.
compression: Algorithm to be used for compressing binary logs. It is disabled by default. See the compression section.
archiveTimeout: Maximum duration for the binary log archival. If exceeded, the agent will return an error and archival will be retried in the next archive cycle. Defaults to 1h.
archiveInterval: Interval at which the binary logs will be archived. Defaults to 10m. See the archival section for additional details.
maxParallel: Maximum number of workers that can be used for parallel binary log archival and restoration. Defaults to 1.
maxRetention: Maximum retention duration for binary logs. By default, binary logs are not automatically deleted.
strictMode: Controls the behavior when a point-in-time restoration cannot reach the exact target time. It is disabled by default.
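Putting these fields together, a PointInTimeRecovery object might look like the following sketch (field names are taken from the list above; the API group and storage layout are assumptions):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: PointInTimeRecovery
metadata:
  name: pitr
spec:
  physicalBackupRef:
    name: physicalbackup-nightly   # the full base backup
  storage:
    s3:
      bucket: binlogs              # hypothetical bucket for binary logs
      endpoint: s3.amazonaws.com
  compression: gzip
  archiveTimeout: 1h
  archiveInterval: 10m
  maxParallel: 4
  maxRetention: 720h               # 30 days
  strictMode: false
```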
With this configuration in place, you can enable binary log archival in a MariaDB instance by setting a reference to the PointInTimeRecovery object:
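A sketch of the MariaDB side (the reference field name is an assumption; check the CRD reference for the exact key):

```yaml
spec:
  pointInTimeRecoveryRef:   # field name assumed
    name: pitr
```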
Once a full base backup has been completed and the binary logs have been archived, you can perform a point-in-time restoration. For example, you can create a new MariaDB instance with the following configuration:
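A heavily hedged sketch of such a restoration (both field names and their placement under bootstrapFrom are assumptions):

```yaml
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-restored
spec:
  bootstrapFrom:
    pointInTimeRecoveryRef:                  # field name assumed
      name: pitr
    targetRecoveryTime: "2025-12-01T10:00:00Z"   # field name assumed
```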
To enable point-in-time recovery, a PhysicalBackup resource should be configured as full base backup. The backup should be a complete snapshot of the database at a specific point in time, and it will serve as the starting point for replaying the binary logs. Any of the supported backup strategies can be used as full base backup, as all of them provide a consistent snapshot of the database and a starting GTID position.
It is very important to note that a full physical backup must be completed before a point-in-time restoration can be performed. This is something the operator accounts for when computing the last recoverable time.
To further expand the last recoverable time, it is recommended to take physical backups after the primary Pod has changed. This can be automated by setting schedule.onPrimaryChange, as documented in the physical backup docs:
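In the PhysicalBackup, this is a schedule setting (schedule.onPrimaryChange is documented in the physical backup docs):

```yaml
spec:
  schedule:
    cron: "0 2 * * *"
    onPrimaryChange: true   # take an extra backup whenever the primary Pod changes
```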
Alternatively, you can schedule an on-demand physical backup or rely on the cron scheduling for doing so:
The backup taken in the new primary will establish a baseline for a new binlog timeline, which will be expanded when new binary logs are archived.
Archival
The mariadb-enterprise-operator sidecar agent will periodically check for new binary logs and archive them to the configured object storage. The archival process is controlled by the archiveInterval and archiveTimeout settings in the PointInTimeRecovery configuration, which determine how often the archival process runs and how long it can take before it is considered failed.
The archival process is performed on the primary Pod in the asynchronous replication topology. You may check the logs of the agent sidecar container, Kubernetes events, and the status of the MariaDB objects to monitor the current status of the archival process:
There are a couple of important considerations regarding binary log archival:
The archival process should start from a clean state, which means that the object storage should be empty at the time of the first archival.
It is not recommended to set archiveInterval to a very low value (< 1m), as it can lead to increased load on the database Pod and the storage system.
If the archival process fails (e.g., due to network issues or storage unavailability), it will be retried in the next archive cycle.
If a binary log expiration server variable is configured, it should be set to a value higher than the archiveInterval to prevent automatic deletion of binary logs before they are archived.
Manually purging binary logs on the database is not recommended, as it can lead to inconsistencies between the database and the archived binary logs.
Manually rotating the binary logs on the database is compatible with the archival process: it forces the active binary log to be closed, and it will be archived by the agent in the next archive cycle.
Binary log size
The server has a default max_binlog_size of 1GB, which means that a new binary log file will be created once the current one reaches that size. This is a sensible default for most cases, but it can be adjusted based on the data volume in order to enable faster archival, and therefore a reduced RPO:
| Environment | Recommended Size | Rationale |
| --- | --- | --- |
| Low Traffic | 128MB | Keeps file size minimal for slow-growing logs. |
| Standard | 256MB | Balances rotation frequency with server overhead. |
| High Throughput | 512MB - 1GB | Reduces the contention caused by frequent rotations in write-heavy environments. |
The smaller the binlog file size, the more frequently the files will be rotated and archived, which can lead to increased load on the database Pod and the storage system. On the other hand, setting a very high binlog file size can lead to longer archival times and increased RPO.
Refer to the configuration documentation for instructions on how to set the max_binlog_size server variable in the MariaDB instance.
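As a hedged sketch, the variable could be set through the MariaDB resource configuration (the myCnf mechanism shown here is an assumption):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1  # assumed API group/version
kind: MariaDB
metadata:
  name: mariadb-repl
spec:
  myCnf: |                 # assumed configuration mechanism
    [mariadb]
    max_binlog_size=256M   # "Standard" recommendation from the table above
```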
Compression
In order to reduce storage usage and save bandwidth during archival and restoration, the operator supports compressing the binary log files. Compression is enabled by setting the compression field in the PointInTimeRecovery configuration:
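For example, a fragment of the PointInTimeRecovery spec (exact field nesting is an assumption):

```yaml
spec:
  compression: gzip   # supported values: bzip2 | gzip | none
```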
The supported compression algorithms are:
bzip2: Good compression ratio, but slower compression/decompression speed compared to gzip.
gzip: Good compression/decompression speed, but worse compression ratio compared to bzip2.
none: No compression.
Compression is disabled by default, and there are some important considerations before enabling it:
Compression is immutable: once binary logs have been archived with a specific algorithm, it cannot be changed. This also applies to restoration, where the same compression algorithm must be configured as the one used for archival.
Although it saves storage space and bandwidth, the restoration process may take longer when compression is enabled, leading to an increased RTO. This can be mitigated by enabling parallelization.
Server-Side Encryption with Customer-Provided Keys (SSE-C) for S3
When using S3-compatible storage, you can enable server-side encryption using your own encryption key (SSE-C) by providing a reference to a Secret containing a 32-byte (256-bit) key encoded in base64:
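A hedged sketch of such a configuration (the sseCustomerKeySecretKeyRef field name is hypothetical; only the 32-byte base64-encoded key requirement is confirmed above):

```yaml
spec:
  s3:
    bucket: backups
    sseCustomerKeySecretKeyRef:   # hypothetical field name
      name: sse-customer-key      # Secret holding the base64-encoded 256-bit key
      key: key
```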
When using SSE-C, you are responsible for managing and securely storing the encryption key. If you lose the key, you will not be able to decrypt your binary logs. Ensure you have proper key management procedures in place.
When replaying SSE-C encrypted binary logs via bootstrapFrom, the same key must be provided in the S3 configuration.
Parallelization
Several tasks during both the archival and restoration processes can take a significant amount of time, especially when managing large data volumes. These tasks include compressing and uploading binary logs during archival, and downloading and decompressing binary logs during restoration. This can lead to longer archival and restoration times, which can impact the RTO.
To mitigate this, the operator supports parallelization of these tasks by using multiple workers. The maximum number of workers can be configured via the maxParallel field in the PointInTimeRecovery configuration:
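For example (exact field nesting is an assumption):

```yaml
spec:
  maxParallel: 4   # up to 4 binary logs processed in parallel
```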
This will create up to 4 workers, each responsible for the operations related to a single binary log, which means that up to 4 binary logs can be processed in parallel. This can significantly reduce archival and restoration times, especially when compression is enabled.
Parallelization is disabled by default (maxParallel: 1), and there are some important considerations to be taken into account when enabling it:
During archival, the workers are spawned in the agent sidecar container, sharing storage with the primary database Pod. Using an elevated number of workers can exhaust the IOPS and/or CPU resources of the primary Pod, which can impact the performance of the database.
During both archival and restoration, using an elevated number of workers can saturate the network bandwidth when pulling/pushing multiple binary logs in parallel, something that can degrade the performance of the database.
Retention policy
Binary logs can grow significantly in size, especially in write-heavy environments, which can lead to increased storage costs. To mitigate this, the operator supports automatic purging of binary logs based on a retention policy defined by the maxRetention field in the PointInTimeRecovery configuration:
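For example (exact field nesting and the duration format are assumptions):

```yaml
spec:
  maxRetention: 720h   # e.g. purge binary logs older than 30 days
```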
The binary logs that exceed the defined retention will be automatically deleted from the object storage after each archival cycle.
By default, binary logs are never purged from object storage, and there are a few considerations regarding configuring a retention policy:
The date of the last event in a binary log is used to determine its age, and therefore whether it should be purged or not.
The maxRetention field should not be set to a value lower than the archiveInterval, as it can lead to situations where binary logs are purged before they can be archived.
Binlog inventory
The operator maintains an inventory of the archived binary logs in an index.yaml file located at the root of the configured object storage. This file contains a list of all the archived binary logs per each server, along with their GTIDs and other metadata utilized internally. Here is an example of the index.yaml file:
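The exact schema is internal to the operator; the following sketch is purely illustrative of the kind of metadata it holds:

```yaml
# Illustrative only: field names are assumptions, not the real schema.
servers:
  mariadb-repl-0:
    binlogs:
      - name: mariadb-bin.000001
        gtid: 0-10-4
        lastEventTime: "2026-02-27T12:30:00Z"
```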
This file is used internally by the operator to keep track of the archived binary logs, and it is updated after each successful archival. It should not be modified manually, as it can lead to inconsistencies between the actual archived binary logs and the inventory.
Taking into account the GTID of the last completed physical backup and the archived binlogs in the inventory, the operator computes a timeline of binary logs that can be replayed, along with its corresponding last recoverable time. The last recoverable time is the latest timestamp that the MariaDB instance can be restored to. This information is crucial for understanding the RPO of the system and for making informed decisions during a recovery process.
You can easily check the last recoverable time by looking at the status of the PointInTimeRecovery object:
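For example:

```
kubectl get pitr

NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
pitr   physicalbackup-daily   2026-02-27T20:10:42Z    false         43h
```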
Then, you may provide exactly this timestamp, or an earlier one, as target recovery time when bootstrapping a new MariaDB instance, as described in the point-in-time restoration section.
Point-in-time restoration
In order to perform a point-in-time restoration, you can create a new MariaDB instance with a reference to the PointInTimeRecovery object in the bootstrapFrom field, along with the targetRecoveryTime field indicating the desired point-in-time to restore to.
For setting the targetRecoveryTime, it is recommended to check the last recoverable time first in the PointInTimeRecovery object:
pointInTimeRecoveryRef: Reference to the PointInTimeRecovery object that contains the configuration for the point-in-time recovery.
targetRecoveryTime: The desired point in time to restore to. It should be in RFC3339 format. If not provided, the current time will be used as target recovery time, which means restoring up to the last recoverable time.
restoreJob: Compute resources and metadata configuration for the restoration job. To reduce RTO, it is recommended to properly tune compute resources.
logLevel: Log level for the operator container, part of the restoration job.
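Putting these fields together, a bootstrap sketch could look like the following (the apiVersion and resource values are assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1  # assumed API group/version
kind: MariaDB
metadata:
  name: mariadb-repl
spec:
  bootstrapFrom:
    pointInTimeRecoveryRef:
      name: pitr
    targetRecoveryTime: "2026-02-27T20:10:42Z"  # RFC3339
    restoreJob:
      resources:        # tune to reduce RTO
        requests:
          cpu: "1"
          memory: 1Gi
    logLevel: info
```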
The restoration process will match the closest physical backup before or at the targetRecoveryTime, and then it will replay the archived binary logs from the backup GTID position up until the targetRecoveryTime:
As you can see, the restoration process includes the following steps:
Perform a rolling restore of the full base backup, one Pod at a time.
Configure replication in the MariaDB instance.
Get the base backup GTID, to be used as the starting point for replaying the binary logs.
Schedule the point-in-time restoration job, which will:
Build the binary log timeline based on the base backup GTID and the targetRecoveryTime.
Pull the binary logs in the timeline into a staging area.
Replay the binary logs from the GTID position of the base backup up to the targetRecoveryTime.
After having completed the restoration process, the following status conditions will be available for you to inspect the restoration process:
Strict mode
The strict mode controls whether the target recovery time provided during the bootstrap process should be strictly met or not. This is configured via the strictMode field in the PointInTimeRecovery configuration, and it is disabled by default:
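For example (exact field nesting is an assumption):

```yaml
spec:
  strictMode: true
```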
When strict mode is enabled (recommended), if the target recovery time cannot be met, the initialization process will return an error early, and the MariaDB instance will not be created. This can happen, for example, if the target recovery time is later than the last recoverable time. Let's assume strict mode is enabled and the last recoverable time is:
If we attempt to provision the following MariaDB instance:
The following errors will be returned, as the target recovery time 2026-02-28T20:10:42Z is later than the last recoverable time 2026-02-27T20:10:42Z:
When strict mode is disabled (the default) and the target recovery time cannot be met, the MariaDB provisioning will proceed and the last recoverable time will be used instead. In this example, the MariaDB instance will be provisioned with a recovery time of 2026-02-27T20:10:42Z, which is the last recoverable time:
After setting strictMode=false, if we attempt to create the same MariaDB instance as before, it will be successfully provisioned, but a recovery time of 2026-02-27T20:10:42Z will be used instead of the requested 2026-02-28T20:10:42Z.
It is important to note that the last recoverable time is stored in the status field of the PointInTimeRecovery object; therefore, if this object is deleted and recreated, the last recoverable time metadata is lost and will not be available until it is recomputed. When it comes to restoration, this implies that the error will be returned later in the process, when computing the binary log timeline, but the strict mode behaviour still applies. This is the error returned for that scenario:
Staging storage
The operator uses a staging area to temporarily store the binary logs during the restoration process. By default, the staging area is an emptyDir volume attached to the restoration job, which means that the binary logs are kept in the node storage where the job has been scheduled. This may not be suitable for large binary logs, as it can exhaust the node's storage, causing the restoration process to fail and potentially impacting other workloads running on the same node.
You can configure an alternative staging area using the stagingStorage field under the bootstrapFrom section of the MariaDB resource:
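A hedged sketch (the persistentVolumeClaim nesting is an assumption):

```yaml
spec:
  bootstrapFrom:
    pointInTimeRecoveryRef:
      name: pitr
    stagingStorage:
      persistentVolumeClaim:   # hypothetical nesting
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
```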
This will provision a PVC and attach it to the restoration job to be used as the staging area.
Limitations
A PointInTimeRecovery object can only be referenced by a single MariaDB object via the pointInTimeRecoveryRef field.
A given combination of object storage bucket and prefix can only be used by a single MariaDB instance to archive binary logs.
Troubleshooting
The operator tracks the current archival status under the MariaDB status subresource. This status is updated after each archival cycle, and it contains metadata about the binary logs that have been archived, along with other useful information for troubleshooting:
Additionally, under the status subresource, the operator sets status conditions whenever a specific state of the binlog archival or point-in-time restoration process is reached:
The operator also emits Kubernetes events during both the archival and restoration processes to report outstanding events or errors:
Common errors
Unable to start archival process
The following error will be returned if the archival process is configured to point at a non-empty object storage, as the operator expects to start from a clean state:
To solve this, you can update the PointInTimeRecovery configuration to point at another object storage bucket or prefix that is empty:
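For example (the s3 layout is an assumption):

```yaml
spec:
  s3:
    bucket: backups
    prefix: binlogs-v2   # point to an empty prefix
```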
After updating the PointInTimeRecovery configuration, the error will be cleared in the next archival cycle, and a new archival operation will be attempted.
Alternatively, you can delete the existing binary logs and the index.yaml inventory file, but only after double-checking that they are not needed for recovery.
Target recovery time is after latest recoverable time
This error is returned during the MariaDB init process when the targetRecoveryTime provided for bootstrapping is later than the last recoverable time reported in the PointInTimeRecovery status.
For example, if you have configured the bootstrapFrom.targetRecoveryTime field with the value 2026-02-28T20:10:42Z, the following error will be returned:
There are two ways to solve this issue:
Update the targetRecoveryTime in the MariaDB resource to be earlier than or equal to the last recoverable time, which in this case is 2026-02-27T20:10:42Z.
Disable strictMode in the PointInTimeRecovery configuration, allowing the restore to proceed up to the latest recoverable time, in this case 2026-02-27T20:10:42Z.
Invalid binary log timeline: error getting binlog timeline between GTID and target time: timeline did not reach target time
This error is returned when computing the binary log timeline during the restoration process, and it means that the operator could not build a timeline that reaches the targetRecoveryTime provided in the bootstrapFrom field of the MariaDB resource.
For example, if your targetRecoveryTime is 2026-02-28T20:10:42Z, the following error will be returned:
There are two ways to solve this issue:
Update the targetRecoveryTime in the MariaDB resource to be earlier than or equal to the last recoverable time, which in this case is 2026-02-27T16:04:15Z.
Disable strictMode in the PointInTimeRecovery configuration, allowing the restore to proceed up to the latest recoverable time, in this case 2026-02-27T16:04:15Z.
```
kubectl get events --field-selector involvedObject.name=mariadb-repl

LAST SEEN   TYPE      REASON             OBJECT                 MESSAGE
41s         Warning   MariaDBInitError   mariadb/mariadb-repl   Unable to init MariaDB: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC
```

```
kubectl get mariadb

NAME           READY   STATUS                                                                                                                   PRIMARY          UPDATES                    AGE
mariadb-repl   False   Init error: target recovery time 2026-02-28 21:10:42 +0100 CET is after latest recoverable time 2026-02-27 20:10:42 +0000 UTC   mariadb-repl-0   ReplicasFirstPrimaryLast   65s
```

```
kubectl get pitr

NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
pitr   physicalbackup-daily   2026-02-27T20:10:42Z    false         43h
```

```
kubectl get events --field-selector involvedObject.name=mariadb-repl

LAST SEEN   TYPE      REASON                  OBJECT                 MESSAGE
12s         Warning   BinlogTimelineInvalid   mariadb/mariadb-repl   Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T21:10:42+01:00
```

```
kubectl get mariadb

NAME           READY   STATUS                                                                                                                                                                                                                              PRIMARY          UPDATES                    AGE
mariadb-repl   False   Error replaying binlogs: Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T21:10:42+01:00   mariadb-repl-0   ReplicasFirstPrimaryLast   3m28s
```

```
kubectl get pitr

NAME   PHYSICAL BACKUP        LAST RECOVERABLE TIME   STRICT MODE   AGE
pitr   physicalbackup-daily   2026-02-27T20:10:42Z    true          43h
```
```
kubectl get mariadb

NAME           READY   STATUS                                                                                                                                                                                                                       PRIMARY          UPDATES                    AGE
mariadb-repl   False   Error replaying binlogs: Invalid binary log timeline: error getting binlog timeline between GTID 0-10-4 and target time 2026-02-28T21:10:42+01:00: timeline did not reach target time: 2026-02-28T21:10:42+01:00, last recoverable time: 2026-02-27T16:04:15Z   mariadb-repl-0   ReplicasFirstPrimaryLast   3m28s
```
Guide to securing database traffic with TLS/SSL certificates, covering internal communication between nodes and external client connections.
MariaDB Enterprise Kubernetes Operator supports issuing, configuring and rotating TLS certificates for both your MariaDB and MaxScale resources. It aims to be secure by default; for this reason, TLS certificates are issued and configured by the operator by default.
MariaDB configuration
This section covers TLS configuration in new instances. If you are looking to migrate an existing instance to use TLS, please refer to instead.
TLS can be configured in MariaDB resources by setting tls.enabled=true:
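For example (the apiVersion is an assumption):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1  # assumed API group/version
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  tls:
    enabled: true
```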
As a result, the operator will generate a Certificate Authority (CA) and use it to issue the leaf certificates mounted by the instance. It is important to note that the TLS connections are not enforced in this case i.e. both TLS and non-TLS connections will be accepted. This is the default behaviour when no tls field is specified.
If you want to enforce TLS connections, you can set tls.required=true:
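For example (a fragment of the MariaDB spec; nesting is an assumption):

```yaml
spec:
  tls:
    enabled: true
    required: true   # reject unencrypted connections
```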
This approach ensures that any unencrypted connection will fail, effectively enforcing security best practices.
If you want to fully opt-out from TLS, you can set tls.enabled=false:
This will disable certificate issuance, resulting in all connections being unencrypted.
Refer to further sections for a more advanced TLS configuration.
MaxScale configuration
This section covers TLS configuration in new instances. If you are looking to migrate an existing instance to use TLS, please refer to instead.
TLS will be automatically enabled in MaxScale when the referred MariaDB (via mariaDbRef) has TLS enabled and enforced. Alternatively, you can explicitly enable TLS by setting tls.enabled=true:
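For example (the apiVersion is an assumption):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1  # assumed API group/version
kind: MaxScale
metadata:
  name: maxscale-galera
spec:
  mariaDbRef:
    name: mariadb-galera
  tls:
    enabled: true
```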
As a result, the operator will generate a Certificate Authority (CA) and use it to issue the leaf certificates mounted by the instance. It is important to note that, unlike MariaDB, MaxScale does not support TLS and non-TLS connections simultaneously (see ). Therefore, TLS connections will be enforced in this case, i.e. unencrypted connections will fail, ensuring security best practices.
If you want to fully opt-out from TLS, you can set tls.enabled=false. This should only be done when MariaDB TLS is not enforced or disabled:
This will disable certificate issuance, resulting in all connections being unencrypted.
Refer to further sections for a more advanced TLS configuration.
MariaDB certificate specification
The MariaDB TLS setup consists of the following certificates:
Certificate Authority (CA) keypair to issue the server certificate.
Server leaf certificate used to encrypt server connections.
Certificate Authority (CA) keypair to issue the client certificate.
As a default behaviour, the operator generates a single CA used for issuing both the server and client certificates, but you can choose dedicated CAs for each case. Root CAs and, in some cases, intermediate CAs are supported; see for further detail.
The server certificate contains the following Subject Alternative Names (SANs):
<mariadb-name>.<namespace>.svc.<cluster-name>
<mariadb-name>.<namespace>.svc
<mariadb-name>.<namespace>
Whereas the client certificate is only valid for the <mariadb-name>-client SAN.
MaxScale certificate specification
The MaxScale TLS setup consists of the following certificates:
Certificate Authority (CA) keypair to issue the admin certificate.
Admin leaf certificate used to encrypt the administrative REST API and GUI.
Certificate Authority (CA) keypair to issue the listener certificate.
As a default behaviour, the operator generates a single CA used for issuing both the admin and listener certificates, but you can choose dedicated CAs for each case. The client certificate and CA bundle configured in the referred MariaDB are used as server certificates by default, but you can provide your own. Root CAs and, in some cases, intermediate CAs are supported; see for further detail.
Both the admin and listener certificates contain the following Subject Alternative Names (SANs):
<maxscale-name>.<namespace>.svc.<cluster-name>
<maxscale-name>.<namespace>.svc
<maxscale-name>.<namespace>
For details about the server certificate, see .
CA bundle
As shown in the certificate specification sections, the TLS setup involves multiple CAs. In order to establish trust in a more convenient way, the operator groups the CAs together in a CA bundle that will need to be specified when connecting to the instances. Every MariaDB and MaxScale resource has a dedicated bundle of its own, available in a Secret named <instance-name>-ca-bundle.
These trust bundles contain the non-expired CAs needed to connect to the instances. New CAs are automatically added to the bundle after renewal, whilst old CAs are removed after they expire. It is important to note that both the new and old CAs remain in the bundle for a while to ensure a smooth update when the new certificates are issued by the new CA.
Issue certificates with the operator
By setting tls.enabled=true, the operator will generate a root CA for each instance, which will be used to issue the certificates described in the certificate specification sections:
To establish trust with the instances, the CA's public key will be added to the CA bundle. If you need a different trust chain, please refer to the section.
The advantage of this approach is that the operator fully manages the Secrets that contain the certificates without relying on any third-party dependency. Also, since the operator fully controls the renewal process, it is able to pause a leaf certificate renewal if the CA is being updated at that moment, as described in the section.
Issue certificates with cert-manager
must be previously installed in the cluster in order to use this feature.
cert-manager is the de facto standard for managing certificates in Kubernetes. It is a Kubernetes-native certificate management controller that allows you to automatically provision, manage and renew certificates. It supports multiple issuers (in-cluster, HashiCorp Vault...), which are configured as Issuer or ClusterIssuer resources.
As an example, we are going to set up an in-cluster root CA ClusterIssuer:
Then, you can reference the ClusterIssuer in the MariaDB and MaxScale resources:
The operator will create cert-manager Certificate resources for each certificate, and will mount the resulting Secrets in the instances. These Secrets containing the certificates will be managed by cert-manager, as will their renewal process.
To establish trust with the instances, the in the Secret will be added to the . If you need a different trust chain, please refer to the section.
The advantage of this approach is that you can use any of the supported issuers, such as the in-cluster CA or HashiCorp Vault, and potentially reuse the same Issuer/ClusterIssuer with multiple instances.
Provide your own certificates
Providing your own certificates is as simple as creating Secrets with the appropriate structure and referencing them in the MariaDB and MaxScale resources. The certificates must comply with the MariaDB and MaxScale certificate specifications.
The CA certificate must be provided as a Secret with the following structure:
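For instance (the Secret name is illustrative; the keys and label follow the notes below):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-ca
  labels:
    enterprise.mariadb.com/watch: ""   # optional: enables automatic updates on renewal
type: Opaque
data:
  ca.crt: <base64-encoded PEM>   # CA public certificate
  ca.key: <base64-encoded PEM>   # optional: only if the operator should issue certificates
```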
The ca.key field is only required if you want the operator to automatically re-issue certificates with this CA; see for further detail. In other words, if only ca.crt is provided, the operator will trust this CA by adding it to the CA bundle, but no certificates will be issued with it, and the user will be responsible for updating the certificate Secret manually with renewed certificates.
The enterprise.mariadb.com/watch label is required only if you want the operator to automatically trigger an update when the CA is renewed, see for more detail.
The leaf certificate must match the previous CA's public key, and it should be provided as a Secret with the following structure:
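For instance (the Secret name is illustrative; the kubernetes.io/tls type is an assumption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-server-cert
  labels:
    enterprise.mariadb.com/watch: ""   # optional: enables automatic updates on renewal
type: kubernetes.io/tls   # assumed standard TLS Secret type
data:
  tls.crt: <base64-encoded PEM>
  tls.key: <base64-encoded PEM>
```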
The enterprise.mariadb.com/watch label is required only if you want the operator to automatically trigger an update when the certificate is renewed, see for more detail.
Once the certificate Secrets are available in the cluster, you can create the MariaDB and MaxScale resources referencing them:
Bring your own CA
If you already have a CA setup outside of Kubernetes, you can use it with the operator by providing the CA certificate as a Secret with the following structure:
Just by providing a reference to this Secret, the operator will use it to issue leaf certificates instead of generating a new CA:
Intermediate CAs
Intermediate CAs are supported by the operator. Leaf certificates issued by intermediate CAs are slightly different: they include the intermediate CA public key as part of the certificate, in the following order: leaf certificate -> intermediate CA. This is a common practice to easily establish trust in complex PKI setups where multiple CAs are involved.
Many applications support this leaf certificate -> intermediate CA structure as a valid leaf certificate, and are able to establish trust with the intermediate CA. Normally, the intermediate CA will not be directly trusted, but used as a path to the root CA, which should be trusted by the application. If not already trusted, you can add the root CA to the CA bundle by using a custom trust Secret.
Custom trust
You can provide a set of CA public keys to be added to the CA bundle by creating a Secret with the following structure:
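For instance (the Secret and key names are assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: custom-trust
type: Opaque
data:
  ca.crt: <base64-encoded PEM bundle>   # one or more concatenated CA certificates
```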
And referencing it in the MariaDB and MaxScale resources, for instance:
This is especially useful when issuing certificates with an intermediate CA; see the Intermediate CAs section for further detail.
Distributing trust
Distributing the CA bundle to your application namespace is out of the scope of this operator; the bundles will remain in the same namespace as the MariaDB and MaxScale instances.
If your application is in a different namespace, you can copy the CA bundle to the application namespace. Projects like trust-manager can help you automate this process and continuously reconcile bundle changes.
TLS version configuration
You may configure the supported TLS versions in MariaDB by setting:
If not specified, the MariaDB's default TLS versions will be used. See .
Regarding MaxScale, you can also configure the supported TLS versions, both for the Admin REST API and MariaDB servers:
If not specified, the MaxScale's default TLS versions will be used. See MaxScale docs:
Certificate lifetime configuration
By default, CA certificates are valid for 3 years, while leaf certificates have a validity of 3 months. This lifetime can be customized in both MariaDB and MaxScale resources through the certificate configuration fields. For example:
When issuing certificates with cert-manager, you can specify the certificate configuration field alongside the issuer reference:
Private key configuration
By default, private keys are generated with the ECDSA algorithm and a size of 256. You can customize the private key configuration in both MariaDB and MaxScale resources through the certificate configuration fields. For example:
When issuing certificates with cert-manager, you can specify the private key configuration field alongside the issuer reference:
The following set of algorithms and sizes are supported:
Algorithm
Key Sizes
CA renewal
Depending on the setup, CAs can be managed and renewed by either MariaDB Enterprise Kubernetes Operator or cert-manager.
When managed by the operator, CAs have a lifetime of 3 years by default, and are marked for renewal after 66% of their lifetime has passed, i.e. ~2 years. After being renewed, the operator will trigger an update of the instances to include the new CA in the bundle.
When managed by cert-manager, the renewal process is fully controlled by cert-manager, but the operator will also update the CA bundle after the CA is renewed.
You may choose any of the available update strategies to control the instance update process.
Certificate renewal
Depending on the setup, certificates can be managed and renewed by the operator or cert-manager. In either case, certificates have a lifetime of 90 days by default, and are marked for renewal after 66% of their lifetime has passed, i.e. ~60 days.
When the certificates are managed by the operator, it is able to pause a leaf certificate renewal if the CA is being updated at that same moment. This approach ensures a smooth update by avoiding the simultaneous rollout of a new CA and its associated certificates. Rolling them out together could be problematic, as all Pods need to trust the new CA before its issued certificates can be utilized.
When the certificates are managed by cert-manager, the renewal process is fully handled by cert-manager, and the operator will not interfere with it. The operator will only update the instances whenever the CA or the certificates get renewed.
You may choose any of the available update strategies to control the instance update process.
Certificate status
To have a high level picture of the certificates status, you can check the status.tls field of the MariaDB and MaxScale resources:
TLS requirements for Users
You can declaratively manage access to your MariaDB instances by creating User resources. In particular, when TLS is enabled, you can define additional requirements that users must satisfy when connecting over TLS.
For instance, if you want to require a valid x509 certificate for the user to be able to connect:
In order to restrict which subject the user certificate should have and/or require a particular issuer, you may set:
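A hedged sketch of such a User spec (the require field names mirror the upstream operator and are assumptions here; subject and issuer values are illustrative):

```yaml
spec:
  require:           # assumed field names
    x509: true
    subject: "/CN=mariadb-galera-client"
    issuer: "/CN=mariadb-galera-ca"
```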
When any of these TLS requirements are not met, the user will not be able to connect to the instance.
See and the for further detail.
Galera Enterprise SSL modes
MariaDB Enterprise Cluster (Galera) supports multiple SSL modes to secure the communication between the nodes. For configuring the SSL enforcement level on the server i.e. WSREP, you can set:
The following values are supported: SERVER_X509, SERVER and PROVIDER. Refer to the for further detail about these modes.
You may also configure the SSL enforcement level used during State Snapshot Transfers (SST) by setting:
The following values are supported: VERIFY_IDENTITY, VERIFY, REQUIRED and DISABLED. Refer to the for further detail about these modes.
If you want to increase the enforcement level in an existing instance, make sure you follow the migration guide provided in the section.
Secure application connections with TLS
In this guide, we will configure TLS for an application running in the app namespace to connect with MariaDB and MaxScale instances deployed in the default namespace. We assume that the following resources are already present in the default namespace with TLS enabled:
The first step is to create a User resource and grant the necessary permissions:
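A sketch of the User and Grant resources (the apiVersion and field names are assumptions; the subject and issuer match the description below):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1  # assumed API group/version
kind: User
metadata:
  name: app
spec:
  mariaDbRef:
    name: mariadb-galera
  require:                                # assumed field names
    subject: "/CN=mariadb-galera-client"
    issuer: "/CN=mariadb-galera-ca"
---
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Grant
metadata:
  name: app-grant
spec:
  mariaDbRef:
    name: mariadb-galera
  username: app
  database: "*"
  table: "*"
  privileges:
    - SELECT
```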
The app user will be able to connect to the MariaDB instance from the app namespace by providing a certificate with subject mariadb-galera-client and issued by the mariadb-galera-ca CA.
With the permissions in place, the next step is to prepare the certificates required for the application to connect:
CA Bundle: The trust bundle for MariaDB and MaxScale is available as a Secret named <instance-name>-ca-bundle in the default namespace. For more details, refer to the sections on and .
Client Certificate: the client certificate required to connect to the MariaDB instance.
In this example, we assume that the following Secrets are available in the app namespace:
mariadb-bundle: CA bundle for the MariaDB and MaxScale instances.
mariadb-galera-client-cert: Client certificate required to connect to the MariaDB instance.
With these Secrets in place, we can proceed to define our application:
The application will connect to the MariaDB instance using the app user, and will execute a simple query to check the connection status. The --ssl-ca, --ssl-cert, --ssl-key and --ssl-verify-server-cert flags are used to provide the CA bundle, client certificate and key, and to verify the server certificate respectively.
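A sketch of such a client invocation (the host and certificate mount paths are illustrative):

```
mariadb -h mariadb-galera.default.svc.cluster.local -u app \
  --ssl-ca=/etc/ssl/bundle/ca.crt \
  --ssl-cert=/etc/ssl/client/tls.crt \
  --ssl-key=/etc/ssl/client/tls.key \
  --ssl-verify-server-cert \
  -e "SELECT 1;"
```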
If the connection is successful, the output should be:
You can also point the application to the MaxScale instance by updating the host to maxscale-galera.default.svc.cluster.local:
If successful, the expected output is:
Test TLS certificates with Connections
In order to validate your TLS setup, and to ensure that your TLS certificates are correctly issued and configured, you can use the Connection resource to test connectivity to both your MariaDB and MaxScale instances:
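For example (the apiVersion and the TLS-related field name are assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1  # assumed API group/version
kind: Connection
metadata:
  name: connection-mariadb
spec:
  mariaDbRef:
    name: mariadb-galera
  username: app
  tlsClientCertSecretRef:   # hypothetical field for the client certificate
    name: mariadb-galera-client-cert
```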
If successful, the Connection resource will be in a Ready state, which means that your TLS setup is correctly configured:
This can be especially useful when issuing certificates for your applications.
Limitations
Galera and intermediate CAs
Leaf certificates issued by intermediate CAs are not supported by Galera. This implies that a root CA must be used to issue the MariaDB certificates.
This doesn't affect MaxScale, as it is able to establish trust with intermediate CAs; you can therefore still issue your application-facing certificates (MaxScale listeners) with an intermediate CA, giving you more flexibility in your PKI setup.
MaxScale
Unlike MariaDB, MaxScale does not support TLS and non-TLS connections simultaneously on the same port.
TLS encryption must be enabled for listeners when they are created. For servers, TLS can be enabled after creation, but it cannot be disabled or altered afterwards.
Information on how to enable and collect performance metrics from managed database instances for monitoring with tools like Prometheus and Grafana.
MariaDB Enterprise Kubernetes Operator is able to configure Prometheus operator resources to scrape metrics from MariaDB and MaxScale instances. These metrics can be used later on to build Grafana dashboards or trigger Alertmanager alerts.
Operator metrics
In order to expose the operator's internal metrics, you can install the operator Helm chart passing the metrics.enabled = true value. Refer to the Helm documentation for further details.
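For instance, a Helm invocation along these lines (the chart repository, release name, and namespace are illustrative, not taken from this page):

```bash
# Install or upgrade the operator with internal metrics exposed.
helm upgrade --install mariadb-enterprise-operator \
  mariadb-enterprise-operator/mariadb-enterprise-operator \
  --namespace mariadb-operator --create-namespace \
  --set metrics.enabled=true
```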
Exporters
The operator configures exporters to query MariaDB and MaxScale, exposing metrics in Prometheus format through an HTTP endpoint.
It is important to note that these exporters run as standalone Deployments rather than as sidecars for each data-plane replica. Since they can communicate with all replicas of MariaDB and MaxScale, there is no need to run a separate exporter for each replica.
As a result, the lifecycle of MariaDB and MaxScale remains independent from the exporters, allowing for upgrades without impacting the availability of either component.
ServiceMonitor
Once the exporter Deployment is ready, the operator creates a ServiceMonitor object that will eventually be reconciled by the Prometheus operator, resulting in the Prometheus instance being configured to scrape the exporter endpoint.
As you scale MariaDB and MaxScale by adjusting the number of replicas, the operator will reconcile the ServiceMonitor to dynamically add or remove targets corresponding to the updated instances.
Configuration
The easiest way to set up metrics in your MariaDB and MaxScale instances is to set spec.metrics.enabled = true:
The rest of the fields are defaulted by the operator. If you need more fine-grained configuration, refer to the API reference and the following examples:
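A minimal sketch for a MariaDB instance (the apiVersion is an assumption; MaxScale accepts the same spec.metrics.enabled field):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-galera
spec:
  # ... existing cluster configuration ...
  metrics:
    enabled: true   # exporter and ServiceMonitor settings are defaulted by the operator
```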
Grafana dashboards
The following community dashboards are compatible with the configured exporters, and can therefore be used to monitor MariaDB instances:
MariaDB metrics
The following metrics are available for MariaDB instances:
Metric Name
Description
Type
MaxScale metrics
The following metrics are available for MaxScale instances:
CronJobTemplate defines parameters for configuring CronJob objects.
Appears in:
Field
Description
Default
Validation
Database
Database is the Schema for the databases API. It is used to define a logical database as if you were running a 'CREATE DATABASE' statement.
Field
Description
Default
Validation
DatabaseSpec
DatabaseSpec defines the desired state of Database
Appears in:
Field
Description
Default
Validation
EmptyDirVolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#emptydirvolumesource-v1-core.
Appears in:
Field
Description
Default
Validation
EnvFromSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#envfromsource-v1-core.
Appears in:
Field
Description
Default
Validation
EnvVar
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#envvar-v1-core.
Appears in:
Field
Description
Default
Validation
EnvVarSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#envvarsource-v1-core.
Appears in:
Field
Description
Default
Validation
ExecAction
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#execaction-v1-core.
Appears in:
Field
Description
Default
Validation
Exporter
Exporter defines a metrics exporter container.
Appears in:
Field
Description
Default
Validation
ExternalMariaDB
ExternalMariaDB is the Schema for the external MariaDBs API. It is used to define an external MariaDB server.
Field
Description
Default
Validation
ExternalMariaDBSpec
ExternalMariaDBSpec defines the desired state of an External MariaDB
Appears in:
Field
Description
Default
Validation
ExternalTLS
ExternalTLS defines the TLS configuration for external MariaDB instances.
Appears in:
Field
Description
Default
Validation
Galera
Galera allows you to enable multi-master HA via Galera in your MariaDB cluster.
Appears in:
Field
Description
Default
Validation
GaleraConfig
GaleraConfig defines storage options for the Galera configuration files.
Appears in:
Field
Description
Default
Validation
GaleraInitJob
GaleraInitJob defines a Job used to initialize the Galera cluster.
Appears in:
Field
Description
Default
Validation
GaleraRecovery
GaleraRecovery is the recovery process performed by the operator whenever the Galera cluster is not healthy. More info: https://galeracluster.com/library/documentation/crash-recovery.html.
Appears in:
Field
Description
Default
Validation
GaleraRecoveryJob
GaleraRecoveryJob defines a Job used to recover the Galera cluster.
Appears in:
Field
Description
Default
Validation
GaleraSpec
GaleraSpec is the Galera desired state specification.
Appears in:
Field
Description
Default
Validation
GeneratedSecretKeyRef
GeneratedSecretKeyRef defines a reference to a Secret that can be automatically generated by mariadb-enterprise-operator if needed.
Appears in:
Field
Description
Default
Validation
Grant
Grant is the Schema for the grants API. It is used to define grants as if you were running a 'GRANT' statement.
Field
Description
Default
Validation
GrantSpec
GrantSpec defines the desired state of Grant
Appears in:
Field
Description
Default
Validation
Gtid
Underlying type:string
Gtid indicates which Global Transaction ID (GTID) position mode should be used when connecting a replica to the master. See: https://mariadb.com/kb/en/gtid/#using-current_pos-vs-slave_pos.
Appears in:
Field
Description
HTTPGetAction
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#httpgetaction-v1-core.
Appears in:
Field
Description
Default
Validation
HealthCheck
HealthCheck defines intervals for performing health checks.
Appears in:
Field
Description
Default
Validation
HostPathVolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#hostpathvolumesource-v1-core
Appears in:
Field
Description
Default
Validation
InitContainer
InitContainer is an init container that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
Appears in:
Field
Description
Default
Validation
Job
Job defines a Job to be used with MariaDB.
Appears in:
Field
Description
Default
Validation
JobContainerTemplate
JobContainerTemplate defines a template to configure Container objects that run in a Job.
Appears in:
Field
Description
Default
Validation
JobPodTemplate
JobPodTemplate defines a template to configure Pod objects that run in a Job.
Appears in:
Field
Description
Default
Validation
KubernetesAuth
KubernetesAuth refers to the Kubernetes authentication mechanism utilized for establishing a connection from the operator to the agent. The agent validates the legitimacy of the service account token provided as an Authorization header by creating a TokenReview resource.
Appears in:
Field
Description
Default
Validation
LabelSelector
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#labelselector-v1-meta
Appears in:
Field
Description
Default
Validation
LabelSelectorRequirement
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#labelselectorrequirement-v1-meta
Appears in:
Field
Description
Default
Validation
LocalObjectReference
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#localobjectreference-v1-core.
Appears in:
Field
Description
Default
Validation
MariaDB
MariaDB is the Schema for the mariadbs API. It is used to define MariaDB clusters.
Field
Description
Default
Validation
MariaDBRef
MariaDBRef is a reference to a MariaDB object.
Appears in:
Field
Description
Default
Validation
MariaDBSpec
MariaDBSpec defines the desired state of MariaDB
Appears in:
Field
Description
Default
Validation
MariadbMetrics
MariadbMetrics defines the metrics for a MariaDB.
Appears in:
Field
Description
Default
Validation
MaxScale
MaxScale is the Schema for the maxscales API. It is used to define MaxScale clusters.
Field
Description
Default
Validation
MaxScaleAdmin
MaxScaleAdmin configures the admin REST API and GUI.
Appears in:
Field
Description
Default
Validation
MaxScaleAuth
MaxScaleAuth defines the credentials required for MaxScale to connect to MariaDB.
Appears in:
Field
Description
Default
Validation
MaxScaleConfig
MaxScaleConfig defines the MaxScale configuration.
Appears in:
Field
Description
Default
Validation
MaxScaleConfigSync
MaxScaleConfigSync defines how the config changes are replicated across replicas.
Appears in:
Field
Description
Default
Validation
MaxScaleListener
MaxScaleListener defines how the MaxScale server will listen for connections.
Appears in:
Field
Description
Default
Validation
MaxScaleMetrics
MaxScaleMetrics defines the metrics for a Maxscale.
Appears in:
Field
Description
Default
Validation
MaxScaleMonitor
MaxScaleMonitor monitors MariaDB server instances
Appears in:
Field
Description
Default
Validation
MaxScalePodTemplate
MaxScalePodTemplate defines a template for MaxScale Pods.
Appears in:
Field
Description
Default
Validation
MaxScaleServer
MaxScaleServer defines a MariaDB server to forward traffic to.
Appears in:
Field
Description
Default
Validation
MaxScaleService
Services define how the traffic is forwarded to the MariaDB servers.
Appears in:
Field
Description
Default
Validation
MaxScaleSpec
MaxScaleSpec defines the desired state of MaxScale.
Appears in:
Field
Description
Default
Validation
MaxScaleTLS
TLS defines the PKI to be used with MaxScale.
Appears in:
Field
Description
Default
Validation
Metadata
Metadata defines the metadata to be added to resources.
Appears in:
Field
Description
Default
Validation
MonitorModule
Underlying type:string
MonitorModule defines the type of monitor module
Appears in:
Field
Description
NFSVolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nfsvolumesource-v1-core.
Appears in:
Field
Description
Default
Validation
NodeAffinity
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nodeaffinity-v1-core
Appears in:
Field
Description
Default
Validation
NodeSelector
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nodeselector-v1-core
Appears in:
Field
Description
Default
Validation
NodeSelectorRequirement
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nodeselectorrequirement-v1-core
Appears in:
Field
Description
Default
Validation
NodeSelectorTerm
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#nodeselectorterm-v1-core
Appears in:
Field
Description
Default
Validation
ObjectFieldSelector
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#objectfieldselector-v1-core.
Appears in:
Field
Description
Default
Validation
ObjectReference
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#objectreference-v1-core.
Appears in:
Field
Description
Default
Validation
PasswordPlugin
PasswordPlugin defines the password plugin and its arguments.
Appears in:
Field
Description
Default
Validation
PersistentVolumeClaimRetentionPolicyType
Underlying type:string
PersistentVolumeClaimRetentionPolicyType describes the lifecycle of persistent volume claims. Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#statefulsetpersistentvolumeclaimretentionpolicy-v1-apps.
Appears in:
Field
Description
PersistentVolumeClaimSpec
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#persistentvolumeclaimspec-v1-core.
Appears in:
Field
Description
Default
Validation
PersistentVolumeClaimVolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#persistentvolumeclaimvolumesource-v1-core.
Appears in:
Field
Description
Default
Validation
PhysicalBackup
PhysicalBackup is the Schema for the physicalbackups API. It is used to define physical backup jobs and their storage.
Field
Description
Default
Validation
PhysicalBackupPodTemplate
PhysicalBackupPodTemplate defines a template to configure Container objects that run in a PhysicalBackup.
Appears in:
Field
Description
Default
Validation
PhysicalBackupSchedule
PhysicalBackupSchedule defines when the PhysicalBackup will be taken.
Appears in:
Field
Description
Default
Validation
PhysicalBackupSpec
PhysicalBackupSpec defines the desired state of PhysicalBackup.
Appears in:
Field
Description
Default
Validation
PhysicalBackupStorage
PhysicalBackupStorage defines the storage for physical backups.
Appears in:
Field
Description
Default
Validation
PhysicalBackupTarget
Underlying type:string
PhysicalBackupTarget defines in which Pod the physical backups will be taken.
Appears in:
Field
Description
PhysicalBackupVolumeSnapshot
PhysicalBackupVolumeSnapshot defines parameters for the VolumeSnapshots used as physical backups.
Appears in:
Field
Description
Default
Validation
PodAffinityTerm
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podaffinityterm-v1-core.
Appears in:
Field
Description
Default
Validation
PodAntiAffinity
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podantiaffinity-v1-core.
Appears in:
Field
Description
Default
Validation
PodDisruptionBudget
PodDisruptionBudget is the Pod availability budget for a MariaDB.
Appears in:
Field
Description
Default
Validation
PodSecurityContext
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podsecuritycontext-v1-core
Appears in:
Field
Description
Default
Validation
PodTemplate
PodTemplate defines a template to configure Container objects.
Appears in:
Field
Description
Default
Validation
PointInTimeRecovery
PointInTimeRecovery is the Schema for the pointintimerecoveries API. It contains binlog archival and point-in-time restoration settings.
Field
Description
Default
Validation
PointInTimeRecoverySpec
PointInTimeRecoverySpec defines the desired state of PointInTimeRecovery. It contains binlog archive and point-in-time restoration settings.
Appears in:
Field
Description
Default
Validation
PointInTimeRecoveryStorage
PointInTimeRecoveryStorage stores the different storage options for PITR
Appears in:
Field
Description
Default
Validation
PreferredSchedulingTerm
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#preferredschedulingterm-v1-core
Appears in:
Field
Description
Default
Validation
PrimaryGalera
PrimaryGalera is the Galera configuration for the primary node.
Appears in:
Field
Description
Default
Validation
PrimaryReplication
PrimaryReplication is the replication configuration and operation parameters for the primary.
Appears in:
Field
Description
Default
Validation
Probe
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#probe-v1-core.
Appears in:
Field
Description
Default
Validation
ProbeHandler
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#probe-v1-core.
Appears in:
Field
Description
Default
Validation
ReplicaBootstrapFrom
ReplicaBootstrapFrom defines the sources for bootstrapping new replicas.
Appears in:
Field
Description
Default
Validation
ReplicaRecovery
ReplicaRecovery defines how the replicas should be recovered after they enter an error state.
Appears in:
Field
Description
Default
Validation
ReplicaReplication
ReplicaReplication is the replication configuration and operation parameters for the replicas.
Appears in:
Field
Description
Default
Validation
Replication
Replication defines replication configuration for a MariaDB cluster.
Appears in:
Field
Description
Default
Validation
ReplicationSpec
ReplicationSpec is the replication desired state.
Appears in:
Field
Description
Default
Validation
ResourceRequirements
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#resourcerequirements-v1-core.
Appears in:
Restore
Restore is the Schema for the restores API. It is used to define restore jobs and their restoration source.
Field
Description
Default
Validation
RestoreSource
RestoreSource defines a source for restoring a logical backup.
Appears in:
Field
Description
Default
Validation
RestoreSpec
RestoreSpec defines the desired state of restore
Appears in:
Field
Description
Default
Validation
S3
Appears in:
Field
Description
Default
Validation
SQLTemplate
SQLTemplate defines a template to customize SQL objects.
Appears in:
Field
Description
Default
Validation
SSECConfig
SSECConfig defines the configuration for SSE-C (Server-Side Encryption with Customer-Provided Keys).
Appears in:
Field
Description
Default
Validation
SST
Underlying type:string
SST is the State Snapshot Transfer used when new Pods join the cluster. More info: https://galeracluster.com/library/documentation/sst.html.
Appears in:
Field
Description
Schedule
Schedule contains parameters to define a schedule
Appears in:
Field
Description
Default
Validation
SecretKeySelector
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#secretkeyselector-v1-core.
Appears in:
Field
Description
Default
Validation
SecretTemplate
SecretTemplate defines a template to customize Secret objects.
Appears in:
Field
Description
Default
Validation
SecretVolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#secretvolumesource-v1-core.
Appears in:
Field
Description
Default
Validation
SecurityContext
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#securitycontext-v1-core.
Appears in:
Field
Description
Default
Validation
ServiceMonitor
ServiceMonitor defines a prometheus ServiceMonitor object.
Appears in:
Field
Description
Default
Validation
ServicePort
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#serviceport-v1-core
Appears in:
Field
Description
Default
Validation
ServiceRouter
Underlying type:string
ServiceRouter defines the type of service router.
Appears in:
Field
Description
ServiceTemplate
ServiceTemplate defines a template to customize Service objects.
Appears in:
Field
Description
Default
Validation
SqlJob
SqlJob is the Schema for the sqljobs API. It is used to run SQL scripts as jobs.
Field
Description
Default
Validation
SqlJobSpec
SqlJobSpec defines the desired state of SqlJob
Appears in:
Field
Description
Default
Validation
StagingStorage
StagingStorage defines the temporary storage used to keep external backups (e.g. S3) while they are being processed.
Appears in:
Field
Description
Default
Validation
StatefulSetPersistentVolumeClaimRetentionPolicy
StatefulSetPersistentVolumeClaimRetentionPolicy describes the lifecycle of PVCs created from volumeClaimTemplates. Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#statefulsetpersistentvolumeclaimretentionpolicy-v1-apps.
Appears in:
Field
Description
Default
Validation
Storage
Storage defines the storage options to be used for provisioning the PVCs mounted by MariaDB.
Appears in:
Field
Description
Default
Validation
StorageVolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#volume-v1-core.
Appears in:
Field
Description
Default
Validation
SuspendTemplate
SuspendTemplate indicates whether the current resource should be suspended or not.
Appears in:
Field
Description
Default
Validation
TCPSocketAction
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#tcpsocketaction-v1-core.
Appears in:
Field
Description
Default
Validation
TLS
TLS defines the PKI to be used with MariaDB.
Appears in:
Field
Description
Default
Validation
TLSConfig
Appears in:
Field
Description
Default
Validation
TLSRequirements
TLSRequirements specifies TLS requirements for the user to connect. See: https://mariadb.com/kb/en/securing-connections-for-client-and-server/#requiring-tls.
Appears in:
Field
Description
Default
Validation
TopologySpreadConstraint
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#topologyspreadconstraint-v1-core.
Appears in:
Field
Description
Default
Validation
TypedLocalObjectReference
TypedLocalObjectReference is a reference to a specific object type.
Appears in:
Field
Description
Default
Validation
UpdateStrategy
UpdateStrategy defines how a MariaDB resource is updated.
Appears in:
Field
Description
Default
Validation
UpdateType
Underlying type:string
UpdateType defines the type of update for a MariaDB resource.
Appears in:
Field
Description
User
User is the Schema for the users API. It is used to define users as if you were running a 'CREATE USER' statement.
Field
Description
Default
Validation
UserSpec
UserSpec defines the desired state of User
Appears in:
Field
Description
Default
Validation
Volume
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#volume-v1-core.
Appears in:
Field
Description
Default
Validation
VolumeClaimTemplate
VolumeClaimTemplate defines a template to customize PVC objects.
Appears in:
Field
Description
Default
Validation
VolumeMount
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#volumemount-v1-core.
Appears in:
Field
Description
Default
Validation
VolumeSource
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#volume-v1-core.
Appears in:
Field
Description
Default
Validation
WaitPoint
Underlying type:string
WaitPoint defines whether the transaction should wait for ACK before committing to the storage engine. More info: https://mariadb.com/kb/en/semisynchronous-replication/#rpl_semi_sync_master_wait_point.
Appears in:
Field
Description
WeightedPodAffinityTerm
Refer to the Kubernetes docs: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#weightedpodaffinityterm-v1-core.
EnvFrom represents the references (via ConfigMap and Secrets) to environment variables to be injected in the container.
volumeMounts array
VolumeMounts to be used in the Container.
livenessProbe
LivenessProbe to be used in the Container.
readinessProbe
ReadinessProbe to be used in the Container.
startupProbe
StartupProbe to be used in the Container.
resources
Resources describes the compute resource requirements.
securityContext
SecurityContext holds security configuration that will be applied to a container.
image string
Image name to be used by the MariaDB instances. The supported format is <image>:<tag>.
imagePullPolicy
ImagePullPolicy is the image pull policy. One of Always, Never or IfNotPresent. If not defined, it defaults to IfNotPresent.
Enum: [Always Never IfNotPresent]
port integer
Port where the agent will be listening for API connections.
probePort integer
Port where the agent will be listening for probe connections.
kubernetesAuth
KubernetesAuth to be used by the agent container
basicAuth
BasicAuth to be used by the agent container
gracefulShutdownTimeout
GracefulShutdownTimeout is the time given to the agent container to gracefully terminate in-flight requests.
storageAccountName string
StorageAccountName is the name of the storage account. Pairs with StorageAccountKey for static credential authentication.
storageAccountKey
StorageAccountKey is a reference to a Secret key containing the Azure Blob Storage account key. Pairs with StorageAccountName for static credential authentication.
tls
TLS provides the configuration required to establish TLS connections with Azure Blob Storage.
spec
podMetadata
PodMetadata defines extra metadata for the Pod.
imagePullSecrets array
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName string
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector object (keys:string, values:string)
NodeSelector to be used in the Pod.
tolerations array
Tolerations to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
successfulJobsHistoryLimit integer
SuccessfulJobsHistoryLimit defines the maximum number of successful Jobs to be displayed.
Minimum: 0
failedJobsHistoryLimit integer
FailedJobsHistoryLimit defines the maximum number of failed Jobs to be displayed.
Minimum: 0
timeZone string
TimeZone defines the timezone associated with the cron expression.
mariaDbRef
MariaDBRef is a reference to a MariaDB object.
Required: {}
compression
Compression algorithm to be used in the Backup.
Enum: [none bzip2 gzip]
stagingStorage
StagingStorage defines the temporary storage used to keep external backups (e.g. S3) while they are being processed.
It defaults to an emptyDir volume, meaning that the backups will be temporarily stored on the node where the Backup Job is scheduled.
The staging area is cleaned up after each backup completes; take this into account when sizing it.
storage
Storage defines the final storage for backups.
Required: {}
schedule
Schedule defines when the Backup will be taken.
maxRetention
MaxRetention defines the retention policy for backups. Old backups will be cleaned up by the Backup Job.
It defaults to 30 days.
databases string array
Databases defines the logical databases to be backed up. If not provided, all databases are backed up.
ignoreGlobalPriv boolean
IgnoreGlobalPriv indicates whether to ignore the mysql.global_priv table in backups.
If not provided, it defaults to true when the referred MariaDB instance has Galera enabled, and to false otherwise.
logLevel string
LogLevel to be used in the Backup Job. It defaults to 'info'.
info
Enum: [debug info warn error dpanic panic fatal]
backoffLimit integer
BackoffLimit defines the maximum number of attempts to successfully take a Backup.
restartPolicy
RestartPolicy to be added to the Backup Pod.
OnFailure
Enum: [Always OnFailure Never]
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
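Putting the fields above together, a Backup resource could look like this sketch (the apiVersion, the schedule.cron sub-field, and the S3 storage sub-fields are assumptions not spelled out on this page):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: backup
spec:
  mariaDbRef:
    name: mariadb-galera
  schedule:
    cron: "0 3 * * *"      # daily at 03:00
  maxRetention: 720h       # 30 days, the documented default
  compression: gzip        # one of: none, bzip2, gzip
  logLevel: info
  storage:
    s3:
      bucket: backups
      endpoint: s3.amazonaws.com
```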
backupContentType
BackupContentType is the backup content type available in the source to bootstrap from.
It is inferred based on the BackupRef and VolumeSnapshotRef fields. If inference is not possible, it defaults to Logical.
Set this field explicitly when using physical backups from S3 or Volume sources.
Enum: [Logical Physical]
s3
S3 defines the configuration to restore backups from a S3 compatible storage.
This field takes precedence over the Volume source.
azureBlob
AzureBlob defines the configuration to restore from Azure Blob compatible storage.
This field takes precedence over the Volume source.
volume
Volume is a Kubernetes Volume object that contains a backup.
targetRecoveryTime
TargetRecoveryTime is an RFC3339 (1970-01-01T00:00:00Z) date and time that defines the point-in-time recovery objective.
It is used to determine the closest restoration source in time.
stagingStorage
StagingStorage defines the temporary storage used to keep external backups and binary logs (e.g. S3) while they are being processed.
It defaults to an emptyDir volume, meaning that the backups will be temporarily stored on the node where the Job is scheduled.
restoreJob
RestoreJob defines additional properties for the restoration Job.
logLevel string
LogLevel to be used in the mariadb-enterprise-operator container of the restoration Job. It defaults to 'info'.
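As an illustration of the restoration source fields above, a Restore resource might look like this sketch (the apiVersion and the S3 sub-fields are assumptions):

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
  name: restore
spec:
  mariaDbRef:
    name: mariadb-galera
  # S3 takes precedence over the Volume source.
  s3:
    bucket: backups
    endpoint: s3.amazonaws.com
  backupContentType: Logical
  # Pick the restoration source closest in time to this objective.
  targetRecoveryTime: "2025-01-01T00:00:00Z"
```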
PrivateKeyAlgorithm is the algorithm and key size to be used for the CA and leaf certificate private keys.
Supported values: ECDSA (256, 384, 521), RSA (2048, 3072, 4096)
Port to connect to. If not provided, it defaults to the MariaDB port or to the first MaxScale listener.
mariaDbRef
MariaDBRef is a reference to the MariaDB to connect to. Either MariaDBRef or MaxScaleRef must be provided.
maxScaleRef
MaxScaleRef is a reference to the MaxScale to connect to. Either MariaDBRef or MaxScaleRef must be provided.
username string
Username to use for configuring the Connection.
Required: {}
passwordSecretKeyRef
PasswordSecretKeyRef is a reference to the password to use for configuring the Connection.
Either passwordSecretKeyRef or tlsClientCertSecretRef must be provided as client credentials.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
tlsClientCertSecretRef
TLSClientCertSecretRef is a reference to a Kubernetes TLS Secret used as authentication when checking the connection health.
Either passwordSecretKeyRef or tlsClientCertSecretRef must be provided as client credentials.
If not provided, the client certificate provided by the referred MariaDB is used if TLS is enabled.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the client certificate.
host string
Host to connect to. If not provided, it defaults to the MariaDB host or to the MaxScale host.
SecurityContext holds pod-level security attributes and common container settings.
affinity
Affinity to be used in the Pod.
nodeSelector object (keys:string, values:string)
NodeSelector to be used in the Pod.
tolerations array
Tolerations to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
spec
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
host string
Hostname of the external MariaDB.
Required: {}
port integer
Port of the external MariaDB.
3306
username string
Username is the username to connect to the external MariaDB.
Required: {}
passwordSecretKeyRef
PasswordSecretKeyRef is a reference to the password to connect to the external MariaDB.
tls
TLS defines the PKI to be used with the external MariaDB.
connection
Connection defines a template to configure a Connection for the external MariaDB.
serverCASecretRef
ServerCASecretRef is a reference to a Secret containing the server certificate authority keypair. It is used to establish trust and issue server certificates.
One of:
- Secret containing both the 'ca.crt' and 'ca.key' keys. This allows you to bring your own CA to Kubernetes to issue certificates.
- Secret containing only the 'ca.crt' in order to establish trust. In this case, either serverCertSecretRef or serverCertIssuerRef must be provided.
If not provided, a self-signed CA will be provisioned to issue the server certificate.
serverCertSecretRef
ServerCertSecretRef is a reference to a TLS Secret containing the server certificate.
It is mutually exclusive with serverCertIssuerRef.
serverCertIssuerRef
ServerCertIssuerRef is a reference to a cert-manager issuer object used to issue the server certificate. cert-manager must be installed previously in the cluster.
It is mutually exclusive with serverCertSecretRef.
By default, the Secret field 'ca.crt' provisioned by cert-manager will be added to the trust chain. A custom trust bundle may be specified via serverCASecretRef.
serverCertConfig
ServerCertConfig allows configuring the server certificates, either issued by the operator or cert-manager.
If not set, the default settings will be used.
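The server-side TLS fields above can be combined as in the following sketch. The resource name, Secret names, and API group are illustrative assumptions, not defaults:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb
spec:
  tls:
    enabled: true
    # Bring-your-own CA: Secret containing both 'ca.crt' and 'ca.key'
    serverCASecretRef:
      name: mariadb-server-ca
    # Pre-issued server certificate (mutually exclusive with serverCertIssuerRef)
    serverCertSecretRef:
      name: mariadb-server-cert
```

If neither serverCertSecretRef nor serverCertIssuerRef is set and the CA Secret contains a key, the operator issues the server certificate itself.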
clientCASecretRef
ClientCASecretRef is a reference to a Secret containing the client certificate authority keypair. It is used to establish trust and issue client certificates.
One of:
- Secret containing both the 'ca.crt' and 'ca.key' keys. This allows you to bring your own CA to Kubernetes to issue certificates.
- Secret containing only the 'ca.crt' in order to establish trust. In this case, either clientCertSecretRef or clientCertIssuerRef fields must be provided.
If not provided, a self-signed CA will be provisioned to issue the client certificate.
clientCertSecretRef
ClientCertSecretRef is a reference to a TLS Secret containing the client certificate.
It is mutually exclusive with clientCertIssuerRef.
clientCertIssuerRef
ClientCertIssuerRef is a reference to a cert-manager issuer object used to issue the client certificate. cert-manager must be installed previously in the cluster.
It is mutually exclusive with clientCertSecretRef.
By default, the Secret field 'ca.crt' provisioned by cert-manager will be added to the trust chain. A custom trust bundle may be specified via clientCASecretRef.
clientCertConfig
ClientCertConfig allows configuring the client certificates, either issued by the operator or cert-manager.
If not set, the default settings will be used.
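A common client-certificate setup delegates issuance to cert-manager while establishing trust via a CA Secret. This is a sketch under the assumption that cert-manager and an Issuer named my-issuer are already installed; names are illustrative:

```yaml
spec:
  tls:
    enabled: true
    # Trust-only CA: Secret containing just 'ca.crt', so a certificate
    # source (clientCertSecretRef or clientCertIssuerRef) is required
    clientCASecretRef:
      name: mariadb-client-ca
    # Issue the client certificate with cert-manager
    # (mutually exclusive with clientCertSecretRef)
    clientCertIssuerRef:
      name: my-issuer
      kind: Issuer
      group: cert-manager.io
```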
galeraSSTEnabled boolean
GaleraSSTEnabled determines whether Galera SST connections should use TLS.
It is disabled by default.
galeraServerSSLMode string
GaleraServerSSLMode defines the server SSL mode for a Galera Enterprise cluster.
This field is only supported and applicable for Galera Enterprise >= 10.6 instances.
Refer to the MariaDB Enterprise docs for more detail: https://mariadb.com/docs/galera-cluster/galera-security/mariadb-enterprise-cluster-security#wsrep-tls-modes
Enum: [PROVIDER SERVER SERVER_X509]
galeraClientSSLMode string
GaleraClientSSLMode defines the client SSL mode for a Galera Enterprise cluster.
This field is only supported and applicable for Galera Enterprise >= 10.6 instances.
Refer to the MariaDB Enterprise docs for more detail: https://mariadb.com/docs/galera-cluster/galera-security/mariadb-enterprise-cluster-security#sst-tls-modes
mutual boolean
Mutual specifies whether TLS must be mutual between server and client for external connections.
When set to false, the client certificate will not be sent during the TLS handshake.
It is enabled by default.
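The Galera-specific TLS fields above could be set as in this sketch; values are illustrative assumptions for a Galera Enterprise >= 10.6 cluster:

```yaml
spec:
  tls:
    enabled: true
    # Encrypt SST (state snapshot transfer) traffic; disabled by default
    galeraSSTEnabled: true
    # Require X509-verified TLS for server-to-server Galera traffic
    galeraServerSSLMode: SERVER_X509
    # Mutual TLS for external client connections; enabled by default
    mutual: true
```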
galeraLibPath string
GaleraLibPath is a path inside the MariaDB image to the wsrep provider plugin. It is defaulted if not provided.
More info: https://galeracluster.com/library/documentation/mysql-wsrep-options.html#wsrep-provider.
replicaThreads integer
ReplicaThreads is the number of replica threads used to apply Galera write sets in parallel.
More info: https://mariadb.com/kb/en/galera-cluster-system-variables/#wsrep_slave_threads.
providerOptions object (keys:string, values:string)
ProviderOptions is a map of Galera configuration parameters.
More info: https://mariadb.com/kb/en/galera-cluster-system-variables/#wsrep_provider_options.
agent
Agent is a sidecar agent that co-operates with mariadb-enterprise-operator.
recovery
GaleraRecovery is the recovery process performed by the operator whenever the Galera cluster is not healthy.
More info: https://galeracluster.com/library/documentation/crash-recovery.html.
initContainer
InitContainer is an init container that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
initJob
InitJob defines a Job that co-operates with mariadb-enterprise-operator by performing initialization tasks.
config
GaleraConfig defines storage options for the Galera configuration files.
clusterName string
ClusterName is the name of the cluster to be used in the Galera config file.
enabled boolean
Enabled is a flag to enable Galera.
clusterHealthyTimeout
ClusterHealthyTimeout represents the duration at which a Galera cluster, that consistently failed health checks,
is considered unhealthy, and consequently the Galera recovery process will be initiated by the operator.
clusterBootstrapTimeout
ClusterBootstrapTimeout is the time limit for bootstrapping a cluster.
Once this timeout is reached, the Galera recovery state is reset and a new cluster bootstrap will be attempted.
clusterUpscaleTimeout
ClusterUpscaleTimeout represents the maximum duration for upscaling the cluster's StatefulSet during the recovery process.
clusterDownscaleTimeout
ClusterDownscaleTimeout represents the maximum duration for downscaling the cluster's StatefulSet during the recovery process.
podRecoveryTimeout
PodRecoveryTimeout is the time limit for recovering the sequence of a Pod during the cluster recovery.
podSyncTimeout
PodSyncTimeout is the time limit for a Pod to join the cluster after having performed a cluster bootstrap during the cluster recovery.
forceClusterBootstrapInPod string
ForceClusterBootstrapInPod allows you to manually initiate the bootstrap process in a specific Pod.
IMPORTANT: Use this option only in exceptional circumstances. Not selecting the Pod with the highest sequence number may result in data loss.
IMPORTANT: Ensure you unset this field after completing the bootstrap to allow the operator to choose the appropriate Pod to bootstrap from in the event of cluster recovery.
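A manual bootstrap could look like the following sketch; the Pod name is an illustrative assumption, and you would choose the Pod with the highest wsrep sequence number:

```yaml
spec:
  galera:
    enabled: true
    recovery:
      # Bootstrap from this specific Pod, then unset the field once
      # the cluster is healthy so the operator can pick the Pod itself
      # during future recoveries.
      forceClusterBootstrapInPod: mariadb-galera-2
```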
job
Job defines a Job that co-operates with mariadb-enterprise-operator by performing the Galera cluster recovery.
galeraLibPath string
GaleraLibPath is a path inside the MariaDB image to the wsrep provider plugin. It is defaulted if not provided.
More info: https://galeracluster.com/library/documentation/mysql-wsrep-options.html#wsrep-provider.
replicaThreads integer
ReplicaThreads is the number of replica threads used to apply Galera write sets in parallel.
More info: https://mariadb.com/kb/en/galera-cluster-system-variables/#wsrep_slave_threads.
providerOptions object (keys:string, values:string)
ProviderOptions is a map of Galera configuration parameters.
More info: https://mariadb.com/kb/en/galera-cluster-system-variables/#wsrep_provider_options.
agent
Agent is a sidecar agent that co-operates with mariadb-enterprise-operator.
recovery
GaleraRecovery is the recovery process performed by the operator whenever the Galera cluster is not healthy.
More info: https://galeracluster.com/library/documentation/crash-recovery.html.
initContainer
InitContainer is an init container that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
initJob
InitJob defines a Job that co-operates with mariadb-enterprise-operator by performing initialization tasks.
config
GaleraConfig defines storage options for the Galera configuration files.
clusterName string
ClusterName is the name of the cluster to be used in the Galera config file.
waitForIt boolean
WaitForIt indicates whether the controller using this reference should wait for MariaDB to be ready.
true
envFrom array
EnvFrom represents the references (via ConfigMap and Secrets) to environment variables to be injected in the container.
volumeMounts array
VolumeMounts to be used in the Container.
livenessProbe
LivenessProbe to be used in the Container.
readinessProbe
ReadinessProbe to be used in the Container.
startupProbe
StartupProbe to be used in the Container.
resources
Resources describes the compute resource requirements.
securityContext
SecurityContext holds security configuration that will be applied to a container.
podMetadata
PodMetadata defines extra metadata for the Pod.
imagePullSecrets array
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
initContainers array
InitContainers to be used in the Pod.
sidecarContainers array
SidecarContainers to be used in the Pod.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName string
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector object (keys:string, values:string)
NodeSelector to be used in the Pod.
tolerations array
Tolerations to be used in the Pod.
volumes array
Volumes to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
topologySpreadConstraints array
TopologySpreadConstraints to be used in the Pod.
suspend boolean
Suspend indicates whether the current resource should be suspended or not.
This can be useful for maintenance, as disabling the reconciliation prevents the operator from interfering with user operations during maintenance activities.
false
image string
Image name to be used by the MariaDB instances. The supported format is <image>:<tag>.
Only MariaDB official images are supported.
imagePullPolicy
ImagePullPolicy is the image pull policy. One of Always, Never or IfNotPresent. If not defined, it defaults to IfNotPresent.
Enum: [Always Never IfNotPresent]
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
rootPasswordSecretKeyRef
RootPasswordSecretKeyRef is a reference to a Secret key containing the root password.
rootEmptyPassword boolean
RootEmptyPassword indicates if the root password should be empty. Don't use this feature in production; it is only intended for development and test environments.
database string
Database is the name of the initial Database.
username string
Username is the initial username to be created by the operator once MariaDB is ready.
The initial User will have ALL PRIVILEGES in the initial Database.
passwordSecretKeyRef
PasswordSecretKeyRef is a reference to a Secret that contains the password to be used by the initial User.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
passwordHashSecretKeyRef
PasswordHashSecretKeyRef is a reference to the password hash to be used by the initial User.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password hash.
It requires the 'strict-password-validation=false' option to be set. See: https://mariadb.com/docs/server/server-management/variables-and-modes/server-system-variables#strict_password_validation.
passwordPlugin
PasswordPlugin is a reference to the password plugin and arguments to be used by the initial User.
It requires the 'strict-password-validation=false' option to be set. See: https://mariadb.com/docs/server/server-management/variables-and-modes/server-system-variables#strict_password_validation.
cleanupPolicy
CleanupPolicy defines the behavior for cleaning up the initial User, Database, and Grant created by the operator.
Enum: [Skip Delete]
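The initial credential fields above fit together as in this sketch; the database, user, and Secret names are illustrative assumptions:

```yaml
spec:
  database: app
  username: app-user
  # Password read from an existing Secret; label it with
  # "enterprise.mariadb.com/watch" to allow password updates.
  passwordSecretKeyRef:
    name: app-user-password
    key: password
  # Remove the initial User, Database, and Grant objects after creation
  cleanupPolicy: Delete
```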
myCnf string
MyCnf allows you to specify the my.cnf file mounted by MariaDB.
Updating this field will trigger an update to the MariaDB resource.
myCnfConfigMapKeyRef
MyCnfConfigMapKeyRef is a reference to the my.cnf config file provided via a ConfigMap.
If not provided, it will be defaulted with a reference to a ConfigMap containing the MyCnf field.
If the referred ConfigMap is labeled with "enterprise.mariadb.com/watch", an update to the Mariadb resource will be triggered when the ConfigMap is updated.
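For example, an inline my.cnf could be provided directly via the myCnf field; the option values below are illustrative, not recommendations:

```yaml
spec:
  myCnf: |
    [mariadb]
    max_connections=500
    innodb_buffer_pool_size=1G
```

Alternatively, myCnfConfigMapKeyRef can point at an existing ConfigMap instead of embedding the file inline.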
timeZone string
TimeZone sets the default timezone. If not provided, it defaults to SYSTEM and the timezone data is not loaded.
bootstrapFrom
BootstrapFrom defines a source to bootstrap from.
storage
Storage defines the storage options to be used for provisioning the PVCs mounted by MariaDB.
metrics
Metrics configures metrics and how to scrape them.
tls
TLS defines the PKI to be used with MariaDB.
replication
Replication configures high availability via replication. This feature is still in alpha, use Galera if you are looking for a more production-ready HA.
galera
Galera configures high availability via Galera.
maxScaleRef
MaxScaleRef is a reference to a MaxScale resource to be used with the current MariaDB.
Providing this reference implies delegating high availability tasks such as primary failover to MaxScale.
pointInTimeRecoveryRef
PointInTimeRecoveryRef is a reference to a PointInTimeRecovery resource to be used with the current MariaDB.
Providing this reference implies configuring binary logs in the MariaDB instance and binary log archival in the sidecar agent.
replicas integer
Replicas indicates the number of desired instances.
1
replicasAllowEvenNumber boolean
ReplicasAllowEvenNumber disables the validation check for an odd number of replicas.
false
port integer
Port where the instances will be listening for connections.
3306
servicePorts array
ServicePorts is the list of additional named ports to be added to the Services created by the operator.
podDisruptionBudget
PodDisruptionBudget defines the budget for replica availability.
updateStrategy
UpdateStrategy defines how a MariaDB resource is updated.
service
Service defines a template to configure the general Service object.
The network traffic of this Service will be routed to all Pods.
connection
Connection defines a template to configure the general Connection object.
This Connection provides the initial User access to the initial Database.
It will make use of the Service to route network traffic to all Pods.
primaryService
PrimaryService defines a template to configure the primary Service object.
The network traffic of this Service will be routed to the primary Pod.
primaryConnection
PrimaryConnection defines a template to configure the primary Connection object.
This Connection provides the initial User access to the initial Database.
It will make use of the PrimaryService to route network traffic to the primary Pod.
secondaryService
SecondaryService defines a template to configure the secondary Service object.
The network traffic of this Service will be routed to the secondary Pods.
secondaryConnection
SecondaryConnection defines a template to configure the secondary Connection object.
This Connection provides the initial User access to the initial Database.
It will make use of the SecondaryService to route network traffic to the secondary Pods.
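The Service and Connection templates above could be configured as in this sketch; the Service type and Secret name are illustrative assumptions:

```yaml
spec:
  # Routed to the primary Pod only (writes)
  primaryService:
    type: ClusterIP
  # Routed to the secondary Pods only (reads)
  secondaryService:
    type: ClusterIP
  # Connection Secret for the initial User against the primary
  primaryConnection:
    secretName: mariadb-primary-conn
```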
username string
Username is the username of the monitoring user used by the exporter.
passwordSecretKeyRef
PasswordSecretKeyRef is a reference to the password of the monitoring user used by the exporter.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
spec
deleteDefaultAdmin boolean
DeleteDefaultAdmin determines whether the default admin user should be deleted after the initial configuration. If not provided, it defaults to true.
metricsUsername string
MetricsUsername is the metrics username used to call the REST API. It is defaulted if metrics are enabled.
metricsPasswordSecretKeyRef
MetricsPasswordSecretKeyRef is a Secret key reference to the metrics password used to call the admin REST API. It is defaulted if metrics are enabled.
clientUsername string
ClientUsername is the user to connect to MaxScale. It is defaulted if not provided.
clientPasswordSecretKeyRef
ClientPasswordSecretKeyRef is a Secret key reference to the password to connect to MaxScale. It is defaulted if not provided.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
clientMaxConnections integer
ClientMaxConnections defines the maximum number of connections that the client can establish.
If HA is enabled, make sure to increase this value, as more MaxScale replicas imply more connections.
It defaults to 30 times the number of MaxScale replicas.
serverUsername string
ServerUsername is the user used by MaxScale to connect to the MariaDB server. It is defaulted if not provided.
serverPasswordSecretKeyRef
ServerPasswordSecretKeyRef is a Secret key reference to the password used by MaxScale to connect to the MariaDB server. It is defaulted if not provided.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
serverMaxConnections integer
ServerMaxConnections defines the maximum number of connections that the server can establish.
If HA is enabled, make sure to increase this value, as more MaxScale replicas imply more connections.
It defaults to 30 times the number of MaxScale replicas.
monitorUsername string
MonitorUsername is the user used by the MaxScale monitor to connect to the MariaDB server. It is defaulted if not provided.
monitorPasswordSecretKeyRef
MonitorPasswordSecretKeyRef is a Secret key reference to the password used by the MaxScale monitor to connect to the MariaDB server. It is defaulted if not provided.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
monitorMaxConnections integer
MonitorMaxConnections defines the maximum number of connections that the monitor can establish.
If HA is enabled, make sure to increase this value, as more MaxScale replicas imply more connections.
It defaults to 30 times the number of MaxScale replicas.
syncUsername string
SyncUsername is the user used by MaxScale config sync to connect to the MariaDB server. It is defaulted when HA is enabled.
syncPasswordSecretKeyRef
SyncPasswordSecretKeyRef is a Secret key reference to the password used by MaxScale config sync to connect to the MariaDB server. It is defaulted when HA is enabled.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
syncMaxConnections integer
SyncMaxConnections defines the maximum number of connections that the sync can establish.
If HA is enabled, make sure to increase this value, as more MaxScale replicas imply more connections.
It defaults to 30 times the number of MaxScale replicas.
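The auth fields above could be set as in this sketch; usernames, Secret names, and connection limits are illustrative assumptions:

```yaml
spec:
  auth:
    clientUsername: maxscale-client
    clientPasswordSecretKeyRef:
      name: maxscale-client-password
      key: password
    # With 2 MaxScale replicas the default maximum would be 60;
    # raise it explicitly for busier workloads.
    clientMaxConnections: 120
```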
protocol string
Protocol is the MaxScale protocol to use when communicating with the client. If not provided, it defaults to MariaDBProtocol.
params object (keys:string, values:string)
Params defines extra parameters to pass to the listener.
Any parameter supported by MaxScale may be specified here. See reference:
https://mariadb.com/kb/en/mariadb-maxscale-2308-mariadb-maxscale-configuration-guide/#listener_1.
interval
Interval used to monitor MariaDB servers. It is defaulted if not provided.
cooperativeMonitoring
CooperativeMonitoring enables coordination between multiple MaxScale instances running monitors. It is defaulted when HA is enabled.
Enum: [majority_of_all majority_of_running]
params object (keys:string, values:string)
Params defines extra parameters to pass to the monitor.
Any parameter supported by MaxScale may be specified here. See reference:
https://mariadb.com/kb/en/mariadb-maxscale-2308-common-monitor-parameters/.
Monitor-specific parameters are also supported:
https://mariadb.com/kb/en/mariadb-maxscale-2308-galera-monitor/#galera-monitor-optional-parameters.
https://mariadb.com/kb/en/mariadb-maxscale-2308-mariadb-monitor/#configuration.
serviceAccountName string
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector object (keys:string, values:string)
NodeSelector to be used in the Pod.
tolerations array
Tolerations to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
topologySpreadConstraints array
TopologySpreadConstraints to be used in the Pod.
protocol string
Protocol is the MaxScale protocol to use when communicating with this MariaDB server. If not provided, it defaults to MariaDBBackend.
maintenance boolean
Maintenance indicates whether the server is in maintenance mode.
params object (keys:string, values:string)
Params defines extra parameters to pass to the server.
Any parameter supported by MaxScale may be specified here. See reference:
https://mariadb.com/kb/en/mariadb-maxscale-2308-mariadb-maxscale-configuration-guide/#server_1.
listener
MaxScaleListener defines how the MaxScale server will listen for connections.
Required: {}
params object (keys:string, values:string)
Params defines extra parameters to pass to the service.
Any parameter supported by MaxScale may be specified here. See reference:
https://mariadb.com/kb/en/mariadb-maxscale-2308-mariadb-maxscale-configuration-guide/#service_1.
Router-specific parameters are also supported:
https://mariadb.com/kb/en/mariadb-maxscale-2308-readwritesplit/#configuration.
https://mariadb.com/kb/en/mariadb-maxscale-2308-readconnroute/#configuration.
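A service with router parameters and its listener could look like the following sketch; the service name, router choice, and parameter values are illustrative assumptions:

```yaml
spec:
  services:
    - name: rw-router
      router: readwritesplit
      # Any parameter supported by the MaxScale router may be passed here
      params:
        transaction_replay: "true"
      listener:
        port: 3306
        protocol: MariaDBProtocol
```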
envFrom array
EnvFrom represents the references (via ConfigMap and Secrets) to environment variables to be injected in the container.
volumeMounts array
VolumeMounts to be used in the Container.
livenessProbe
LivenessProbe to be used in the Container.
readinessProbe
ReadinessProbe to be used in the Container.
startupProbe
StartupProbe to be used in the Container.
resources
Resources describes the compute resource requirements.
securityContext
SecurityContext holds security configuration that will be applied to a container.
podMetadata
PodMetadata defines extra metadata for the Pod.
imagePullSecrets array
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName string
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector object (keys:string, values:string)
NodeSelector to be used in the Pod.
tolerations array
Tolerations to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
topologySpreadConstraints array
TopologySpreadConstraints to be used in the Pod.
suspend boolean
Suspend indicates whether the current resource should be suspended or not.
This can be useful for maintenance, as disabling the reconciliation prevents the operator from interfering with user operations during maintenance activities.
false
mariaDbRef
MariaDBRef is a reference to the MariaDB that MaxScale points to. It is used to initialize the servers field.
primaryServer string
PrimaryServer specifies the desired primary server. Setting this field triggers a switchover operation in MaxScale to the desired server.
This option is only valid when using monitors that support switchover, currently limited to the MariaDB monitor.
servers array
Servers are the MariaDB servers to forward traffic to. It is required if 'spec.mariaDbRef' is not provided.
image string
Image name to be used by the MaxScale instances. The supported format is <image>:<tag>.
Only MaxScale official images are supported.
imagePullPolicy
ImagePullPolicy is the image pull policy. One of Always, Never or IfNotPresent. If not defined, it defaults to IfNotPresent.
Enum: [Always Never IfNotPresent]
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
services array
Services define how the traffic is forwarded to the MariaDB servers. It is defaulted if not provided.
monitor
Monitor monitors MariaDB server instances. It is required if 'spec.mariaDbRef' is not provided.
admin
Admin configures the admin REST API and GUI.
config
Config defines the MaxScale configuration.
auth
Auth defines the credentials required for MaxScale to connect to MariaDB.
metrics
Metrics configures metrics and how to scrape them.
tls
TLS defines the PKI to be used with MaxScale.
connection
Connection provides a template to define the Connection for MaxScale.
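A minimal MaxScale resource wiring together the fields above might look like this sketch; the resource names, API group, and port are illustrative assumptions:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: MaxScale
metadata:
  name: maxscale
spec:
  # Servers, monitor, and credentials are initialized from this MariaDB
  mariaDbRef:
    name: mariadb
  replicas: 2
  metrics:
    enabled: true
```

When mariaDbRef is omitted, the servers and monitor fields must be provided explicitly instead.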
replicas integer
Replicas indicates the number of desired instances.
1
podDisruptionBudget
PodDisruptionBudget defines the budget for replica availability.
updateStrategy
UpdateStrategy defines the update strategy for the StatefulSet object.
kubernetesService
KubernetesService defines a template for a Kubernetes Service object to connect to MaxScale.
guiKubernetesService
GuiKubernetesService defines a template for a Kubernetes Service object to connect to MaxScale's GUI.
requeueInterval
RequeueInterval is used to perform requeue reconciliations. If not defined, it defaults to 10s.
adminCASecretRef
AdminCASecretRef is a reference to a Secret containing the admin certificate authority keypair. It is used to establish trust and issue certificates for the MaxScale's administrative REST API and GUI.
One of:
- Secret containing both the 'ca.crt' and 'ca.key' keys. This allows you to bring your own CA to Kubernetes to issue certificates.
- Secret containing only the 'ca.crt' in order to establish trust. In this case, either adminCertSecretRef or adminCertIssuerRef fields must be provided.
If not provided, a self-signed CA will be provisioned to issue the server certificate.
adminCertSecretRef
AdminCertSecretRef is a reference to a TLS Secret used by the MaxScale's administrative REST API and GUI.
adminCertIssuerRef
AdminCertIssuerRef is a reference to a cert-manager issuer object used to issue the MaxScale's administrative REST API and GUI certificate. cert-manager must be installed previously in the cluster.
It is mutually exclusive with adminCertSecretRef.
By default, the Secret field 'ca.crt' provisioned by cert-manager will be added to the trust chain. A custom trust bundle may be specified via adminCASecretRef.
adminCertConfig
AdminCertConfig allows configuring the admin certificates, either issued by the operator or cert-manager.
If not set, the default settings will be used.
listenerCASecretRef
ListenerCASecretRef is a reference to a Secret containing the listener certificate authority keypair. It is used to establish trust and issue certificates for the MaxScale's listeners.
One of:
- Secret containing both the 'ca.crt' and 'ca.key' keys. This allows you to bring your own CA to Kubernetes to issue certificates.
- Secret containing only the 'ca.crt' in order to establish trust. In this case, either listenerCertSecretRef or listenerCertIssuerRef fields must be provided.
If not provided, a self-signed CA will be provisioned to issue the listener certificate.
listenerCertSecretRef
ListenerCertSecretRef is a reference to a TLS Secret used by the MaxScale's listeners.
listenerCertIssuerRef
ListenerCertIssuerRef is a reference to a cert-manager issuer object used to issue the MaxScale's listeners certificate. cert-manager must be installed previously in the cluster.
It is mutually exclusive with listenerCertSecretRef.
By default, the Secret field 'ca.crt' provisioned by cert-manager will be added to the trust chain. A custom trust bundle may be specified via listenerCASecretRef.
listenerCertConfig
ListenerCertConfig allows configuring the listener certificates, either issued by the operator or cert-manager.
If not set, the default settings will be used.
serverCASecretRef
ServerCASecretRef is a reference to a Secret containing the MariaDB server CA certificates. It is used to establish trust with MariaDB servers.
The Secret should contain a 'ca.crt' key in order to establish trust.
If not provided, and the reference to a MariaDB resource is set (mariaDbRef), it will be defaulted to the referred MariaDB CA bundle.
serverCertSecretRef
ServerCertSecretRef is a reference to a TLS Secret used by MaxScale to connect to the MariaDB servers.
If not provided, and the reference to a MariaDB resource is set (mariaDbRef), it will be defaulted to the referred MariaDB client certificate (clientCertSecretRef).
verifyPeerCertificate boolean
VerifyPeerCertificate specifies whether the peer certificate's signature should be validated against the CA.
It is disabled by default.
verifyPeerHost boolean
VerifyPeerHost specifies whether the peer certificate's SANs should match the peer host.
It is disabled by default.
replicationSSLEnabled boolean
ReplicationSSLEnabled specifies whether the replication SSL is enabled. If enabled, the SSL options will be added to the server configuration.
It is enabled by default when the referred MariaDB instance (via mariaDbRef) has replication enabled.
If the MariaDB servers are manually provided by the user via the 'servers' field, this must be set by the user as well.
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
tolerations array
Tolerations to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
onDemand string
OnDemand is an identifier used to trigger an on-demand backup.
If the identifier is different than the one tracked under status.lastScheduleOnDemand, a new physical backup will be triggered.
onPrimaryChange boolean
OnPrimaryChange indicates whether a PhysicalBackup should be scheduled when the referred MariaDB has changed primary Pod.
podMetadata
PodMetadata defines extra metadata for the Pod.
imagePullSecrets array
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName string
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
tolerations array
Tolerations to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
mariaDbRef
MariaDBRef is a reference to a MariaDB object.
Required: {}
target
Target defines in which Pod the physical backups will be taken. It defaults to "Replica", meaning that the physical backups will only be taken in ready replicas.
Enum: [Replica PreferReplica]
compression
Compression algorithm to be used in the Backup.
Enum: [none bzip2 gzip]
stagingStorage
StagingStorage defines the temporary storage used to keep external backups (i.e. S3) while they are being processed.
It defaults to an emptyDir volume, meaning that the backups will be temporarily stored in the node where the PhysicalBackup Job is scheduled.
The staging area gets cleaned up after each backup is completed; consider this when sizing it appropriately.
storage
Storage defines the final storage for backups.
Required: {}
schedule
Schedule defines when the PhysicalBackup will be taken.
maxRetention
MaxRetention defines the retention policy for backups. Old backups will be cleaned up by the Backup Job.
It defaults to 30 days.
timeout
Timeout defines the maximum duration of a PhysicalBackup job or snapshot.
If this duration is exceeded, the job or snapshot is considered expired and is deleted by the operator.
A new job or snapshot will then be created according to the schedule.
It defaults to 1 hour.
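The scheduling, retention, and storage fields above fit together as in this sketch; the resource names, cron expression, and storage sizing are illustrative assumptions:

```yaml
apiVersion: enterprise.mariadb.com/v1alpha1
kind: PhysicalBackup
metadata:
  name: nightly-physical
spec:
  mariaDbRef:
    name: mariadb
  schedule:
    cron: "0 3 * * *"
  compression: gzip
  # Clean up backups older than 30 days
  maxRetention: 720h
  # Expire and recreate jobs/snapshots that run longer than this
  timeout: 1h
  storage:
    persistentVolumeClaim:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi
```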
podAffinityboolean
PodAffinity indicates whether the Jobs should run in the same Node as the MariaDB Pods to be able to attach the PVC.
It defaults to true.
backoffLimit integer
BackoffLimit defines the maximum number of attempts to successfully take a PhysicalBackup.
restartPolicy
RestartPolicy to be added to the PhysicalBackup Pod.
OnFailure
Enum: [Always OnFailure Never]
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
successfulJobsHistoryLimit integer
SuccessfulJobsHistoryLimit defines the maximum number of successful Jobs to be displayed. It defaults to 5.
Minimum: 0
logLevel string
LogLevel to be used in the PhysicalBackup Job. It defaults to 'info'.
info
Enum: [debug info warn error dpanic panic fatal]
volume
Volume is a Kubernetes volume specification.
volumeSnapshot
VolumeSnapshot is a Kubernetes VolumeSnapshot specification.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName string
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector object (keys:string, values:string)
NodeSelector to be used in the Pod.
tolerations array
Tolerations to be used in the Pod.
volumes array
Volumes to be used in the Pod.
priorityClassName string
PriorityClassName to be used in the Pod.
topologySpreadConstraints array
TopologySpreadConstraints to be used in the Pod.
spec
archiveTimeout
ArchiveTimeout defines the maximum duration for the binary log archival.
If this duration is exceeded, the sidecar agent will log an error and it will be retried in the next archive cycle.
It defaults to 1 hour.
1h
strictMode (boolean)
StrictMode controls the behavior when a point-in-time restoration cannot reach the exact target time:
When enabled: Returns an error and avoids replaying binary logs if the target time is not reached.
When disabled (default): Replays available binary logs until the last recoverable time. It logs an error if the target time is not reached.
archiveInterval
ArchiveInterval defines the time interval at which the binary logs will be archived.
It defaults to 10 minutes.
10m
maxParallel (integer)
MaxParallel defines the maximum number of parallel workers, both for archiving and restoring the binary logs.
It defaults to 1.
1
Minimum: 1
maxRetention
MaxRetention defines the retention policy for binary logs. Binary logs older than this duration will be cleaned up when the archival is completed.
It is not set by default, meaning that old binary logs will not be cleaned up.
This field is immutable, it cannot be updated after creation.
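A minimal sketch of the binary log archival settings above; the enclosing resource and exact field path are assumptions for illustration.

```yaml
# Binary log archival settings for point-in-time recovery (sketch).
archiveTimeout: 1h      # max duration of one archival run (default)
archiveInterval: 10m    # archive binary logs every 10 minutes (default)
maxParallel: 1          # parallel workers for archiving/restoring (default)
maxRetention: 168h      # optional: clean up binlogs older than 7 days (unset by default)
strictMode: false       # default: replay as far as possible instead of erroring
```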
MaxLagSeconds is the maximum number of seconds that replicas are allowed to lag behind the primary.
If a replica exceeds this threshold, it is marked as not ready and read queries will no longer be forwarded to it.
If not provided, it defaults to 0, which means that replicas are not allowed to lag behind the primary (recommended).
Lagged replicas will not be taken into account as candidates for the new primary during failover,
and they will block other operations, such as switchover and upgrade.
This field is not taken into account by MaxScale; you can define the maximum lag via router parameters instead.
See: https://mariadb.com/docs/maxscale/reference/maxscale-routers/maxscale-readwritesplit#max_replication_lag.
syncTimeout
SyncTimeout defines the timeout for the synchronization phase during switchover and failover operations.
During switchover, all replicas must be synced with the current primary before promoting the new primary.
During failover, the new primary must be synced before being promoted as primary. This implies processing all the events in the relay log.
When the timeout is reached, the operator restarts the operation from the beginning.
It defaults to 10s.
See: https://mariadb.com/docs/server/reference/sql-functions/secondary-functions/miscellaneous-functions/master_gtid_wait
bootstrapFrom
ReplicaBootstrapFrom defines the data sources used to bootstrap new replicas.
This will be used as part of the scaling out and recovery operations, when new replicas are created.
If not provided, scale out and recovery operations will return an error.
recovery
ReplicaRecovery defines how the replicas should be recovered after they enter an error state.
This process deletes data from faulty replicas and recreates them using the source defined in the bootstrapFrom field.
It is disabled by default, and it requires the bootstrapFrom field to be set.
semiSyncEnabled (boolean)
SemiSyncEnabled determines whether semi-synchronous replication is enabled.
Semi-synchronous replication requires that at least one replica has sent an ACK to the primary node
before the transaction is committed back to the client.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/semisynchronous-replication
It is enabled by default.
semiSyncAckTimeout
SemiSyncAckTimeout defines how long the primary waits for a replica to acknowledge transactions.
It requires semi-synchronous replication to be enabled.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/semisynchronous-replication#rpl_semi_sync_master_timeout
semiSyncWaitPoint
SemiSyncWaitPoint determines whether the transaction should wait for an ACK after having synced the binlog (AfterSync)
or after having committed to the storage engine (AfterCommit, the default).
It requires semi-synchronous replication to be enabled.
See: https://mariadb.com/kb/en/semisynchronous-replication/#rpl_semi_sync_master_wait_point.
Enum: [AfterSync AfterCommit]
syncBinlog (integer)
SyncBinlog indicates after how many events the binary log is synchronized to disk.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/replication-and-binary-log-system-variables#sync_binlog
initContainer
InitContainer is an init container that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
agent
Agent is a sidecar agent that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
standaloneProbes (boolean)
StandaloneProbes indicates whether to use the default non-HA startup and liveness probes.
It is disabled by default.
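The replication fields above can be sketched as a single configuration block. The exact nesting under the MariaDB spec is an assumption; the values shown are the documented defaults.

```yaml
# Replication configuration sketch combining the fields documented above.
replication:
  maxLagSeconds: 0              # replicas may not lag behind the primary (default)
  syncTimeout: 10s              # sync phase timeout for switchover/failover (default)
  semiSyncEnabled: true         # enabled by default
  semiSyncWaitPoint: AfterCommit
  syncBinlog: 1                 # sync the binary log to disk after every event
  standaloneProbes: false       # disabled by default
```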
enabled (boolean)
Enabled is a flag to enable replication.
semiSyncEnabled (boolean)
SemiSyncEnabled determines whether semi-synchronous replication is enabled.
Semi-synchronous replication requires that at least one replica has sent an ACK to the primary node
before the transaction is committed back to the client.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/semisynchronous-replication
It is enabled by default.
semiSyncAckTimeout
SemiSyncAckTimeout defines how long the primary waits for a replica to acknowledge transactions.
It requires semi-synchronous replication to be enabled.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/semisynchronous-replication#rpl_semi_sync_master_timeout
semiSyncWaitPoint
SemiSyncWaitPoint determines whether the transaction should wait for an ACK after having synced the binlog (AfterSync)
or after having committed to the storage engine (AfterCommit, the default).
It requires semi-synchronous replication to be enabled.
See: https://mariadb.com/kb/en/semisynchronous-replication/#rpl_semi_sync_master_wait_point.
Enum: [AfterSync AfterCommit]
syncBinlog (integer)
SyncBinlog indicates after how many events the binary log is synchronized to disk.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/replication-and-binary-log-system-variables#sync_binlog
initContainer
InitContainer is an init container that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
agent
Agent is a sidecar agent that runs in the MariaDB Pod and co-operates with mariadb-enterprise-operator.
standaloneProbes (boolean)
StandaloneProbes indicates whether to use the default non-HA startup and liveness probes.
It is disabled by default.
TargetRecoveryTime is an RFC3339 (1970-01-01T00:00:00Z) date and time that defines the point-in-time recovery objective.
It is used to determine the closest restoration source in time.
stagingStorage
StagingStorage defines the temporary storage used to keep external backups (i.e. S3) while they are being processed.
It defaults to an emptyDir volume, meaning that the backups will be temporarily stored in the node where the Restore Job is scheduled.
podMetadata
PodMetadata defines extra metadata for the Pod.
imagePullSecrets (array)
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName (string)
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector (object, keys: string, values: string)
NodeSelector to be used in the Pod.
tolerations (array)
Tolerations to be used in the Pod.
priorityClassName (string)
PriorityClassName to be used in the Pod.
backupRef
BackupRef is a reference to a Backup object. It has priority over S3 and Volume.
s3
S3 defines the configuration to restore backups from a S3 compatible storage. It has priority over Volume.
volume
Volume is a Kubernetes Volume object that contains a backup.
targetRecoveryTime
TargetRecoveryTime is an RFC3339 (1970-01-01T00:00:00Z) date and time that defines the point-in-time recovery objective.
It is used to determine the closest restoration source in time.
stagingStorage
StagingStorage defines the temporary storage used to keep external backups (i.e. S3) while they are being processed.
It defaults to an emptyDir volume, meaning that the backups will be temporarily stored in the node where the Restore Job is scheduled.
mariaDbRef
MariaDBRef is a reference to a MariaDB object.
Required: {}
database (string)
Database defines the logical database to be restored. If not provided, all databases available in the backup are restored.
IMPORTANT: The database must previously exist.
logLevel (string)
LogLevel to be used in the Backup Job. It defaults to 'info'.
info
Enum: [debug info warn error dpanic panic fatal]
backoffLimit (integer)
BackoffLimit defines the maximum number of attempts to successfully perform a Backup.
5
restartPolicy
RestartPolicy to be added to the Backup Job.
OnFailure
Enum: [Always OnFailure Never]
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
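The restoration fields above can be sketched as a Restore manifest. The apiVersion and all resource/Secret names are assumptions for illustration.

```yaml
# Sketch of a Restore resource using the fields documented above.
apiVersion: enterprise.mariadb.com/v1alpha1
kind: Restore
metadata:
  name: restore-sample
spec:
  mariaDbRef:
    name: mariadb               # assumed MariaDB object name
  backupRef:
    name: backup-sample         # takes priority over s3 and volume
  targetRecoveryTime: "2025-01-01T00:00:00Z"  # RFC3339 point-in-time objective
  database: app                 # must already exist; omit to restore all databases
  logLevel: info
  backoffLimit: 5
  restartPolicy: OnFailure
```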
Prefix indicates a folder/subfolder in the bucket. For example: mariadb/ or mariadb/backups. A trailing slash '/' is added if not provided.
accessKeyIdSecretKeyRef
AccessKeyIdSecretKeyRef is a reference to a Secret key containing the S3 access key id.
secretAccessKeySecretKeyRef
SecretAccessKeySecretKeyRef is a reference to a Secret key containing the S3 secret access key.
sessionTokenSecretKeyRef
SessionTokenSecretKeyRef is a reference to a Secret key containing the S3 session token.
tls
TLS provides the configuration required to establish TLS connections with S3.
ssec
SSEC is a reference to a Secret containing the SSE-C (Server-Side Encryption with Customer-Provided Keys) key.
The secret must contain a 32-byte key (256 bits) in the specified key.
This enables server-side encryption where you provide and manage the encryption key.
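The S3 credential references above can be sketched as follows. The bucket field and the Secret names/keys are assumptions for illustration.

```yaml
# Sketch of an S3 storage block using the credential references above.
s3:
  bucket: mariadb-backups       # assumed field and value
  prefix: mariadb/              # trailing slash added if not provided
  accessKeyIdSecretKeyRef:
    name: s3-credentials        # assumed Secret name
    key: access-key-id
  secretAccessKeySecretKeyRef:
    name: s3-credentials
    key: secret-access-key
```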
ScrapeTimeout defines the timeout for scraping metrics.
loadBalancerSourceRanges (string array)
LoadBalancerSourceRanges Service field.
externalTrafficPolicy
ExternalTrafficPolicy Service field.
sessionAffinity
SessionAffinity Service field.
allocateLoadBalancerNodePorts (boolean)
AllocateLoadBalancerNodePorts Service field.
loadBalancerClass (string)
LoadBalancerClass Service field.
spec
podMetadata
PodMetadata defines extra metadata for the Pod.
imagePullSecrets (array)
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
podSecurityContext
SecurityContext holds pod-level security attributes and common container settings.
serviceAccountName (string)
ServiceAccountName is the name of the ServiceAccount to be used by the Pods.
affinity
Affinity to be used in the Pod.
nodeSelector (object, keys: string, values: string)
NodeSelector to be used in the Pod.
tolerations (array)
Tolerations to be used in the Pod.
priorityClassName (string)
PriorityClassName to be used in the Pod.
successfulJobsHistoryLimit (integer)
SuccessfulJobsHistoryLimit defines the maximum number of successful Jobs to be displayed.
Minimum: 0
failedJobsHistoryLimit (integer)
FailedJobsHistoryLimit defines the maximum number of failed Jobs to be displayed.
Minimum: 0
timeZone (string)
TimeZone defines the timezone associated with the cron expression.
mariaDbRef
MariaDBRef is a reference to a MariaDB object.
Required: {}
schedule
Schedule defines when the SqlJob will be executed.
username (string)
Username to be impersonated when executing the SqlJob.
Required: {}
passwordSecretKeyRef
UserPasswordSecretKeyRef is a reference to the impersonated user's password to be used when executing the SqlJob.
Required: {}
tlsCASecretRef
TLSCACertSecretRef is a reference to a CA Secret used to establish trust when executing the SqlJob.
If not provided, the CA bundle provided by the referred MariaDB is used.
tlsClientCertSecretRef
TLSClientCertSecretRef is a reference to a Kubernetes TLS Secret used as authentication when executing the SqlJob.
If not provided, the client certificate provided by the referred MariaDB is used.
database (string)
Database to be used when executing the SqlJob.
dependsOn (array)
DependsOn defines dependencies with other SqlJob objects.
sql (string)
Sql is the script to be executed by the SqlJob.
sqlConfigMapKeyRef
SqlConfigMapKeyRef is a reference to a ConfigMap containing the Sql script.
It is defaulted to a ConfigMap with the contents of the Sql field.
backoffLimit (integer)
BackoffLimit defines the maximum number of attempts to successfully execute a SqlJob.
5
restartPolicy
RestartPolicy to be added to the SqlJob Pod.
OnFailure
Enum: [Always OnFailure Never]
inheritMetadata
InheritMetadata defines the metadata to be inherited by children resources.
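The SqlJob fields above can be combined into a manifest like this sketch. The apiVersion, names, Secret keys, and SQL script are assumptions for illustration.

```yaml
# Sketch of a SqlJob resource using the fields documented above.
apiVersion: enterprise.mariadb.com/v1alpha1
kind: SqlJob
metadata:
  name: sqljob-sample
spec:
  mariaDbRef:
    name: mariadb               # assumed MariaDB object name
  schedule:
    cron: "0 0 * * *"           # run daily at midnight
    suspend: false
  username: app-user            # user to impersonate
  passwordSecretKeyRef:
    name: app-user-password     # assumed Secret name
    key: password
  database: app
  sql: |
    CREATE TABLE IF NOT EXISTS events (
      id BIGINT AUTO_INCREMENT PRIMARY KEY
    );
  backoffLimit: 5
  restartPolicy: OnFailure
```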
ResizeInUseVolumes indicates whether the PVCs can be resized. The 'StorageClassName' used should have 'allowVolumeExpansion' set to 'true' to allow resizing.
It defaults to true.
waitForVolumeResize (boolean)
WaitForVolumeResize indicates whether to wait for the PVCs to be resized before marking the MariaDB object as ready. This will block other operations such as cluster recovery while the resize is in progress.
It defaults to true.
volumeClaimTemplate
VolumeClaimTemplate provides a template to define the PVCs.
pvcRetentionPolicy
PersistentVolumeClaimRetentionPolicy describes the lifecycle of PVCs created from volumeClaimTemplates.
By default, all persistent volume claims are created as needed and retained until manually deleted.
This policy allows the lifecycle to be altered, for example by deleting PVCs when their statefulset is deleted,
or when their pod is scaled down.
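The storage and resize fields above can be sketched as one block. The enclosing field path and the StorageClass name are assumptions for illustration.

```yaml
# Sketch of a storage block using the resize-related fields above.
storage:
  size: 10Gi                      # supersedes the size in volumeClaimTemplate
  storageClassName: standard      # must have allowVolumeExpansion: true to resize
  resizeInUseVolumes: true        # default
  waitForVolumeResize: true       # default; blocks other operations during resize
```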
ServerCASecretRef is a reference to a Secret containing the server certificate authority keypair. It is used to establish trust and issue server certificates.
One of:
- Secret containing both the 'ca.crt' and 'ca.key' keys. This allows you to bring your own CA to Kubernetes to issue certificates.
- Secret containing only the 'ca.crt' in order to establish trust. In this case, either serverCertSecretRef or serverCertIssuerRef must be provided.
If not provided, a self-signed CA will be provisioned to issue the server certificate.
serverCertSecretRef
ServerCertSecretRef is a reference to a TLS Secret containing the server certificate.
It is mutually exclusive with serverCertIssuerRef.
serverCertIssuerRef
ServerCertIssuerRef is a reference to a cert-manager issuer object used to issue the server certificate. cert-manager must be installed previously in the cluster.
It is mutually exclusive with serverCertSecretRef.
By default, the Secret field 'ca.crt' provisioned by cert-manager will be added to the trust chain. A custom trust bundle may be specified via serverCASecretRef.
serverCertConfig
ServerCertConfig allows configuring the server certificates, either issued by the operator or cert-manager.
If not set, the default settings will be used.
clientCASecretRef
ClientCASecretRef is a reference to a Secret containing the client certificate authority keypair. It is used to establish trust and issue client certificates.
One of:
- Secret containing both the 'ca.crt' and 'ca.key' keys. This allows you to bring your own CA to Kubernetes to issue certificates.
- Secret containing only the 'ca.crt' in order to establish trust. In this case, either clientCertSecretRef or clientCertIssuerRef fields must be provided.
If not provided, a self-signed CA will be provisioned to issue the client certificate.
clientCertSecretRef
ClientCertSecretRef is a reference to a TLS Secret containing the client certificate.
It is mutually exclusive with clientCertIssuerRef.
clientCertIssuerRef
ClientCertIssuerRef is a reference to a cert-manager issuer object used to issue the client certificate. cert-manager must be installed previously in the cluster.
It is mutually exclusive with clientCertSecretRef.
By default, the Secret field 'ca.crt' provisioned by cert-manager will be added to the trust chain. A custom trust bundle may be specified via clientCASecretRef.
clientCertConfig
ClientCertConfig allows configuring the client certificates, either issued by the operator or cert-manager.
If not set, the default settings will be used.
galeraSSTEnabled (boolean)
GaleraSSTEnabled determines whether Galera SST connections should use TLS.
It is disabled by default.
galeraServerSSLMode (string)
GaleraServerSSLMode defines the server SSL mode for a Galera Enterprise cluster.
This field is only supported and applicable for Galera Enterprise >= 10.6 instances.
Refer to the MariaDB Enterprise docs for more detail: https://mariadb.com/docs/galera-cluster/galera-security/mariadb-enterprise-cluster-security#wsrep-tls-modes
Enum: [PROVIDER SERVER SERVER_X509]
galeraClientSSLMode (string)
GaleraClientSSLMode defines the client SSL mode for a Galera Enterprise cluster.
This field is only supported and applicable for Galera Enterprise >= 10.6 instances.
Refer to the MariaDB Enterprise docs for more detail: https://mariadb.com/docs/galera-cluster/galera-security/mariadb-enterprise-cluster-security#sst-tls-modes
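The TLS fields above can be sketched as one block; the Secret names are assumptions for illustration.

```yaml
# Sketch of a TLS block combining the fields documented above.
tls:
  enabled: true
  serverCASecretRef:
    name: mariadb-server-ca       # Secret with ca.crt (and optionally ca.key)
  serverCertSecretRef:
    name: mariadb-server-cert     # mutually exclusive with serverCertIssuerRef
  clientCASecretRef:
    name: mariadb-client-ca
  galeraSSTEnabled: false         # SST connections over TLS, disabled by default
```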
PasswordSecretKeyRef is a reference to the password to be used by the User.
If not provided, the account will be locked and the password will expire.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password.
passwordHashSecretKeyRef
PasswordHashSecretKeyRef is a reference to the password hash to be used by the User.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the password hash.
It requires the 'strict-password-validation=false' option to be set. See: https://mariadb.com/docs/server/server-management/variables-and-modes/server-system-variables#strict_password_validation.
passwordPlugin
PasswordPlugin is a reference to the password plugin and arguments to be used by the User.
It requires the 'strict-password-validation=false' option to be set. See: https://mariadb.com/docs/server/server-management/variables-and-modes/server-system-variables#strict_password_validation.
require
Require specifies TLS requirements for the user to connect. See: https://mariadb.com/kb/en/securing-connections-for-client-and-server/#requiring-tls.
maxUserConnections (integer)
MaxUserConnections defines the maximum number of simultaneous connections that the User can establish.
10
name (string)
Name overrides the default name provided by metadata.name.
MaxLength: 80
host (string)
Host related to the User.
MaxLength: 255
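The User fields above can be sketched as a manifest. The apiVersion, the mariaDbRef field, and the Secret name are assumptions for illustration.

```yaml
# Sketch of a User resource using the fields documented above.
apiVersion: enterprise.mariadb.com/v1alpha1
kind: User
metadata:
  name: app-user
spec:
  mariaDbRef:
    name: mariadb                 # assumed reference field and name
  passwordSecretKeyRef:
    name: app-user-password       # assumed Secret name
    key: password
  require:
    ssl: true                     # user must connect via TLS
  maxUserConnections: 10
  host: "%"
```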
csi
hostPath
persistentVolumeClaim
secret
configMap
storageClassName (string)
metadata
Refer to Kubernetes API documentation for fields of metadata.
AntiAffinityEnabled configures PodAntiAffinity so each Pod is scheduled in a different Node, enabling HA.
Make sure you have at least as many Nodes available as replicas to avoid ending up with unscheduled Pods.
BackupRef is a reference to a backup object. If the Kind is not specified, a logical Backup is assumed.
This field takes precedence over S3 and Volume sources.
PointInTimeRecoveryRef is a reference to a PointInTimeRecovery object.
Providing this field implies restoring the PhysicalBackup referenced in the PointInTimeRecovery object and replaying the
archived binary logs up to the point-in-time restoration target, defined by the targetRecoveryTime field.
Refer to Kubernetes API documentation for fields of metadata.
image (string)
Image name to be used to perform operations on the external MariaDB, for example, for taking backups.
The supported format is <image>:<tag>. Only MariaDB official images are supported.
If not provided, the MariaDB image version will be inferred by the operator at runtime, and the default MariaDB image will be used.
ImagePullSecrets is the list of pull Secrets to be used to pull the image.
enabled (boolean)
Enabled indicates whether TLS is enabled, determining if certificates should be issued and mounted to the MariaDB instance.
It is enabled by default.
required (boolean)
Required specifies whether TLS must be enforced for all connections.
User TLS requirements take precedence over this.
It is disabled by default.
versions (string array)
Versions specifies the supported TLS versions for this MariaDB instance.
By default, MariaDB's default supported versions are used. See: https://mariadb.com/kb/en/ssltls-system-variables/#tls_version.
SST is the State Snapshot Transfer used when new Pods join the cluster.
More info: https://galeracluster.com/library/documentation/sst.html.
Enum: [rsync mariabackup mysqldump]
availableWhenDonor (boolean)
AvailableWhenDonor indicates whether a donor node should be responding to queries. It defaults to false.
reuseStorageVolume (boolean)
ReuseStorageVolume indicates that the storage volume used by MariaDB should be reused to store the Galera configuration files.
It defaults to false, which implies that a dedicated volume for the Galera configuration files is provisioned.
MinClusterSize is the minimum number of replicas to consider the cluster healthy. It can be either a number of replicas (1) or a percentage (50%).
If Galera consistently reports fewer replicas than this value for the given 'ClusterHealthyTimeout' interval, a cluster recovery is initiated.
It defaults to '1' replica, and it is highly recommended to keep this value at '1' in most cases.
If set to more than one replica, the cluster recovery process may restart the healthy replicas as well.
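The Galera fields above can be sketched as a single block. The enclosing field path and the recovery nesting of minClusterSize are assumptions for illustration.

```yaml
# Sketch of a Galera block using the fields documented above.
galera:
  sst: mariabackup               # the recommended SST
  availableWhenDonor: false      # default
  reuseStorageVolume: false      # default: dedicated volume for Galera config files
  recovery:
    minClusterSize: 1            # keep at '1' in most cases
```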
SecurityContext holds pod-level security attributes and common container settings.
enabled (boolean)
Enabled is a flag to enable KubernetesAuth.
authDelegatorRoleName (string)
AuthDelegatorRoleName is the name of the ClusterRoleBinding that is associated with the "system:auth-delegator" ClusterRole.
It is necessary for creating TokenReview objects in order for the agent to validate the service account token.
AdminPasswordSecretKeyRef is Secret key reference to the admin password to call the admin REST API. It is defaulted if not provided.
params (object, keys: string, values: string)
Params is a key-value pair of parameters to be used in the MaxScale static configuration file.
Any parameter supported by MaxScale may be specified here. See reference:
https://mariadb.com/kb/en/mariadb-maxscale-2308-mariadb-maxscale-configuration-guide/#global-settings.
Sync defines how to replicate configuration across MaxScale replicas. It is defaulted when HA is enabled.
database (string)
Database is the MariaDB logical database where the 'maxscale_config' table will be created in order to persist and synchronize config changes. If not provided, it defaults to 'mysql'.
Interval defines the config synchronization timeout. It is defaulted if not provided.
suspend (boolean)
Suspend indicates whether the current resource should be suspended or not.
This can be useful for maintenance, as disabling the reconciliation prevents the operator from interfering with user operations during maintenance activities.
false
name (string)
Name is the identifier of the listener. It is defaulted if not provided.
port (integer)
Port is the network port where the MaxScale server will listen.
Suspend indicates whether the current resource should be suspended or not.
This can be useful for maintenance, as disabling the reconciliation prevents the operator from interfering with user operations during maintenance activities.
false
name (string)
Name is the identifier of the monitor. It is defaulted if not provided.
SecurityContext holds pod-level security attributes and common container settings.
name (string)
Name is the identifier of the MariaDB server.
Required: {}
address (string)
Address is the network address of the MariaDB server.
Required: {}
port (integer)
Port is the network port of the MariaDB server. If not provided, it defaults to 3306.
suspend (boolean)
Suspend indicates whether the current resource should be suspended or not.
This can be useful for maintenance, as disabling the reconciliation prevents the operator from interfering with user operations during maintenance activities.
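The MaxScale server and listener fields above can be sketched as follows. The names, the address, and the exact field paths (servers, listeners) are assumptions for illustration.

```yaml
# Sketch of MaxScale servers and a listener using the fields above.
servers:
  - name: mariadb-0
    address: mariadb-0.mariadb-internal   # network address of the MariaDB server
    port: 3306                            # default
listeners:
  - name: rw-listener                     # defaulted if not provided
    port: 3306
```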
Env represents the environment variables to be injected in a container.
enabled (boolean)
Enabled indicates whether TLS is enabled, determining if certificates should be issued and mounted to the MaxScale instance.
It is enabled by default when the referred MariaDB instance (via mariaDbRef) has TLS enabled and enforced.
adminVersions (string array)
AdminVersions specifies the supported TLS versions in the MaxScale REST API.
By default, MaxScale's default supported versions are used. See: https://mariadb.com/kb/en/mariadb-maxscale-25-mariadb-maxscale-configuration-guide/#admin_ssl_version
items: Enum: [TLSv10 TLSv11 TLSv12 TLSv13 MAX]
serverVersions (string array)
ServerVersions specifies the supported TLS versions in both the servers and listeners managed by this MaxScale instance.
By default, MaxScale's default supported versions are used. See: https://mariadb.com/kb/en/mariadb-maxscale-25-mariadb-maxscale-configuration-guide/#ssl_version.
labels (object, keys: string, values: string)
Labels to be added to children resources.
annotations (object, keys: string, values: string)
Annotations to be added to children resources.
mariadbmon
MonitorModuleMariadb is a monitor to be used with MariaDB servers.
galeramon
MonitorModuleGalera is a monitor to be used with Galera servers.
PluginNameSecretKeyRef is a reference to the authentication plugin to be used by the User.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the authentication plugin.
PluginArgSecretKeyRef is a reference to the arguments to be provided to the authentication plugin for the User.
If the referred Secret is labeled with "enterprise.mariadb.com/watch", updates may be performed to the Secret in order to update the authentication plugin arguments.
Delete
PersistentVolumeClaimRetentionPolicyDelete deletes PVCs when their owning pods or StatefulSet are deleted.
Retain
PersistentVolumeClaimRetentionPolicyRetain retains PVCs when their owning pods or StatefulSet are deleted.
PersistentVolumeClaim is a Kubernetes PVC specification.
Replica
PhysicalBackupTargetReplica indicates that the physical backup will be taken in a ready replica.
PreferReplica
PhysicalBackupTargetPreferReplica indicates that the physical backup will preferably be taken in a ready replica.
If no ready replicas are available, physical backups will be taken in the primary.
PhysicalBackupTemplateRef is a reference to a PhysicalBackup object that will be used as a template to create a new PhysicalBackup object,
used to synchronize the data from an up-to-date replica to the new replica to be bootstrapped.
ErrorDurationThreshold defines the time duration after which, if a replica continues to report errors,
the operator will initiate the recovery process for that replica.
This threshold applies only to error codes not identified as recoverable by the operator.
Errors identified as recoverable will trigger the recovery process immediately.
It defaults to 5 minutes.
ReplPasswordSecretKeyRef provides a reference to the Secret to use as password for the replication user.
By default, a random password will be generated.
Gtid indicates which Global Transaction ID (GTID) position mode should be used when connecting a replica to the master.
By default, CurrentPos is used.
See: https://mariadb.com/docs/server/reference/sql-statements/administrative-sql-statements/replication-statements/change-master-to#master_use_gtid.
Enum: [CurrentPos SlavePos]
connectionRetrySeconds (integer)
ConnectionRetrySeconds is the number of seconds that the replica will wait between connection retries.
See: https://mariadb.com/docs/server/reference/sql-statements/administrative-sql-statements/replication-statements/change-master-to#master_connect_retry.
ReplicaReplication is the replication configuration for the replica nodes.
gtidStrictMode (boolean)
GtidStrictMode determines whether the GTID strict mode is enabled.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/gtid#gtid_strict_mode.
It is enabled by default.
ReplicaReplication is the replication configuration for the replica nodes.
gtidStrictMode (boolean)
GtidStrictMode determines whether the GTID strict mode is enabled.
See: https://mariadb.com/docs/server/ha-and-performance/standard-replication/gtid#gtid_strict_mode.
It is enabled by default.
CustomerKeySecretKeyRef is a reference to a Secret key containing the SSE-C customer-provided encryption key.
The key must be a 32-byte (256-bit) key encoded in base64.
Required: {}
rsync
SSTRsync is an SST based on rsync.
mariabackup
SSTMariaBackup is an SST based on mariabackup. It is the recommended SST.
mysqldump
SSTMysqldump is an SST based on mysqldump.
cron (string)
Cron is a cron expression that defines the schedule.
Required: {}
suspend (boolean)
Suspend defines whether the schedule is active or not.
Size of the PVCs to be mounted by MariaDB. Required if not provided in 'VolumeClaimTemplate'. It supersedes the storage size specified in 'VolumeClaimTemplate'.
storageClassName (string)
StorageClassName to be used to provision the PVCs. It supersedes the 'StorageClassName' specified in 'VolumeClaimTemplate'.
If not provided, the default 'StorageClass' configured in the cluster is used.
Suspend indicates whether the current resource should be suspended or not.
This can be useful for maintenance, as disabling the reconciliation prevents the operator from interfering with user operations during maintenance activities.
Enabled indicates whether TLS is enabled, determining if certificates should be issued and mounted to the MariaDB instance.
It is enabled by default.
required (boolean)
Required specifies whether TLS must be enforced for all connections.
User TLS requirements take precedence over this.
It is disabled by default.
versions (string array)
Versions specifies the supported TLS versions for this MariaDB instance.
By default, MariaDB's default supported versions are used. See: https://mariadb.com/kb/en/ssltls-system-variables/#tls_version.
CASecretKeyRef is a reference to a Secret key containing a CA bundle in PEM format used to establish TLS connections with S3.
By default, the system trust chain will be used, but you can use this field to add more CAs to the bundle.
ssl (boolean)
SSL indicates that the user must connect via TLS.
x509 (boolean)
X509 indicates that the user must provide a valid x509 certificate to connect.
issuer (string)
Issuer indicates that the TLS certificate provided by the user must be issued by a specific issuer.
RollingUpdate defines parameters for the RollingUpdate type.
autoUpdateDataPlane (boolean)
AutoUpdateDataPlane indicates whether the Galera data-plane version (agent and init containers) should be automatically updated based on the operator version. It defaults to false.
Updating the operator will trigger updates on all the MariaDB instances that have this flag set to true. Thus, it is recommended to progressively set this flag after having updated the operator.
ReplicasFirstPrimaryLast
ReplicasFirstPrimaryLastUpdateType indicates that the update will be applied to all replica Pods first and later on to the primary Pod.
The updates are applied one by one waiting until each Pod passes the readiness probe
i.e. the Pod gets synced and it is ready to receive traffic.
RollingUpdate
RollingUpdateUpdateType indicates that the update will be applied by the StatefulSet controller using the RollingUpdate strategy.
This strategy is unaware of the role each Pod has (primary or replica) and will
perform the update following the StatefulSet ordinal, from higher to lower.
OnDelete
OnDeleteUpdateType indicates that the update will be applied by the StatefulSet controller using the OnDelete strategy.
The update will be done when the Pods get manually deleted by the user.
Never
NeverUpdateType indicates that the StatefulSet will never be updated.
This can be used to roll out updates progressively to a fleet of instances.
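The update types above can be sketched as a strategy block. The 'type' field name and the enclosing path are assumptions for illustration.

```yaml
# Sketch of an update strategy using the types documented above.
updateStrategy:
  type: ReplicasFirstPrimaryLast   # replicas first, primary last, one by one
  autoUpdateDataPlane: false       # default; opt in progressively after operator updates
```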
WaitPointAfterSync indicates that the primary waits for the replica ACK before committing the transaction to the storage engine.
It trades off performance for consistency.
AfterCommit
WaitPointAfterCommit indicates that the primary commits the transaction to the storage engine and waits for the replica ACK afterwards.
It trades off consistency for performance.