Orchestrator Overview

Orchestrator is a MySQL and MariaDB high availability and replication management tool. It is released by Shlomi Noach under the terms of the Apache License, version 2.0.

Orchestrator provides automation for MariaDB replication in the following ways:

  • It can be used to perform certain operations, like repairing broken replication or moving a replica from one master to another. These operations can be requested using CLI commands, or via the GUI provided with Orchestrator. The actual commands sent to MariaDB are automated by Orchestrator, so the user doesn't have to worry about the details.
  • Orchestrator can also automatically perform a failover if a master crashes or is unreachable by its replicas. In that case, Orchestrator promotes one of the replicas to master. The replica to promote is chosen based on several criteria, like the servers' versions, the binary log formats in use and the datacenters' locations (see the configuration sketch after this list).
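Automated master recovery is only attempted for clusters that match the recovery filters in Orchestrator's configuration file. As a minimal sketch, the following settings enable it for all clusters (the "*" pattern is a catch-all; specific cluster name patterns can be listed instead to limit recovery to selected clusters):

{
  "RecoverMasterClusterFilters": ["*"],
  "RecoverIntermediateMasterClusterFilters": ["*"]
}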

Supported Topologies

Currently, Orchestrator fully supports MariaDB GTID, replication, and semi-synchronous replication. While Orchestrator does not implement any Galera-specific logic, it works with Galera clusters. For details, see Supported Topologies and Versions in the Orchestrator documentation.

Architecture

Orchestrator consists of a single executable called orchestrator. It runs as a process that periodically connects to the target servers and runs SQL queries against them, so it needs a user with the proper permissions on those servers. While the process is running, a GUI is available via a web browser, by default at http://localhost:3000.
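As an illustration, such a user could be created on each target server along the following lines. The account name and password are placeholders, and the privilege list reflects the grants commonly suggested for Orchestrator; check the Orchestrator documentation for the exact requirements of your version.

-- placeholder account name and password
CREATE USER 'orchestrator'@'%' IDENTIFIED BY '<topology_password>';
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'%';
-- only relevant on MySQL servers that store master info in tables
GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'%';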

Orchestrator expects to find a JSON configuration file called orchestrator.conf.json in /etc.
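For illustration only, a minimal configuration file could look like the sketch below; the credentials are placeholders and must match the user created on the target servers:

{
  "Debug": false,
  "ListenAddress": ":3000",
  "MySQLTopologyUser": "orchestrator",
  "MySQLTopologyPassword": "<topology_password>"
}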

A backend database is used to store Orchestrator's own state and the information gathered about the target servers. By default, this is a built-in SQLite database. However, it is possible to use an external MariaDB or MySQL server.
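As a rough sketch, pointing Orchestrator at an external MariaDB or MySQL backend instead of the built-in SQLite database involves settings like the following, where the host and credentials are again placeholders:

{
  "BackendDB": "mysql",
  "MySQLOrchestratorHost": "<backend-host>",
  "MySQLOrchestratorPort": 3306,
  "MySQLOrchestratorDatabase": "orchestrator",
  "MySQLOrchestratorUser": "orchestrator_srv",
  "MySQLOrchestratorPassword": "<backend_password>"
}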

If a cluster of Orchestrator instances is running, one node is active while the others are passive and only take over in case of failover. If the active node crashes or becomes unreachable, one of the other nodes becomes the active instance. When the instances share a single central backend database, the active_node table shows which node is currently active; alternatively, each node can use its own backend and the nodes coordinate using the Raft protocol.
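As a sketch, assuming a three-node deployment with hypothetical node addresses, Raft mode could be enabled with settings like the following in each node's configuration file:

{
  "RaftEnabled": true,
  "RaftDataDir": "/var/lib/orchestrator",
  "RaftBind": "<this_node_ip>",
  "DefaultRaftPort": 10008,
  "RaftNodes": ["<node1_ip>", "<node2_ip>", "<node3_ip>"]
}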

CLI Examples

As mentioned, Orchestrator can be used from the command line. Here are some examples.

List clusters:

orchestrator -c clusters

Discover a specified instance and add it to the known topology:

orchestrator -c discover -i <host>:<port>

Forget about an instance:

orchestrator -c forget -i <host>:<port>

Move a replica to a different master:

orchestrator -c relocate -i <replica-host>:<replica-port> -d <new-master-host>:<new-master-port>

Move a replica up, so that it becomes a "sibling" of its master:

orchestrator -c move-up -i <replica-host>:<replica-port>

Move a replica down, so that it becomes a replica of its "sibling":

orchestrator -c move-below -i <replica-host>:<replica-port> -d <sibling-host>:<sibling-port>

Make a node read-only:

orchestrator -c set-read-only -i <host>:<port>

Make a node writeable:

orchestrator -c set-writeable -i <host>:<port>

The --debug and --stack options can be added to the above commands to make them more verbose.
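For example, to run a discovery with verbose output and stack traces on error:

orchestrator -c discover -i <host>:<port> --debug --stack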

Orchestrator Resources


Content initially contributed by Vettabase Ltd.
