Making MaxScale Dynamic: New in MariaDB MaxScale 2.1


The beta version of the upcoming MaxScale 2.1 has been released. This new release takes MaxScale to the next level by introducing functionality that makes it possible to change parts of MaxScale's configuration at runtime.

How Has MaxScale Changed?

One of the big new features in 2.1 is the ability to add, remove and alter server definitions at runtime. By making the server list dynamic and decoupling the servers from the services that use them, MaxScale can be defined as a more abstract service instead of a concrete service composed of a pre-defined set of servers. This opens up new use cases for MaxScale, such as template configurations, which make the use and maintenance of MaxScale a lot easier.
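As a rough sketch, an existing server definition could be altered at runtime with the maxadmin client; the server name and the new address below are hypothetical, and the full set of alterable parameters is described in the MaxScale documentation.

maxadmin alter server my-server address=192.168.0.102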

The logical extension to being able to add new servers is the ability to add new monitors. After all, the monitors are just an abstraction of a cluster of servers. What this gives us is the capability to radically change the cluster layout by bringing in a whole new cluster of servers. With this, it is possible to do a runtime migration from one cluster to another by simply adding a new monitor, then adding the servers that it monitors and finally swapping the old cluster for the new one.
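A minimal sketch of such a migration with maxadmin could look like the following; the monitor, server and service names are hypothetical, and the individual commands are covered in more detail in the next section.

maxadmin create monitor new-cluster-monitor mysqlmon
maxadmin alter monitor new-cluster-monitor user=maxuser password=maxpwd
maxadmin restart monitor new-cluster-monitor
maxadmin create server new-db-serv 192.168.0.201 3306
maxadmin add server new-db-serv new-cluster-monitor my-service
maxadmin remove server old-db-serv my-service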

While services are a somewhat more static concept, the network ports that they listen on aren't. For this reason, we added the ability to add new listeners for services. What this means is that you can extend the set of ports that a service listens on at runtime. One example of how this could be used is to connect a new appliance that uses a different port to MaxScale. This shifts some of the burden from the developer of the appliance to MaxScale.
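For example, assuming a service named my-service, a second listener on another port could be added at runtime; the listener name and port number are only illustrative.

maxadmin create listener my-service appliance-listener 0.0.0.0 4009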

How to Use It?

A practical use case for the new features in 2.1 is demonstrated by one of the tests in our testing framework. It starts MaxScale with a minimal configuration similar to the following.

[maxscale]
threads=4


[rwsplit-service]
type=service
router=readwritesplit
user=maxskysql
passwd=skysql


[CLI]
type=service
router=cli


[CLI Listener]
type=listener
service=CLI
protocol=maxscaled
socket=default

Next it defines a monitor.

maxadmin create monitor cluster-monitor mysqlmon
maxadmin alter monitor cluster-monitor user=maxuser password=maxpwd monitor_interval=1000
maxadmin restart monitor cluster-monitor

Since the monitors require some configuration before they can be used, they are created in a stopped state. This allows the user to configure the credentials that the monitor will use when it connects to the servers in the cluster. Once those are set, the monitor is started and then the test proceeds to create a listener for the rwsplit-service on port 4006.

maxadmin create listener rwsplit-service rwsplit-listener 0.0.0.0 4006

The listeners require no extra configuration and are started immediately. The next step is the definition of the two servers that the service will use.

maxadmin create server db-serv-west 172.0.10.2 3306
maxadmin create server db-serv-east 172.0.10.4 3306

Once the servers are defined, they need to be linked both to the monitor, so that it starts tracking their status, and to the service that will use them.

maxadmin add server db-serv-west cluster-monitor rwsplit-service
maxadmin add server db-serv-east cluster-monitor rwsplit-service

After the servers have been added to the service and the monitor, the system is ready for use. At this point, our test performs various health checks and then proceeds to scale the service down.
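One simple way to perform such a check is to list the servers and inspect their status with maxadmin; the commands below are an illustrative sketch rather than the exact checks the test performs.

maxadmin list servers
maxadmin show server db-serv-west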

Dynamic Scaling

When demand on the database cluster drops, servers that are no longer needed can be shut down. When demand grows, new servers can be started to handle the increased load. Combined with on-demand, cloud-based database infrastructure, this lets MaxScale handle changing traffic loads.

Scaling up on demand is done simply by creating new servers and adding them to the monitors and services that use them. Once demand for the service drops, it can be scaled back down by removing the servers from the service.
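As a sketch of this with maxadmin, a hypothetical third server could be added to and later removed from the cluster used above; the server name and address are made up for this example.

maxadmin create server db-serv-north 172.0.10.6 3306
maxadmin add server db-serv-north cluster-monitor rwsplit-service

maxadmin remove server db-serv-north rwsplit-service
maxadmin remove server db-serv-north cluster-monitor
maxadmin destroy server db-serv-north

Note that destroying a server only removes the server definition from MaxScale's configuration; it does not affect the database server itself.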

All of the changes done at runtime are persisted in a way that survives a restart of MaxScale, which makes runtime configuration a reliable way to adapt MaxScale to its environment.

Summary

The new features in 2.1 make MaxScale a dynamic part of the database infrastructure. Support for live changes to the cluster makes MaxScale the ideal companion for your application, whether you are using an on-premise, cloud-based or hybrid database setup.

The configuration changes in MaxScale 2.1 are backwards compatible with earlier versions of MaxScale, which makes adopting it easy. Going forward, our goal is to make MaxScale easier for DBAs to use by providing new ways to manage it, while at the same time pushing the modular design of MaxScale to its limits by introducing new and interesting modules.

There’s More

The 2.1 release of MaxScale is packed with new features designed to supercharge your cluster. One example is the newly added concept of module commands, which allow the modules in MaxScale to expand their capabilities beyond those of their module type. Keep an eye on the MariaDB blog: this new functionality will be explored in a follow-up post that focuses on the module command capabilities and takes a practical look at how the dbfwfilter firewall filter module implements them.