Stale slave becomes master if no master or slave is available
I have configured MaxScale to monitor a cluster of three MariaDB servers and to perform automatic failover (and rejoin). Automatic failover works most of the time, except in one scenario:
Let servers be M1, M2, M3. M1 is initial master; M2 and M3 are slaves of M1. Replication is GTID-based with `gtid_strict_mode` enabled.
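For context, a minimal monitor section for this kind of setup might look as follows. This is a hypothetical sketch, not the poster's actual configuration; the server names, user, and password are placeholders, while `module=mariadbmon`, `auto_failover`, and `auto_rejoin` are real MariaDB Monitor parameters.

```ini
# Hypothetical MaxScale monitor fragment with automatic failover/rejoin.
[Replication-Monitor]
type=monitor
module=mariadbmon
servers=M1,M2,M3
user=maxscale_monitor
password=monitor_pw
auto_failover=true
auto_rejoin=true
```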
1. Turn off M1; M1 goes down.
2. M2 becomes master via automatic failover.
3. Execute some writes to the database; M3 replicates them successfully.
4. Turn off M2 and M3 (MaxScale stays on).
5. Turn on M1.
The problem is that M1 becomes the master when no other server is accessible, even though the previous master (M2) has a higher GTID. This suggests that MaxScale does not "remember" the previous GTID position, so it cannot tell that the stale slave (M1) is behind the newer master (M2).
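The staleness check that is missing here can be sketched in a few lines. This is an illustration of the GTID comparison an external watcher could do, not MaxScale code; the function names are invented, and it assumes MariaDB's `domain-server_id-sequence` GTID format with one GTID per domain in a position string.

```python
# Hypothetical sketch: detect that a rejoining server (M1) is behind the
# GTID position the cluster reached under M2. MaxScale does not persist
# this last-known position across a full-cluster outage, which is the
# scenario described above.

def parse_gtid_pos(pos: str) -> dict[int, int]:
    """Map replication domain -> highest sequence number seen.

    MariaDB GTIDs have the form 'domain-server_id-sequence'; a position
    string may list one GTID per domain, comma-separated.
    """
    domains: dict[int, int] = {}
    for gtid in pos.split(","):
        domain, _server_id, seq = (int(x) for x in gtid.strip().split("-"))
        domains[domain] = max(domains.get(domain, 0), seq)
    return domains

def is_stale(candidate_pos: str, last_known_pos: str) -> bool:
    """True if the candidate is behind the last position the cluster reached."""
    cand = parse_gtid_pos(candidate_pos)
    known = parse_gtid_pos(last_known_pos)
    return any(cand.get(domain, 0) < seq for domain, seq in known.items())

# M1 stopped at sequence 100; writes on M2 advanced the cluster to 120.
print(is_stale("0-1-100", "0-2-120"))  # True: M1 should not be promoted
print(is_stale("0-2-120", "0-2-120"))  # False: up to date
```

If MaxScale stored the highest GTID position it had observed, a check like this would let it refuse to promote M1 in the scenario above.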
Is there any option to prevent a slave with stale data from becoming the master? The problem occurs only when no other nodes are available.
I would be grateful for any kind of help.
Answer
Answered by Esa Korhonen in this comment.
This situation is indeed not handled. The general assumption is that the entire cluster is never down at the same time, and MaxScale tries to get the cluster running with the best server available. Supporting this case may require a new configuration setting, so please make a feature request on our JIRA.