Explore the Spider storage engine in MariaDB Server. Learn how to shard data across multiple MariaDB and MySQL servers, enabling horizontal scaling and distributed database solutions.
Explains the architecture of Spider, where a "Spider node" processes queries and distributes them to one or more "Data nodes" that actually store the data.
Learn about the schema design of the Spider storage engine.
Spider Tables can be created with MariaDB Enterprise Spider using the CREATE TABLE statement. The Enterprise Spider storage engine can be chosen for the table by specifying ENGINE=Spider.
For each Spider Table, MariaDB Enterprise Spider requires connection details for the Data Nodes. The connection details are provided by specifying Connection Options in the COMMENT option for the table or partition, depending on the topology.
In a Federated MariaDB Enterprise Spider topology, the connection options are specified in the COMMENT table option for the Spider Table:
An alternative syntax is available. When you don't want to create a server object, the full connection details for the Data Node can be specified in the COMMENT table option:
In a Sharded MariaDB Enterprise Spider topology, the connection options are specified in the COMMENT partition option for each partition of the Spider Table:
An alternative syntax is available. When you don't want to create a server object, the full connection details for the Data Nodes can be specified in the COMMENT partition option:
The following connection options are supported in the COMMENT table option for Federated Spider Tables and in the COMMENT partition option for Sharded Spider Tables:
This page is: Copyright © 2025 MariaDB. All rights reserved.
socket (String): The path to the Unix socket file to use when connecting to the Data Node. This option is mutually exclusive with port.
ssl_ca (String): The path to the file with the TLS certificate authority chain.
ssl_capath (String): The path to the directory with the TLS certificate authority chain files.
ssl_cert (String): The path to the TLS client certificate.
ssl_key (String): The path to the TLS private key.
ssl_cipher (String): The list of permitted ciphers for TLS connections.
table (String): The table to query.
user (String): The username to use when connecting to the Data Node.
wrapper (String): The foreign data wrapper: mariadb for the Federated and Sharded topologies, or odbc for the ODBC topology.
database (String): The database to select when connecting to the Data Node.
host (String): The hostname or IP address for the Data Node. This option is mutually exclusive with socket.
password (String): The password to use when connecting to the Data Node.
port (Integer): The port to use when connecting to the Data Node. This option is mutually exclusive with socket.
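As an illustrative sketch (not from the original page: the table, host, credentials, and certificate path below are hypothetical placeholders), several of these options can be combined in a single COMMENT string:

```sql
-- Hypothetical Federated Spider table combining several connection options.
-- All names, addresses, and paths are placeholders.
CREATE TABLE spider_db.orders (
  order_id INT NOT NULL PRIMARY KEY,
  total DECIMAL(13, 2)
) ENGINE=Spider
COMMENT='wrapper "mariadb", host "192.0.2.10", port "3306", user "spider_user", password "password", database "remote_db", table "orders", ssl_ca "/etc/ssl/ca.pem"';
```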
CREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);
CREATE DATABASE spider_hq_sales;
CREATE TABLE spider_hq_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='server "hq_server", table "invoices"';

CREATE TABLE spider_hq_sales.invoices_alternate (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='table "invoices", host "192.0.2.2", port "5801", user "spider_user", password "password", database "hq_sales"';

CREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);
CREATE SERVER eastern_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.3",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "eastern_sales"
);
CREATE SERVER western_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.4",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "western_sales"
);
CREATE DATABASE spider_sharded_sales;
CREATE TABLE spider_sharded_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
PARTITION BY LIST(branch_id) (
PARTITION hq_partition VALUES IN (1) COMMENT = 'server "hq_server", table "invoices"',
PARTITION eastern_partition VALUES IN (2) COMMENT = 'server "eastern_server", table "invoices"',
PARTITION western_partition VALUES IN (3) COMMENT = 'server "western_server", table "invoices"'
);

CREATE TABLE spider_sharded_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
PARTITION BY LIST(branch_id) (
PARTITION hq_partition VALUES IN (1) COMMENT = 'table "invoices", host "192.0.2.2", port 5801, user "spider_user", password "password", database "hq_sales"',
PARTITION eastern_partition VALUES IN (2) COMMENT = 'table "invoices", host "192.0.2.3", port 5801, user "spider_user", password "password", database "eastern_sales"',
PARTITION western_partition VALUES IN (3) COMMENT = 'table "invoices", host "192.0.2.4", port 5801, user "spider_user", password "password", database "western_sales"'
);

This section covers operational guides for managing a Federated Spider topology, including migrating tables and performing backup and restore.
Learn about ODBC operations in MariaDB Enterprise Spider topology. This section covers configuring and managing ODBC connections to distributed Spider environments for seamless data access.
Explore Spider functions in MariaDB Server. Learn about the specialized functions that enhance data access and manipulation across sharded and distributed databases using the Spider storage engine.
Use this UDF to refresh the cache used by Spider's monitoring threads, ensuring that the status of remote tables and connections is up to date.
SPIDER_FLUSH_TABLE_MON_CACHE()

A UDF installed with the Spider Storage Engine, this function is used for refreshing monitoring server information. It returns a value of 1.
This page is licensed: CC BY-SA / Gnu FDL
An introduction to the Spider storage engine, which provides built-in sharding by linking to tables on remote MariaDB servers, supporting partitioning and XA transactions.
The Spider storage engine is a pluggable storage engine with built-in sharding features. It supports partitioning and XA transactions, and allows tables of different MariaDB instances to be handled as if they were on the same instance. It refers to one possible implementation of ISO/IEC 9075-9:2008 SQL/MED.
When a table is created with the Spider storage engine, the table links to a table on a remote server. The remote table can use any storage engine. The table link is concretely achieved by the establishment of a connection from a local MariaDB server to a remote MariaDB server. The link is shared by all tables that are part of the same transaction.
The Spider documentation in the MariaDB documentation is currently incomplete. See the Spider website for more, as well as the spider-1.0-doc and spider-2.0-doc repositories.
This UDF allows you to execute SQL statements on remote data nodes in the background, enabling concurrent processing and non-blocking operations.
Executes the given SQL statement in the background on the remote server, as defined in the parameters listing. If the query returns a result set, it stores the results in the given temporary table. When the given SQL statement executes successfully, this function returns the number of called UDFs.
SELECT SPIDER_FLUSH_TABLE_MON_CACHE();
+--------------------------------+
| SPIDER_FLUSH_TABLE_MON_CACHE() |
+--------------------------------+
| 1 |
+--------------------------------+

Description: Returns empty results on network error.
0: Return an error on getting a network error.
1: Return 0 records on getting a network error.
Default Table Value: 0
DSN Parameter Name: erwm
SPIDER_BG_DIRECT_SQL('sql', 'tmp_table_list', 'parameters')

SELECT SPIDER_BG_DIRECT_SQL('SELECT * FROM example_table', '',
'srv "node1", port "8607"') AS "Direct Query";
+--------------+
| Direct Query |
+--------------+
| 1 |
+--------------+

Spider 3.3.15 (Stable)
Spider 3.3.15 (Gamma)
Spider 3.3.14 (Stable)
Spider 3.3.13
To create a table in the Spider storage engine format, the COMMENT and/or CONNECTION clauses of the CREATE TABLE statement are used to pass connection information about the remote server.
For example, the following table exists on a remote server (in this example, the remote node was created with the MySQL Sandbox tool, an easy way to test with multiple installations):
On the local server, a Spider table can be created as follows:
Records can now be inserted on the local server, and they are stored on the remote server:
MariaDB starting with
As an alternative to specifying the Data Node information in the COMMENT, certain information (server, database, table) can also be specified using Table Options, like so:
Prepare a 10M record table using the sysbench utility.
Run a first read-only benchmark to check the initial single-node performance.
Define an easy way to access the nodes from the MariaDB or MySQL client.
Create the empty tables to hold the data, and repeat for all available backend nodes.
Without a connection pool or the MariaDB thread pool, HAProxy and Spider have been protecting against TCP socket overflow without specific TCP tuning. In reality, with a well-tuned TCP stack or thread pool, the curve should not decrease so abruptly to 0. Refer to the MariaDB Thread Pool to explore this feature.
Create the spider table on the Spider Node
Copy the data from the original sysbench table to the spider table
A common issue we observe with partitioning is a non-uniform distribution of data between the backends, based on the partition key hashing algorithm.
Rerun the benchmark with fewer queries.
The response time degrades to 0.04. This is expected because query latency is increased by multiple network round trips, and condition pushdown is not implemented yet. Sysbench performs a lot of range queries; for now, consider such a range query as a badly optimized query.
We need to increase the concurrency to get better throughput.
Background search is not available in MariaDB yet, but the next table definition mainly enables improving the performance of a single complex query plan with background search, which can be found via the upstream spiral binaries MariaDB branch.
We have 4 cores per backend and 2 backends.
On backend1
On backend2
On Spider Node
Now test the following query:
MariaDB starting with
Spider's high availability feature has been deprecated (MDEV-28479), and the related features are deleted. Please use other high availability solutions, such as replication.
What is happening if we stop one backend?
Let's fix this with Spider monitoring. Note that msi is the list of the Spider nodes' @@server_id values participating in the quorum.
Monitoring should be set up between the Spider nodes participating in the cluster. We only have one Spider Node, and spider_link_mon_servers represents the inter-connection of all Spider Nodes in our setup.
This simple setup does not bring HA in case the Spider Node is not available. In a production setup the number of Spider Nodes in the spider_link_mon_servers table should be at least 3 to get a majority consensus.
Checking the state of the nodes:
No change has been made to the cluster, so let's create a divergence:
Reintroducing the failed backend1 in the cluster:

Explains the fundamental concepts behind Spider, including its architecture as a proxy storage engine, sharding capabilities, and support for XA transactions across data nodes.
A typical Spider deployment has a shared-nothing clustered architecture. The system works with any inexpensive hardware, and with a minimum of specific requirements for hardware or software. It consists of a set of computers, with one or more MariaDB processes known as nodes.
The nodes that store the data are designated as Backend Nodes, and can be any MariaDB, MySQL, or Oracle server instance using any storage engine available inside the backend.
The Spider Proxy Nodes are instances running at least MariaDB 10. Spider Proxy Nodes are used to declare, per table, the attachment to the backend nodes. In addition, Spider Proxy Nodes can be set up to enable the tables to be split and mirrored to multiple Backend Nodes.
In the default high availability setup, Spider Nodes produce SQL errors when a backend server is not responding. Per-table monitoring can be set up to enable availability in case of unresponsive backends (monitoring_bg_kind=1 or monitoring_bg_kind=2). The Monitoring Spider Nodes are inter-connected using the system table mysql.spider_link_mon_servers to manage network partitioning, better known as split brain; an odd number of Spider Monitor Nodes should be set up to allow a consensus based on the majority: either a single separate shared Monitoring Node instance or a minimum set of 3 Spider Nodes. More information can be found in the Spider documentation.
MariaDB starting with
Spider's high availability feature has been deprecated (MDEV-28479), and the related features are deleted. Please use other high availability solutions.
Spider is a pluggable storage engine, acting as a proxy between the optimizer and the remote backends. When the optimizer requests multiple calls to the storage engine, Spider enforces consistency by using the two-phase commit protocol to the backends and by creating transactions on the backends to preserve atomic operations for a single SQL execution.
Preserving atomic operations during execution is used at multiple levels in the architecture. For the regular optimizer plan, it refers to multiple split reads; for concurrent partition scans, it refers to semi-transactions.
Costly queries can be more efficient when it is possible to fully push down part of the execution plan to each backend and reduce the result afterwards. Spider enables such execution with some direct execution shortcuts.
Spider uses the per-partition and per-table model to concurrently access the remote backend nodes. For in-memory workloads, that property can be used to define multiple partitions on a single remote backend node, to better adapt the concurrency to the CPUs available in the hardware.
Spider maintains an internal dictionary of table and index statistics, refreshed by separate threads. The statistics are pulled by default on a timed basis, and refer to crd for cardinality and sts for table status.
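As a sketch, these cached statistics can be inspected through the Spider system tables in the mysql database (assuming a standard Spider plugin installation; the exact columns vary by version):

```sql
-- Table status (sts) and cardinality (crd) caches maintained by Spider's
-- background statistics threads.
SELECT * FROM mysql.spider_table_sts;
SELECT * FROM mysql.spider_table_crd;
```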
Spider stores resultsets in memory, but with spider_quick_mode=3 it stores resultsets in internal temporary tables if the resultsets are larger than quick_table_size.
This function executes an SQL string directly on a specified remote backend server, allowing for maintenance tasks or queries that bypass local parsing.
SPIDER_DIRECT_SQL('sql', 'tmp_table_list', 'parameters')

A UDF installed with the Spider Storage Engine, this function is used to execute the SQL string sql on the remote server, as defined in parameters. If any resultsets are returned, they are stored in the tmp_table_list.
The function returns 1 if the SQL executes successfully, or 0 if it fails.
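A minimal usage sketch, assuming a server object named node1 listening on port 8607 (both names are illustrative, mirroring the SPIDER_BG_DIRECT_SQL example elsewhere in this set of pages):

```sql
-- Run a statement directly on the backend defined by server "node1";
-- no temporary table list is given, so no resultset is kept.
SELECT SPIDER_DIRECT_SQL('SELECT * FROM example_table', '',
                         'srv "node1", port "8607"');
```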
Learn how to use this function to copy table data from one Spider link ID to another, useful for migrating data or rebalancing shards without stopping the service.
SPIDER_COPY_TABLES(spider_table_name,
  source_link_id, destination_link_id_list [,parameters])

A UDF installed with the Spider Storage Engine, this function copies table data from source_link_id to destination_link_id_list. The service does not need to be stopped in order to copy.
If the Spider table is partitioned, the name must be of the format table_name#P#partition_name. The partition name can be viewed in the mysql.spider_tables table, for example:
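A sketch of that lookup (the database and table names shown are assumptions):

```sql
-- List Spider tables; a partitioned table shows one row per partition,
-- with names like sbtest#P#pt1.
SELECT db_name, table_name FROM mysql.spider_tables;
```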
Returns 1 if the data was copied successfully, or 0 if copying the data failed.
A list of real-world companies and projects using Spider for high-volume data handling, gaming, and analytics, illustrating its scalability.
A list of users or clients that are using Spider and agree to be referenced:
Tencent Games. They handle 100TB data on 396 Spider nodes and 2800 data nodes. They use this cluster for their online games.
Kadokawa Corporation
MicroAd, Inc.
Sansan, Inc.
teamLab Inc.
CCM Benchmark
Softlink
Gavo
Blablacar: using Spider for storing various logs
Believe Digital: using Spider for back-office analytics queries to aggregate multi-billion-row tables in real time
An overview of the federated topology for Spider, where a single Spider node aggregates data from multiple remote data nodes, acting as a unified access point.
Choose an operation for the Sharded MariaDB Enterprise Spider topology:
How to create a new backup or restore an existing backup.
The character set or collation for the Spider Table can be updated or modified using the ALTER TABLE statement.
On the Spider Node, alter the Spider Table's character set and collation:
If the new character set and collation are not compatible with the character set and collation used by the Data Table, you must also alter the character set and collation for the Data Table on the Data Node.
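A sketch of the two-step change described above (the table names, character set, and collation below are illustrative placeholders):

```sql
-- On the Spider Node:
ALTER TABLE spider_db.orders
  CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

-- On the Data Node, if the data table's character set is incompatible:
ALTER TABLE remote_db.orders
  CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
```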
In a Federated MariaDB Enterprise Spider topology, the connection options for a Data Node can be updated using the ALTER TABLE statement.
On the Spider Node, alter the table's connection details:
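For example (a sketch; the server object hq_server_new is hypothetical), the COMMENT can be rewritten to point at a different server object:

```sql
-- Repoint the Spider table at a new server object, on the Spider Node.
ALTER TABLE spider_hq_sales.invoices
  COMMENT='server "hq_server_new", table "invoices"';
```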
Describes the SPIDER_WRAPPER_PROTOCOLS table, which lists the available foreign data wrappers (like `mysql`) that Spider can use to connect to remote servers.
MariaDB starting with
The Information Schema SPIDER_WRAPPER_PROTOCOLS table is installed along with the Spider storage engine from .
It contains the following columns:
WRAPPER_NAME (varchar(64))
Performance benchmark results for Spider, demonstrating its throughput and latency characteristics under various workloads compared to other configurations.
This is best run on a cluster of 3 Intel NUC servers with 12 virtual cores (model name: Intel® Core(TM) i3-3217U CPU @ 1.80GHz).
All nodes have been running a mysqlslap client attached to the local spider node in the best run.
spider_conn_recycle_mode=1;
The read point-select test is produced with a 10M row sysbench table.
The write test inserts a single string into a MEMORY table.
Before the Engine Condition Pushdown patch.
A guide on how to migrate tables from a standard MariaDB deployment to a Federated Spider topology, distributing data across multiple backend nodes.
This page outlines the differences between the standalone SpiderForMySQL distribution and the version integrated into MariaDB Server, including version correspondence and feature availability.
node1 > CREATE TABLE s(
id INT NOT NULL AUTO_INCREMENT,
code VARCHAR(10),
PRIMARY KEY(id));

CREATE TABLE s(
id INT NOT NULL AUTO_INCREMENT,
code VARCHAR(10),
PRIMARY KEY(id)
)
ENGINE=SPIDER
COMMENT='host "127.0.0.1", user "msandbox", password "msandbox", port "8607"';

INSERT INTO s(code) VALUES ('a');
node1 > SELECT * FROM s;
+----+------+
| id | code |
+----+------+
| 1 | a |
+----+------+

CREATE SERVER srv FOREIGN DATA WRAPPER mysql OPTIONS(
HOST '127.0.0.1',
USER 'msandbox',
PASSWORD 'msandbox',
PORT 8607);
CREATE TABLE s(
id INT NOT NULL AUTO_INCREMENT,
code VARCHAR(10),
PRIMARY KEY(id)
)
ENGINE=SPIDER REMOTE_SERVER="srv" REMOTE_DATABASE="db" REMOTE_TABLE="s";

/usr/local/skysql/sysbench/bin/sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-user=skysql --mysql-password=skyvodka --mysql-host=192.168.0.202 --mysql-port=5054 --oltp-table-size=10000000 --mysql-db=test prepare

/usr/local/skysql/sysbench/bin/sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-user=skysql --mysql-password=skyvodka --mysql-host=192.168.0.202 --mysql-port=5054 --mysql-db=test --oltp-table-size=10000000 --num-threads=4 --max-requests=100000 --oltp-read-only=on run

sysbench 0.4.12: multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 4
Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Maximum number of requests for OLTP test is limited to 100000
Threads started!
Done.
OLTP test statistics:
queries performed:
read: 1400196
write: 0
other: 200028
total: 1600224
transactions: 100014 (1095.83 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1400196 (15341.58 per sec.)
other operations: 200028 (2191.65 per sec.)
Test execution summary:
total time: 91.2681s
total number of events: 100014
total time taken by event execution: 364.3693
per-request statistics:
min: 1.85ms
avg: 3.64ms
max: 30.70ms
approx. 95 percentile: 4.66ms
Threads fairness:
events (avg/stddev): 25003.5000/84.78
execution time (avg/stddev): 91.0923/0.00

alias backend1='/usr/local/skysql/mysql-client/bin/mysql --user=skysql --password=skyvodka --host=192.168.0.202 --port=5054'
alias backend2='/usr/local/skysql/mysql-client/bin/mysql --user=skysql --password=skyvodka --host=192.168.0.203 --port=5054'
alias spider1='/usr/local/skysql/mysql-client/bin/mysql --user=skysql --password=skyvodka --host=192.168.0.201 --port=5054'

backend1 << EOF
CREATE DATABASE backend;
CREATE TABLE backend.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF
backend2 << EOF
CREATE DATABASE backend;
CREATE TABLE backend.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF

spider1 << EOF
CREATE SERVER backend
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.202',
DATABASE 'test',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE TABLE test.sbtest
(
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql",srv "backend"';
SELECT * FROM test.sbtest LIMIT 10;
EOF

#spider1 << EOF
CREATE SERVER backend1
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.202',
DATABASE 'backend',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE SERVER backend2
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.203',
DATABASE 'backend',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE DATABASE IF NOT EXISTS backend;
CREATE TABLE backend.sbtest
(
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1"',
PARTITION pt2 COMMENT = 'srv "backend2"'
) ;
EOF

#/usr/local/skysql/mariadb/bin/mysqldump --user=skysql --password=skyvodka --host=192.168.0.202 --port=5054 --no-create-info test sbtest | spider1 backend
#backend2 -e"SELECT count(*) FROM backend.sbtest;"
+----------+
| count(*) |
+----------+
| 3793316 |
+----------+
#backend1 -e"SELECT count(*) FROM backend.sbtest;"
+----------+
| count(*) |
+----------+
| 6206684 |
+----------+

#/usr/local/skysql/sysbench/bin/sysbench --test=oltp --db-driver=mysql --mysql-table-engine=innodb --mysql-user=skysql --mysql-password=skyvodka --mysql-host=192.168.0.201 --mysql-port=5054 --mysql-db=backend --mysql-engine-trx=yes --oltp-table-size=10000000 --num-threads=4 --max-requests=100 --oltp-read-only=on run

OLTP test statistics:
queries performed:
read: 1414
write: 0
other: 202
total: 1616
transactions: 101 (22.95 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1414 (321.30 per sec.)
other operations: 202 (45.90 per sec.)
Test execution summary:
total time: 4.4009s
total number of events: 101
total time taken by event execution: 17.2960
per-request statistics:
min: 114.48ms
avg: 171.25ms
max: 200.98ms
approx. 95 percentile: 195.12ms
Threads fairness:
events (avg/stddev): 25.2500/0.43
execution time (avg/stddev): 4.3240/0.04

#backend1 << EOF
CREATE DATABASE bsbackend1;
CREATE DATABASE bsbackend2;
CREATE DATABASE bsbackend3;
CREATE DATABASE bsbackend4;
CREATE TABLE bsbackend1.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend2.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend3.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend4.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF

#backend2 << EOF
CREATE DATABASE bsbackend5;
CREATE DATABASE bsbackend6;
CREATE DATABASE bsbackend7;
CREATE DATABASE bsbackend8;
CREATE TABLE bsbackend5.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend6.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend7.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
CREATE TABLE bsbackend8.sbtest (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=InnoDB;
EOF

#spider2 << EOF
CREATE SERVER bsbackend1 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend1',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend2 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend2',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend3 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend3',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend4 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.202', DATABASE 'bsbackend4',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend5 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend5',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend6 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend6',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend7 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend7',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE SERVER bsbackend8 FOREIGN DATA WRAPPER mysql OPTIONS( HOST '192.168.0.203', DATABASE 'bsbackend8',USER 'skysql', PASSWORD 'skyvodka',PORT 5054);
CREATE DATABASE IF NOT EXISTS bsbackend;
CREATE TABLE bsbackend.sbtest
(
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "bsbackend1"',
PARTITION pt2 COMMENT = 'srv "bsbackend2"',
PARTITION pt3 COMMENT = 'srv "bsbackend3"',
PARTITION pt4 COMMENT = 'srv "bsbackend4"',
PARTITION pt5 COMMENT = 'srv "bsbackend5"',
PARTITION pt6 COMMENT = 'srv "bsbackend6"',
PARTITION pt7 COMMENT = 'srv "bsbackend7"',
PARTITION pt8 COMMENT = 'srv "bsbackend8"'
) ;
EOF
INSERT INTO bsbackend.sbtest SELECT * FROM backend.sbtest;

SELECT count(*) FROM sbtest;
+----------+
| count(*) |
+----------+
| 10000001 |
+----------+
1 row IN SET (8,38 sec)
SET spider_casual_read=1;
SET spider_bgs_mode=2;
SELECT count(*) FROM sbtest;
+----------+
| count(*) |
+----------+
| 10000001 |
+----------+
1 row IN SET (4,25 sec)
mysql> SELECT sum(k) FROM sbtest;
+--------+
| sum(k) |
+--------+
| 0 |
+--------+
1 row IN SET (5,67 sec)
mysql> SET spider_casual_read=0;
mysql> SELECT sum(k) FROM sbtest;
+--------+
| sum(k) |
+--------+
| 0 |
+--------+
1 row IN SET (12,56 sec)

#backend1 -e "CREATE DATABASE backend_rpl"
#backend2 -e "CREATE DATABASE backend_rpl"
#/usr/local/skysql/mariadb/bin/mysqldump --user=skysql --password=skyvodka --host=192.168.0.202 --port=5054 backend sbtest | backend1 backend_rpl
#/usr/local/skysql/mariadb/bin/mysqldump --user=skysql --password=skyvodka --host=192.168.0.203 --port=5054 backend sbtest | backend2 backend_rpl
#spider1 << EOF
DROP TABLE backend.sbtest;
CREATE SERVER backend1_rpl
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.202',
DATABASE 'backend_rpl',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE SERVER backend2_rpl
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.203',
DATABASE 'backend_rpl',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
CREATE TABLE backend.sbtest
(
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl"'
) ;
INSERT INTO backend.sbtest SELECT 10000001, 0, '' ,'replicas test';
EOF
#backend1 -e "SELECT * FROM backend.sbtest WHERE id=10000001";
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000001 | 0 | | replicas test |
+----------+---+---+---------------+
# backend2 -e "SELECT * FROM backend.sbtest WHERE id=10000001";
# backend2 -e "SELECT * FROM backend_rpl.sbtest WHERE id=10000001";
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000001 | 0 | | replicas test |
+----------+---+---+---------------+

#spider1 -e "SELECT * FROM backend.sbtest WHERE id=10000001";
ERROR 1429 (HY000) at line 1: Unable to connect to foreign data source: backend1

#spider1 << EOF
DROP TABLE backend.sbtest;
CREATE TABLE backend.sbtest
(
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT '',
PRIMARY KEY (id),
KEY k (k)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl", mbk "2", mkd "2", msi "5054", link_status "0 0"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl", mbk "2", mkd "2", msi "5054", link_status "0 0" '
) ;
CREATE SERVER mon
FOREIGN DATA WRAPPER mysql
OPTIONS(
HOST '192.168.0.201',
DATABASE 'backend',
USER 'skysql',
PASSWORD 'skyvodka',
PORT 5054
);
INSERT INTO `mysql`.`spider_link_mon_servers` VALUES
('%','%','%',5054,'mon',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,0,NULL,NULL);
SELECT spider_flush_table_mon_cache();
EOF
#spider1 -e "SELECT * FROM backend.sbtest WHERE id=10000001"
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000001 | 0 | | replicas test |
+----------+---+---+---------------+
#spider1 -e "SELECT db_name, table_name, server FROM mysql.spider_tables WHERE link_status=3"
+---------+--------------+----------+
| db_name | table_name | server |
+---------+--------------+----------+
| backend | sbtest#P#pt1 | backend1 |
+---------+--------------+----------+
# spider1 -e "INSERT INTO backend.sbtest SELECT 10000003, 0, '' ,'replicas test';"
# backend1 -e "SELECT * FROM backend.sbtest WHERE id=10000003"
# backend2 -e "SELECT * FROM backend_rpl.sbtest WHERE id=10000003"
+----------+---+---+---------------+
| id | k | c | pad |
+----------+---+---+---------------+
| 10000003 | 0 | | replicas test |
+----------+---+---+---------------+
#spider1 << EOF
ALTER TABLE backend.sbtest
ENGINE=spider COMMENT='wrapper "mysql", TABLE "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl", mbk "2", mkd "2", msi "5054", link_status "2 0"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl", mbk "2", mkd "2", msi "5054", link_status "0 2"'
) ;
SELECT spider_copy_tables('backend.sbtest#P#pt1','0','1');
SELECT spider_copy_tables('backend.sbtest#P#pt2','1','0');
ALTER TABLE backend.sbtest
ENGINE=spider COMMENT='wrapper "mysql", TABLE "sbtest"'
PARTITION BY KEY (id)
(
PARTITION pt1 COMMENT = 'srv "backend1 backend2_rpl", mbk "2", mkd "2", msi "5054", link_status "1 0"',
PARTITION pt2 COMMENT = 'srv "backend2 backend1_rpl", mbk "2", mkd "2", msi "5054", link_status "0 1"'
) ;
EOF
SELECT table_name FROM mysql.spider_tables;
+-------------+
| table_name |
+-------------+
| spt_a#P#pt1 |
| spt_a#P#pt2 |
| spt_a#P#pt3 |
+-------------+
This page is licensed: CC BY-SA / Gnu FDL




The MariaDB Enterprise Server node with the destination table is configured as a Spider Node.
The Data Table is the source table on the Data Node.
A Spider Table is created on the Spider Node that references the Data Table on the Data Node.
On the Spider node, the Data Table's data is migrated to the destination table by querying the Spider Table like the following:
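In practice, the migration is a plain INSERT ... SELECT run on the Spider Node. A minimal sketch, using placeholder table names (innodb_tab for the local destination table and spider_tab for the Spider Table):

```sql
-- On the Spider Node: pull all rows from the Data Table
-- (via the Spider Table) into the local destination table.
INSERT INTO innodb_tab
SELECT * FROM spider_tab;
```

Because the read goes through the Spider Table, the data is fetched from the Data Node and written into the local storage engine in a single statement.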
Follow the steps below to migrate tables using the Federated MariaDB Enterprise Spider topology.
Before you can migrate tables, the Federated MariaDB Enterprise Spider topology must be deployed.
For additional information, see "Deploy MariaDB Enterprise Spider".
A local copy of the table must be created. This new table will contain the migrated data.
On the Spider Node*, create a local copy of each table that is being migrated:
The table data can be migrated to the local table using the Spider Tables.
On the Spider Node*, migrate the table data to the local copy of the table using the INSERT SELECT statement:
On the Spider Node, read from the local copy of the table using a SELECT statement to confirm that the data has been migrated:
This page is: Copyright © 2025 MariaDB. All rights reserved.
SELECT SPIDER_DIRECT_SQL('SELECT * FROM s', '', 'srv "node1", port "8607"');
+----------------------------------------------------------------------+
| SPIDER_DIRECT_SQL('SELECT * FROM s', '', 'srv "node1", port "8607"') |
+----------------------------------------------------------------------+
| 1 |
+----------------------------------------------------------------------+
/usr/local/skysql/mysql-client/bin/mysqlslap --user=skysql --password=skyvodka --host=192.168.0.201 --port=5012 -i1000000 -c32 -q "insert into test(c) values('0-31091-138522330')" --create-schema=test
SET global use_stat_tables='preferably';
USE backend;
ANALYZE TABLE sbtest;

INSERT INTO innodb_tab
SELECT * FROM spider_tab;

CREATE DATABASE hq_sales;
CREATE SEQUENCE hq_sales.invoice_seq;
CREATE TABLE hq_sales.invoices (
branch_id INT NOT NULL DEFAULT (1) CHECK (branch_id=1),
invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR hq_sales.invoice_seq),
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;

INSERT INTO hq_sales.invoices
SELECT * FROM spider_hq_sales.invoices;

SELECT * FROM hq_sales.invoices;
+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
| 1 | 1 | 1 | 2020-05-10 12:35:10.000000 | 1087.23 | CREDIT_CARD |
| 1 | 2 | 2 | 2020-05-10 14:17:32.000000 | 1508.57 | WIRE_TRANSFER |
| 1 | 3 | 3 | 2020-05-10 14:25:16.000000 | 227.15 | CASH |
+-----------+------------+-------------+----------------------------+---------------+----------------+
HANDLER cannot be translated to SQL in MariaDB
Concurrent background search is not yet implemented in MariaDB
The vertical partitioning storage engine (VP) is not implemented in MariaDB
CREATE TABLE can use table discovery in MariaDB
Performance improvement when using more than one AND condition in MariaDB
Spider 3.3.13: Stable
Spider 3.2.37: Gamma
Spider 3.2.21: Gamma
Spider 3.2.18: Gamma
Spider 3.2.11: Gamma
Spider 3.2.4: Gamma
Spider 3.2: Gamma
Spider 3.0: Beta







How to update the character set or collation.
How to update the connection options for a Data Node.
WRAPPER_VERSION: varchar(20)
WRAPPER_DESCRIPTION: longtext
WRAPPER_MATURITY: varchar(12)




Provides an overview of using Spider for sharding in MariaDB Enterprise Server, allowing data distribution across multiple nodes for horizontal scalability.
Choose an operation for the Sharded MariaDB Enterprise Spider topology:
How to add a new shard to a Spider Table.
The character set or collation for the Spider Table can be updated using the ALTER TABLE statement.
On the Spider Node, alter the Spider Table's character set and collation:
If the new character set and collation are not compatible with the character set and collation used by the Data Table, you must also alter the character set and collation for the Data Table on the Data Node.
In a Sharded MariaDB Enterprise Spider topology, the connection options for a Data Node can be updated using the ALTER TABLE statement.
On the Spider Node, alter the partition's connection details:
A guide to installing the Spider storage engine on Debian/Ubuntu and other Linux distributions, including loading the plugin and configuring data nodes.
The Spider storage engine supports partitioning and XA transactions, and allows tables of different database host instances to be handled as if they were on the same instance.
To make sensible use of Spider, you need two or more database host instances, typically running on separate hosts. Those database instances can be two or more MariaDB Server hosts, or a MariaDB Server host and an Oracle Database, etc. The Spider node is the MariaDB server that receives queries from your application. It then processes these queries, connecting to one or more data nodes. The data nodes are the MariaDB servers that actually store the table data.
In order for this to work, you need to configure the data nodes to accept queries from the Spider node and you need to configure the Spider node to use the data nodes as remote storage.
You don't need to install any additional packages to use it, but it does require some configuration.
ALTER TABLE spider_hq_sales.invoices
DEFAULT CHARACTER SET 'utf8mb4'
DEFAULT COLLATE 'utf8mb4_general_ci';

ALTER TABLE spider_hq_sales.invoices
COMMENT = 'server "new_hq_server", table "invoices"';
How to create a new backup or restore an existing backup.
How to update the character set or collation for a Spider Table.
How to update the connection options for a Data Node.
ALTER TABLE spider_sharded_sales.invoices
DEFAULT CHARACTER SET 'utf8mb4'
DEFAULT COLLATE 'utf8mb4_general_ci';

ALTER TABLE spider_sharded_sales.invoices
REORGANIZE PARTITION hq_partition INTO (
PARTITION hq_partition VALUES IN (1) COMMENT = 'server "new_hq_server", table "invoices"'
);

Spider deployments use data nodes to store the actual table data. In order for a MariaDB server to operate as a data node for Spider, you need to create a table or tables on which to store the data and configure the server to accept client connections from the Spider node.
For instance, first create the table:
Next, create a user for the Spider node and set a password for that user. For the sake of the example, assume the Spider node is at the IP address 192.168.1.1:
Then grant the spider user privileges on the example table.
The data node is now ready for use. You can test it by connecting the MariaDB client to the data node from the Spider node. For instance, assuming the data node is at the IP address 192.168.1.5, SSH into the Spider node and then try to establish a client connection.
The Spider storage engine must be installed on the Spider node. The Spider node is the MariaDB server that receives queries for the table, (in this case test.spider_example). It then uses the Spider storage engine to connect to the tables on the data nodes to retrieve data and return the result-set.
To install the Spider storage engine, complete the installation process shown below.
On other Linux distributions, the Spider storage engine is installed with MariaDB Server.
The Spider storage engine can be loaded as a normal plugin, and Spider automatically creates its dependencies. There are two primary ways to load the plugin.
The plugin can be loaded dynamically without a server restart by executing INSTALL SONAME or INSTALL PLUGIN:
Alternatively, the plugin can be loaded by adding plugin_load_add=ha_spider to a configuration file:
If the plugin is loaded in a configuration file, then the server will load the plugin after the server has been restarted.
Loading the plugin also creates a series of new tables in the mysql database, including:
spider_xa
spider_xa_member
spider_xa_failed_log
spider_tables
spider_link_mon_servers
spider_link_failed_log
spider_table_position_for_recovery
spider_table_sts
spider_table_crd
You can verify that the Spider plugin has been loaded by querying the information_schema.ENGINES table:
If the Spider plugin is not loaded, then the query does not return any results.
With the data node or data nodes configured, you can set up the Spider node to use them. The Spider node is the MariaDB server that receives queries for the table, (in this case test.spider_example). It then uses the Spider storage engine to connect to the tables on the data nodes to retrieve data and return the result-set.
In order to connect the Spider node to the data nodes, you may issue a CREATE SERVER statement for each data node. You can then use the server definition in creating the Spider table.
In the event that you need to modify or replace this server after setting up the Spider table, remember to issue a FLUSH statement to update the server definition.
Alternatively, you could also choose not to create a server, but specify the connection info in the spider table creation.
With the data nodes set up and the Spider node configured for use, you can create the Spider table. The Spider table must have the same column definitions as the tables on the data nodes. Spider can be configured through table parameters passed to the COMMENT or CONNECTION option.
This configures Spider to use the server dataNode1, (defined above), as a remote table. Any data you write to this table is actually stored on the MariaDB server at 192.168.1.5.
Alternatively, in later releases, Spider table parameters can be specified using table options:
Instructions on how to expand a sharded Spider topology by adding new data nodes (shards) and rebalancing the data distribution.
In a Sharded MariaDB Enterprise Spider topology, new shards can be added using the following procedure.
Each data node requires a user account that the Spider Node uses to connect.
On the Data Node hosting the new shard, create the Spider user account for the Spider Node using the CREATE USER statement:
Privileges are granted to the user account in a later step.
On the Spider Node, confirm that the Spider user account can connect to the Data Node using MariaDB Client:
The Spider Node requires connection details for each Data Node.
On the Spider Node, create a server object to configure the connection details for the Data Node hosting the new shard using the CREATE SERVER statement:
The Data Node runs MariaDB Enterprise Server, so the FOREIGN DATA WRAPPER is set to mariadb.
Using a server object for connection details is optional. Alternatively, the connection details for the Data Node can be specified in the COMMENT table option of the CREATE TABLE statement when creating the Spider Table.
When queries read and write to a Spider Table, Spider reads and writes to the Data Tables for each partition on the Data Nodes. The Data Tables must be created on the Data Nodes with the same structure as the Spider Table.
If your Data Tables already exist, grant privileges on the tables to the Spider user.
On the Data Node hosting the new shard, create the Data Tables:
The Spider Node reads and writes to the Data Table using the server and user account configured previously. The user account must have privileges on the table.
The Spider Node connects to the Data Nodes with the user account configured previously.
On the Data Node hosting the new shard, grant the Spider user sufficient privileges to operate on the Data Table:
By default, the Spider user also requires the CREATE TEMPORARY TABLES privilege on the database containing the Data Table. This privilege is required because Spider uses temporary tables to optimize read queries when Spider BKA Mode is 1.
Spider BKA Mode is configured using the following methods:
The session value is configured by setting the spider_bka_mode system variable on the Spider Node. The default value is -1. When the session value is -1, the value for each Spider Table is used.
The value for each Spider Table is configured by setting the bka_mode option in the COMMENT table option. When the bka_mode option is not set, the implicit value is 1.
The default spider_bka_mode value is -1, and the implicit Spider Table value is 1, so the default is 1.
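For example, BKA mode can be changed at session scope or per table. A sketch, with the table name, server name, and COMMENT contents as placeholders (not from the original procedure):

```sql
-- Session scope on the Spider Node: 0 avoids the BKA temporary tables;
-- -1 (the default) defers to the per-table setting.
SET SESSION spider_bka_mode = 0;

-- Per-table scope: the bka_mode option inside the COMMENT.
ALTER TABLE spider_example_table
COMMENT = 'wrapper "mysql", srv "example_server", table "invoices", bka_mode "0"';
```

With BKA mode 0 or 2, the CREATE TEMPORARY TABLES grant described above is not needed, at the cost of less efficient read queries.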
On the Data Node hosting the new shard, grant the Spider user the CREATE TEMPORARY TABLES privilege on the database:
A partition for the new shard must be added to the Spider Table on the Spider Node.
On the Spider Node, alter the Spider Table to add the partition and reference the name of the Data Node hosting the new shard in the COMMENT partition option:
On the Spider Node, read from the Spider Table using a SELECT statement:
$ sudo apt install mariadb-plugin-spider

CREATE TABLE test.spider_example (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(50)
) ENGINE=InnoDB;

CREATE USER spider@192.168.1.1;
SET PASSWORD FOR spider@192.168.1.1 = PASSWORD('passwd');

GRANT ALL ON test.spider_example TO spider@192.168.1.1;

$ mysql -u spider -p -h 192.168.1.5 test -e "SHOW TABLES;"
+----------------+
| Tables_in_test |
+----------------+
| spider_example |
+----------------+

INSTALL SONAME "ha_spider";

[mariadb]
...
plugin_load_add = "ha_spider"

SELECT ENGINE, SUPPORT
FROM information_schema.ENGINES
WHERE ENGINE = 'SPIDER';
+--------------------+---------+
| ENGINE | SUPPORT |
+--------------------+---------+
| SPIDER | YES |
+--------------------+---------+

CREATE SERVER dataNode1 FOREIGN DATA WRAPPER mysql
OPTIONS (
HOST '192.168.1.5',
DATABASE 'test',
USER 'spider',
PASSWORD 'passwd',
PORT 3306);

FLUSH TABLES;

CREATE TABLE test.spider_example (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(50)
) ENGINE=Spider
COMMENT='wrapper "mysql", srv "dataNode1", table "spider_example"';

CREATE TABLE test.spider_example (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(50)
) ENGINE=Spider
REMOTE_SERVER=dataNode1 REMOTE_TABLE=spider_example;

CREATE USER spider_user@192.0.2.1 IDENTIFIED BY "password";

$ mariadb --user spider_user --host 192.0.2.2 --password

CREATE SERVER southern_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST '192.0.2.6',
PORT 5004,
USER 'spider_user',
PASSWORD 'password',
DATABASE 'southern_sales'
);

CREATE DATABASE southern_sales;
CREATE SEQUENCE southern_sales.invoice_seq;
CREATE TABLE southern_sales.invoices (
branch_id INT NOT NULL DEFAULT (4) CHECK (branch_id=4),
invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR southern_sales.invoice_seq),
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;
INSERT INTO southern_sales.invoices
(customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, '2020-05-25 10:22:11', 2083.56, 'CREDIT_CARD'),
(2, '2020-05-25 11:42:33', 515.22, 'WIRE_TRANSFER'),
(3, '2020-05-25 13:15:16', 1213.80, 'CASH');

GRANT ALL PRIVILEGES ON southern_sales.invoices TO 'spider_user'@'192.0.2.1';

GRANT CREATE TEMPORARY TABLES ON southern_sales.* TO 'spider_user'@'192.0.2.1';

ALTER TABLE spider_sharded_sales.invoices
ADD PARTITION (
PARTITION southern_partition VALUES IN (4) COMMENT = 'server "southern_server", table "invoices"'
);

SELECT * FROM spider_sharded_sales.invoices;
+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
| 1 | 1 | 1 | 2020-05-10 12:35:10.000000 | 1087.23 | CREDIT_CARD |
| 1 | 2 | 2 | 2020-05-10 14:17:32.000000 | 1508.57 | WIRE_TRANSFER |
| 1 | 3 | 3 | 2020-05-10 14:25:16.000000 | 227.15 | CASH |
| 2 | 1 | 2 | 2020-05-10 12:31:00.000000 | 1351.04 | CREDIT_CARD |
| 2 | 2 | 2 | 2020-05-10 12:45:27.000000 | 162.11 | WIRE_TRANSFER |
| 2 | 3 | 4 | 2020-05-10 13:11:23.000000 | 350.00 | CASH |
| 3 | 1 | 5 | 2020-05-10 12:31:00.000000 | 111.50 | CREDIT_CARD |
| 3 | 2 | 8 | 2020-05-10 12:45:27.000000 | 1509.23 | WIRE_TRANSFER |
| 3 | 3 | 3 | 2020-05-10 13:11:23.000000 | 3301.66 | CASH |
| 4 | 1 | 1 | 2020-05-25 10:22:11.000000 | 2083.56 | CREDIT_CARD |
| 4 | 2 | 2 | 2020-05-25 11:42:33.000000 | 515.22 | WIRE_TRANSFER |
| 4 | 3 | 3 | 2020-05-25 13:15:16.000000 | 1213.80 | CASH |
+-----------+------------+-------------+----------------------------+---------------+----------------+
Describes common use cases for Spider, such as horizontal sharding for scalability, consolidating data from multiple sources, and migrating data between servers.
This article covers simple working examples for some standard Spider use cases. The examples all use a sales opportunities table for consistency. Some of the examples are contrived, but they illustrate the varying syntax options.
Have three or more servers available and install MariaDB on each of them:
spider, which acts as the front-end server hosting the Spider storage engine.
backend1, which acts as a backend server storing data.
backend2, which acts as a second backend server storing data.
Follow the instructions to enable the Spider storage engine on the spider server:
When explicitly setting the spider_internal_sql_log_off system variable, please note that Spider will execute a matching SET statement on each of the data nodes. It attempts to do this using the SUPER privilege, which must therefore be granted to the Spider user on the data nodes.
If the Spider user on the data node is not configured with the SUPER privilege, you may encounter errors when working with Spider tables, such as ERROR 1227 (42000): Access denied for the missing SUPER privilege. To avoid this, don't explicitly set spider_internal_sql_log_off, or set it to -1, or grant the SUPER privilege to the Spider user on the data nodes.
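If you choose the grant route, the statement looks like the following sketch, where the host 172.21.21.2 stands in for the Spider node's address and spider for the Spider user (both placeholders):

```sql
-- On each data node: allow the Spider user to execute SET statements
-- that require the SUPER privilege, such as sql_log_off changes.
GRANT SUPER ON *.* TO 'spider'@'172.21.21.2';
```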
Spider needs a remote connection to each backend server to actually perform the remote queries, so this should be set up on each backend server. In this case, 172.21.21.2 is the IP address of the Spider node, limiting access to just that server.
Now verify that these connections can be used from the spider node (here 172.21.21.3 = backend1 and 172.21.21.4 = backend2):
The table definition should be created in the test database on both backend1 and backend2 servers:
While the connection information can also be specified inline in the COMMENT or, in later releases, as table options, it is cleaner to define a server object representing each remote backend server connection:
Bear in mind that if you ever need to remove, recreate, or otherwise modify the server definition for any reason, you also need to execute a FLUSH TABLES statement. Otherwise, Spider continues to use the old server definition, which can cause queries to fail with an "Unable to connect to foreign data source" error.
If you encounter this error when querying Spider tables, issue a FLUSH TABLES statement to update the server definitions.
In this case, a Spider table is created to allow remote access to the opportunities table hosted on backend1. This allows queries and remote DML against the backend1 server from the spider server:
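The table definition is not included in this excerpt; the following is a plausible sketch, assuming a server object named backend1 (as defined earlier), a matching opportunities table in the test database on backend1, and a contrived column set (id, accountName, owner, amount):

```sql
-- Hypothetical schema; the real opportunities table may differ.
CREATE TABLE test.opportunities (
  id INT NOT NULL AUTO_INCREMENT,
  accountName VARCHAR(40) NOT NULL,
  owner VARCHAR(40) NOT NULL,
  amount DECIMAL(10,2) NOT NULL DEFAULT 0,
  PRIMARY KEY (id)
) ENGINE=Spider
COMMENT='wrapper "mysql", srv "backend1", table "opportunities"';
```

All reads and writes against test.opportunities on the spider server are then executed against the table on backend1.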
In this case, a Spider table is created to distribute data across backend1 and backend2 by hashing the id column. Since the id column is an incrementing numeric value, hashing ensures an even distribution across the two nodes.
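A plausible sketch of the hash-sharded definition, under the same assumptions as above (server objects backend1 and backend2, and a contrived opportunities schema):

```sql
-- Hypothetical schema; rows are routed to a shard by hashing id.
CREATE TABLE test.opportunities (
  id INT NOT NULL AUTO_INCREMENT,
  accountName VARCHAR(40) NOT NULL,
  owner VARCHAR(40) NOT NULL,
  amount DECIMAL(10,2) NOT NULL DEFAULT 0,
  PRIMARY KEY (id)
) ENGINE=Spider
COMMENT='wrapper "mysql", table "opportunities"'
PARTITION BY HASH (id)
(
  PARTITION pt1 COMMENT = 'srv "backend1"',
  PARTITION pt2 COMMENT = 'srv "backend2"'
);
```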
In this case, a Spider table is created to distribute data across backend1 and backend2 based on the first letter of the accountName field. All accountName values starting with the letter L or earlier are stored on backend1, and all other values on backend2. Note that the accountName column must be added to the primary key, which is a requirement of MariaDB partitioning:
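A plausible sketch of the range-sharded definition, under the same assumptions as the earlier examples (server objects backend1 and backend2, contrived schema):

```sql
-- Hypothetical schema; accountName values before 'M' go to backend1.
CREATE TABLE test.opportunities (
  id INT NOT NULL AUTO_INCREMENT,
  accountName VARCHAR(40) NOT NULL,
  owner VARCHAR(40) NOT NULL,
  amount DECIMAL(10,2) NOT NULL DEFAULT 0,
  PRIMARY KEY (id, accountName)
) ENGINE=Spider
COMMENT='wrapper "mysql", table "opportunities"'
PARTITION BY RANGE COLUMNS (accountName)
(
  PARTITION pt1 VALUES LESS THAN ('M') COMMENT = 'srv "backend1"',
  PARTITION pt2 VALUES LESS THAN (MAXVALUE) COMMENT = 'srv "backend2"'
);
```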
In this case, a Spider table is created to distribute data across backend1 and backend2 based on specific values in the owner field. Bill, Bob, and Chris are stored on backend1; Maria and Olivier on backend2. Note that the owner column must be added to the primary key, which is a requirement of MariaDB partitioning:
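A plausible sketch of the list-sharded definition, under the same assumptions as the earlier examples:

```sql
-- Hypothetical schema; rows are routed by exact owner value.
CREATE TABLE test.opportunities (
  id INT NOT NULL AUTO_INCREMENT,
  accountName VARCHAR(40) NOT NULL,
  owner VARCHAR(40) NOT NULL,
  amount DECIMAL(10,2) NOT NULL DEFAULT 0,
  PRIMARY KEY (id, owner)
) ENGINE=Spider
COMMENT='wrapper "mysql", table "opportunities"'
PARTITION BY LIST COLUMNS (owner)
(
  PARTITION pt1 VALUES IN ('Bill', 'Bob', 'Chris') COMMENT = 'srv "backend1"',
  PARTITION pt2 VALUES IN ('Maria', 'Olivier') COMMENT = 'srv "backend2"'
);
```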
In later versions, the following partition clause can be used to specify a default partition for all other values; however, this must be a distinct partition/shard:
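A sketch of such a clause, assuming the list-partitioned table from the previous example and routing the default partition to backend2 (a placeholder choice):

```sql
-- Catch-all partition for owner values not listed in pt1 or pt2.
ALTER TABLE test.opportunities
ADD PARTITION (
  PARTITION pt3 DEFAULT COMMENT = 'srv "backend2"'
);
```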
For a complete list of partition types, see the MariaDB partitioning documentation.
Covers advanced management topics like executing direct SQL on backends, copying tables between nodes, and monitoring the cluster using status variables.
Direct SQL is a way to map execution onto remote backends and store the results in a local table. This can be done either sequentially, using the SPIDER_DIRECT_SQL UDF, or concurrently, using SPIDER_BG_DIRECT_SQL.
Or if you are using a :
By default, SPIDER_BG_DIRECT_SQL accesses all backends concurrently. If you have multiple partitions stored inside a single backend, you can still increase parallelism by assigning a different channel to each partition.
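A sketch of concurrent execution against two backends, modeled on the SPIDER_DIRECT_SQL example above; the server names node1 and node2 and the channel option are assumptions about this deployment rather than values from the original:

```sql
-- Run the same remote query on two backends in parallel,
-- each on its own channel.
SELECT SPIDER_BG_DIRECT_SQL('SELECT * FROM s', '', 'srv "node1", port "8607", channel "0"'),
       SPIDER_BG_DIRECT_SQL('SELECT * FROM s', '', 'srv "node2", port "8607", channel "1"');
```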
Procedures for performing consistent backups and restores in a federated Spider topology using MariaDB Backup and MariaDB Dump, ensuring data synchronization.
Frequently asked questions about Spider, covering troubleshooting common errors, configuration best practices, and architectural questions regarding HA and sharding.
sudo yum install MariaDB-spider-engine

MariaDB Backup can be used to perform backup operations on Spider deployments.
MariaDB Backup can be used to create a physical backup of a Federated MariaDB Enterprise Spider topology. MariaDB Backup must be used to backup the Spider Node and the Data Node. The Spider Node and Data Node must be locked to guarantee that the backups of the Spider Node and Data Node are consistent.
The backup of the Spider Node contains:
Physical backup for Spider Tables
Physical backup for tables that use InnoDB, Aria, MyISAM, and MyRocks
Physical backup for any other database objects on the Spider Node
The backup of the Data Node contains:
Physical backup for Data Tables
Physical backup for tables that use InnoDB, Aria, MyISAM, and MyRocks
Physical backup for any other database objects on the Data Node
The following procedure shows how to take a consistent backup of a Spider Node and a Data Node deployed in a Federated MariaDB Enterprise Spider topology.
On the Spider Node and on the Data Node, create a user account to perform the backup using the CREATE USER and GRANT statements:
On the Data Node, grant the user account Spider uses to operate on the Data Node sufficient privileges to lock the Data Tables using the GRANT statement.
For example, on the hq_server Data Node:
On the Spider Node, acquire a read lock on Spider Tables using the LOCK TABLES statement:
Keep this session open during the rest of the procedure.
The read lock will propagate to the Data Tables on the Data Node. The read locks will prevent the Data Tables from changing during the backup, so the backups of the Spider Node and the Data Node are consistent.
On the Data Node, perform the backup using MariaDB Backup:
When the Data Node backup is complete, perform the backup on the Spider Node using MariaDB Backup:
When the Spider Node backup is complete, release the table locks in your original session using the UNLOCK TABLES statement:
On the Spider Node and the Data Node, prepare each of the backups using MariaDB Backup:
The Spider Node and the Data Node now have a complete backup of the data directory. Backups should be tested to confirm they are complete and consistent.
MariaDB Backup can restore a Federated MariaDB Enterprise Spider topology from a backup taken with MariaDB Backup.
On the Spider Node and on the Data Node, stop the MariaDB Server process:
On the Spider Node and on the Data Node, empty the data directory:
On the Spider Node and on the Data Node, restore the backup for that server using MariaDB Backup:
On the Spider Node and on the Data Node, confirm that the restored files are owned by the user that owns the MariaDB Server process:
On the Spider Node and on the Data Node, start the MariaDB Server process:
On the Spider Node, query a Spider Table to test it:
MariaDB Dump generates backup files containing the SQL statements necessary to recreate the database. MariaDB Dump is included with MariaDB Enterprise Server and can be used to backup databases in Spider deployments. The MariaDB Client can then be used to restore databases from a MariaDB Dump backup.
MariaDB Dump can be used to create a logical backup of a Federated MariaDB Enterprise Spider topology. MariaDB Dump must be used to backup the Spider Node and the Data Node. The Spider Node and Data Node must be locked to guarantee that the backups of the Spider Node and Data Node are consistent.
The backup of the Spider Node contains:
Table definitions for Spider Tables
Table definitions for all tables
Table data for tables that use InnoDB, Aria, MyISAM, and MyRocks
Definitions for any other user accounts, privileges, views, and triggers on the Spider Node
The backup of the Data Node contains:
Table definitions for Data Tables
Table definitions for all tables
Table data for tables that use InnoDB, Aria, MyISAM, and MyRocks
Definitions for any other user accounts, privileges, views, and triggers on the Data Node
The following procedure shows how to take a consistent backup of a Spider Node and a Data Node deployed in a Federated MariaDB Enterprise Spider topology.
On the Spider Node and on the Data Node, create a user account to perform the backup using the CREATE USER and GRANT statements:
On the Data Node, grant the user account Spider uses to operate on the Data Node sufficient privileges to lock the Data Tables using the GRANT statement.
For example, on the hq_server Data Node:
On the Spider Node, acquire a read lock on Spider Tables using the LOCK TABLES statement:
Keep this session open during the rest of the procedure.
The read lock will propagate to the Data Tables on the Data Node. The read locks will prevent the Data Tables from receiving any changes during the backup, which ensures the backups of the Spider Node and the Data Node are consistent.
On the Data Node, perform the backup using MariaDB Dump:
When the Data Node has been backed up, perform the backup on the Spider Node using MariaDB Dump:
When the Spider Node backup is complete, release the table locks in your original session using the UNLOCK TABLES statement:
The Spider Node and the Data Node now each have a mariadb_dump.sql backup file. This backup file contains the SQL statements needed to recreate the schema on the Spider Node and the schema and data on the Data Node. Backups should be tested to confirm they are complete and consistent.
MariaDB Client can restore a Federated MariaDB Enterprise Spider topology from a backup taken with MariaDB Dump.
Stop all traffic to the Spider Node and the Data Node.
On the Spider Node, restore the backup for that server using MariaDB Client:
On the Data Node, restore the backup for that server using MariaDB Client:
On the Spider Node, query a Spider Table to test it:
This is used by Spider monitoring to ask other monitoring nodes the status of a table.
Not having a primary key will generate errors when resynchronizing tables via spider_copy_tables().
Yes, XA transactions can be disabled in Spider. Until Galera 4.0 fully supports XA transactions, Spider can point to a MaxScale proxy that can manage transparent node election in case of a failure inside a shard group. Note that disabling XA will break cross-shard writes in case of a transaction ROLLBACK. This architecture needs to be used with care if you have a highly transactional workload that can generate cross-shard deadlocks.
Delegation of shard node replication using asynchronous replication and slave election with GTID.
Delegation of shard node replication via active passive HA solutions.
Building shards via replication into Spider tables is interesting when you can route reads to a pool of Spider nodes that reattach the shards.
Map reduce in Spider is limited to a single table. Building spider on top of some views can eliminate the need to use joins.
Replication to universal tables to every shard is commonly used to enable the views on each shard.
When using MRR and BKA (as is done with network storage), Spider needs to create temporary tables on the backends, which requires the CREATE TEMPORARY TABLES privilege. Spider can instead switch to a lower-performance mode using spider_bka_mode=2, or to query push-down or range predicates using spider_bka_mode=0.
This page is licensed: CC BY-SA / Gnu FDL
CREATE USER 'mariadb-backup'@'localhost'
IDENTIFIED BY 'mb_passwd';
GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR
ON *.*
TO 'mariadb-backup'@'localhost';

GRANT LOCK TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.2';

LOCK TABLES spider_federated_sales.invoices READ;

$ sudo mariadb-backup --backup \
--target-dir=/data/backups/full \
--user=mariadb-backup \
--password='mb_passwd'

$ sudo mariadb-backup --backup \
--target-dir=/data/backups/full \
--user=mariadb-backup \
--password='mb_passwd'

UNLOCK TABLES;

$ sudo mariadb-backup --prepare \
--target-dir=/data/backups/full$ sudo systemctl stop mariadb$ sudo rm -fr /var/lib/mysql/*$ sudo mariadb-backup --copy-back \
--target-dir=/data/backups/full

$ sudo chown -R mysql:mysql /var/lib/mysql

$ sudo systemctl start mariadb

SELECT * FROM spider_hq_sales.invoices;

+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
| 1 | 1 | 1 | 2020-05-10 12:35:10.000000 | 1087.23 | CREDIT_CARD |
| 1 | 2 | 2 | 2020-05-10 14:17:32.000000 | 1508.57 | WIRE_TRANSFER |
| 1 | 3 | 3 | 2020-05-10 14:25:16.000000 | 227.15 | CASH |
+-----------+------------+-------------+----------------------------+---------------+----------------+

CREATE USER 'mariadb-dump'@'localhost'
IDENTIFIED BY 'md_passwd';
GRANT SELECT, INSERT, SHOW VIEW, TRIGGER, CREATE, ALTER, EVENT, RELOAD, LOCK TABLES
ON *.*
TO 'mariadb-dump'@'localhost';

GRANT LOCK TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.2';

LOCK TABLES spider_federated_sales.invoices READ;

$ mariadb-dump \
--user=mariadb-dump \
--password='md_passwd' \
--ssl-verify-server-cert \
--ssl-ca ~/mariadb_chain.pem \
--all-databases \
--single-transaction \
--default-character-set=utf8mb4 \
> mariadb_dump.sql

$ mariadb-dump \
--user=mariadb-dump \
--password='md_passwd' \
--ssl-verify-server-cert \
--ssl-ca ~/mariadb_chain.pem \
--all-databases \
--single-transaction \
--default-character-set=utf8mb4 \
> mariadb_dump.sql

UNLOCK TABLES;

$ mariadb \
--user=mariadb-dump \
--password='md_passwd' \
--ssl-verify-server-cert \
--ssl-ca ~/mariadb_chain.pem \
--default-character-set=utf8mb4 \
< mariadb_dump.sql

$ mariadb \
--user=mariadb-dump \
--password='md_passwd' \
--ssl-verify-server-cert \
--ssl-ca ~/mariadb_chain.pem \
--default-character-set=utf8mb4 \
< mariadb_dump.sql

SELECT * FROM spider_hq_sales.invoices;

+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
| 1 | 1 | 1 | 2020-05-10 12:35:10.000000 | 1087.23 | CREDIT_CARD |
| 1 | 2 | 2 | 2020-05-10 14:17:32.000000 | 1508.57 | WIRE_TRANSFER |
| 1 | 3 | 3 | 2020-05-10 14:25:16.000000 | 227.15 | CASH |
+-----------+------------+-------------+----------------------------+---------------+----------------+

myisam-recover=FORCE,BACKUP

MariaDB starting with
The Spider Handler Socket support has been removed, see MDEV-26858.
Check that Handler Socket is running on the backend nodes
MariaDB starting with
The UDF spider_copy_tables relies on Spider's high availability feature, which has been deprecated (MDEV-28479) and removed. Please use other high availability solutions like replication or .
The UDF spider_copy_tables copies table data from the source link ID to the destination link ID list without stopping your service.
spider_copy_tables(Spider table name, source link ID, destination link ID list[, parameters])
Returns 1 if copying data succeeded.
Returns 0 if copying data failed.
If the Spider table is partitioned, you must set "Spider table name" with a part name such as "table_name#P#part_name".
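The "table_name#P#part_name" convention above can be composed with a small helper. This is an illustrative sketch, not a Spider API; the only fact it encodes is the literal '#P#' separator shown in the text:

```python
def spider_part_table_name(table_name, part_name=None):
    """Build the table-name argument for spider_copy_tables().

    For a partitioned Spider table, the partition name is appended
    with the literal '#P#' separator, e.g. 'sbtest#P#pt1'.
    """
    if part_name is None:
        return table_name
    return f"{table_name}#P#{part_name}"

print(spider_part_table_name("sbtest", "pt1"))  # sbtest#P#pt1
```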
You can check the table name and the link ID with the part name using the following SQL:
To capture all queries sent to remote backends on a Spider Node:
See and .
Report the issue in MariaDB JIRA (see ) or to the MariaDB Corporation support center.
Available since version 3.1.14
To build Spider as a static plugin, change "MODULE_ONLY" to "MANDATORY" in storage/spider/CMakeLists.txt before compiling.
Note that Spider UDF functions will not work with such settings.
A number of new status variables have been introduced, see Spider Status Variables for a complete list.
A new Information Schema table is installed - SPIDER_ALLOC_MEM.
From , Spider installs another Information Schema table, SPIDER_WRAPPER_PROTOCOLS.
The Performance Schema is commonly used to find where time is consumed inside your workload. It should not be activated on servers experiencing constant heavy load, but most of the time it is acceptable to lose 5% to 20% additional CPU to keep track of server internals.
To activate the performance schema, use the performance_schema system variable and add the following to the server section of the MariaDB configuration file.
Activate the Spider probes to be monitored.
Run your queries ...
And check the performance metrics. Remove the Spider-specific filter for a more global view.
This page is licensed: CC BY-SA / Gnu FDL
spider1 backend << EOF
CREATE TEMPORARY TABLE res
(
id INT(10) UNSIGNED NOT NULL,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT ''
) ENGINE=MEMORY;
SELECT spider_direct_sql(
'SELECT * FROM sbtest s WHERE s.id IN(10,12,13)',
'res',
concat('host "', host, '", port "', port, '", user "', username, '", password "', password, '", database "', tgt_db_name, '"')
) a
FROM
mysql.spider_tables
WHERE
db_name = 'backend' AND table_name LIKE 'sbtest#P#pt%';
SELECT * FROM res;
EOF

SELECT spider_direct_sql(
'SELECT * FROM sbtest s WHERE s.id IN(10,12,13)',
'res',
concat('server "', server, '"')
) a
FROM mysql.spider_tables
WHERE db_name = 'backend' AND table_name LIKE 'sbtest#P#pt%';

CREATE TEMPORARY TABLE res
(
id INT(10) UNSIGNED NOT NULL ,
col_microsec DATETIME(6) DEFAULT NOW(6),
db VARCHAR(20)
) ENGINE=MEMORY;
SELECT spider_bg_direct_sql(
  'SELECT count(*), min(NOW(6)), min(DATABASE()) FROM sbtest',
  'res',
  concat('srv "', server, '" cch ', @rn:=@rn+1)
) a
FROM mysql.spider_tables, (SELECT @rn:=1) t2
WHERE db_name = 'bsbackend' AND table_name LIKE 'sbtest#P#pt%';

:~# backend2 -e "show variables like 'handler%'"
+-------------------------------+---------------+
| Variable_name | Value |
+-------------------------------+---------------+
| handlersocket_accept_balance | 0 |
| handlersocket_address | 192.168.0.201 |
| handlersocket_backlog | 32768 |
| handlersocket_epoll | 1 |
| handlersocket_plain_secret | |
| handlersocket_plain_secret_wr | |
| handlersocket_port | 20500 |
| handlersocket_port_wr | 20501 |
| handlersocket_rcvbuf | 0 |
| handlersocket_readsize | 0 |
| handlersocket_sndbuf | 0 |
| handlersocket_threads | 4 |
| handlersocket_threads_wr | 1 |
| handlersocket_timeout | 300 |
| handlersocket_verbose | 10 |
| handlersocket_wrlock_timeout | 12 |
+-------------------------------+---------------+

spider1 backend << EOF
CREATE TEMPORARY TABLE res
(
id INT(10) UNSIGNED NOT NULL,
k INT(10) UNSIGNED NOT NULL DEFAULT '0',
c CHAR(120) NOT NULL DEFAULT '',
pad CHAR(60) NOT NULL DEFAULT ''
) ENGINE=MEMORY;
SELECT spider_direct_sql('1\t=\t1\t2\t100000\t0', 'res', 'host "192.168.0.202", table "sbtest", database "test", port "20500", access_mode "1"');

SELECT table_name FROM mysql.spider_tables;

SET GLOBAL general_log=ON;
SET GLOBAL spider_general_log=ON;
SET GLOBAL spider_log_result_errors=1;
SET GLOBAL spider_log_result_error_with_sql=3;

+-------------------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+---------------------+------+-----+---------+-------+
| ID | int(10) unsigned | NO | | 0 | |
| FUNC_NAME | varchar(64) | YES | | NULL | |
| FILE_NAME | varchar(64) | YES | | NULL | |
| LINE_NO | int(10) unsigned | YES | | NULL | |
| TOTAL_ALLOC_MEM | bigint(20) unsigned | YES | | NULL | |
| CURRENT_ALLOC_MEM | bigint(20) | YES | | NULL | |
| ALLOC_MEM_COUNT | bigint(20) unsigned | YES | | NULL | |
| FREE_MEM_COUNT | bigint(20) unsigned | YES | | NULL | |
+-------------------+---------------------+------+-----+---------+-------+

performance_schema=ON

UPDATE performance_schema.setup_instruments SET
ENABLED='YES', TIMED='yes' WHERE NAME LIKE '%spider%';

SELECT * FROM performance_schema.events_waits_summary_global_by_event_name
WHERE COUNT_STAR<>0 AND EVENT_NAME LIKE '%spider%'
ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;INSTALL SONAME 'ha_spider';backend1> mysql
GRANT ALL ON test.* TO spider@'172.21.21.2' IDENTIFIED BY 'spider';
backend2> mysql
GRANT ALL ON test.* TO spider@'172.21.21.2' IDENTIFIED BY 'spider';

spider> mysql -u spider -p -h 172.21.21.3 test
spider> mysql -u spider -p -h 172.21.21.4 test

CREATE TABLE opportunities (
id INT,
accountName VARCHAR(20),
name VARCHAR(128),
owner VARCHAR(7),
amount DECIMAL(10,2),
closeDate DATE,
stageName VARCHAR(11),
PRIMARY KEY (id),
KEY (accountName)
) ENGINE=InnoDB;

CREATE SERVER backend1 FOREIGN DATA WRAPPER MYSQL OPTIONS
(HOST '172.21.21.3', DATABASE 'test', USER 'spider', PASSWORD 'spider', PORT 3306);
CREATE SERVER backend2 FOREIGN DATA WRAPPER MYSQL OPTIONS
(HOST '172.21.21.4', DATABASE 'test', USER 'spider', PASSWORD 'spider', PORT 3306);

Error 1429: Unable to connect to foreign data source

FLUSH TABLES;

CREATE TABLE opportunities (
id INT,
accountName VARCHAR(20),
name VARCHAR(128),
owner VARCHAR(7),
amount DECIMAL(10,2),
closeDate DATE,
stageName VARCHAR(11),
PRIMARY KEY (id),
KEY (accountName)
) ENGINE=spider COMMENT='wrapper "mysql", srv "backend1", TABLE "opportunities"';

CREATE TABLE opportunities (
id INT,
accountName VARCHAR(20),
name VARCHAR(128),
owner VARCHAR(7),
amount DECIMAL(10,2),
closeDate DATE,
stageName VARCHAR(11),
PRIMARY KEY (id),
KEY (accountName)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "opportunities"'
PARTITION BY HASH (id)
(
PARTITION pt1 COMMENT = 'srv "backend1"',
PARTITION pt2 COMMENT = 'srv "backend2"'
) ;

CREATE TABLE opportunities (
id INT,
accountName VARCHAR(20),
name VARCHAR(128),
owner VARCHAR(7),
amount DECIMAL(10,2),
closeDate DATE,
stageName VARCHAR(11),
PRIMARY KEY (id, accountName),
KEY(accountName)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "opportunities"'
PARTITION BY RANGE COLUMNS (accountName)
(
PARTITION pt1 VALUES LESS THAN ('M') COMMENT = 'srv "backend1"',
PARTITION pt2 VALUES LESS THAN (MAXVALUE) COMMENT = 'srv "backend2"'
) ;

CREATE TABLE opportunities (
id INT,
accountName VARCHAR(20),
name VARCHAR(128),
owner VARCHAR(7),
amount DECIMAL(10,2),
closeDate DATE,
stageName VARCHAR(11),
PRIMARY KEY (id, owner),
KEY(accountName)
) ENGINE=spider COMMENT='wrapper "mysql", TABLE "opportunities"'
PARTITION BY LIST COLUMNS (owner)
(
PARTITION pt1 VALUES IN ('Bill', 'Bob', 'Chris') COMMENT = 'srv "backend1"',
PARTITION pt2 VALUES IN ('Maria', 'Olivier') COMMENT = 'srv "backend2"'
) ;

PARTITION partition_name DEFAULT

This guide details how to configure Spider to connect to an Oracle database via ODBC, enabling data migration and federated access to Oracle tables.
This article describes how to use Spider ODBC to connect to Oracle. This can make migration easier by not requiring a wholesale cutover, instead enabling a piecemeal approach.
The setup looks as shown in this figure:
In MariaDB,
The Spider storage engine plugin must be installed. This is described in the instructions below.
An ODBC driver manager, unixODBC, must be installed.
In Oracle Database,
The ODBC driver must be installed.
The Oracle Basic Client must be installed. This is a prerequisite for installing the ODBC driver.
In your working directory, create a folder to hold the Oracle ODBC RPMs (mkdir oracle_odbc_rpms), then change to that directory.
Download the following files (make sure to use the appropriate driver for your scenario):
Install the RPMs:
Spider ODBC to Oracle tables does not support INSERT SELECT statements, where the Spider table is the source of the data. (MENT-1588).
If your version of MariaDB does not have the variable, you cannot use certain aggregate functions on their own. (MENT-1558).
If you get the following error message: Error from ODBC 0 01004 [Oracle][ODBC]String data, right truncated., you need to set a larger value for the variable . (MENT-1557).
Blog post by Anders Karlsson, January 2024 - 13 minute read
This page is: Copyright © 2025 MariaDB. All rights reserved.
Add the following to /etc/odbcinst.ini:
The driver path may differ if you downloaded a different version of the Oracle driver; be sure to update it with the correct path for your scenario.
Add the following to /etc/odbc.ini:
Driver is the driver name given in /etc/odbcinst.ini.
Description can be whatever you like; just make sure you use the same value later.
ServerName is the TNS name given for our Oracle connection.
Be sure to populate your user name and password.
Add the following line to /etc/tnsnames.ora:
Name the connection string as you like — just be sure to use the same name in the odbc.ini file.
Make sure to include your hostname and IP address, and the service name for your instance.
Verify that the ODBC connection is working with:
ORARDS is the section name we gave our entry in odbc.ini. If you are using something different, replace ORARDS with whatever you use.
If any errors come up, double-check the values entered into the .ini files.
Start the MariaDB database:
Log in to MariaDB, and run the following:
This creates a new database and a table that should be able to connect to an Oracle server via ODBC.
DSN must match the name used in the odbc.ini file.
It's best to always use upper case for both column names and the table value in the comment section, because these values are passed directly to Oracle Database, which prefers upper-case object names.
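Since unquoted Oracle identifiers are stored in upper case, a defensive habit is to normalize names before they reach the CONNECTION string. A hypothetical helper (not part of Spider or the ODBC driver) sketches this:

```python
def oracle_identifier(name):
    """Upper-case an unquoted identifier, matching Oracle's default
    storage of object names. The CONNECTION values are passed to
    Oracle as-is, so send 'CONTACTS', not 'contacts'."""
    return name.upper()

print(oracle_identifier("contacts"))  # CONTACTS
```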
You should now be able to select data from the spider_test.contacts table. That data comes from the remote Oracle database.

[oracle]
Description = Oracle ODBC Connection
Driver = /usr/lib/oracle/19.16/client64/lib/libsqora.so.19.1

[ORARDS]
Description = Oracle
Driver = oracle
ServerName = REMOTE
UserID = {user_name}
Password = {password}
TNSNamesFile = /etc/tnsnames.ora

REMOTE=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST={host_name/ip address})(PORT=1521))(CONNECT_DATA=(SERVICE_NAME={service_name})))

isql -v ORARDS

systemctl start mariadb or systemctl start mysqld

INSTALL SONAME 'ha_spider';
CREATE DATABASE spider_test;
USE spider_test;
CREATE OR REPLACE TABLE spider_test.contacts
(
CONTACT_ID BIGINT NOT NULL PRIMARY KEY,
FIRST_NAME VARCHAR( 255 ) NOT NULL,
LAST_NAME VARCHAR( 255 ) NOT NULL,
EMAIL VARCHAR( 255 ) NOT NULL,
PHONE VARCHAR( 20 ) ,
CUSTOMER_ID BIGINT
) ENGINE=SPIDER
CONNECTION='WRAPPER "odbc", DSN "ORARDS", table "CONTACTS"';

wget https://download.oracle.com/otn_software/linux/instantclient/1916000/oracle-instantclient19.16-odbc-19.16.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/1916000/oracle-instantclient19.16-sqlplus-19.16.0.0.0-1.x86_64.rpm
wget https://download.oracle.com/otn_software/linux/instantclient/1916000/oracle-instantclient19.16-basic-19.16.0.0.0-1.x86_64.rpm

yum localinstall *.rpm

A matrix listing the features supported by Spider, including sharding, partitioning, XA transactions, and support for various SQL statements and functions.
Not complete yet - still being updated
F(*) = federation only, P(*) = partitioning only. Values are listed for MariaDB Spider first, then for SpiderForMySQL (found on the Spider web site).
Clustering and High Availability

Commit, rollback transactions on multiple backends: Yes / Yes
Multiplexing to a number of replicas using the XA protocol 2PC: Yes / Yes
Split brain resolution based on a majority decision; the failed node is removed from the list of replicas: Yes / Yes
Enable a failed backend to re-enter the cluster transparently: No / No
Synchronize DDL to backends (table modifications, schema changes): No / No
Synchronize DDL to other Spider nodes: No / No
GTID tracking per table on XA error: No / Yes
Transparent partitioning: No / No
Covered by generic SQL test case: Yes / Yes

Heterogeneous Backends

MariaDB and MySQL database backends: Yes / Yes
Oracle database backend, if built from source against the client library (ORACLE_HOME): Yes / Yes
Local table attachment: Yes / Yes

Performance

Index Condition Pushdown: No / No
Engine Condition Pushdown: Yes / Yes
Concurrent backend scan: Yes / No
Concurrent partition scan: Yes / No
Batched key access: Yes / Yes
Block hash join: No / Yes
HANDLER backend propagation: Yes / Yes
HANDLER backend translation from SQL: Yes / Yes
HANDLER OPEN cache per connection: No / Yes
HANDLER use of prepared statements: No / Yes
HANDLER_SOCKET protocol backend propagation: Yes / Yes
HANDLER_SOCKET backend translation from SQL: No / No
Map reduce for ORDER BY ... LIMIT: Yes / Yes
Map reduce for MAX, MIN and SUM: Yes / Yes
Map reduce for some GROUP BY: Yes / Yes
Batch multiple writes in autocommit to reduce network round trips: Yes / Yes
Relaxing backend consistency: Yes / Yes

Execution Control

Configuration at table and partition level; settings can change per data collection: Yes / Yes
Configurable empty result set on errors, for APIs that cannot replay transactions: Yes / Yes
Query cache tuning per table on the remote backend: Yes / Yes
Index hint per table imposed on the remote backend: Yes / Yes
SSL connections to remote backends: Yes / Yes
Table definition discovery from the remote backend: Yes / F(*)
Direct SQL execution on the backend via UDF: Yes / Yes
Table resynchronization between backends via UDF: Yes / Yes
Maintain index and table statistics of remote backends: Yes / Yes
Can use independent index and table statistics: No / Yes
Maintain local or remote table auto-increments: Yes / Yes
LOAD DATA INFILE translated to bulk inserts: Yes / Yes
Performance Schema probes: Yes / Yes
Load balance reads with replica weight control: Yes / Yes
Fine tuning of TCP timeouts and connection retries: Yes / Yes

This page is licensed: CC BY-SA / Gnu FDL
A reference for table-level parameters in Spider, which can be set via the COMMENT or CONNECTION string to control connection settings, monitoring, and query behavior.
When a table uses the Spider storage engine, the following Spider table parameters can be set in the COMMENT clause of the CREATE TABLE statement. Many Spider table parameters have corresponding system variables, so they can be set for all Spider tables on the node. For additional information, see the Spider System Variables page.
From , many table parameters can be set using dedicated Spider table options; see the Table Option Name fields below. From MariaDB 11.4, using the COMMENT clause is deprecated, as are table parameters that do not have corresponding table options.
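The COMMENT clause packs these parameters as comma-separated key "value" pairs. A small illustrative builder, based only on the formatting seen in the examples on this page (it is not a Spider API):

```python
def spider_comment(**params):
    """Render Spider table parameters in COMMENT-clause syntax.

    Each parameter becomes key "value", joined with ", ", e.g.
    wrapper "mysql", srv "backend1", table "opportunities".
    """
    return ", ".join(f'{key} "{value}"' for key, value in params.items())

print(spider_comment(wrapper="mysql", srv="backend1", table="opportunities"))
```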
access_balances
Description: Connection load balancing integer weight.
Default Table Value: 0
DSN Parameter Name: abl
Deprecated:

active_link_count
Description: Number of active remote servers, for use in load balancing read connections.
Default Table Value: all backends
DSN Parameter Name: alc
Deprecated:

auto_increment_mode
Description: The table level value of .
Table Option Name: AUTO_INCREMENT_MODE
Table Option Introduced:

bgs_mode
Description: The table level value of .
Table Option Name: BGS_MODE
Table Option Introduced:

bulk_size
Description: The table level value of .
Table Option Name: BULK_SIZE
Table Option Introduced:

bulk_update_size
Description: The table level value of .
Table Option Name: BULK_UPDATE_SIZE
Table Option Introduced:
casual_read
Description:
Default Table Value:
DSN Parameter Name:
Introduced: Spider 3.2

connect_timeout
Description: The table level value of .
Table Option Name: CONNECT_TIMEOUT
Table Option Introduced:

database
Description: Database name for the referenced table on the remote backend server.
Default Table Value: local table database
DSN Parameter Name: database
Table Option Name: REMOTE_DATABASE

default_file
Description: Configuration file used when connecting to remote servers. When the table variable is set, this variable defaults to the values of the --defaults-extra-file or --defaults-file options. When the table variable is not set, it defaults to none.
Default Table Value: none
DSN Parameter Name: dff

default_group
Description: Group name in the configuration file used when connecting to remote servers.
Default Table Value: none
DSN Parameter Name: dfg
Table Option Name: DEFAULT_GROUP
delete_all_rows_type
Description: The table level value of .
Introduced: Spider 3.2
Table Option Name: DELETE_ALL_ROWS_TYPE
Table Option Introduced:

force_bulk_delete
Description:
Introduced:
Table Option Name: FORCE_BULK_DELETE
Table Option Introduced:

force_bulk_update
Description:
Introduced:
Table Option Name: FORCE_BULK_UPDATE
Table Option Introduced:

host
Description: Host name of the remote server.
Default Table Value: localhost
DSN Parameter Name: host
Table Option Name: REMOTE_HOST

idx000
Description: When using an index on Spider tables for searching, Spider uses this hint to search the remote table. The remote table index is related to the Spider table index by this hint. The number represented by 000 is the index ID, which is the number of the index shown by the statement. 000 is the primary key. For instance, idx000 "force index(PRIMARY)" (in abbreviated format, idx000 "f PRIMARY").
f force index
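The idx hint format above (long form force index(PRIMARY), abbreviated f PRIMARY) can be sketched as a tiny formatter. This is an illustrative helper, not a Spider API, and only the force-index form shown in the text is implemented:

```python
def idx_hint(index_id, index_name, abbreviated=False):
    """Format a Spider idx hint forcing an index on the remote table.

    Index ID 000 is the primary key, so idx_hint(0, "PRIMARY") gives
    the example from the text: idx000 "force index(PRIMARY)".
    """
    key = f"idx{index_id:03d}"
    if abbreviated:
        return f'{key} "f {index_name}"'
    return f'{key} "force index({index_name})"'

print(idx_hint(0, "PRIMARY"))        # idx000 "force index(PRIMARY)"
print(idx_hint(0, "PRIMARY", True))  # idx000 "f PRIMARY"
```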
link_status
Description: Change the status of the remote backend server link.
0 Doesn't change the status.
1 Changes the status to OK.

monitoring_bg_interval
Description: Interval of background monitoring in microseconds.
Default Table Value: 10000000
DSN Parameter Name: mbi
Deprecated:

monitoring_bg_kind
Description: Kind of background monitoring to use.
0 Disables background monitoring.
1 Monitors connection state.

monitoring_kind
Description: Kind of monitoring.
0 Disables monitoring.
1 Monitors connection state.

monitoring_limit
Description: Limits the number of records in the monitoring table. This is only effective when Spider monitors the state of a table, which occurs when the table variable is set to a value greater than 1.
Default Table Value: 1
Range: 0 upwards

monitoring_server_id
Description: Preferred monitoring @@server_id for each backend failure. You can use this to geo-localize backend servers and set the first Spider monitoring node to contact for failover. In the event that this monitor fails, other monitoring nodes are contacted. For multiple-copy backends, you can set a lazy configuration with a single MSI instead of one per backend.
Default Table Value: server_id
DSN Parameter Name: msi
multi_split_read
Description: The table level value of .
Table Option Name: MULTI_SPLIT_READ
Table Option Introduced:

net_read_timeout
Description: The table level value of .
Table Option Name: NET_READ_TIMEOUT
Table Option Introduced:

net_write_timeout
Description: The table level value of .
Table Option Name: NET_WRITE_TIMEOUT
Table Option Introduced:

password
Description: Remote server password.
Default Table Value: none
DSN Parameter Name: password
Table Option Name: REMOTE_PASSWORD

port
Description: Remote server port.
Default Table Value: 3306
DSN Parameter Name: port
Table Option Name: REMOTE_PORT

priority
Description: Priority, used to define the order of execution. For instance, Spider uses priority when deciding the order in which to lock tables on a remote server.
Default Table Value: 1000000
DSN Parameter Name: prt
Table Option Name:
query_cache
Description: Uses the option for the when issuing statements to the remote server.
0 No option used.
1 Uses the option.

query_cache_sync
Description: A two-bit bitmap. Whether to pass the option for the (if any) when issuing statements to the remote server.
0 No option passed.
1 Passes the option, if specified in the query to the Spider table.

read_rate
Description: Rate used to calculate the amount of time Spider requires when executing index scans.
Default Table Value: 0.0002
DSN Parameter Name: rrt
Deprecated:

scan_rate
Description: Rate used to calculate the amount of time Spider requires when scanning tables.
Default Table Value: 0.0001
DSN Parameter Name: srt
Deprecated:

server
Description: Server name, used when generating connection information with statements.
Default Table Value: none
DSN Parameter Name: srv
Table Option Name: REMOTE_SERVER
skip_parallel_search
Description: The table level value of .
Table Option Name: SKIP_PARALLEL_SEARCH
Table Option Introduced:

socket
Description: Remote server socket.
Default Table Value: none
DSN Parameter Name: socket
Table Option Name: REMOTE_SOCKET

ssl_ca
Description: Path to the Certificate Authority file.
Default Table Value: none
DSN Parameter Name: sca
Table Option Name: SSL_CA

ssl_capath
Description: Path to a directory containing trusted TLS CA certificates in PEM format.
Default Table Value: none
DSN Parameter Name: scp
Table Option Name: SSL_CAPATH

ssl_cert
Description: Path to the certificate file.
Default Table Value: none
DSN Parameter Name: scr
Table Option Name: SSL_CERT

ssl_cipher
Description: List of allowed ciphers to use with .
Default Table Value: none
DSN Parameter Name: sch
Table Option Name: SSL_CIPHER

ssl_key
Description: Path to the key file.
Default Table Value: none
DSN Parameter Name: sky
Table Option Name: SSL_KEY
ssl_verify_server_cert
Description: Enables verification of the server's Common Name value in the certificate against the host name used when connecting to the server.
0 Disables verification.
1 Enables verification.

table
Description: Destination table name.
Default Table Value: same table name
DSN Parameter Name: tbl
Table Option Name: REMOTE_TABLE

table_count_mode
Description: Mode for setting the table flags HA_STATS_RECORDS_IS_EXACT and HA_HAS_RECORDS.
Default Table Value: 0
Table Option Name: TABLE_COUNT_MODE
Table Option Introduced:

username
Description: User name for the data node.
Default Table Value: same user name
Table Option Name: REMOTE_USERNAME
Table Option Introduced:

use_pushdown_udf
Description: The table level value of .
Table Option Name: USE_PUSHDOWN_UDF
Table Option Introduced:

wrapper
Description: Wrapper for the data node.
Table Option Name: WRAPPER
Table Option Introduced:
This page is licensed: CC BY-SA / Gnu FDL
Guidelines for backing up and restoring a sharded Spider topology, ensuring consistency across multiple shards using tools like MariaDB Backup.
MariaDB Backup can be used to perform backup operations on Spider deployments.
MariaDB Backup can be used to create a physical backup of a Sharded MariaDB Enterprise Spider topology. MariaDB Backup must be used to backup the Spider Node and all Data Nodes. The Spider Node and Data Nodes must be locked to guarantee that the backups of the Spider Node and Data Nodes are consistent.
The backup of the Spider Node contains:
Physical backup for Spider Tables
Physical backup for tables that use InnoDB, Aria, MyISAM, and MyRocks
Physical backup for any other database objects on the Spider Node
The backups of the Data Nodes contain:
Physical backup for Data Tables
Physical backup for tables that use InnoDB, Aria, MyISAM, and MyRocks
Physical backup for any other database objects on the Data Node
The following procedure shows how to take a consistent backup of a Spider Node and Data Nodes deployed in a Sharded MariaDB Enterprise Spider topology.
On the Spider Node and on each Data Node, create a user account to perform the backup using the CREATE USER and GRANT statements.
For MariaDB Enterprise Server 10.5 and later:
On each Data Node, grant the user account Spider uses when operating on Data Nodes sufficient privileges to lock any Data Tables using the GRANT statement.
For example, on the hq_server Data Node:
On the eastern_server Data Node:
On the western_server Data Node:
On the Spider Node, acquire a read lock on Spider Tables using the LOCK TABLES statement:
Keep this session open during the rest of the procedure.
The read lock will propagate to the Data Tables on each Data Node as well. The read locks will prevent the Data Tables from changing during the backup, so the backups on the Spider Node and Data Nodes are consistent.
On each Data Node, perform the backup using MariaDB Backup.
With MariaDB Backup 10.5 and later, use the mariadb-backup command:
On the Spider Node, after backing up each Data Node, perform a backup with MariaDB Backup.
With MariaDB Backup 10.5 and later, use the mariadb-backup command:
On the Spider Node, after the backup is complete, in your original session, use the UNLOCK TABLES statement to release the table locks:
On the Spider Node and each of the Data Nodes, prepare each of the backups using MariaDB Backup.
With MariaDB Backup 10.5 and later, use the mariadb-backup command:
The Spider Node and Data Nodes now each have a complete backup of the data directory. Backups should be tested to confirm they are complete and consistent.
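The ordering constraints in the procedure above (lock on the Spider Node first, back up every Data Node, then the Spider Node, then unlock) can be captured as a small planner. This is an illustrative sketch with hypothetical host names, not a supported tool; it only encodes the step order, because the LOCK TABLES window is what makes all the node backups consistent:

```python
def backup_plan(spider_node, data_nodes):
    """Return the ordered steps for a consistent sharded-Spider backup.

    The read lock taken on the Spider node propagates to the data
    tables, so every backup taken inside the lock window is consistent.
    """
    steps = [("lock", spider_node)]
    steps += [("backup", node) for node in data_nodes]  # data nodes first
    steps += [("backup", spider_node)]                  # then the Spider node
    steps += [("unlock", spider_node)]
    return steps

plan = backup_plan("spider1", ["hq_server", "eastern_server", "western_server"])
for step in plan:
    print(step)
```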
MariaDB Backup can restore a Sharded MariaDB Enterprise Spider topology from a backup taken with MariaDB Backup.
On the Spider Node and on each Data Node, stop the MariaDB Server process:
On the Spider Node and on each Data Node, empty the data directory:
On the Spider Node and on each Data Node, restore the backup for that server using MariaDB Backup.
With MariaDB Backup 10.5 and later, use the mariadb-backup command:
On the Spider Node and on each Data Node, confirm that the restored files are owned by the user that owns the MariaDB Server process:
On the Spider Node and on each Data Node, start the MariaDB Server process:
On the Spider Node, query a Spider Table to test it:
MariaDB Dump generates backup files containing the SQL statements necessary to recreate the database. MariaDB Dump is included with MariaDB Server and can be used to backup databases in Spider deployments. The MariaDB Client can then be used to restore databases from a MariaDB Dump backup.
MariaDB Dump can be used to create a logical backup of a Sharded MariaDB Enterprise Spider topology. MariaDB Dump must be used to backup the Spider Node and all Data Nodes. The Spider Node and Data Nodes must be locked to guarantee that the backups of the Spider Node and Data Nodes are consistent.
The backup of the Spider Node contains:
Table definitions for Spider Tables
Table definitions for all tables
Table data for tables that use InnoDB, Aria, MyISAM, and MyRocks
Definitions for any other user accounts, privileges, views, and triggers on the Spider Node
The backups of the Data Nodes contain:
Table definitions for Data Tables
Table definitions for all tables
Table data for tables that use InnoDB, Aria, MyISAM, and MyRocks
Definitions for any other user accounts, privileges, views, and triggers on the Data Node
The following procedure shows how to take a consistent backup of a Spider Node and Data Nodes deployed in a Sharded MariaDB Enterprise Spider topology.
On the Spider Node and on each Data Node, create a user account to perform the backup using the CREATE USER and GRANT statements:
On each Data Node, grant the user account Spider uses to operate on the Data Nodes sufficient privileges to lock any Data Tables using the GRANT statement.
For example, on the hq_server Data Node:
On the eastern_server Data Node:
On the western_server Data Node:
On the Spider Node, acquire a read lock on Spider Tables using the LOCK TABLES statement:
Keep this session open during the rest of the procedure.
The read lock will propagate to the Data Tables on each Data Node as well. The read locks will prevent the Data Tables from changing during the backup, so the backups of the Spider Node and Data Nodes are consistent.
On each Data Node, perform the backup using MariaDB Dump. With MariaDB Dump 10.5 and later, use the mariadb-dump command:
On the Spider Node, once the Data Nodes are backed up, perform a backup using MariaDB Dump.
With MariaDB Dump 10.5 and later, use the mariadb-dump command:
On the Spider Node, after the backups are complete, in your original session use the UNLOCK TABLES statement to release the table locks:
The Spider Node and the Data Node now each have a mariadb_dump.sql backup file. This backup file contains the SQL statements needed to recreate the schema on the Spider Node and the schema and data on the Data Nodes. Backups should be tested to confirm they are complete and consistent.
MariaDB Client can restore a Sharded MariaDB Enterprise Spider topology from a backup taken with MariaDB Dump.
Stop all traffic to the Spider Node and each Data Node.
On the Spider Node, restore the backup for that server using MariaDB Client.
With MariaDB Client 10.5 and later, use the mariadb command:
On the Data Node, restore the backup for that server using MariaDB Client.
With MariaDB Client 10.5 and later, use the mariadb command:
On the Spider Node, query a Spider Table to test it:
This page is: Copyright © 2025 MariaDB. All rights reserved.
CREATE USER 'mariadb-backup'@'localhost'
   IDENTIFIED BY 'mb_passwd';
GRANT RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR
   ON *.*
   TO 'mariadb-backup'@'localhost';

GRANT LOCK TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.2';

GRANT LOCK TABLES ON eastern_sales.* TO 'spider_user'@'192.0.2.2';

GRANT LOCK TABLES ON western_sales.* TO 'spider_user'@'192.0.2.2';

LOCK TABLES spider_sharded_sales.invoices READ;

$ sudo mariadb-backup --backup \
   --target-dir=/data/backups/full \
   --user=mariadb-backup \
   --password='mb_passwd'

$ sudo mariadb-backup --backup \
   --target-dir=/data/backups/full \
   --user=mariadb-backup \
   --password='mb_passwd'

UNLOCK TABLES;

$ sudo mariadb-backup --prepare \
   --target-dir=/data/backups/full

$ sudo systemctl stop mariadb

$ sudo rm -fr /var/lib/mysql/*

$ sudo mariadb-backup --copy-back \
   --target-dir=/data/backups/full

$ sudo chown -R mysql:mysql /var/lib/mysql

$ sudo systemctl start mariadb

SELECT * FROM spider_sharded_sales.invoices;

+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date               | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
|         1 |          1 |           1 | 2020-05-10 12:35:10.000000 |       1087.23 | CREDIT_CARD    |
|         1 |          2 |           2 | 2020-05-10 14:17:32.000000 |       1508.57 | WIRE_TRANSFER  |
|         1 |          3 |           3 | 2020-05-10 14:25:16.000000 |        227.15 | CASH           |
|         2 |          1 |           2 | 2020-05-10 12:31:00.000000 |       1351.04 | CREDIT_CARD    |
|         2 |          2 |           2 | 2020-05-10 12:45:27.000000 |        162.11 | WIRE_TRANSFER  |
|         2 |          3 |           4 | 2020-05-10 13:11:23.000000 |        350.00 | CASH           |
|         3 |          1 |           5 | 2020-05-10 12:31:00.000000 |        111.50 | CREDIT_CARD    |
|         3 |          2 |           8 | 2020-05-10 12:45:27.000000 |       1509.23 | WIRE_TRANSFER  |
|         3 |          3 |           3 | 2020-05-10 13:11:23.000000 |       3301.66 | CASH           |
+-----------+------------+-------------+----------------------------+---------------+----------------+

CREATE USER 'mariadb-dump'@'localhost'
   IDENTIFIED BY 'md_passwd';
GRANT SELECT, INSERT, SHOW VIEW, TRIGGER, CREATE, ALTER, EVENT, RELOAD, LOCK TABLES
   ON *.*
   TO 'mariadb-dump'@'localhost';

GRANT LOCK TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.2';

GRANT LOCK TABLES ON eastern_sales.* TO 'spider_user'@'192.0.2.2';

GRANT LOCK TABLES ON western_sales.* TO 'spider_user'@'192.0.2.2';

LOCK TABLES spider_sharded_sales.invoices READ;

$ mariadb-dump \
   --user=mariadb-dump \
   --password='md_passwd' \
   --ssl-verify-server-cert \
   --ssl-ca ~/mariadb_chain.pem \
   --all-databases \
   --single-transaction \
   --default-character-set=utf8mb4 \
   > mariadb_dump.sql

$ mariadb-dump \
   --user=mariadb-dump \
   --password='md_passwd' \
   --ssl-verify-server-cert \
   --ssl-ca ~/mariadb_chain.pem \
   --all-databases \
   --single-transaction \
   --default-character-set=utf8mb4 \
   > mariadb_dump.sql

UNLOCK TABLES;

$ mariadb \
   --user=mariadb-dump \
   --password='md_passwd' \
   --ssl-verify-server-cert \
   --ssl-ca ~/mariadb_chain.pem \
   --default-character-set=utf8mb4 \
   < mariadb_dump.sql

$ mariadb \
   --user=mariadb-dump \
   --password='md_passwd' \
   --ssl-verify-server-cert \
   --ssl-ca ~/mariadb_chain.pem \
   --default-character-set=utf8mb4 \
   < mariadb_dump.sql

SELECT * FROM spider_sharded_sales.invoices;

+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date               | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
|         1 |          1 |           1 | 2020-05-10 12:35:10.000000 |       1087.23 | CREDIT_CARD    |
|         1 |          2 |           2 | 2020-05-10 14:17:32.000000 |       1508.57 | WIRE_TRANSFER  |
|         1 |          3 |           3 | 2020-05-10 14:25:16.000000 |        227.15 | CASH           |
|         2 |          1 |           2 | 2020-05-10 12:31:00.000000 |       1351.04 | CREDIT_CARD    |
|         2 |          2 |           2 | 2020-05-10 12:45:27.000000 |        162.11 | WIRE_TRANSFER  |
|         2 |          3 |           4 | 2020-05-10 13:11:23.000000 |        350.00 | CASH           |
|         3 |          1 |           5 | 2020-05-10 12:31:00.000000 |        111.50 | CREDIT_CARD    |
|         3 |          2 |           8 | 2020-05-10 12:45:27.000000 |       1509.23 | WIRE_TRANSFER  |
|         3 |          3 |           3 | 2020-05-10 13:11:23.000000 |       3301.66 | CASH           |
+-----------+------------+-------------+----------------------------+---------------+----------------+

Table Option Introduced:
Table Option Name: DEFAULT_FILE
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
u use index
ig ignore index
Default Table Value: none
Table Option Name: IDX
Table Option Introduced:
2 Changes status to RECOVERY.
3 Changes status to no more in group communication.
Default Table Value: 0
DSN Parameter Name: lst
Deprecated: MariaDB 11.4.0
2 Monitors state of table without WHERE clause.
3 Monitors state of table with WHERE clause (currently unsupported).
Default Table Value: 0
DSN Parameter Name: mbk
Deprecated: MariaDB 11.4.0
2 Monitors state of table without WHERE clause.
3 Monitors state of table with WHERE clause (currently unsupported).
Default Table Value: 0
DSN Parameter Name: mkd
Deprecated: MariaDB 11.4.0
DSN Parameter Name: mlt
Deprecated: MariaDB 11.4.0
Deprecated: MariaDB 11.4.0
Table Option Introduced:
Table Option Introduced:
Table Option Name: PRIORITY
Table Option Introduced:
2 Uses the SQL_NO_CACHE option.
Default Table Value: 0
DSN Parameter Name: qch
Table Option Name: QUERY_CACHE
Table Option Introduced:
2 Passes the SQL_NO_CACHE option, if specified in the query to the spider table.
3 Passes both the SQL_CACHE option and the SQL_NO_CACHE option, if specified in the query to the spider table.
Default Table Value: 3
Table Option Name: QUERY_CACHE_SYNC
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
Table Option Introduced:
Default Table Value: 0
DSN Parameter Name: svc
Table Option Name: SSL_VSC
Table Option Introduced:
Table Option Introduced:
Comprehensive list of system variables to configure Spider globally or per session, affecting connection timeouts, buffering, and query pushdown strategies.
The following variables are available when the Spider storage engine has been installed.
See Server System Variables for a complete list of system variables and instructions on setting them.
See also the Full list of MariaDB options, system and status variables.
Starting from MariaDB 10.6.15 and MariaDB 10.11.5, all Spider system variables that use the value -1 to defer to table parameter values follow the correct overriding mechanism: a table parameter (if set) overrides the system variable (if not -1), which overrides the actual variable default. As a side effect, all such system variables in all versions have the same default value as the corresponding table parameter default. Before this change, a system variable value other than -1 would override the table parameter value: if both were set, the system variable value prevailed. For versions where the system variable default values were the same as the table parameter defaults instead of -1, this meant that if the system variable was not set but a table parameter was set to a non-default value, the default would override the non-default value.
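As a sketch of the overriding mechanism, the table and values below are illustrative, not from this page (in particular, the srd parameter name for split_read is an assumption): a table parameter set in the COMMENT string wins over the system variable, which wins over the built-in default.

```sql
-- Hypothetical illustration of the override order (assumed names and values).
-- System variable set away from -1: applies to Spider tables that do not
-- set the corresponding table parameter.
SET GLOBAL spider_split_read = 500;

-- Table parameter in the COMMENT string: overrides the system variable for
-- this table only (srd is assumed here as the parameter name for split_read).
CREATE TABLE sharded_example (
  id INT PRIMARY KEY
) ENGINE=Spider
  COMMENT='table "example", srd "1000"';
```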
spider_auto_increment_mode
Description: The auto-increment mode.
-1 Falls back to the default value, if the table parameter is not set.
0 Normal Mode. Uses a counter that Spider gets from the remote backend server with an exclusive lock for the auto-increment value. This mode is slow. Use Quick Mode (2) if you use Spider tables with the table partitioning feature and the auto-increment column is the first column of the index.
spider_bgs_first_read
Description: Number of first read records to use when performing a concurrent background search. To start a range scan on the remote backend, the storage engine first needs to send the first record. Fetching a second record in the same query can save a network round trip by stopping the plan if the backend has a single record. The first and second reads are used to warm up for background search; the third read then fetches the remaining data in a single fetch.
-1 Falls back to the default value, if the table parameter is not set.
0 Records are usually retrieved.
spider_bgs_mode
Description: Background search mode. This enables the use of a thread per data server connection if the query is not shard-based and must be distributed across shards. The partitioning plugin scans partitions one after the other to optimize memory usage. Because the shards are external, reading all shards can be performed in parallel when the plan prunes multiple partitions.
-1 Falls back to the default value, if the table parameter is not set.
0 Disables background search.
spider_bgs_second_read
Description: Number of second read records on the backend server when using background search. When the first records have been found by the first read, the engine continues scanning the range, adding a LIMIT derived from the first and second read sizes.
-1 Falls back to the default value, if the table parameter is not set.
0 Records are usually retrieved.
spider_bka_engine
Description: Storage engine used for temporary tables when the spider_bka_mode system variable is set to 1. If not set, a default engine is used.
Scope: Global, Session
Dynamic: Yes
Data Type: string
spider_bka_mode
Description: Internal action to perform when multi-split reads are disabled. If the spider_multi_split_read system variable is set to 0, Spider uses this variable to determine how to handle statements when the optimizer resolves range retrieval to multiple conditions.
-1 Falls back to the default value, if the table parameter is not set.
0 Uses "union all".
spider_bka_table_name_type
Description: The type of temporary table name for BKA.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_block_size
Description: Size of memory block used in MariaDB. Can usually be left unchanged.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_buffer_size
Description: Buffer size. -1, the default, uses the table parameter value.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_bulk_size
Description: Size in bytes of the buffer when grouping multiple INSERT statements in a batch (that is, bulk inserts).
-1 The table parameter value is adopted.
0 or greater: Size of the buffer.
spider_bulk_update_mode
Description: Bulk update and delete mode. Note: If you use a non-default value for the spider_bgs_mode or spider_split_read system variables, Spider sets this variable to 2.
-1 Falls back to the default value, if the table parameter is not set.
0 Sends UPDATE and DELETE statements one by one.
spider_bulk_update_size
Description: Size in bytes for UPDATE and DELETE queries when generating bulk updates.
-1 The table parameter value is adopted.
0 or greater: Size of the buffer.
spider_casual_read
Description: Casual reads enable all isolation levels (such as repeatable read) to work with transactions on multiple backends. With auto-commit queries, you can relax read consistency and run on multiple connections to the backends. This enables parallel queries across partitions, even if multiple shards are stored on the same physical server. Deprecated due to the complexity of the code for little benefit.
-1 Uses the table parameter value.
0 Use casual read.
spider_conn_recycle_mode
Description: Connection recycle mode.
0 Disconnect.
1 Recycle by all sessions.
spider_conn_recycle_strict
Description: Whether to force the creation of new connections.
1 Doesn't force new connections.
0 Forces new connections.
spider_conn_wait_timeout
Description: Maximum waiting time in seconds for Spider to get a remote connection.
Scope: Global
Dynamic: Yes
Data Type: numeric
spider_connect_error_interval
Description: Returns the same error code until the interval passes when a connection fails.
Scope: Global
Dynamic: Yes
Data Type: numeric
spider_connect_mutex
Description: Whether to serialize remote server connections (use a mutex when connecting). Use this parameter if you get an error or slowdown due to too many connection processes.
0 Not serialized.
1 Serialized.
spider_connect_retry_count
Description: Number of times to retry connections that fail due to too many connection processes.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_connect_retry_interval
Description: Interval in microseconds between retries for connection failures due to too many connection processes.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_connect_timeout
Description: Timeout in seconds to declare the remote backend unresponsive when opening a connection. Change for high-latency networks.
-1 The table parameter value is adopted.
0 Less than 1 second timeout.
spider_crd_bg_mode
Description: Indexes cardinality statistics in the background. Disabled when the spider_crd_mode system variable is set to 3 or the spider_crd_interval variable is set to 0.
-1 Falls back to the default value, if the table parameter is not set.
0 Disables background confirmation.
spider_crd_interval
Description: Time interval in seconds of index cardinality statistics. Set to 0 to always get the latest information from remote servers.
-1 The table parameter value is adopted.
1 or more: Interval in seconds for table state confirmation.
spider_crd_mode
Description: Mode for index cardinality statistics. By default, uses SHOW at the table level.
-1,0 Uses the table parameter value.
1 Uses the SHOW command.
spider_crd_sync
Description: Synchronizes index cardinality statistics in partitioned tables.
-1 Falls back to the default value, if the table parameter is not set.
0 Disables synchronization.
spider_crd_type
Description: Type of cardinality calculation. Only effective when the spider_crd_mode system variable is set to use SHOW (1) or the Information Schema (2).
-1 Falls back to the default value, if the table parameter is not set.
spider_crd_weight
Description: Weight coefficient used to calculate the effectiveness of an index from the cardinality of a column. For more information, see the description of the spider_crd_type system variable.
-1 Falls back to the default value, if the table parameter is not set.
0 or greater: Weight.
spider_delete_all_rows_type
Description: The type of delete_all_rows.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_direct_dup_insert
Description: Manages the duplicate key check for statements such as REPLACE and INSERT IGNORE sent to remote servers. This can save on network round trips if the key always maps to a single partition. For bulk operations, records are checked for duplicate key errors one by one on the remote server, unless you set this variable to avoid duplicate checks on local servers (1).
-1 Falls back to the default value, if the table parameter is not set.
0 Checks for duplicates locally.
spider_direct_order_limit
Description: Pushes ORDER BY and LIMIT operations down to the remote server.
-1 Falls back to the default value, if the table parameter is not set.
0 Uses local execution.
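As an illustration, pushing an ORDER BY ... LIMIT down to the data nodes can be sketched as follows (session scope is assumed here; the invoices table is the example used elsewhere on this page):

```sql
-- Hypothetical sketch: enable ORDER BY / LIMIT pushdown for this session.
SET SESSION spider_direct_order_limit = 1;

-- With pushdown enabled, the sort and limit can run on the data nodes
-- instead of pulling all rows back to the Spider node first.
SELECT * FROM spider_sharded_sales.invoices
ORDER BY invoice_date DESC
LIMIT 10;
```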
spider_disable_group_by_handler
Description: Whether to disable the Spider group by handler, which, if created, takes over query execution after query optimization is done.
OFF Does not disable the Spider group by handler.
ON Disables the Spider group by handler.
spider_dry_access
Description: Simulates an empty result-set. No queries are sent to the backend. Use for performance tuning.
0 Normal access.
1 All access from Spider to data node is suppressed.
spider_error_read_mode
Description: Sends an empty result-set when reading from a backend server raises an error. Useful with applications that don't implement transaction replays.
-1 Falls back to the default value, if the table parameter is not set.
0 Returns an error.
spider_error_write_mode
Description: Sends an empty result-set when writing to a backend server raises an error. Useful with applications that don't implement transaction replays.
-1 Falls back to the default value, if the table parameter is not set.
0 Returns an error.
spider_first_read
Description: Number of first read records to start a range scan on the backend server. Spider needs to send the first record. Fetching the second record saves network round trips by stopping the plan if the backend has a single record. The first and second reads are used to warm up for background searches; the third read then fetches the remaining data in a single last fetch.
-1 Uses the table parameter value.
0 Usually retrieves records.
spider_force_commit
Description: Behavior when an error occurs on XA PREPARE, XA COMMIT, and XA ROLLBACK statements.
0 Returns the error.
1 Continues processing, ignoring the error.
spider_general_log
Description: Whether Spider logs all commands to the general query log. Spider logs error codes according to the spider_log_result_errors system variable.
OFF Logs no commands.
ON Logs commands to the General Log.
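A minimal sketch of turning this on, assuming the server's own general query log is also enabled so the entries have somewhere to go:

```sql
-- Hypothetical sketch: log Spider's commands to the general query log.
SET GLOBAL general_log = ON;        -- server-level general query log
SET GLOBAL spider_general_log = ON; -- Spider command logging
```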
spider_ignore_comments
Description: Whether to unconditionally ignore COMMENT and CONNECTION strings without checking whether table options are specified.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
spider_index_hint_pushdown
Description: Whether to use pushdown index hints, like force_index.
0 Do not use pushdown hints
1 Use pushdown hints
spider_init_sql_alloc_size
Description: Initial size of the local SQL buffer.
-1 Falls back to the default value, if the table parameter is not set.
0 or greater: Size of the buffer.
spider_internal_limit
Description: Limits the number of records acquired from a remote server.
-1 The table parameter value is adopted.
0 or greater: Records limit.
spider_internal_offset
Description: Skips records when acquiring them from the remote server.
-1 Falls back to the default value, if the table parameter is not set.
0 or more: Number of records to skip.
spider_internal_optimize
Description: Whether to push down OPTIMIZE TABLE statements.
-1 Falls back to the default value, if the table parameter is not set.
0 Transmitted.
spider_internal_optimize_local
Description: Whether to transmit to remote servers when OPTIMIZE TABLE statements are executed on the local server.
-1 Falls back to the default value, if the table parameter is not set.
0 Not transmitted.
spider_internal_sql_log_off
Description: Whether to log SQL statements sent to the remote server in the general query log of the data nodes.
Explicitly setting this system variable to either ON or OFF causes the Spider node to send a SET sql_log_off statement to each of the data nodes using the SUPER privilege.
spider_internal_unlock
Description: Whether to transmit UNLOCK TABLES to the connection of the table used with SELECT statements.
0 Not transmitted.
1 Transmitted.
spider_internal_xa
Description: Whether to implement XA at the server or storage engine level. When using the server level, set different values on all server instances to generate different xid values.
OFF Uses the storage engine protocol.
ON Uses the server protocol.
spider_internal_xa_id_type
Description: The type of internal_xa id.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_internal_xa_snapshot
Description: Limitation for reading consistent data on all backend servers when using MariaDB's internal XA implementation and START TRANSACTION WITH CONSISTENT SNAPSHOT.
0 Raise error when using a Spider table.
1 Raise error when issued a START TRANSACTION statement.
spider_load_crd_at_startup
Description: Whether to load CRD from the system table at startup.
-1 Uses the table parameter value.
0 Do not load.
spider_load_sts_at_startup
Description: Whether to load STS from the system table at startup.
-1 Uses the table parameter value.
0 Do not load.
spider_local_lock_table
Description: Whether to push LOCK TABLES statements down to the remote server.
0 Transmitted.
1 Not transmitted.
spider_lock_exchange
Description: Whether to convert lock statements into a LOCK TABLES statement.
0 Not converted.
1 Converted.
spider_log_result_error_with_sql
Description: How to log SQL statements with result errors.
0 No log.
1 Log error.
spider_log_result_errors
Description: Logs results from data nodes to the Spider node in the error log. Performs no logging by default.
0 Logs no errors from data nodes.
1 Logs errors from data nodes.
spider_low_mem_read
Description: Whether to use low memory mode when executing queries issued internally to remote servers that return result-sets.
-1 Falls back to the default value, if the table parameter is not set.
0 Doesn't use low memory mode.
spider_max_connections
Description: Maximum number of connections from Spider to remote MariaDB servers. Defaults to 0, which is no limit.
Command-line: --spider-max-connections
Scope: Global
Dynamic: Yes
spider_max_order
Description: Maximum number of columns for ORDER BY operations.
-1 The table parameter value is adopted.
0 and greater: Maximum number of columns.
spider_multi_split_read
Description: Whether to divide a statement into multiple SQL statements sent to the remote backend server when the optimizer resolves range retrievals to multiple conditions.
-1 Falls back to the default value, if the table parameter is not set.
0 Doesn't divide statements.
spider_net_read_timeout
Description: TCP timeout in seconds to declare remote backend servers unresponsive when reading from a connection. Change for high-latency networks.
-1 Falls back to the default value, if the table parameter is not set.
0 Less than 1 second timeout.
spider_net_write_timeout
Description: TCP timeout in seconds to declare remote backend servers unresponsive when writing to a connection. Change for high-latency networks.
-1 The table parameter value is adopted.
0 Less than 1 second timeout.
spider_ping_interval_at_trx_start
Description: Keepalive interval in seconds; Spider resets the connection by sending a ping.
0 At every transaction.
1 and greater: Number of seconds.
spider_quick_mode
Description: Sets the backend query buffering to cache on the remote backend server or in the local buffer.
-1 Falls back to the default value, if the table parameter is not set.
0 Local buffering; acquires records collectively with store_result.
spider_quick_page_byte
Description: Memory limit by size in bytes in a page when records are acquired one by one.
-1 The table parameter value is used. When quick_mode is 1 or 2, Spider stores at least 1 record even if quick_page_byte is smaller than 1 record. When quick_mode is 3, quick_page_byte is used for judging the use of temporary tables. That is given priority when spider_quick_page_byte is set.
0 or greater: Memory limit.
spider_quick_page_size
Description: Number of records in a page when records are acquired one by one.
-1 The table parameter value is adopted.
0 or greater: Number of records.
spider_read_only_mode
Description: Whether to allow writes on Spider tables.
-1 Falls back to the default value, if the table parameter is not set.
0 Allows writes to Spider tables.
spider_remote_access_charset
Description: Forces the specified session character set when connecting to the backend server. This can improve connection time performance.
Scope: Global, Session
Dynamic: Yes
Data Type: string
spider_remote_autocommit
Description: Sets the auto-commit mode when connecting to backend servers. This can improve connection time performance.
-1 Doesn't change the auto-commit mode.
0 Sets the auto-commit mode to 0.
spider_remote_default_database
Description: Sets the local default database when connecting to backend servers. This can improve connection time performance.
Scope: Global, Session
Dynamic: Yes
Data Type: string
spider_remote_sql_log_off
Description: Sets the sql_log_off system variable on backend servers when connecting.
-1 Doesn't set the value.
0 Doesn't log Spider SQL statements to remote backend servers.
spider_remote_time_zone
Description: Forces the specified time_zone setting when connecting to backend servers. This can improve connection performance when you know the time zone.
Scope: Global
Dynamic: Yes
Data Type: string
spider_remote_trx_isolation
Description: Sets the transaction isolation level when connecting to the backend server.
-1 Doesn't set the Isolation Level.
0 Sets to the READ UNCOMMITTED level.
spider_remote_wait_timeout
Description: Wait timeout in seconds on the remote server. -1 means not set.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_reset_sql_alloc
Description: Resets the per-connection SQL buffer after an SQL statement executes.
-1 Falls back to the default value, if the table parameter is not set.
0 Doesn't reset.
spider_same_server_link
Description: Enables the linking of a table to the same local instance.
0 Disables linking.
1 Enables linking.
spider_second_read
Description: Number of second read records on the backend server when the first records are found by the first read. Spider continues scanning the range, adding a LIMIT derived from the first and second read sizes.
-1 Falls back to the default value, if the table parameter is not set.
0 Usually retrieves records.
spider_select_column_mode
Description: Mode for column retrieval from the remote backend server.
-1 Falls back to the default value, if the table parameter is not set.
0 Uses index columns when the SELECT statement can resolve with an index, otherwise uses all columns.
spider_selupd_lock_mode
Description: Local lock mode on INSERT SELECT.
-1 Falls back to the default value, if the table parameter is not set.
0 Takes no locks.
spider_semi_split_read
Description: Whether to use chunk retrieval with offset and limit parameters on SQL statements sent to the remote backend server when multi-split reads are in use.
-1 Falls back to the default value, if the table parameter is not set.
0 Doesn't use chunk retrieval.
spider_semi_split_read_limit
Description: Sets the limit value for the spider_semi_split_read system variable.
-1 Falls back to the default value, if the table parameter is not set.
0 or more: The limit value.
spider_semi_table_lock
Description: Enables semi-table locking. This adds a LOCK TABLES statement to SQL executions sent to the remote backend server when using non-transactional storage engines to preserve consistency between round trips.
0 Disables semi-table locking.
1 Enables semi-table locking.
spider_semi_table_lock_connection
Description: Whether to use multiple connections with semi-table locking. To enable semi-table locking, use the spider_semi_table_lock system variable.
-1 Falls back to the default value, if the table parameter is not set.
0 Uses the same connection.
spider_semi_trx
Description: Enables semi-transactions. This controls transaction consistency when an SQL statement is split into multiple statements issued to the backend servers. You can preserve or relax consistency as needed. Spider encapsulates auto-committed SQL statements within a transaction on the remote backend server. When using READ COMMITTED or READ UNCOMMITTED to force consistency, set the spider_semi_trx_isolation system variable to 2.
0 Disables semi-transaction consistency.
spider_semi_trx_isolation
Description: Sets consistency during range SQL execution when spider_semi_trx is 1.
-1 OFF
0 READ UNCOMMITTED
spider_skip_default_condition
Description: Whether to compute condition push downs.
-1 Falls back to the default value, if the table parameter is not set.
0 Computes condition push downs.
spider_skip_parallel_search
Description: Whether to skip parallel search by specific conditions.
-1 Uses the table parameter value.
0 Does not skip.
spider_slave_trx_isolation
Description: Transaction isolation level when a Spider table is used by the slave SQL thread.
-1 Off.
0 READ UNCOMMITTED.
spider_split_read
Description: Number of records per chunk to retrieve when a range query is sent to remote backend servers.
-1 Falls back to the default value, if the table parameter is not set.
0 or more: Number of records.
spider_store_last_crd
Description: Whether to store the last CRD result in the system table.
-1 Uses the table parameter value.
0 Do not store the last CRD result in the system table.
spider_store_last_sts
Description: Whether to store the last STS result in the system table.
-1 Uses the table parameter value.
0 Do not store the last STS result in the system table.
spider_strict_group_by
Description: Whether to strictly use columns in the SELECT clause for the GROUP BY clause.
-1 Uses the table parameter value.
0 Do not strictly use columns in the SELECT clause for the GROUP BY clause.
spider_sts_bg_mode
Description: Enables background confirmation for table statistics. When background confirmation is enabled, Spider uses one thread per partition to maintain table status. Disabled when the spider_sts_interval system variable is set to 0, which causes Spider to always retrieve the latest information as needed. It is effective when the spider_sts_interval system variable is set to 10.
-1 Falls back to the default value, if the table parameter is not set.
spider_sts_interval
Description: Time interval of table statistics from the remote backend servers.
-1 Falls back to the default value, if the table parameter is not set.
0 Retrieves the latest table statistics on request.
spider_sts_mode
Description: Mode for table statistics. By default, the SHOW command is used at the table level.
-1,0 Uses the table parameter value.
1 Uses the SHOW command.
spider_sts_sync
Description: Synchronizes table statistics in partitioned tables.
-1 Falls back to the default value, if the table parameter is not set.
0 Doesn't synchronize table statistics in partitioned tables.
spider_support_xa
Description: XA protocol for mirroring and for multi-shard transactions.
1 Enables XA Protocol for these Spider operations.
0 Disables XA Protocol for these Spider operations.
spider_suppress_comment_ignored_warning
Description: Whether to suppress warnings that table COMMENT or CONNECTION strings are ignored due to specified table options.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
spider_sync_autocommit
Description: Whether to push down local auto-commits to remote backend servers.
OFF Pushes down local auto-commits.
ON Doesn't push down local auto-commits.
spider_sync_sql_mode
Description: Whether to sync sql_mode.
OFF No sync
ON Sync
spider_sync_time_zone
Description: Whether to push the local time zone down to remote backend servers.
OFF Doesn't synchronize time zones.
ON Synchronize time zones.
spider_sync_trx_isolation
Description: Pushes local transaction isolation levels down to remote backend servers.
OFF Doesn't push down local isolation levels.
ON Pushes down local isolation levels.
spider_table_crd_thread_count
Description: Static background thread count for table CRD.
Command line: --spider-table-crd-thread-count=#
Scope: Global
Dynamic: No
spider_table_init_error_interval
Description: Interval in seconds during which the same error code is returned if table initialization fails. Use to protect against infinite loops in table links.
Scope: Global
Dynamic: Yes
Data Type: numeric
spider_table_sts_thread_count
Description: Static background thread count for table STS.
Command line: --spider-table-sts-thread-count=#
Scope: Global
Dynamic: No
spider_udf_ct_bulk_insert_interval
Description: Interval in milliseconds between bulk inserts when copying. For use with the spider_copy_tables UDF, which copies table data linked to a Spider table from the source server to the destination server using bulk inserts. If this interval is 0, it may cause higher write traffic.
-1 Uses the UDF parameter.
0 and more: Time in milliseconds.
spider_udf_ct_bulk_insert_rows
Description: Number of rows to insert at a time when copying during bulk inserts.
-1, 0: Uses the UDF parameter.
1 and more: Number of rows.
spider_udf_ds_bulk_insert_rows
Description: Number of rows inserted at a time during bulk inserts when the result-set is stored in a temporary table on executing a UDF.
-1, 0 Uses the UDF parameter.
1 or more: Number of rows.
spider_udf_ds_table_loop_mode
Description: Whether to store the result-set in the same temporary table when the temporary table list count for the UDF is less than the result-set count on UDF execution.
-1 Falls back to the default value, if the table parameter is not set.
0 Drops records.
spider_udf_ds_use_real_table
Description: Whether to use a real table for the temporary table list.
-1 Use UDF parameter.
0 Do not use real table.
spider_udf_table_lock_mutex_count
Description: Mutex count of table lock for Spider UDFs.
Scope: Global
Dynamic: No
Data Type: numeric
spider_udf_table_mon_mutex_count
Description: Mutex count of table mon for Spider UDFs.
Scope: Global
Dynamic: No
Data Type: numeric
spider_use_all_conns_snapshot
Description: Whether to pass START TRANSACTION WITH CONSISTENT SNAPSHOT statements to all connections.
OFF Doesn't pass statement to all connections.
ON Passes statement to all connections.
spider_use_cond_other_than_pk_for_update
Description: Whether to use all conditions even if the condition has a primary key.
0 Don't use all conditions
1 Use all conditions
spider_use_consistent_snapshot
Description: Whether to push a local START TRANSACTION WITH CONSISTENT SNAPSHOT statement down to remote backend servers.
OFF Doesn't push the local statement down.
ON Pushes the local statement down.
spider_use_default_database
Description: Whether to use the default database.
OFF Doesn't use the default database.
ON Uses the default database.
spider_use_flash_logs
Description: Whether to push FLUSH LOGS statements down to remote backend servers.
OFF Doesn't push the statement down.
ON Pushes the statement down.
spider_use_handlerDescription: Converts SQL statements.
When the system variable is set to 0, Spider disables conversions to prevent use of the statement on the isolation level.
-1 Falls back to the default value, if the is not set.
0 Converts statements into
spider_use_pushdown_udfDescription:
When using a UDF function in a condition and the system variable is set to 1, whether to execute the UDF function locally or push it down.
-1 Falls back to the default value, if the table parameter is not set.
0 Doesn't transmit the UDF.
spider_use_snapshot_with_flush_tables
Description: Whether to encapsulate the START TRANSACTION WITH CONSISTENT SNAPSHOT and FLUSH TABLES WITH READ LOCK statements sent to the remote backend servers.
0 : No encapsulation.
1 : Encapsulates, only when the spider_use_all_conns_snapshot system variable is set to 1.
spider_use_table_charset
Description: Whether to use the local table character set for connections to the remote backend servers.
-1 Falls back to the default value, if the table parameter is not set.
0 Use utf8.
spider_version
Description: The current Spider version. Removed when the Spider version number was matched with the server version.
Scope: Global
Dynamic: No
Data Type: string
spider_wait_timeout
Description: Wait timeout in seconds that Spider sets on connections to the remote servers. -1 means not set.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
spider_xa_register_mode
Description: Mode of XA transaction registration into the system table.
0 Register all XA transactions
1 Register only write XA transactions
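For illustration, the register mode can be switched like any other Spider system variable (a minimal sketch; the scope and default are listed further down this page):

```sql
-- Record only write XA transactions in the Spider system table (mode 1).
SET GLOBAL spider_xa_register_mode = 1;
SHOW GLOBAL VARIABLES LIKE 'spider_xa_register_mode';
```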
This page is licensed: CC BY-SA / Gnu FDL
1 Quick Mode. Uses an internal Spider counter for the auto-increment value. This mode is fast, but it is possible for duplicates to occur when updating the same table from multiple Spider proxies.
2 Set Zero Mode. The auto-increment value is given by the remote backend. Sets the column to 0, even if you set the value to the auto-increment column in your statement. If you use the table with the table partitioning feature, it sets to zero after choosing an inserted partition.
3 When the auto-increment column is set to NULL, the value is given by the remote backend server. If you set the auto-increment column to 0, the value is given by the local server. Set spider_reset_auto_increment to 2 or 3 if you want to use an auto-increment column on the remote server.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 3
DSN Parameter Name: aim
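The value list above (DSN parameter aim) appears to belong to spider_auto_increment_mode; assuming that name, mode 3 together with the spider_reset_auto_increment setting mentioned above would be configured as:

```sql
-- Assumption: the entry above describes spider_auto_increment_mode ("aim").
-- Let the remote backend assign values when NULL is inserted (mode 3).
SET SESSION spider_auto_increment_mode = 3;
-- Companion setting named above for remote auto-increment columns.
SET SESSION spider_reset_auto_increment = 2;
```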
1 and greater: Number of records.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Default Table Value: 2
Range: -1 to 9223372036854775807
DSN Parameter Name: bfr
1 Uses background search when searching without locks
2 Uses background search when searching without locks or with shared locks.
3 Uses background search regardless of locks.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 3
DSN Parameter Name: bmd
1 and greater: Number of records.
Scope: Global, Session
Dynamic: Yes
Default Session Value: 100
Default Table Value: 100
Range: -1 to 9223372036854775807
DSN Parameter Name: bsr
Default Session Value: ""
Default Table Value: Memory
DSN Parameter Name: bke
1 Uses a temporary table, if it is judged acceptable.
2 Uses a temporary table, if it is judged acceptable and avoids replication delay.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 2
DSN Parameter Name: bkm
Default Session Value: 1
Range: -1 to 1
Default Session Value: 16384
Range: 0 to 4294967295
DSN Parameter Name: bsz
Default Session Value: 16000
Default Table Value: 16000
Range: -1 to 2147483647
Introduced:
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 16000
Default Table Value: 16000
Range: -1 to 2147483647
DSN Parameter Name: bsz
1 Collects multiple UPDATE and DELETE statements, then sends the collected statements one by one.
2 Collects multiple UPDATE and DELETE statements and sends them together.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 2
DSN Parameter Name: bum
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 16000
Default Table Value: 16000
Range: -1 to 2147483647
DSN Parameter Name: bus
1 Choose connection channel automatically.
2 to 63 Number of connection channels.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 63
DSN Parameter Name: ##
Deprecated:
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Range: 0 to 2
Default Session Value: 0
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Range: 0 to 1
Default Session Value: 10
Range: 0 to 1000
Introduced:
Default Value: 1
Range: 0 to 4294967295
Dynamic: Yes
Data Type: boolean
Default Session Value: 0
Default Session Value: 2 (>= MariaDB 11.8), 1000 (<= )
Range: 0 to 2147483647
Default Session Value: 1000
Range: -1 to 9223372036854775807
1 and greater: Number of seconds.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 6
Default Table Value: 0
Range: -1 to 2147483647
DSN Parameter Name: cto
2 Enables background confirmation.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Default Table Value: 1
Range: -1 to 2
DSN Parameter Name: cbm
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 51
Default Table Value: 51
Range: -1 to 2147483647
DSN Parameter Name: civ
2 Uses the Information Schema.
3 Uses the EXPLAIN command.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 3
DSN Parameter Name: cmd
Deprecated: ,
1 Uses table state synchronization when opening a table, but afterwards performs no synchronization.
2 Enables synchronization.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 2
DSN Parameter Name: csy
1 Uses the value of the spider_crd_weight system variable, as an addition value.
2 Uses the value of the spider_crd_weight system variable, as a multiplication value.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Default Table Value: 2
Range: -1 to 2
DSN Parameter Name: ctp
Deprecated: ,
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Default Table Value: 2
Range: -1 to 2147483647
DSN Parameter Name: cwg
Deprecated: ,
Default Session Value: 1
Range: -1 to 1
1 Avoids duplicate checks on the local server.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: ddi
1 Uses push down execution.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Default Table Value: 9223372036854775807
Range: -1 to 9223372036854775807
DSN Parameter Name: dol
Dynamic: Yes
Data Type: boolean
Default Value: OFF
Introduced:
Dynamic: No
Data Type: boolean
Default Value: OFF
1 Returns an empty result.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: erm
1 Returns an empty result-set on error.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: ewm
1 and greater: Sets the number of first read records.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 2
Range: -1 to 9223372036854775807
DSN Parameter Name: frd
2 Continues processing the XA transaction, disregarding all errors.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Range: 0 to 2
Dynamic: Yes
Data Type: boolean
Default Session Value: OFF
Default Value: OFF
Introduced:
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
Default Session Value: OFF
Introduced:
Dynamic: Yes
Data Type: numeric
Default Session Value: 1024
Default Table Value: 1024
DSN Parameter Name: isa
Range: -1 to 2147483647
Deprecated: , ,
Dynamic: Yes
Data Type: numeric
Default Session Value: 9223372036854775807
Default Table Value: 9223372036854775807
Range: -1 to 9223372036854775807
DSN Parameter Name: ilm
Deprecated: ,
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 9223372036854775807
DSN Parameter Name: ios
Deprecated: ,
1 Not transmitted.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: iom
1 Transmitted.
Default Session Value: 0
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: iol
-1 Don't know or does not matter; don't send a 'SET SQL_LOG_OFF' statement.
0 Send 'SET SQL_LOG_OFF 0' statement to data nodes (logs SQL statements to the remote server).
1 Send 'SET SQL_LOG_OFF 1' statement to data nodes (doesn't log SQL statements to the remote server).
Scope: Global, Session
Dynamic: Yes
Data Type: numeric (previously boolean)
Range: -1 to 1
Default Session Value: -1 (previously ON)
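The variable name is not shown in this fragment; assuming it is spider_internal_sql_log_off (which matches the -1/0/1 value list and the boolean-to-numeric history above), logging on the data nodes would be controlled like this:

```sql
-- Assumption: this entry describes spider_internal_sql_log_off.
-- Send "SET SQL_LOG_OFF 1" to the data nodes, so statements issued by
-- Spider are not written to their general query logs.
SET GLOBAL spider_internal_sql_log_off = 1;
-- The default, -1, sends no SET SQL_LOG_OFF statement at all.
```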
Data Type: boolean
Default Session Value: 0
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
Default Session Value: OFF
Default Session Value: 0
Range: -1 to 1
2 Takes a consistent snapshot on each backend, but loses global consistency.
3 Starts transactions with XA, but removes CONSISTENT SNAPSHOT.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Range: 0 to 3
Default Session Value: 0
1 Load
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Range: -1 to 1
Introduced:
Deprecated: ,
1 Load
Scope: Global
Dynamic: Yes
Data Type: boolean
Default Session Value: 1
Range: -1 to 1
Introduced:
Deprecated: ,
Dynamic: Yes
Data Type: boolean
Default Value: 0
Dynamic: Yes
Data Type: boolean
Default Value: 0
3 Log warning
4 Log info (Added in )
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: 0 to 4
2 : Logs errors from data nodes, as well as warning summaries.
3 : Logs errors from data nodes, as well as warning summaries and details.
4 : Logs errors from data nodes, as well as warning summaries and details, and result summaries.
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: 0 to 4
1 Uses low memory mode.
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 1
Data Type: numeric
Default Session Value: 0
Range: 0 to 99999
Introduced:
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 32767
Default Table Value: 32767
Range: -1 to 32767
DSN Parameter Name: mod
1 Divides statements.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 100
Default Table Value: 100
Range: -1 to 2147483647
DSN Parameter Name: msr
1 and greater: Timeout in seconds.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 600
Default Table Value: 600
Range: -1 to 2147483647
DSN Parameter Name: nrt
1 and greater: Timeout in seconds.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 600
Default Table Value: 600
Range: -1 to 2147483647
DSN Parameter Name: nwt
Dynamic: Yes
Data Type: numeric
Default Value: 3600
Range: 0 to 2147483647
1 Remote buffering; acquires records one by one. Interrupts don't wait, and acquisition recovers on context switch back.
2 Remote buffering; acquires records one by one. Interrupts wait until the end of the acquisition.
3 Local buffering, uses a temporary table on disk when the result-set is greater than the value of the spider_quick_page_size system variable.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 3
Default Table Value: 3
Range: -1 to 3
DSN Parameter Name: qmd
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 10485760
Range: -1 to 9223372036854775807
Introduced: ,
Dynamic: Yes
Data Type: numeric
Default Session Value: 1024 (>=), -1 (<= MariaDB 10.6)
Default Table Value: 100
Range: -1 to 9223372036854775807
DSN Parameter Name: qps
1 Makes tables read-only.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0 (>=), -1 (<= MariaDB 10.6)
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: rom
Default Session Value: null
1 Sets the auto-commit mode to 1.
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: -1
Range: -1 to 1
Default Session Value: Empty string
1 Logs SQL statements on the remote backend.
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: -1
Range: -1 to 1
Default Session Value: null
1 Sets to the READ COMMITTED level.
2 Sets to the REPEATABLE READ level.
3 Sets to the SERIALIZABLE level.
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: -1
Range: -1 to 3
Default Value: -1
Range: -1 to 2147483647
Introduced:
1 Resets.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 1
DSN Parameter Name: rsa
Dynamic: Yes
Data Type: boolean
Default Session Value: OFF
1 and greater: Number of records.
Scope: Global, Session
Dynamic: Yes
Default Session Value: 0
Default Table Value: 0
Range: -1 to 9223372036854775807
DSN Parameter Name: srd
1 Uses all columns judged necessary to resolve the query.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 1
DSN Parameter Name: scm
1 Takes shared locks.
2 Takes exclusive locks.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 2
DSN Parameter Name: slm
1 or more Uses chunk retrieval.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Default Table Value: 2
Range: -1 to 2147483647
DSN Parameter Name: ssr
Dynamic: Yes
Data Type: numeric
Default Session Value: 9223372036854775807
Default Table Value: 9223372036854775807
Range: -1 to 9223372036854775807
DSN Parameter Name: ssl
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0 (>=), 1 (<= )
Range: 0 to 1
DSN Parameter Name: stl
1 Uses different connections.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 1
DSN Parameter Name: stc
1 Enables semi-transaction consistency.
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
Default Session Value: ON
1 READ COMMITTED
2 REPEATABLE READ
3 SERIALIZABLE
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: -1
Range: -1 to 3
1 Doesn't compute condition push downs.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 1
DSN Parameter Name: sdc
1 : Skips parallel search if the query is not a SELECT statement.
2 : Skips parallel search if the query has SQL_NO_CACHE.
3 : Both 1 and 2.
Command line: --spider-skip-parallel-search=#
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Range: -1 to 3
Introduced:
1 READ COMMITTED
2 REPEATABLE READ
3 SERIALIZABLE
Command line: --spider-slave-trx-isolation=#
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: -1
Range: -1 to 3
Introduced: ,
Dynamic: Yes
Data Type: numeric
Default Session Value: 9223372036854775807
Default Table Value: 9223372036854775807
Range: -1 to 9223372036854775807
DSN Parameter Name: srd
1 Store last CRD result in the system table.
Command line: --spider-store-last-crd=#
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Range: -1 to 1
Introduced:
Deprecated: ,
1 Store last STS result in the system table.
Command line: --spider-store-last-sts=#
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Range: -1 to 1
Introduced:
Deprecated: ,
1 Use columns in the select clause strictly for the group by clause.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Range: -1 to 1
Introduced:
0 Disables background confirmation.
1 Enables background confirmation (create thread per table/partition).
2 Enables background confirmation (use static threads). (from MariaDB 10.)
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 2
Range: -1 to 2
DSN Parameter Name: sbm
1 or more: Interval in seconds for table state confirmation.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 10
Default Table Value: 10
Range: -1 to 2147483647
DSN Parameter Name: siv
2 Uses the Information Schema.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 1
Default Table Value: 1
Range: -1 to 2
DSN Parameter Name: smd
Deprecated: ,
1 Synchronizes table state when opening a table, doesn't synchronize after opening.
2 Synchronizes table statistics.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Session Value: 0
Default Table Value: 0
Range: -1 to 2
DSN Parameter Name: ssy
Dynamic: No
Data Type: boolean
Default Table Value: 1
Default Value: OFF
Introduced:
Dynamic: Yes
Data Type: boolean
Default Session Value: ON
Dynamic: Yes
Data Type: boolean
Default Value: ON
Introduced:
Dynamic: Yes
Data Type: boolean
Default Session Value: OFF
Removed:
Dynamic: Yes
Data Type: boolean
Default Session Value: ON
Data Type: numeric
Default Value:
1 (>= , , MariaDB 10.6.17, MariaDB 10.11.7, , , )
10 (<= , , MariaDB 10.6.16, MariaDB 10.11.6, , , )
Range: 1 to 4294967295
Deprecated:
Default Value: 1
Range: 0 to 4294967295
Data Type: numeric
1 (>= , , MariaDB 10.6.17, MariaDB 10.11.7, , , )
10 (<= , , MariaDB 10.6.16, MariaDB 10.11.6, , , )
Range: 1 to 4294967295
Deprecated:
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Value: 10
Default Table Value: 10
Range: -1 to 2147483647
Deprecated: ,
Removed:
Dynamic: Yes
Data Type: numeric
Default Value: 100
Default Table Value: 100
Range: -1 to 9223372036854775807
Deprecated: ,
Removed:
Dynamic: Yes
Data Type: numeric
Default Value: 3000
Default Table Value: 3000
Range: -1 to 9223372036854775807
Deprecated: ,
Removed:
1 Inserts the last table.
2 Inserts the first table and loops again.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: -1 to 2
Deprecated: ,
Removed:
1 Use real table.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: -1 to 1
Deprecated: ,
Removed:
Default Value: 20
Range: 1 to 4294967295
Removed:
Default Value: 20
Range: 1 to 4294967295
Removed:
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
Default Session Value: OFF
Dynamic: Yes
Data Type: numeric
Default Value: 1
Range: 0 to 1
Introduced: ,
Scope: Global, Session
Dynamic: Yes
Data Type: boolean
Default Value: OFF
Dynamic: Yes
Data Type: boolean
Default Value: ON
Dynamic: Yes
Data Type: boolean
Default Value: OFF
1 Passes HANDLER to the remote backend server.
2 Converts SQL statements to HANDLER statements.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 0
Default Table Value: 0
Range: -1 to 3
DSN Parameter Name: uhd
Deprecated: ,
Removed:
1 Transmits the UDF.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: -1
Default Table Value: 1
Range: -1 to 1
DSN Parameter Name: upu
1, 2 : Synchronizes the snapshot using LOCK TABLES and FLUSH TABLES statements at the XA transaction level. This is only effective when the spider_use_all_conns_snapshot system variable is set to 1.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: 0 to 2
1 Uses the table character set.
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 1
Default Table Value: 1
Range: -1 to 1
DSN Parameter Name: utc
Removed:
Default Value: 604800
Range: -1 to 2147483647
Introduced:
Command line: --spider-xa-register-mode=#
Scope: Global, Session
Dynamic: Yes
Data Type: numeric
Default Value: 1
Range: 0 to 1
Introduced:
Deprecated: ,