Benchmark Results

This section is for the posting of benchmark results

Benchmarks and Long Running Tests

Here you will find details about our automated benchmark runs and long-running tests. Feel free to suggest other benchmarks and tests. You can also send us your findings from benchmarks and tests that you have run.

sysbench v0.5 - 3x Five Minute Runs on work with 5.2-wl86

3x Five Minute Runs on work with 5.2-wl86 key cache partitions on and off

MariaDB 5.2-wl86 sysbench benchmark comparison with key_cache_partitions off and 7, in %

Each test was run three times for 5 minutes.

Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  delete            -18.36  -20.66  -11.32    5.42   -2.91  -14.62   -3.47
  insert             -2.38  -30.11   -1.64   -0.98   -1.19    0.12   -2.37
  oltp_complex_ro     0.16    2.61    4.03    2.99    3.10    5.73   20.95
  oltp_complex_rw   Dup key errors (due to sysbench)
  oltp_simple        -1.24    1.86   11.14   10.69   16.11   17.16   14.31
  select             -0.22    2.00   11.42   10.31   15.58   17.10   14.31
  update_index       -9.34   15.75   -0.36  -10.33    1.94    2.44   41.44
  update_non_index    0.73    1.04   11.12   17.32    5.30   -0.24   -9.55

 (MariaDB 5.2-wl86 key_cache_partitions off q/s /
  MariaDB 5.2-wl86 key_cache_partitions=7 q/s * 100)

key_buffer_size = 32M

Benchmark was run on work: Linux openSUSE 11.1 (x86_64), dual socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data_dir on a single disk.

MariaDB and MySQL were compiled with

BUILD/compile-amd64-max

MariaDB revision was:

lp:~maria-captains/maria/maria-5.2-wl86

revno: 2742
committer: Igor Babaev <igor@askmonty.org>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
  WL#86: Partitioned key cache for MyISAM.
  This is the base patch for the task.

sysbench was run with the following parameters:

and this variable part of the parameters:

Configuration used for MariaDB:

This page is licensed: CC BY-SA / Gnu FDL

sysbench v0.5 - Single Five Minute Runs on T500 Laptop

MariaDB/MySQL sysbench benchmark comparison in %

Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  delete             92.42   82.60   88.05   89.85   94.68   98.75   97.77
  insert            100.70   99.90  103.21   89.51   85.16  105.39  108.51
  oltp_complex_ro   101.09  101.83  100.32  103.78  102.10  101.29  102.92
  oltp_complex_rw    95.04   90.22   91.84   88.78  100.98  101.96  101.60
  oltp_simple       100.29   99.90  101.69  102.22  102.61  102.18  101.49
  select            101.57  101.73  100.26  102.15  101.99  102.39  102.09
  update_index       96.01  103.06  105.89  108.35  108.13  104.36  101.61
  update_non_index   99.85  102.05  110.76  119.51  119.69  118.25  122.77

(MariaDB q/s / MySQL q/s * 100)

Run on a Lenovo ThinkPad T500 with a dual-core 2.80GHz CPU and 4GB RAM

MariaDB and MySQL were compiled with

BUILD/compile-amd64-max

MariaDB revision was:

revno: 2818 timestamp: Wed 2010-02-17 21:10:02 +0100

MySQL revision was:

revno: 3360 [merge] timestamp: Wed 2010-02-17 18:48:40 +0100

sysbench was run with these parameters:

and this variable part of the parameters:

Configuration used for MariaDB and MySQL:

This page is licensed: CC BY-SA / Gnu FDL

sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 a

sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 key cache partitions off, 8, and 32 and key buffer size 400

MariaDB sysbench benchmark comparison for key_cache_partitions in % with key_buffer_size = 400MB

Each test was run 3 times for 15 minutes with 3 minutes warmup.

Benchmark was run on perro: Linux openSUSE 11.1 (x86_64), single socket dual-core Intel 3.2GHz with 1MB L2 cache, 2GB RAM, data_dir on a 2-disk software RAID 0

MariaDB and MySQL were compiled with

MariaDB revision was:

sysbench was run with the following parameters:

and these variable parameters:

Configuration used for MariaDB:

This page is licensed: CC BY-SA / Gnu FDL

sysbench v0.5 - 3x Five Minute Runs on work with 5.1 vs. 5.2-wl86

3x Five Minute Runs on work with 5.1 vs. 5.2-wl86 key cache partitions off

MariaDB 5.1 vs. 5.2-wl86 sysbench benchmark comparison in %

Each test was run three times for 5 minutes.

Benchmark was run on work: Linux openSUSE 11.1 (x86_64), dual socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data_dir on a single disk.

MariaDB and MySQL were compiled with

MariaDB 5.1 revision was:

MariaDB 5.2-wl86 revision was:

sysbench was run with the following parameters:

and this variable part of the parameters

Configuration used for MariaDB:

This page is licensed: CC BY-SA / Gnu FDL

sysbench v0.5 - Three Times Five Minutes Runs on work with 5.1.42

MariaDB/MySQL sysbench benchmark comparison in %

Each test was run three times for 5 minutes.

Benchmark was run on work: Linux openSUSE 11.1 (x86_64), dual socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data_dir on a single disk.

MariaDB and MySQL were compiled with

MariaDB revision was:

MySQL revision was:

sysbench was run with these parameters:

and this variable part of the parameters:

Configuration used for MariaDB and MySQL:

This page is licensed: CC BY-SA / Gnu FDL

sysbench v0.5 - 3x 15 Minute Runs on perro with 5.2-wl86 b

3x 15 Minute Runs on perro with 5.2-wl86 key cache partitions off, 8, and 32 and key buffer size 75

MariaDB sysbench benchmark comparison for key_cache_partitions in % with key_buffer_size = 75MB

Each test was run 3 times for 15 minutes with 3 minutes warmup.

Benchmark was run on perro: Linux openSUSE 11.1 (x86_64), single socket dual-core Intel 3.2GHz with 1MB L2 cache, 2GB RAM, data_dir on a 2-disk software RAID 0

MariaDB and MySQL were compiled with

MariaDB revision was:

sysbench was run with the following parameters:

and the following variable parameters

Configuration used for MariaDB:

This page is licensed: CC BY-SA / Gnu FDL

--oltp-table-size=20000000 \  # 20 mio rows
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000"
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--key_buffer_size=32M \
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=$TEMP_DIR"
#  --key_cache_partitions=7 \
--oltp-table-size=2000000 \
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=InnoDB \
--mysql-user=root \
--mysql-engine-trx=yes
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--skip-grant-tables \
--language=./sql/share/english \
--datadir=$DATA_DIR \
--tmpdir=$TEMP_DIR \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--query_cache_size=0 \
--query_cache_type=0 \
--innodb_data_home_dir=$DATA_DIR \
--innodb_data_file_path=ibdata1:128M:autoextend \
--innodb_log_group_home_dir=$DATA_DIR \
--innodb_buffer_pool_size=1024M \
--innodb_additional_mem_pool_size=32M \
--innodb_log_file_size=256M \
--innodb_log_buffer_size=16M \
--innodb_flush_log_at_trx_commit=1 \
--innodb_lock_wait_timeout=50 \
--innodb_doublewrite=0 \
--innodb_flush_method=O_DIRECT \
--innodb_thread_concurrency=0 \
--innodb_max_dirty_pages_pct=80"
Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  oltp_complex_ro
      8 / off        -0.78   -0.42   -0.18   -0.49   -1.03   -0.64    1.08
     32 / off        -0.38   -0.30    0.55   -0.39   -0.75   -0.05    2.49
  oltp_simple
      8 / off        -1.19   -2.20   -0.74   -2.74   -1.54    0.28   -1.46
     32 / off        -1.24   -1.22    0.33   -0.13    0.11    2.09   -1.34
  select
      8 / off        -0.71   -1.68   -1.48   -2.05    0.94   -2.93   -0.18
     32 / off        -0.71   -1.33   -2.11   -0.63   -0.40  -19.68* -11.45*
  update_index
      8 / off        -1.30    4.37  -14.69*  -2.56   17.69*  -1.14    2.82
     32 / off        -1.47    7.03*   0.71   -0.72   15.61*   1.61    0.33

 ( 8/off*100)-100
 (32/off*100)-100

 * means unusually high STDEV (see the OO.org spreadsheet for details)

 off means key_cache_partitions off
   8 means key_cache_partitions = 8
  32 means key_cache_partitions = 32
BUILD/compile-amd64-max
revno: 2742
committer: Igor Babaev <igor@askmonty.org>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
  WL#86: Partitioned key cache for MyISAM.
  This is the base patch for the task.
Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  delete            107.28   94.70   98.10  107.12   93.59   89.24   86.89
  insert            103.15  105.13  101.75  102.78  101.52  100.29  100.89
  oltp_complex_ro   101.31  101.77  100.41   98.72   98.53  101.59  100.44
  oltp_complex_rw   Dup key errors (due to sysbench)
  oltp_simple       102.28  100.76  102.70  100.94  101.05  101.81  102.06
  select            100.88  101.05  100.48  101.61  101.48  101.87  101.44
  update_index       97.57   96.81   93.58  102.43   89.19  107.63   88.29
  update_non_index  101.58   83.24  110.46   94.52  106.33  103.87  115.22

(MariaDB 5.1 key_cache_partitions off q/s /
 MariaDB 5.2-wl86 key_cache_partitions off q/s * 100)

key_buffer_size = 512M
BUILD/compile-amd64-max
revno: 2821
committer: Sergei Golubchik <sergii@pisem.net>
branch nick: maria-5.1
timestamp: Tue 2010-02-23 13:04:58 +0100
message:
  fix for a possible DoS in the my_net_skip_rest()
Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  delete             98.99   86.56   97.42  102.60  101.25   98.91   99.99
  insert             99.20   97.52   98.18   99.01   99.32   99.76   99.36
  oltp_complex_ro   100.34   99.60   98.97  100.34   99.37   99.98  100.25
  oltp_complex_rw   115.90  101.87  101.93  100.78  100.45   95.67  105.08
  oltp_simple       100.09   99.82   99.73   99.57   99.57  101.48  100.59
  select             99.72   99.83   98.85   99.92  101.29   99.34  100.11
  update_index      112.62  101.40   99.31  100.21   98.15   99.12   99.98
  update_non_index   99.36   99.28  100.20   87.68   97.09  102.04   99.91

(MariaDB q/s / MySQL q/s * 100)
BUILD/compile-amd64-max
-rtag:5.1.42
-rtag:5.1.42
Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  oltp_complex_ro
      8 / off        3.05   -0.10    0.57    1.00   -0.05    0.15    2.50
     32 / off        3.04  -26.61*   0.89   -0.02    1.03    0.44    2.94
  oltp_simple
      8 / off       -0.95   -1.10   -1.17    1.36   -2.65   -0.59   -1.01
     32 / off        0.76   -0.02    0.66    2.75    1.96    3.05   -0.86
  select
      8 / off       -1.45   -0.68   -2.31  -27.61*  -0.52   -3.97   -0.24
     32 / off       -0.87   -1.63   -1.15   -2.10    0.44    1.12    1.39
  update_index
      8 / off       -2.55    8.29*   3.14    4.16    1.13    1.95    1.29
     32 / off       -2.27    9.46*  -0.12    2.64    0.69   -7.26*  -0.24
 
 ( 8/off*100)-100
 (32/off*100)-100
 
 * means unusually high STDEV

 off means key_cache_partitions off
   8 means key_cache_partitions = 8
  32 means key_cache_partitions = 32
BUILD/compile-amd64-max
revno: 2742
committer: Igor Babaev <igor@askmonty.org>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
  WL#86: Partitioned key cache for MyISAM.
  This is the base patch for the task.
--oltp-table-size=20000000 \       # 20 million rows.
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000 \
--rand-seed=303
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--datadir=/mnt/data/sysbench/data \
--language=./sql/share/english \
--key_buffer_size=400M \
--key_cache_partitions=32 \        # Off | 8 | 32
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=/tmp/mysql.sock \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=/mnt/data/sysbench
lp:~maria-captains/maria/maria-5.2-wl86

revno: 2742
committer: Igor Babaev <igor@askmonty.org>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
  WL#86: Partitioned key cache for MyISAM.
  This is the base patch for the task.
--oltp-table-size=20000000 \  # 20 mio rows
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000"
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--datadir=$DATA_DIR \
--language=./sql/share/english \
--key_buffer_size=512M \
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=$TEMP_DIR"
#  --key_cache_partitions=7 \
--oltp-table-size=2000000 \
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=InnoDB \
--mysql-user=root \
--mysql-engine-trx=yes
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--skip-grant-tables \
--language=./sql/share/english \
--datadir=$DATA_DIR \
--tmpdir=$TEMP_DIR \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--query_cache_size=0 \
--query_cache_type=0 \
--innodb_data_home_dir=$DATA_DIR \
--innodb_data_file_path=ibdata1:128M:autoextend \
--innodb_log_group_home_dir=$DATA_DIR \
--innodb_buffer_pool_size=1024M \
--innodb_additional_mem_pool_size=32M \
--innodb_log_file_size=256M \
--innodb_log_buffer_size=16M \
--innodb_flush_log_at_trx_commit=1 \
--innodb_lock_wait_timeout=50 \
--innodb_doublewrite=0 \
--innodb_flush_method=O_DIRECT \
--innodb_thread_concurrency=0 \
--innodb_max_dirty_pages_pct=80"
--oltp-table-size=20000000 \       # 20 million rows.
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000 \
--rand-seed=303
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--datadir=/mnt/data/sysbench/data \
--language=./sql/share/english \
--key_buffer_size=75M \
--key_cache_partitions=32 \        # Off | 8 | 32
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=/tmp/mysql.sock \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=/mnt/data/sysbench

Benchmarks

Articles about the performance of MariaDB.

mariadb-tools

All of the helper and wrapper scripts we use for benchmarks and tests are available in the mariadb.org-tools project on GitHub.

This page is licensed: CC BY-SA / Gnu FDL

run-sql-bench.pl

run-sql-bench.pl is a Perl script for automating runs of sql-bench (you can find sql-bench in the MariaDB source tree).

run-sql-bench.pl can be found in the mariadb-tools project on Launchpad. Once you have a copy of mariadb-tools, you'll find the script, and its configuration directories, in the sql-bench directory. For the purposes of this article, wherever you located your local branch of mariadb-tools will be called ${BASE_DIR}.
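
For example, a local copy can be fetched with Bazaar (assuming the project's Launchpad branch is lp:mariadb-tools, the name used elsewhere in these pages):

bzr branch lp:mariadb-tools
cd mariadb-tools    # this checkout is what ${BASE_DIR} refers to below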

The run-sql-bench.pl script is located at '${BASE_DIR}/sql-bench/run-sql-bench.pl'.

Example configuration scripts used for different runs can be found in the various subdirectories of ${BASE_DIR}/sql-bench/

Performance of MEMORY Tables

sysbench v0.5 - Single Five Minute Runs on perro

MariaDB/MySQL sysbench benchmark comparison in %

Each test was run for 5 minutes.

Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  delete            103.72  101.84  106.56  102.80   94.19   86.23   65.13
  insert            102.01   95.04   97.44   89.00   82.42   81.82   85.63
  oltp_complex_ro   104.21  104.98  105.30  102.67  102.69  102.95  101.10
  oltp_complex_rw   105.08  104.34  103.60  102.90  100.76   98.41   89.94
  oltp_simple       100.66  100.44  102.82  104.23  103.08  100.55   95.90
  select            102.93  101.56  103.70  104.18  102.25  100.65   97.33
  update_index      101.74   92.33  101.69   93.09   76.45   73.67   72.88
  update_non_index  101.58   98.13   98.91   92.32   84.00   76.75   74.19

(MariaDB q/s / MySQL q/s * 100)

Benchmark was run on perro: Linux openSUSE 11.1 (x86_64), single socket dual-core Intel 3.2GHz with 1MB L2 cache, 2GB RAM, data_dir on a 2-disk software RAID 0

MariaDB and MySQL were compiled with

BUILD/compile-amd64-max

MariaDB revision was:

revno: 2821
committer: Sergei Golubchik <sergii@pisem.net>
branch nick: maria-5.1
timestamp: Tue 2010-02-23 13:04:58 +0100
message:
fix for a possible DoS in the my_net_skip_rest()

MySQL revision was:

sysbench was run with these parameters:

and this variable part of the parameters:

Configuration used for MariaDB and MySQL:

This page is licensed: CC BY-SA / Gnu FDL

sysbench v0.5 - Single Five Minute Runs on work

MariaDB/MySQL sysbench benchmark comparison in %

Each test was run for 5 minutes.

Number of threads
                     1       4       8       16      32      64      128
 sysbench test
  delete            121.52  144.77  117.70  115.15  100.48   75.39   66.56
  insert            114.89  181.50  118.06  136.00  125.53  141.83  113.88
  oltp_complex_ro   103.13  100.99   94.65  104.14   97.87   90.18   79.93
  oltp_complex_rw   131.65  149.90  120.88  128.58  116.71   89.92   80.63
  oltp_simple       102.32  102.57   97.33   96.34   93.99   78.81   59.71
  select            102.12  102.05   96.64   97.28   93.55   81.53   59.83
  update_index      114.08  103.98  115.59  124.90  123.51  104.38   99.11
  update_non_index  134.04  147.94  150.91  150.04  152.12  108.34   89.24

The insert result at 4 threads is a glitch.

(MariaDB q/s / MySQL q/s * 100)

Benchmark was run on work: Linux openSUSE 11.1 (x86_64), dual socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data_dir on a single disk.

MariaDB and MySQL were compiled with

BUILD/compile-amd64-max

MariaDB revision was:

revno: 2821
committer: Sergei Golubchik <sergii@pisem.net>
branch nick: maria-5.1
timestamp: Tue 2010-02-23 13:04:58 +0100
message:
fix for a possible DoS in the my_net_skip_rest()

MySQL revision was:

sysbench was run with these parameters:

and this variable part of the parameters:

Configuration used for MariaDB and MySQL:

This page is licensed: CC BY-SA / Gnu FDL

revno: 3360 [merge]
author: hery.ramilison@sun.com
committer: MySQL Build Team <build@mysql.com>
branch nick: mysql-5.1
timestamp: Wed 2010-02-17 18:48:40 +0100
message:
Merge from mysql-5.1.44-release
--oltp-table-size=2000000 \
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=InnoDB \
--mysql-user=root \
--mysql-engine-trx=yes
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--skip-grant-tables \
--language=./sql/share/english \
--datadir=$DATA_DIR \
--tmpdir=$TEMP_DIR \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--query_cache_size=0 \
--query_cache_type=0 \
--innodb_data_home_dir=$DATA_DIR \
--innodb_data_file_path=ibdata1:128M:autoextend \
--innodb_log_group_home_dir=$DATA_DIR \
--innodb_buffer_pool_size=1024M \
--innodb_additional_mem_pool_size=32M \
--innodb_log_file_size=256M \
--innodb_log_buffer_size=16M \
--innodb_flush_log_at_trx_commit=1 \
--innodb_lock_wait_timeout=50 \
--innodb_doublewrite=0 \
--innodb_flush_method=O_DIRECT \
--innodb_thread_concurrency=0 \
--innodb_max_dirty_pages_pct=80"
revno: 2929
committer: Alexander Nozdrin <alik@sun.com>
branch nick: mysql-trunk
timestamp: Sat 2010-02-20 12:26:22 +0300
message:
Fix default.conf.
--oltp-table-size=2000000 \
--max-time=300 \
--max-requests=0 \
--mysql-table-engine=InnoDB \
--mysql-user=root \
--mysql-engine-trx=yes
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--skip-grant-tables \
--language=./sql/share/english \
--datadir=$DATA_DIR \
--tmpdir=$TEMP_DIR \
--socket=$MY_SOCKET \
--table_open_cache=512 \
--thread_cache=512 \
--query_cache_size=0 \
--query_cache_type=0 \
--innodb_data_home_dir=$DATA_DIR \
--innodb_data_file_path=ibdata1:128M:autoextend \
--innodb_log_group_home_dir=$DATA_DIR \
--innodb_buffer_pool_size=1024M \
--innodb_additional_mem_pool_size=32M \
--innodb_log_file_size=256M \
--innodb_log_buffer_size=16M \
--innodb_flush_log_at_trx_commit=1 \
--innodb_lock_wait_timeout=50 \
--innodb_doublewrite=0 \
--innodb_flush_method=O_DIRECT \
--innodb_thread_concurrency=0 \
--innodb_max_dirty_pages_pct=80"

To run the run-sql-bench.pl script, do the following:

  1. Optionally do some code changes in that tree and commit your changes

  2. Edit run-sql-bench.pl to set internal options, especially the "my $path" variable.

  3. Create a ${BASE_DIR}/sql-bench/conf/host.cnf file for your system. An easy way to do this is by duplicating one of the example host.cnf files:

Edit the file to customize it for your system.

  4. Create ${HOSTNAME}.cnf files under the ${BASE_DIR}/sql-bench/basic, ${BASE_DIR}/sql-bench/debug, ${BASE_DIR}/sql-bench/debug-full, and ${BASE_DIR}/sql-bench/O2 directories, depending on which automated tests you want to run. Use the example files as a base and customize them for your system.

  5. Run the run-sql-bench.pl script:

  • --repository is the MariaDB tree to use and compile; the script will also look here for sql-bench.

  • --sql-bench-options is mostly used in testing and debugging cases where we want to have short run times. For instance, using --small-test or --small-table.

  • You can separate several sql-bench options with spaces like so:

The results

Results are stored at the location specified by the $sql_bench_results variable in the ${BASE_DIR}/sql-bench/conf/hostname.cnf file for your host. Results are organized in subdirectories with the following schema:

Future plans

  • Crash and error detection and reporting.

  • One should be able to specify a test name for each file (run-all-tests --suffix='_xxxx')

This page is licensed: CC BY-SA / Gnu FDL

mariadb-tools

Select Random Ranges and Select Random Point

  • select_random_ranges (select 10 ranges with a delta as parameter)

  • select_random_points (select 100 random points)

Findings:

select_random_ranges

  • A delta of 100 for the ranges gives 3 - 6% performance gain

  • A delta of 50 for the ranges gives 3 - 15% performance gain

  • A delta of 5 for the ranges gives up to 70% performance gain

  • A delta of 1 million shows no difference at all.

select_random_points

  • We see up to 150% performance gain fetching index only

  • We see up to 50% performance gain fetching index and data

The absolute numbers are highly dependent on RAM:

  • We see an up to 250% performance difference on a 2GB system compared to a 4GB system.

MariaDB and MySQL were compiled with

MariaDB revision was:

sysbench was run with the following parameters:

and the following variable parameters

Configuration used for MariaDB:

This page is licensed: CC BY-SA / Gnu FDL

Benchmarking Aria

We have not yet had time to benchmark properly. Here are some things that have been discussed on the maria-discuss email list.

Aria used for internal temporary tables

By default Aria (instead of MyISAM) is used for the internal temporary tables when MEMORY tables overflow to disk or MEMORY tables can't be used (for example when you are using temporary results with BLOBs). In most cases Aria should give you better performance than using MyISAM, but this is not always the case.

CREATE TABLE `t1` (`id` INT(11) DEFAULT NULL, `tea` TEXT)
  ENGINE=MyISAM DEFAULT CHARSET=latin1;
-- seed the table with a single row first (values are arbitrary),
-- then double it by repeating the INSERT ... SELECT below
INSERT INTO t1 VALUES (1, 'tea');
INSERT INTO t1 SELECT rand()*2e8, repeat(rand(), rand()*64) FROM t1;

Repeat the last statement until the table has 2097152 rows.

The queries tested:

Q1: SELECT id, tea from t1 group by left(id,1) order by null;
Q2: SELECT id, count(*), tea from t1 group by left(id,1) order by null;
Q3: SELECT id, tea from t1 group by left(id,2) order by null;
Q4: SELECT id, count(*), tea from t1 group by left(id,2) order by null;
Q5: SELECT id, tea from t1 group by id % 100 order by null;
Q6: SELECT id, count(*), tea from t1 group by id % 100 order by null;

Results (times in seconds, lower is better):

Test   Aria 8K page size   Aria 2K page size   MyISAM
Q1      3.08                2.41                2.17
Q2      6.24                5.21               12.89
Q3      4.87                4.05                4.04
Q4      8.20                7.04               15.14
Q5      7.10                6.37                6.28
Q6     10.38                9.09               17.00

The good news is that for common GROUP BY queries that use summary functions there is close to a 50% speedup when using Aria for internal temporary tables.

Note that queries Q1, Q3 and Q5 are not typical queries, as there are no sum functions involved. In this case rows are just written to the tmp tables and there are no updates. As soon as there are summary functions and updates, the new row format in Aria gives close to a 50% speedup.

The above table also shows how the page size (determined by the system variable) affects the performance. The reason for the difference is that there is more data to move to and from the page cache when inserting keys. (When reading data we normally do not copy pages.) The bigger page size, however, allows longer keys and fewer index levels, so for bigger data sets the difference should be smaller. It is possible to optimize Aria in the future to not copy pages from the page cache for index writes either, and then this difference should disappear.

The default page size for Aria is 8K.
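
Illustrative only: in current MariaDB releases the page size is controlled by the aria_block_size startup variable (the name is an assumption for this older text); e.g. to compare 2K pages against the 8K default, one could start mysqld with:

--aria_block_size=2048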

If you want to run MariaDB with MyISAM for temporary tables, don't use the configure option '--with-aria-tmp-tables' when building MariaDB.

This page is licensed: CC BY-SA / Gnu FDL

Recommended Settings for Benchmarks

Running benchmarks requires a lot of different settings. In this article we collect our best known settings and recommendations.

Hardware and BIOS Settings

We have had good experiences with Intel's hyperthreading on newer Xeon CPUs. Please turn on hyperthreading in your BIOS.

NUMA

The NUMA architecture attaches resources (most importantly: memory) to individual NUMA nodes (typically: NUMA node = CPU socket). This results in a performance penalty when a CPU core from one NUMA node accesses memory from another NUMA node.

The NUMA topology can be checked with the numactl command:

The Linux kernel uses ACPI tables from BIOS to detect if the hardware is NUMA. On NUMA hardware extra optimizations kick in:

  • if a task has been scheduled on a certain NUMA node, the scheduler tries to put it on the same node again in the future

  • if a task running on a certain NUMA node allocates memory, the kernel tries hard to map physical memory from the same NUMA node

This results in all kinds of weird behavior when you run one big process (mysqld) that consumes most of the memory. In such cases it is recommended to either turn off NUMA (BIOS or kernel command line) or prefix such problem processes with numactl --interleave=all. You can enable this by running with the --numa-interleave option.
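
A minimal sketch of the interleaving workaround (the mysqld_safe invocation is illustrative):

numactl --interleave=all ./bin/mysqld_safe --defaults-file=./my.cnf &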


Linux Kernel Settings

See .

InnoDB Settings

Set innodb_buffer_pool_size to about 80% of RAM, or leave <5G of RAM free (on large-RAM systems). Use less if lots of connections are used.

Set innodb_log_file_size to be larger than the amount of writes in the test run, or at least sufficient to cover several minutes of the test run.

MyISAM Settings

General Settings

should be the same as (unless using thread pools).

This page is licensed: CC BY-SA / Gnu FDL

Benchmark Builds

When you build for benchmarks, it's important to use consistent compile time settings across different versions and even products (i.e. when comparing MySQL and MariaDB).

MariaDB and MySQL are now built using cmake. This makes it hard to fine-tune settings, because when you choose a predefined build configuration (recommended: RelWithDebInfo), other settings like CFLAGS are overwritten by those from the CMake files.

There are more pain points with cmake:

  • cmake uses a different install layout than autotools builds

  • the OQGraph engine is included by default, but often fails due to mismatching Boost libraries

  • make install tries to create directories in system locations (/etc/my.cnf.d etc.), which fails as an ordinary user

  • CMakefiles for different products sometimes use different CFLAGS

So here is my build script that fixes all those things.

The script should be run from a subdirectory of the source tree, i.e.:

This page is licensed: CC BY-SA / Gnu FDL

sysbench Benchmark Setup

For our automated MariaDB/MySQL sysbench benchmark tests, we use sysbench from lp:sysbench. This page describes the basic parameters and configuration we use.
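
For example, a local copy of the benchmark tool can be fetched with Bazaar:

bzr branch lp:sysbench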

You can find the automation wrapper scripts we use for running sysbench in lp:mariadb-tools.

Current general parameters

cp -avi ${BASE_DIR}/sql-bench/conf/pitbull.cnf ${BASE_DIR}/sql-bench/conf/${HOSTNAME}.cnf
cd ${BASE_DIR}/sql-bench/; ./run-sql-bench.pl --repository=[/path/to/bzr/repository] --sql-bench-options=[additional sql-bench-options] --debug=[yes|no]
--sql-bench-options="--small-test --small-table"
sql-bench-results-dir/${HOSTNAME}/YYYY-MM-DD
BUILD/compile-amd64-max
#!/bin/bash

INSTDIR=${1:?usage: $0 install-dir}

CFLAGS="-O3 -g -fno-omit-frame-pointer -fno-strict-aliasing -DNDEBUG -DDBUG_OFF"
CXXFLAGS="$CFLAGS -felide-constructors"

CMAKE_LAYOUT_OPTS="-DINSTALL_LAYOUT=RPM -DINSTALL_SCRIPTDIR=bin \
-DINSTALL_MYSQLDATADIR=var -DINSTALL_SBINDIR=libexec \
-DINSTALL_SUPPORTFILESDIR=share -DINSTALL_SYSCONFDIR=etc \
-DINSTALL_SYSCONF2DIR=etc/my.cnf.d -DCMAKE_INSTALL_PREFIX=$INSTDIR \
-DMYSQL_DATADIR=$INSTDIR/var"

CMAKE_FEATURE_OPTS="-DWITH_READLINE=1 -DWITHOUT_OQGRAPH_STORAGE_ENGINE=1"

CMAKE_BUILD_OPTS="-DCMAKE_BUILD_TYPE=RelWithDebInfo"

cmake .. $CMAKE_BUILD_OPTS $CMAKE_LAYOUT_OPTS $CMAKE_FEATURE_OPTS \
-DCMAKE_C_FLAGS_RELWITHDEBINFO="$CFLAGS" \
-DCMAKE_CXX_FLAGS_RELWITHDEBINFO="$CXXFLAGS"

make && make install
tar xfz mariadb-10.0.7.tar.gz
cd mariadb-10.0.7
mkdir build
cd build
#... run the build script above
Current InnoDB parameters

Compile

Install MariaDB or MySQL to /usr/local and make a symlink to /usr/local/mysql. Please use a non-debug version! On Mac OS X we currently need automake 1.10.
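
For example (the version directory is illustrative):

ln -s /usr/local/mariadb-5.2.4 /usr/local/mysql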

Start and prepare database to use

Tests

We use the latest sysbench with Lua scripting support. Therefore the test names differ from sysbench <= 0.4. To get reasonable results we use a run time of 5 minutes.

We run the tests with 1, 4, 8, 16, 32, 64, and 128 threads.

Custom added tests

We created a couple of custom tests for SysBench:

  • select_random_ranges.lua

  • select_random_points.lua

Both of these have been added to the latest SysBench v0.5 repository.

This page is licensed: CC BY-SA / Gnu FDL

lp:mariadb-tools
revno: 2742
committer: Igor Babaev <igor@askmonty.org>
branch nick: maria-5.2-keycache
timestamp: Tue 2010-02-16 08:41:11 -0800
message:
  WL#86: Partitioned key cache for MyISAM.
  This is the base patch for the task.
--oltp-table-size=20000000 \       # 20 million rows.
--max-requests=0 \
--mysql-table-engine=MyISAM \
--mysql-user=root \
--mysql-engine-trx=no \
--myisam-max-rows=50000000 \
--rand-seed=303
--num-threads=$THREADS --test=${TEST_DIR}/${SYSBENCH_TEST}
--no-defaults \
--datadir=/mnt/data/sysbench/data \
--language=./sql/share/english \
--key_buffer_size=512M \
--key_cache_partitions=32 \        # Off | 32 | 64
--max_connections=256 \
--query_cache_size=0 \
--query_cache_type=0 \
--skip-grant-tables \
--socket=/tmp/mysql.sock \
--table_open_cache=512 \
--thread_cache=512 \
--tmpdir=/mnt/data/sysbench
table_open_cache = 512
thread_cache = 512
query_cache_size = 0
query_cache_type = 0
innodb_data_home_dir = /data/mysql/
innodb_data_file_path = ibdata1:128M:autoextend
innodb_log_group_home_dir = /data/mysql/
innodb_buffer_pool_size = 1024M
innodb_additional_mem_pool_size = 32M
innodb_log_file_size = 256M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_doublewrite = 0
innodb_flush_method = O_DIRECT
innodb_thread_concurrency = 0
innodb_max_dirty_pages_pct = 80
./autogen.sh
./configure \
  --without-drizzle \
  --with-mysql-includes=/usr/local/mysql/include/mysql \
  --with-mysql-libs=/usr/local/mysql/lib/mysql
make
optionally: make install
mysqladmin -uroot drop sbtest
mysqladmin -uroot create sbtest
NUM_THREADS="1 4 8 16 32 64 128"
SYSBENCH_TESTS="delete.lua \
  insert.lua \
  oltp_complex_ro.lua \
  oltp_complex_rw.lua \
  oltp_simple.lua \
  select.lua \
  update_index.lua \
  update_non_index.lua"

NUM_THREADS=1
TEST_DIR=${HOME}/work/monty_program/sysbench/sysbench/tests/db

./sysbench \
  --test=${TEST_DIR}/oltp_simple.lua \
  --oltp-table-size=2000000 \
  --max-time=300 \
  --max-requests=0 \
  --mysql-table-engine=InnoDB \
  --mysql-user=root \
  --mysql-engine-trx=yes \
  --num-threads=$NUM_THREADS \
  prepare

./sysbench \
  --test=${TEST_DIR}/oltp_simple.lua \
  --oltp-table-size=2000000 \
  --max-time=300 \
  --max-requests=0 \
  --mysql-table-engine=InnoDB \
  --mysql-user=root \
  --mysql-engine-trx=yes \
  --num-threads=$NUM_THREADS \
  run


More details can be found here
configuring Linux for MariaDB

MariaDB 5.3/MySQL 5.5 Windows performance patches

I just backported the Windows performance patches I had done for 5.5 back to . There will be a bit more in MariaDB than in MySQL 5.5, but more on this later.

First, I feel the Windows performance improvements in 5.5 were never adequately described, so here is the redux. For those familiar with Windows systems programming, MySQL code used to offer plenty of low-hanging performance fruit. I picked some of it back in my days at MySQL/Sun. The resulting benchmark curve became really nice: look at Calvin's blog entry.

If the graphs in this blog look familiar to you, it is because they were often used by Oracle marketing as proof of big-O's positive influence on MySQL code :)

There were 3 Windows performance-related patches. I comment on the bugs' history a little bit, too.

  • Bug#24509. The fix removed the limit of 2048 open MyISAM files and, as a nice side-effect, allowed for a much bigger table cache. When mysqld starts, it checks the maximum open files value and corrects the value of the table cache if max_open_files is low or max_connections is high. This is also what happened during benchmarks. If you look at the read-only benchmark graph in Calvin's blog above, you'll notice a drop around 64 concurrent users. No wonder: the MySQL server recalculated the table cache size, setting it to the absolute minimum, that is, to 64.

The fix was to create our own sort-of C runtime library on top of pure Win32, which is capable of handling more than 2048 open files (16K by default). Some other things are also done nicer than in Microsoft's C runtime, e.g. there are no locks, and there is an acceptable pread()/pwrite() implementation. The main advantage, as I said, is being able to have a large table cache - for this, rewriting the C runtime is likely overkill, but I did not come up with anything better.

  • . This bug fixed a lot of questionable places in InnoDB that were probably written back in the NT 3.1 days.

First it is important to understand how the InnoDB "mutex" structure is acquired. The details are hairy: the mutex is a composite structure which has a real OS mutex (known under Windows as CRITICAL_SECTION) plus an InnoDB event (known under Windows as an event). There are a couple of variations on the implementation - the mutex can be an interlocked (aka atomic, for my Unix friends) variable, and under Unix the event is represented as a condition variable.

Acquisition is done in 2 steps - first, a trylock on the OS mutex is performed, possibly several times within a loop; if unsuccessful, an event is reserved in a global table of events known as the "sync array", and the thread enters a waiting state on that event. A mutex unlock wakes up the waiters, if there are any. Do not ask me why the implementation is so complicated, it just is :) Maybe this design helps to find deadlocks.

A variation of this implementation: instead of a trylock on the mutex, there might be a compare_exchange instruction on an interlocked (atomic) variable.

Back to Windows, the implementation of the above exposed a couple of interesting self-compensating bugs.

  1. First, I fixed os_mutex_trylock() to be what it really means. The implementation was EnterCriticalSection, which is "try very hard" and actually acquires the lock. A more conscious trylock would be TryEnterCriticalSection. When I fixed that, contrary to my expectation, this made mysqld really slow. When trylock() failed, InnoDB started to enter code paths it had never seen before, for example reserving space in the mentioned "sync array". Access to the sync array is protected by a so-called "slow lock", and this showed up very often in the profiler. The next step was fixing the "slow lock".

  2. "slow Innodb mutex" was implemented as kernel object aka Windows mutex (for my Unix friends this is sort of SysV semaphore). It can be used to synchronize processes but is an absolute overkill for synchronization of threads inside the same process. It was a "really slow mutex". Changing this to CRITICAL_SECTION made it faster however...

  3. When all of the above was fixed, I found out that Windows events (the events mentioned above) did not really scale well in many-thread scenarios. On newer Windows (Vista+), there is a CONDITION_VARIABLE that is documented to scale better, and measuring also showed that it scaled really well. So I used condition variables when possible, which is ironic, because InnoDB events were really modeled after Windows events.

So, the story about "atomics did not work well on Windows" was a cumulative effect of different things.

Prior to that patch, atomics were not enabled on Windows. Once atomics were enabled, the implementation of fast mutexes did not use CRITICAL_SECTION, but a compare_exchange instruction. The ingenious "trylock_veryhard" we have seen in step 1 above is not used anymore; instead it is a quite correct "try" lock. Once try_lock() began to fail with many concurrent threads, the overhead of the sync array guard implemented as a Windows kernel object that we have seen in step 2 became apparent, and the less-than-efficient Windows events mentioned in step 3 finished the picture.

This patch was merely to compensate for the negative effects of the 5.5 metadata lock on MyISAM benchmarks, and the fix used native Vista performance primitives. The patch per se is not interesting, and repeats a lot of what was done for InnoDB. What was great was a discussion prior to the patch between myself, Davi and Dmitry on different implementations of reader-writer locks, including 2 home-baked ones, and one by .

Without doubt, the discussions around that were a highlight of my very short stint at Oracle. Also, if you want to get a MySQL-classic-style code review with 17 things to fix, of which at least 10 would be marked with "Coding Style" (yes, both words capitalized), try to get Dmitry Lenev as a reviewer, he's great - this is the proof. Anyway, the patch improves MyISAM throughput by 10-20%, which I think is quite OK. Somehow those percents were subsequently eaten by MDL though :)

Notes

Taken from a Facebook note by Vladislav Vaintroub.

This page is licensed: CC BY-SA / Gnu FDL

Sysbench Results

Results from various Sysbench runs. The data is in OpenDocument Spreadsheet format (.ods).

For reference, the "perro" and "work" systems were configured as follows:


perro

Linux openSUSE 11.1 (x86_64), single socket dual-core Intel 3.2GHz with 1MB L2 cache, 2GB RAM, data_dir on a 2-disk software RAID 0

work

Linux openSUSE 11.1 (x86_64), dual socket quad-core Intel 3.0GHz with 6MB L2 cache, 8 GB RAM, data_dir on a single disk.

sysbench v0.5 results

  • Single Five Minutes Runs on T500 Laptop, OO.org spreadsheet: Sysbench_five_minutes_mariadb_mysql_t500.ods

  • Single Five Minutes Runs on perro, OO.org spreadsheet: Sysbench_five_minutes_mariadb_mysql_perro.ods

  • Single Five Minutes Runs on work, OO.org spreadsheet: Sysbench_five_minutes_mariadb_mysql_work.ods

  • Three Times Five Minutes Runs on work with 5.1.42, OO.org spreadsheet: Sysbench_five_minutes_mariadb_mysql_work_5.1.42.ods

This page is licensed: CC BY-SA / Gnu FDL

MariaDB 5.3 - Asynchronous I/O on Windows with InnoDB

Consider two pseudo-code implementations of an event handling loop handling I/O completion on Windows.

  • Using Windows events

  • I/O Completion port based

Which one is more efficient? The right answer is: the I/O completion port based one. This is because:

  1. the number of outstanding events a thread can handle is not restricted by a constant, as in the WaitForMultipleObjects() case.

  2. if there are several io_handler() threads running, they load-balance, since every I/O can be "dequeued" by GetQueuedCompletionStatus in any I/O handler thread. With WaitForMultipleObjects(), the thread that will dequeue the I/O result is predetermined for each I/O.

InnoDB has used asynchronous file I/O on Windows since the dawn of time, probably since NT 3.1. For some reason unknown to me (I can only speculate that Microsoft documentation was not good enough back then), InnoDB has always used the method with events, and this led to relatively complicated designs - if you see "segment" mentioned in os0file.c or fil0fil.c, this is mostly due to the fact that the number of events WaitForMultipleObjects() can handle is fixed.

I changed the async I/O handling for XtraDB in MariaDB 5.3 to use completion ports rather than the wait-multiple technique. The results of a synthetic benchmark are good.

The test that I ran was sysbench 0.4 "update_no_key".

I do understand that sysbench does not resemble any real-life load, and I'm ready to admit cheating with durability for this specific benchmark, but this is equal-opportunity cheating: all 3 versions ran with the same parameters.

What do I refer to as durability cheating:

  1. using , which, for me, is OK for many scenarios

  2. "Switch off Windows disk flushing" setting, which has the effect of not flushing data in the disk controller (file system caching is not used here anyway). This setting is only recommended for battery backed disks, my own desktop does not have it, of course.

However, if I had not done the above, I would have been measuring the latency of FlushFileBuffers() in my benchmark, which was not what I wanted. I wanted to stress the asynchronous I/O.

And here is the obligatory graph:

Notes

This is taken from an original Facebook note from Vladislav Vaintroub, and it can be found:

It is also worth noting a note from Vlad about the graph: "The graph is given for 5.2, because I developed that patch for 5.2. I pushed it into 5.3 though :)"

This page is licensed: CC BY-SA / Gnu FDL

DBT3 Example Preparation Time

This page contains database preparation and creation times that were discovered while working on the .

Database creation times

Setup                                                    Time                            Size on Disk

MariaDB + MyISAM s10 on 4CPU 4GB RAM developer machine   96m 22s = 1h 36m 22s            17 GB
MariaDB + MyISAM s100 on facebook-maria2                 424m 4s = 7h 4m 4s              162 GB
MariaDB + InnoDB s100 on facebook-maria2                 577m 42s = 9h 37m 42s
+ InnoDB s30 on facebook-maria1                          122m 39.753s = 2h 2m 40s        71 GB
+ InnoDB s30 on facebook-maria1                          150m 46.218s = 2h 30m 46s       71 GB
MySQL 5.6.4 + InnoDB s30 on facebook-maria1              140m 18.272s = 2h 20m 18s       71 GB
MariaDB + MyISAM s30 on pitbull with pre_create_PK       > 3h 30m (with RAM limited to 11 GB)
MariaDB + MyISAM s30 on facebook-maria1                  99m 8s = 1h 39m 8s              49 GB
MySQL 5.6.4 + MyISAM s30 on facebook-maria1              114m 17s                        49 GB
Postgre s30 on pitbull                                   758 min = 12h 38min             75 GB

Dataload creation time

Setup                                        Time              Size on Disk

Scale 10 on 4CPU 4GB RAM developer machine   8 min 44 sec      10.4 GB
Scale 30 on pitbull.askmonty.org             20 min 28 sec     32 GB (with RAM limited to 11 GB)
Scale 100 on facebook-maria2                 78 min 27.601s    106 GB
Scale 30 on facebook-maria1                  23 min 40 sec

RQG Performance Comparisons

Performance testing

    The performance/perfrun.pl script executes each query against a set of two servers and reports the outcome.

  • --input-directory contains the queries to be run, one query per file. Alternative sources for queries will be made available in the future;

~ $numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 12278 MB
node 0 free: 11624 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 12288 MB
node 1 free: 11778 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10
void io_thread() {
    HANDLE *handles = new HANDLE[32];
    ...
    for (;;) {
       // wait for any of the 32 event handles to be signaled
       DWORD index = WaitForMultipleObjects(32, handles, FALSE, INFINITE);
       DWORD num_bytes;
       // find the file and OVERLAPPED structure for this index
       GetOverlappedResult(file, overlapped, &num_bytes, TRUE);
       // handle the I/O represented by overlapped
    }
}

void io_thread() {
    for (;;) {
       DWORD num_bytes;
       ULONG_PTR key;
       OVERLAPPED *overlapped;
       // dequeue the next completed I/O from the completion port
       if (GetQueuedCompletionStatus(io_completion_port, &num_bytes,
            &key, &overlapped, INFINITE)) {
          // handle the I/O represented by overlapped
       }
    }
}

Three Times Five Minutes Runs on work with 5.2-wl86 key_cache_partitions on and off, OO.org spreadsheet: Sysbench_five_minutes_mariadb-5.2-wl86_key_cache_partitions_on_off_work.ods

  • Three Times Five Minutes Runs on work with 5.1 vs. 5.2-wl86 key_cache_partitions off, OO.org spreadsheet: Sysbench_five_minutes_mariadb-5.2-wl86_key_cache_partitions_on_off_work.ods

  • Three Times Fifteen Minutes Runs on perro with 5.2-wl86 key_cache_partitions off, 8, and 32 and key_buffer_size 400, OO.org spreadsheet: Sysbench_fifteen_minutes_mariadb-5.2-wl86_key_cache_partitions_off_8_32_kbs_400.ods

  • Three Times Fifteen Minutes Runs on perro with 5.2-wl86 key_cache_partitions off, 8, and 32 and key_buffer_size 75, OO.org spreadsheet: Sysbench_fifteen_minutes_mariadb-5.2-wl86_key_cache_partitions_off_8_32_kbs_75.ods

  • select_random_ranges and select_random_points, OO.org spreadsheet: Sysbench_select_random_ranges_points.ods

  • select_100_random_points.lua result on perro with key_cache_partitions off and 32, OO.org spreadsheet: Sysbench_v0.5_select_100_random_points.ods

  • select_random_points.lua --random-points=50 result on perro with key_cache_partitions off and 32, OO.org spreadsheet: Sysbench_v0.5_select_50_random_points.ods

  • select_random_points.lua --random-points=10 result on perro with key_cache_partitions off and 32, OO.org spreadsheet: Sysbench_v0.5_select_10_random_points.ods

  • select_random_points.lua --random-points=10, 50, and 100 results on perro with key_cache_segments off, 32, and 64 OO.org spreadsheet: Sysbench_v0.5_select_random_points_10_50_100_perro.ods

  • select_random_points.lua --random-points=10, 50, and 100 results on pitbull with key_cache_segments off, 32, and 64 OO.org spreadsheet: Sysbench_v0.5_select_random_points_10_50_100_pitbull.ods

  • Reenabled implementation of fast mutexes as atomic variables. Prior to the patch, precompiler flags to enable atomics were commented out with "Windows atomics do not work well" in CMakeLists.txt. Great comment, given that unlike software developers, atomic instructions have no preferences for the OSes they are being used on :)

  • Bug#52102
    Bug#56585
    Vance Morrison
    118295
    note.php?note_id=238505812835782


    This page is licensed: CC BY-SA / Gnu FDL


    DBT3 Automation Scripts


    --dsn1 and --dsn2 specify the locations of the two servers being compared in the Perl DBI URL format. If the queries are not fully qualified, the DSNs must contain the database name. The username and the password are also provided via the DSN. For example:

    • --output-file=... specifies a file where matching queries will be dumped in a form suitable for follow-up filtering with perfreport

    • if no --filter is specified, all queries from the input will be reported;

    Performance reporting

    The performance/perfreport.pl script takes an output file created by perfrun and prints out its contents, possibly applying a filter in the process:

    If no --filter is specified, all queries present in the input file are printed.

    Filters

    The queries reported by the system can be filtered out using Perl-based filtering expressions. A filter can be applied during data collection, at which point it determines what information is stored in the output file, and at reporting time, when it determines which queries are printed.

    If you do not specify a filter, it is assumed that you want to output all queries.

    The following variables can participate in filter expressions:

    • Variables from SHOW SESSION STATUS, except:

      • variables that are not reset at the start of the query, which includes Com_*, Uptime, Opened_files and the like;

      • variables that relate to the operation of SSL encryption or the query cache;

    • Variables from SHOW GLOBAL STATUS LIKE 'Innodb_%'

    For each MySQL status variable, 4 Perl variables are provided: the value of the MySQL variable from each server, their absolute difference, and their ratio. For example, for the Innodb_rows_read MySQL variable, you can use Innodb_rows_read1, Innodb_rows_read2, Innodb_rows_read_delta and Innodb_rows_read_ratio in your Perl filter expressions.
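
    For example, a hypothetical filter (not from the original page) using the derived ratio variable could be:

    --filter='$Innodb_rows_read_ratio >= 2'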

    In addition to the MySQL status variables, the framework provides the following additional variables:

    • $Execution_time{1|2|delta|ratio} reports the time the query took to run from start to end in seconds. The value is truncated at milliseconds in order to prevent exorbitant performance ratios from being reported on very fast queries;

    • $Temperature can either be cold or warm depending on whether the first execution of the query is being processed, or the second.

    • $Query contains the text of the query, in order to enable filtering such as $Query !~ m{l_receiptDATE}

    For example, the following command-line option:

    Will only process or report queries that took more than 0.1 sec to execute on a warm server.

    On-screen Reporting Format

    The ASCII on-screen reporting format is as follows:

    Only variables whose values are different between the two servers are reported. In this particular example, the query ran slower on 5.3.0-MariaDB with warm cache and caused twice as many Innodb_rows_read.

    On-disk Data Storage Format

    The on-disk storage format is Data::Dumper objects, wrapped in <![CDATA[ ... ]]> tags, without the file being full-blown XML. The serialized representation is created by GenTest::QueryPerformanceDelta::serialize() and is read by using eval() in performance/perfreport.pl.

    See also:

    • RQG Documentation

    • RQG Extensions for MariaDB Features

    • Optimizer Quality

    • QA Tools

    This page is licensed: CC BY-SA / Gnu FDL

    note.php?note_id=238687382817625
    mariadbasynciowindowsinnodb
    perl performance/perfrun.pl --input-directory=... --dsn1=... --dsn2=... --output-file=... --filter=...
    --dsn1=dbi:mysql:host=127.0.0.1:port=19300:user=root:database=test \
    --dsn2=dbi:mysql:host=127.0.0.1:port=19302:user=root:database=test
    perl performance/perfreport.pl --input-file=... --filter=...
    --filter='($Execution_time1 > 0.1 || $Execution_time2 > 0.1) && $Temperature eq "warm"'
    Query: SELECT  l_linenumber FROM lineitem  WHERE l_suppkey  IN ( 10 , 1 ) AND l_shipdate NOT IN ( '1993-06-06' , '1998-02-04' , '1992-01-08' ) AND l_linenumber NOT IN ( 8 , 7 ) AND l_quantity <> 3 AND ( l_orderkey NOT IN ( 1298 , 93 ) OR l_suppkey = 10 ) AND ( l_suppkey BETWEEN 4 AND 10 + 4 OR l_linenumber = 2 AND l_commitDATE BETWEEN '1993-06-27' AND '1993-09-05' AND l_linenumber BETWEEN 3 AND 9 + 9 );
    Cache: warm
                                        5.3.0-MariaDB-     5.2.6-MariaDB-          Delta              Ratio
    -------------------------------------------------------------------------------------------------------
    Execution time                               0.011s            0.004s           -0.007s            0.36
    Innodb_buffer_pool_read_requests          2171              1836              -335                 0.85
    Handler_read_rnd                           583                 0              -583
    Handler_read_next                          583               602                19                 1.03
    Innodb_rows_read                          1166               602              -564                 0.52
    4       16      64      256     1024
    mariadb-5.2             17812   22378   23436   7882    6043
    mariadb-5.2-fix         19217   24302   25499   25986   25925
    mysql-5.5.13            12961   20445   16393   14407   5343
    Worklog Quality Checklist Template
    MariaDB 5.5.18

    Segmented Key Cache Performance

    Testing method for segmented key cache performance

    We used SysBench v0.5 from Launchpad to test the performance of the MyISAM storage engine of -gamma.

    As wrapper scripts for automated running of SysBench we used the sysbench/ directory from MariaDB Tools.

    To test that splitting the key cache's global mutex into several mutexes helps under multi-user load, we wrote a new SysBench test called select_random_points.lua. We used one big table and selected random points with an increasing number of concurrent users.
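
    A sketch of how such a run can be invoked (option values are illustrative; --random-points is the test's own parameter, also used in the runs below):

    ./sysbench --test=${TEST_DIR}/select_random_points.lua \
      --oltp-table-size=20000000 --mysql-table-engine=MyISAM --mysql-user=root \
      --random-points=100 --num-threads=$THREADS --max-time=1200 --max-requests=0 run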

    Main testing outcomes

    We see up to 250% performance gain depending on the number of concurrent users.

    Detailed testing outcomes

    On our machine pitbull

    On pitbull with --random-points=10

    In relative numbers:

    On pitbull with --random-points=50

    In relative numbers:

    On pitbull with --random-points=100

    In relative numbers:

    Detailed numbers of all runs on pitbull

    You can find the absolute and relative numbers in our OpenOffice.org spread sheet here:

    On our machine perro

    On perro with --random-points=10

    In relative numbers:

    On perro with --random-points=50

    In relative numbers:

    On perro with --random-points=100

    In relative numbers:

    Detailed numbers of all runs on perro

    You can find the absolute and relative numbers in our OpenOffice.org spread sheet here:

    Table and query used

    Table definition:

    Query used:

    The ? parameters were replaced by random numbers when running the SysBench test. We used 10, 50, and 100 random points in our tests.

    We inserted 20 million rows using random data, which gave us a data and index file size of:

    We chose our key buffer size to be big enough to hold the index file.

    Testing environment

    MariaDB sources

    We used -gamma with the following revision from our Launchpad repository:

    Compiling MariaDB

    We compiled MariaDB using this line:

    MariaDB runtime options

    We used the following configuration for running MariaDB:

    SysBench v0.5 select_random_points.lua options

    We ran the SysBench v0.5 select_random_points.lua test with the following options:

    We tested with an increasing number of concurrent users, with a warm-up time of 8 minutes and a run time of 20 minutes:

    We also tested an increasing number of random points:
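
    A sketch of the sweep (the random-point values are the ones named on this page; the thread list is an assumption based on the general sysbench setup above):

    for POINTS in 10 50 100; do
      for THREADS in 1 4 8 16 32 64 128; do
        ./sysbench --test=${TEST_DIR}/select_random_points.lua \
          --random-points=$POINTS --num-threads=$THREADS run
      done
    done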

    Kernel parameters

    IO scheduler

    For optimal I/O performance when running a database we use the noop scheduler. You can check your scheduler setting with:

    For instance, it should look like this output:
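
    For example (assuming the data directory is on sda; the active scheduler is shown in brackets):

    $ cat /sys/block/sda/queue/scheduler
    [noop] anticipatory deadline cfq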

    You can find detailed notes about Linux schedulers here: .

    Open file limits

    Having a lot of concurrent connections can hit the open file limit on your system. On most Linux systems the open file limit is 1024, which may not be enough. Please set your open file limit higher by editing your system's limits configuration file

    and adding a line like
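
    A sketch, assuming the usual /etc/security/limits.conf (the limit value is an example, not the original recommendation):

    *    soft    nofile    32768
    *    hard    nofile    32768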

    Your ulimit -a output should look like this afterwards:

    Machines used for testing

    perro

    pitbull

    This page is licensed: CC BY-SA / Gnu FDL

    DBT3 Benchmark Results InnoDB

    Introduction

    This page shows the results of benchmarking the following configurations:

  • Beta + XtraDB with all optimizations (optimizer_switch) set to ON

  • Beta + XtraDB with Igor's suggested optimizations

  • MySQL 5.5.13 + InnoDB

  • MySQL 5.6.2 + InnoDB

    The test is performed using the automation script under /mariadb-tools/dbt3_benchmark.

    Details about this automation script could be found in DBT3 automation scripts

    Hardware

    The tests were performed on pitbull.askmonty.org. It has the following parameters:

    • CPU: two sockets x hexa-core Intel Xeon X5660 = 12 CPUs; with hyperthreading on: 24 virtual CPUs

    • Memory: 23GB

    • Logical disk: HDD 500.1 GB as software RAID1

      • device size with M = 1000*1000: 500107 MBytes (500 GB)

      • cache/buffer size = 16384 KBytes

      • Nominal Media Rotation Rate: 7200

    • Operating System: Ubuntu 10.10 (x86_64)

    Scale factor 10

    This test is performed with the following parameters:

    • Scale factor: 10

    • Query timeout: 600 sec.

    • Cluster size: 3

    • Total DB size on disk: about 24GB

    Steps to reproduce

    Follow the instructions in DBT3 automation scripts to prepare the environment for the test.

Before you run the test, make sure that the settings in the test configuration files match your prepared environment. For more details on the test configuration, please refer to the test configuration parameters.

For this test, set the SCALE_FACTOR parameter to 10 in the following files before running the test:

    • mariadb_innodb_igor_s1.pm

    • mariadb_innodb_s1.pm

    • mysql_5_5_15_innodb_s1.pm

    • mysql_5_6_2_innodb_s1.pm

NOTE: In future versions the scale factor will be passed as an input parameter to launcher.pl instead of being a test configuration parameter.

After the environment is prepared, the following command should be executed in the shell:

perl launcher.pl --test={PROJECT_HOME}/mariadb-tools/dbt3_benchmark/tests/mariadb_myisam_s1.pm \
 --test={PROJECT_HOME}/mariadb-tools/dbt3_benchmark/tests/mysql_5_5_15_myisam_s1.pm \
 --test={PROJECT_HOME}/mariadb-tools/dbt3_benchmark/tests/mysql_5_6_2_myisam_s1.pm \
 --results-output-dir=path/to/results/output/dir

    Results

Here is a graph of the results:

(graph: innodb-test-scale-10-timeout-600)

NOTE: Queries that are cut off in the graph timed out at the 600-second limit for this test.

    Here are the actual results:

Query execution times are given in seconds; "n/a" means the query did not finish within the 600-second timeout.

| Configuration | 1.sql | 2.sql | 3.sql | 4.sql | 5.sql | 6.sql | 7.sql | 8.sql | 9.sql | 10.sql | 11.sql | 12.sql | 13.sql | 14.sql | 15.sql | 16.sql | 17.sql | 18.sql | 19.sql | 20.sql | 21.sql | 22.sql |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MariaDB 5.3 Beta + XtraDB with Igor's suggested optimizations | 163.667 | n/a | n/a | n/a | n/a | 114 | n/a | n/a | n/a | n/a | 538 | 280.667 | n/a | 257 | n/a | 60 | 456 | n/a | n/a | n/a | n/a | 8 |
| MySQL 5.5.15 + InnoDB | 104 | n/a | n/a | n/a | n/a | 103 | n/a | n/a | n/a | n/a | 534.667 | 177 | n/a | n/a | n/a | n/a | 476 | n/a | n/a | n/a | n/a | 8 |
| MySQL 5.6.2 + InnoDB | 103 | n/a | n/a | n/a | n/a | 104 | n/a | n/a | n/a | n/a | 531 | 168 | n/a | n/a | n/a | 55 | 460.667 | n/a | n/a | n/a | n/a | 6 |
| MariaDB 5.3 Beta + XtraDB with all optimizations set to ON | 165 | n/a | 424.333 | n/a | n/a | 114.333 | n/a | n/a | n/a | n/a | 536 | 173 | n/a | n/a | n/a | 52 | 452 | n/a | n/a | n/a | n/a | 6 |

The archived folder with all the results and details for this benchmark can be downloaded from: Res_myisam_timeout_120_s10_2011-09-15_190613.zip

    Comments

From the graph we can see that for the first query MySQL performed about 37% better than MariaDB.

For the third query, on the other hand, MariaDB with all optimizations set to ON is the only configuration that returned results before the timeout was exceeded, which means it performs at least 30% better. There is also some option that could optimize Igor's set of settings even further for this query. For exact numbers, the same test should be repeated with a longer timeout.

For query #6 it turns out that both MySQL 5.5.15 and 5.6.2 are about 10% faster than MariaDB.

For query #12 Igor's settings could be readjusted, so that the query execution time could drop by about 38%.

Igor's settings turned out to be the only configuration that could finish query #14 before the timeout was exceeded.

Query #16 shows that MySQL made a great performance improvement from version 5.5.15 to 5.6.2, making the query finish at least ten times faster.

For all the other queries the results are either statistically the same, or the queries timed out for all configurations and the test should be repeated with a longer timeout limit.

    Summary

Most of the queries timed out within the given limit of 10 minutes per query, so no proper comparison summary can be made until a new test with a longer timeout is performed.

    This page is licensed: CC BY-SA / Gnu FDL


    Threadpool Benchmarks

Here are some benchmarks of the development threadpool code.

    The benchmarks were run on three machines:

    • facebook-maria1 (Linux, 16 cores)

    • pitbull (Linux, 24 cores)

    • windows (Windows, 8 cores)

Sysbench 0.4 was used to run some "unit" OLTP tests (point-select and update-nokey), as well as the "classic" OLTP-readonly and OLTP-readwrite tests. All tests were run with the number of concurrent clients ranging from 16 to 4096, with a warm cache, and with a sysbench table of 1M records.
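For illustration, a single OLTP read-only run at one concurrency level looks roughly like this with sysbench 0.4 (the table size and run time here are assumptions, and the MySQL connection options are omitted):

sysbench --test=oltp --oltp-table-size=1000000 prepare
sysbench --test=oltp --oltp-table-size=1000000 --oltp-read-only=on \
  --num-threads=256 --max-time=300 --max-requests=0 run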

    The results are quite different on all of the machines tested (the machines are very different, in terms of cores, IO etc), yet threadpool has a positive effect on all 3 machines, and the positive effect seems to grow with the number of cores.

    Some notes on how the benchmarks were run:

1. The benchmark client and server used different CPUs: 'taskset' (Linux) or 'start /affinity' (Windows) was used to run the benchmark client on #CPUs/4 of the cores, while the rest of the CPUs were used by the server. On the Linux boxes, --thread_pool_size=<N> (where N is the number of cores dedicated to the server) was used.

2. innodb_flush_log_at_trx_commit=0 and innodb_flush_method=ALL_O_DIRECT were used to avoid extensive fsyncing, which is acceptable for the purposes of this test.

3. Every "write" benchmark (oltp_rw and update_nokey) started with a new server (i.e. kill mysqld, remove InnoDB files, and restart mysqld for each test). Every "read" benchmark, on the other hand, reused the same running server instance. Starting afresh with a new server on write benchmarks is done mainly to eliminate the effects of the purge lag.

4. The results are in queries per second (QPS).
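As a sketch of point 1 on a 24-core Linux box (the exact CPU ranges and option values here are illustrative, not the ones from the original runs), the client gets a quarter of the CPUs and the server the rest, with the threadpool sized to the server's share:

# server pinned to 18 cores (6-23), threadpool sized to match
taskset -c 6-23 ./bin/mysqld_safe --thread_pool_size=18 \
  --innodb_flush_log_at_trx_commit=0 --innodb_flush_method=ALL_O_DIRECT &

# sysbench client pinned to the remaining 6 cores (0-5)
taskset -c 0-5 sysbench --test=oltp --num-threads=512 run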

    OLTP_RO

    OLTP_RO facebook-maria1

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 3944 | 4725 | 4878 | 4863 | 4732 | 4554 | 4345 | 4103 | 1670 |
| threadpool | 3822 | 4955 | 4991 | 5017 | 4908 | 4716 | 4610 | 4307 | 2962 |

    OLTP_RO pitbull

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 6754 | 7905 | 8152 | 7948 | 7924 | 7587 | 5313 | 3827 | 208 |
| threadpool | 6566 | 7725 | 8108 | 8079 | 7976 | 7793 | 7429 | 6523 | 4456 |

    OLTP_RO Windows

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 1822 | 1831 | 1825 | 1829 | 1816 | 1879 | 1866 | 1783 | 987 |
| threadpool | 2019 | 2049 | 2024 | 1992 | 1924 | 1897 | 1855 | 1825 | 1403 |

    OLTP_RW

    OLTP_RW facebook-maria1

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 2833 | 3510 | 3545 | 3420 | 3259 | 2818 | 1788 | 820 | 113 |
| threadpool | 3163 | 3590 | 3498 | 3459 | 3354 | 3117 | 2190 | 1064 | 506 |

    OLTP_RW pitbull

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 4561 | 5316 | 5332 | 3512 | 2874 | 2476 | 1380 | 265 | 53 |
| threadpool | 4504 | 5382 | 5694 | 5567 | 5302 | 4514 | 2548 | 1186 | 484 |

    OLTP_RW Windows

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 1480 | 1498 | 1472 | 1477 | 1456 | 1371 | 731 | 328 | 82 |
| threadpool | 1449 | 1523 | 1527 | 1492 | 1443 | 1409 | 1365 | 1240 | 862 |

    POINT_SELECT

    POINT_SELECT facebook-maria1

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 91322 | 113116 | 115418 | 114484 | 111169 | 104612 | 26902 | 12843 | 5038 |
| threadpool | 100359 | 115618 | 118115 | 120136 | 119165 | 113931 | 110787 | 109970 | 104985 |

    POINT_SELECT pitbull

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 148673 | 161547 | 169747 | 172083 | 69036 | 42041 | 21775 | 4368 | 282 |
| threadpool | 143222 | 167069 | 167270 | 165977 | 164983 | 158410 | 148690 | 147107 | 143934 |

    POINT_SELECT Windows

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 39734 | 42885 | 44448 | 44478 | 41720 | 38196 | 36844 | 35404 | 23306 |
| threadpool | 42143 | 45679 | 47066 | 47753 | 46720 | 44215 | 43677 | 43093 | 44364 |

    UPDATE_NOKEY

    UPDATE_NOKEY facebook-maria1

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 60165 | 65761 | 67727 | 57232 | 47612 | 26341 | 8981 | 3265 | 389 |
| threadpool | 65092 | 68683 | 67053 | 64141 | 64815 | 63047 | 63346 | 63638 | 62843 |

    UPDATE_NOKEY pitbull

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 65213 | 71680 | 19418 | 13008 | 11155 | 8742 | 5645 | 635 | 332 |
| threadpool | 64902 | 70236 | 70037 | 68926 | 69930 | 69929 | 67099 | 62376 | 17766 |

    UPDATE_NOKEY Windows

| concurrent clients | 16 | 32 | 64 | 128 | 256 | 512 | 1024 | 2048 | 4096 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| thread per connection | 24790 | 25634 | 25639 | 25309 | 24754 | 19420 | 5249 | 2361 | 824 |
| threadpool | 25251 | 25259 | 25406 | 25327 | 24850 | 23818 | 23137 | 23003 | 22047 |

    This page is licensed: CC BY-SA / Gnu FDL


    DBT3 Automation Scripts

DBT-3 (OSDL Database Test 3) is a workload tool for the Linux kernel that OSDL (Open Source Development Labs, Inc.) developed based on TPC-H, which is provided by the Transaction Processing Performance Council (TPC).

    DBT-3, like TPC-H, simulates an actual decision-making support system and models complex business analysis applications that perform data processing jobs for making better business decisions. By running the workload that DBT-3 simulates, it is possible to verify and measure the performances of the Linux kernel in an actual decision-making support system.

DBT-3 uses the "scale factor" (SF) as a stress indicator for the system. By varying the SF, the size of the database can be scaled to SF times its base size (for TPC-H data this is roughly 1 GB of raw data per unit of SF).

    The tests performed by DBT-3 comprise the three tests listed below. DBT-3 obtains the execution times of these three tests as well as the system status information and database statistics information.

1. Load test

• Enters the data to be used for the Power and Throughput tests into the database. Makes a bulk insert of the huge CSV data corresponding to the scale factor into the database.

2. Power test

• Performs 22 complex queries.

3. Throughput test

• Performs the same 22 queries as in the Power test simultaneously in more than one process.

For the purpose of this task, only the Power test is performed, over a previously prepared database with various scale factors. The execution time of each query is measured and stored in a database. Later, the results of one whole test with all 22 queries are rendered as a histogram comparing the different configurations.

    Benchmark environment preparation

    sudo rights

    The user that will run the benchmark must have sudo rights on the machine.

    For clearing the system caches between query runs, the automation script uses the following command:

    This command must be run with superuser rights. Even if a user supplies a password to sudo, this password expires after some timeout. In order for this command to be run without requiring password, the following line should be added to the sudoers file (edit it with the "sudo visudo" command):

    ...where 'your_username' is the user that will run the benchmark.
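A quick way to check that the NOPASSWD rule is in effect is to run the command through sudo non-interactively; with the rule in place it succeeds, without it sudo fails instead of prompting for a password:

sudo -n /sbin/sysctl vm.drop_caches=3 && echo "drop_caches works without a password"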

    Required software

    The automated DBT3 benchmark requires the following software:

      • Project home:

      • Project home:

• DBD::mysql — a Perl module used to connect to MariaDB/MySQL and PostgreSQL. To install it, use the following command:

NOTE: You may receive an error saying that CPAN could not find mysql_config. In this case you have to install the MySQL client development library. In openSUSE the command is:
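On Debian/Ubuntu machines (such as the Ubuntu 10.10 hosts used elsewhere in these benchmarks) the equivalent client development package is typically installed with:

sudo apt-get install libmysqlclient-dev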

Alternatively this module can be installed manually by following these steps:

1. Download DBD-mysql-4.020.tar.gz from CPAN and unpack it

2. Run the Makefile.PL perl script in the unpacked directory:

3. Run make to compile DBD::mysql:

4. Add the necessary paths in order to run DBD::mysql:
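After the PERL5LIB and LD_LIBRARY_PATH exports (shown in the command listing later on this page), a one-liner can confirm that Perl picks up the freshly built module; it prints the module version on success:

perl -MDBD::mysql -e 'print "DBD::mysql $DBD::mysql::VERSION\n"'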

    Tested DBMS

      • Download location: → Generally Available (GA) Releases → Linux - Generic 2.6 (x86, 64-bit), Compressed TAR Archive - downloads mysql-5.5.x-linux2.6-x86_64.tar.gz - gzipped tar file for Linux x86

      • Download location: , downloaded with Bazaar:

      • Download location:

NOTE: The DBT3 benchmark requires a lot of disk space (for example, a MySQL 5.5.x + MyISAM database with scale factor 30 takes about 50 GB). Also, some queries require temp tables under the directory set by the --tmpdir startup parameter passed to mysqld. In the prepared configuration files the temp directory points to the mysql system directory of the binary distribution, but you should make sure that there is enough free space available for the temp directory.

    Installation instructions

NOTE: The directory where all the files will be downloaded or installed will be referred to as $PROJECT_HOME. This could be, for example, ~/benchmark/dbt3.

    Download

1. Go to your project folder

2. Get the latest branch from Launchpad with Bazaar:

    Now the project for the dbt3 benchmark test will be in the following dir:

    The project dbt3_benchmark has the following directories and files:

    • config — a folder where the configuration files for MariaDB, MySQL and PostgreSQL are stored. They are divided into subfolders named 'sXX', where XX is the scale factor.

    • dbt3_mysql — a folder with all the necessary files for preparing DBT3 databases and queries for the tests with MySQL and MariaDB

    • tests — a folder where the different test configurations are stored. It contains the following directories:

    Prepare benchmark workload and queries

For the purpose of the benchmark we will only need DBGEN and QGEN from DBT3-1.9. DBGEN is a tool that generates the workload for the test and QGEN is a tool that generates the queries used for the test.

1. Go to the DBT-3 download page

2. Download the archive for DBT3 1.9 into your project folder $PROJECT_HOME

3. Unzip the archive into your project folder

4. Copy the file tpcd.h into the dbt3 folder. This step adds the necessary labels for MySQL/MariaDB when building queries.

5. Copy the file Makefile under $PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/ into the dbt3 folder

• NOTE: This step is executed only if you want to overwrite the default behavior of the PostgreSQL settings. After copying this Makefile and building the project, QGEN will be set to generate queries for MariaDB/MySQL. If you skip this step, QGEN will generate queries for PostgreSQL by default.

6. Go to $PROJECT_HOME/dbt3-1.9/src/dbgen and build the project

7. Set the variable DSS_QUERY to the folder with the template queries for MariaDB/MySQL or for PostgreSQL:

• If you want to build the queries that fit the MariaDB/MySQL dialect, execute the following command:

• If you want to use the default PostgreSQL templates, execute the following command:

8. Create a directory to store the generated queries in

9. Generate the queries

NOTE: The examples use scale factor 30. If you want a different scale, change the value of the -s parameter.

cd $PROJECT_HOME/dbt3-1.9/src/dbgen
./qgen -s 30 1 > $PROJECT_HOME/gen_query/s30-m/1.sql
./qgen -s 30 2 > $PROJECT_HOME/gen_query/s30-m/2.sql
./qgen -s 30 3 > $PROJECT_HOME/gen_query/s30-m/3.sql
./qgen -s 30 4 > $PROJECT_HOME/gen_query/s30-m/4.sql
./qgen -s 30 5 > $PROJECT_HOME/gen_query/s30-m/5.sql
./qgen -s 30 6 > $PROJECT_HOME/gen_query/s30-m/6.sql
./qgen -s 30 7 > $PROJECT_HOME/gen_query/s30-m/7.sql
./qgen -s 30 8 > $PROJECT_HOME/gen_query/s30-m/8.sql
./qgen -s 30 9 > $PROJECT_HOME/gen_query/s30-m/9.sql
./qgen -s 30 10 > $PROJECT_HOME/gen_query/s30-m/10.sql
./qgen -s 30 11 > $PROJECT_HOME/gen_query/s30-m/11.sql
./qgen -s 30 12 > $PROJECT_HOME/gen_query/s30-m/12.sql
./qgen -s 30 13 > $PROJECT_HOME/gen_query/s30-m/13.sql
./qgen -s 30 14 > $PROJECT_HOME/gen_query/s30-m/14.sql
./qgen -s 30 15 > $PROJECT_HOME/gen_query/s30-m/15.sql
./qgen -s 30 16 > $PROJECT_HOME/gen_query/s30-m/16.sql
./qgen -s 30 17 > $PROJECT_HOME/gen_query/s30-m/17.sql
./qgen -s 30 18 > $PROJECT_HOME/gen_query/s30-m/18.sql
./qgen -s 30 19 > $PROJECT_HOME/gen_query/s30-m/19.sql
./qgen -s 30 20 > $PROJECT_HOME/gen_query/s30-m/20.sql
./qgen -s 30 21 > $PROJECT_HOME/gen_query/s30-m/21.sql
./qgen -s 30 22 > $PROJECT_HOME/gen_query/s30-m/22.sql

./qgen -s 30 -x 1 > $PROJECT_HOME/gen_query/s30-m/1_explain.sql
./qgen -s 30 -x 2 > $PROJECT_HOME/gen_query/s30-m/2_explain.sql
./qgen -s 30 -x 3 > $PROJECT_HOME/gen_query/s30-m/3_explain.sql
./qgen -s 30 -x 4 > $PROJECT_HOME/gen_query/s30-m/4_explain.sql
./qgen -s 30 -x 5 > $PROJECT_HOME/gen_query/s30-m/5_explain.sql
./qgen -s 30 -x 6 > $PROJECT_HOME/gen_query/s30-m/6_explain.sql
./qgen -s 30 -x 7 > $PROJECT_HOME/gen_query/s30-m/7_explain.sql
./qgen -s 30 -x 8 > $PROJECT_HOME/gen_query/s30-m/8_explain.sql
./qgen -s 30 -x 9 > $PROJECT_HOME/gen_query/s30-m/9_explain.sql
./qgen -s 30 -x 10 > $PROJECT_HOME/gen_query/s30-m/10_explain.sql
./qgen -s 30 -x 11 > $PROJECT_HOME/gen_query/s30-m/11_explain.sql
./qgen -s 30 -x 12 > $PROJECT_HOME/gen_query/s30-m/12_explain.sql
./qgen -s 30 -x 13 > $PROJECT_HOME/gen_query/s30-m/13_explain.sql
./qgen -s 30 -x 14 > $PROJECT_HOME/gen_query/s30-m/14_explain.sql
./qgen -s 30 -x 15 > $PROJECT_HOME/gen_query/s30-m/15_explain.sql
./qgen -s 30 -x 16 > $PROJECT_HOME/gen_query/s30-m/16_explain.sql
./qgen -s 30 -x 17 > $PROJECT_HOME/gen_query/s30-m/17_explain.sql
./qgen -s 30 -x 18 > $PROJECT_HOME/gen_query/s30-m/18_explain.sql
./qgen -s 30 -x 19 > $PROJECT_HOME/gen_query/s30-m/19_explain.sql
./qgen -s 30 -x 20 > $PROJECT_HOME/gen_query/s30-m/20_explain.sql
./qgen -s 30 -x 21 > $PROJECT_HOME/gen_query/s30-m/21_explain.sql
./qgen -s 30 -x 22 > $PROJECT_HOME/gen_query/s30-m/22_explain.sql

    mkdir $PROJECT_HOME/gen_data/s30

    export DSS_PATH=$PROJECT_HOME/gen_data/s30/

    ./dbgen -vfF -s 30

    gunzip < mysql-5.5.x-linux2.6-x86_64.tar.gz |tar xf -

    $PROJECT_HOME/bin/mysql-5.5.x-linux2.6-x86_64/bin/mysqld_safe --datadir=some/data/dir &

    gunzip < mysql-5.6.x-m5-linux2.6-x86_64.tar.gz |tar xf -

    $PROJECT_HOME/bin/mysql-5.6.x-m5-linux2.6-x86_64/bin/mysqld_safe --datadir=some/data/dir &

    bzr branch lp:maria/5.3 mv 5.3/ mariadb-5.3

    cd mariadb-5.3/ ./BUILD/compile-amd64-max

    ./scripts/make_binary_distribution

    mv mariadb-5.3.x-beta-linux-x86_64.tar.gz $PROJECT_HOME/bin/ cd $PROJECT_HOME/bin/ tar -xf mariadb-5.3.x-beta-linux-x86_64.tar.gz

    $PROJECT_HOME/bin/mariadb-5.3.x-beta-linux-x86_64/bin/mysqld_safe --datadir=some/data/dir &

    LOAD DATA INFILE '/some/path/to/gen_data/nation.tbl' into table nation fields terminated by '|';

    LOAD DATA INFILE '~/benchmark/dbt3/gen_data/s30/nation.tbl' into table nation fields terminated by '|';

    cd $DB_HOME ./scripts/mysql_install_db --defaults-file=$PROJECT_HOME/mariadb-tools/dbt3_benchmark/config/s30/load_mysql_myisam_my.cnf --basedir=$DB_HOME --datadir=$PROJECT_HOME/db_data/myisam-s30/

    ./bin/mysqladmin --user=root --socket=$PROJECT_HOME/temp/mysql.sock shutdown 0

    gunzip < postgresql-9.1rc1.tar.gz |tar xf -

    mkdir $PROJECT_HOME/PostgreSQL_bin cd $PROJECT_HOME/postgresql-9.1rc1 ./configure --prefix=$PROJECT_HOME/bin/PostgreSQL_bin make make install

    mkdir $PROJECT_HOME/db_data/postgre_s30 cd $PROJECT_HOME/bin/PostgreSQL_bin ./bin/initdb -D $PROJECT_HOME/db_data/postgre_s30

    ./bin/postgres -D $PROJECT_HOME/db_data/postgre_s30 -p 54322 &

    ./bin/createdb -O {YOUR_USERNAME} dbt3 -p 54322 ./bin/psql -p 54322 -d dbt3 -f $PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/make-dbt3-db_pg.sql

    ./bin/pg_ctl -D $PROJECT_HOME/db_data/postgre_s30/ -p 54322 stop

    cd $PROJECT_HOME/bin/mariadb-5.3.x-beta-linux-x86_64

    ./scripts/mysql_install_db --datadir=$PROJECT_HOME/db_data/dbt3_results_db

    ./bin/mysqld_safe --defaults-file=$PROJECT_HOME/mariadb-tools/dbt3_benchmark/config/results_mariadb_my.cnf --port=12340 --socket=$PROJECT_HOME/temp/mysql_results.sock --datadir=$PROJECT_HOME/db_data/dbt3_results_db/ &

    ./bin/mysql -u root -P 12340 -S $PROJECT_HOME/temp/mysql_results.sock < $PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/make-results-db.sql

    ./bin/mysqladmin --user=root --port=12340 --socket=$PROJECT_HOME/temp/mysql_results.sock shutdown 0

    [common] RESULTS_DB_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/results_db_conf/results_db.conf TEST_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/test_conf/test_myisam.conf

    [mariadb_5_3] QUERIES_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/queries_conf/queries.conf DB_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/db_conf/db_mariadb_5_3_myisam.conf

    [mysql_5_5] QUERIES_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/queries_conf/queries_mysql.conf DB_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/db_conf/db_mysql_5_5_myisam.conf ...

    unlink /path/to/datadir/mysql ln -s /path/to/value/in/MYSQL_SYSTEM_DIR/mysql_mariadb_5_3 /path/to/datadir/mysql

    [db_settings] DBMS_HOME = $PROJECT_HOME/bin/mariadb-5.3.2-beta-linux-x86_64 DBMS_USER = root ...

    QUERIES_AT_ONCE = 0 CLEAR_CACHES = 1 WARMUP = 0 ...

    [queries_settings] QUERIES_HOME = /path/to/queries

    [query1] QUERY=1.sql EXPLAIN_QUERY=1_explain.sql STARTUP_PARAMS=

    [query2] QUERY=2.sql EXPLAIN_QUERY=2_explain.sql STARTUP_PARAMS=--optimizer_switch='mrr=on' --mrr_buffer_size=8M --some_startup_parmas ...

    DBMS_HOME = $PROJECT_HOME/mariadb-5.3.x-beta-linux-x86_64 DBMS_USER = root ...

    sudo /sbin/sysctl vm.drop_caches=3

    {your_username} ALL=NOPASSWD:/sbin/sysctl

    sar -u 0 2>null

    sar -b 0 2>null

    sar -r 0 2>null

    perl launcher.pl --project-home=/path/to/project/home/ --results-output-dir=/path/to/project/home/results/myisam_test --datadir=/path/to/project/home/db_data/ --test=/path/to/project/home/mariadb-tools/dbt3_benchmark/tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf --queries-home=/path/to/project/home/gen_query/ --scale-factor=30 --TIMEOUT=600

    perl launcher.pl --project-home=/path/to/project/home/ --results-output-dir=/path/to/project/home/results/innodb_test --datadir=/path/to/project/home/db_data/ --test=/path/to/project/home/mariadb-tools/dbt3_benchmark/tests/innodb_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf --queries-home=/path/to/project/home/gen_query/ --scale-factor=30 --TIMEOUT=7200 --NUM_TESTS=3

    [mariadb_5_3_new_configuration] QUERIES_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/queries_conf/queries-mariadb.conf DB_CONFIG = $PROJECT_HOME/mariadb-tools/dbt3_benchmark/tests/db_conf/db_mariadb_5_3_myisam_new_configuration.conf

    The project folder is called "dbt3_benchmark" and is under mariadb-tools.

  • dbt3-1.9

    • Download location:

  • Gnuplot 4.4 — graphics output program.

    • Project home:

  • Config::Auto — a Perl module that reads configuration files. To install it use the following command:

  • Download location:
    → Development Releases → Linux - Generic 2.6 (x86, 64-bit), Compressed TAR Archive - downloads mysql-5.6.x-m5-linux2.6-x86_64.tar.gz - gzipped tar file for Linux x86
  • MariaDB 5.3.x

    • Download location: 5.3 , downloaded with Bazaar:

  • db_conf — here are stored the database configuration files

  • queries_conf — here are stored the different queries configuration files

  • results_db_conf — here is stored the configuration of the results database

  • test_conf — here are the test configurations

  • launcher.pl — a perl script that automates the test. Details about calling and functionality of this file are listed later on this page.

  • Perl
    mariadb-tools
    mariadb-tools
    DBD::mysql
    mysql.pm
    MySQL 5.5.x
    #downloads
    MySQL 5.6.x
    MariaDB 5.5.x
    5.5
    PostgreSQL
    mariadb-tools
    DBT3-1.9
    #downloads
    sudo /sbin/sysctl vm.drop_caches=3
    'your_username' ALL=NOPASSWD:/sbin/sysctl
    sudo cpan Config::Auto
    sudo cpan DBD::mysql
    sudo zypper install libmysqlclient-devel
    perl Makefile.PL --mysql_config=/path/to/some/mysql_binary_distribution/bin/mysql_config
    make
    export PERL5LIB="/path/to/unzipped_DBD_mysql/DBD-mysql-4.020/lib"
    export LD_LIBRARY_PATH="/path/to/unzipped_DBD_mysql/DBD-mysql-4.020/blib/arch/auto/DBD/mysql/:/path/to/some/mysql_binary_distribution/lib/"
    bzr branch lp:maria/5.3
    bzr branch lp:maria/5.5
    cd $PROJECT_HOME
    bzr branch lp:mariadb-tools
    $PROJECT_HOME/mariadb-tools/dbt3_benchmark/
    cd $PROJECT_HOME
    tar -zxf dbt3-1.9.tar.gz
    cp $PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/tpcd.h $PROJECT_HOME/dbt3-1.9/src/dbgen/
    cp $PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/Makefile $PROJECT_HOME/dbt3-1.9/src/dbgen/
    cd $PROJECT_HOME/dbt3-1.9/src/dbgen
    make
    export DSS_QUERY=$PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/mysql_queries/
    export DSS_QUERY=$PROJECT_HOME/dbt3-1.9/queries/pgsql/
    mkdir $PROJECT_HOME/gen_query
    
    
    1. Generate the explain queries
    
    
    Now the generated queries for MariaDB/MySQL test are ready and are stored into
    the folder `$PROJECT_HOME/gen_query/s30-m/` (-m is for MariaDB/MySQL).
    
    
    Additional reorganization of directories is up to the user.
    
    
    1. Create a directory for the generated workload
    
    
1. Set the variable DSS_PATH to the folder where the generated table data should go. The workload for the test will be generated there.
    
    
    1. Generate the table data
    
    
    * NOTE: The example uses scale factor = `30`. If you want to change it, you should change the parameter `-s`.
    
    
    * Now the generated data load is stored into the folder set in `$DSS_PATH = $PROJECT_HOME/gen_data/`
    
    
    For the purpose of this benchmark these steps have been performed for scale
    factor 30 and are stored on facebook-maria1 in the following locations:
    
    
    * `/benchmark/dbt3/gen_data/s30` — the data load for scale factor 30
    * `/benchmark/dbt3/gen_query/s30-m` — generated queries for MariaDB/MySQL with scale factor 30
    * `/benchmark/dbt3/gen_query/s30-p` — generated queries for PostgreSQL with scale factor 30
    
    
    See [DBT3 example preparation time](dbt3-example-preparation-time.md) to see how long it would take you to
    prepare the databases for the test.
    
    
    ### Download [MySQL 5.5.x](https://dev.mysql.com/downloads/mysql/#downloads)
    
    
    1. Download the tar.gz file into your project folder `$PROJECT_HOME/bin/` for example
    
    
    1. Unzip the archive with the following command:
    
    
    Now the server could be started with the following command:
    
    
    
    ### Download [MySQL 5.6.x](https://dev.mysql.com/downloads/mysql/#downloads)
    
    
    1. Download the tar.gz file into your project folder `$PROJECT_HOME/bin/` for example
    
    
    1. Unzip the archive with the following command:
    
    
    Now the server could be started with the following command:
    
    
    
    ### Download and build [MariaDB 5.3](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/changes-improvements-in-mariadb-5-3).x / [MariaDB 5.5](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-5-series/changes-improvements-in-mariadb-5-5).x
    
    
    **NOTE:** These steps are the same for [MariaDB 5.5](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-5-series/changes-improvements-in-mariadb-5-5).x with properly replaced
    version numbers
    
    
    1. Download with Bazaar the [mariadb 5.3](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/changes-improvements-in-mariadb-5-3) project
    
    
    1. Build MariaDB
    
    
    1. Build a binary distribution tar.gz file
    
    
    1. Move the generated tar.gz file and unzip it to $PROJECT_HOME/bin from where
     it will be used by the automation script
    
    
    Now the server could be started with the following command:
    
    
    
    ### Prepare the databases for the benchmark
    
    
    **NOTE:** These instructions are the same for MariaDB, MySQL 5.5.x and MySQL
    5.6.x with changing only the database home folders, noted here as $DB_HOME (for
    example for MySQL 5.5.x $DB_HOME is
    `$PROJECT_HOME/bin/mysql-5.5.x-linux2.6-x86_64`). Also you can prepare InnoDB
    storage engine test databases. Instructions for preparing PostgreSQL could be
    found in the section for downloading, building and preparing PostgreSQL later
    on this page.
    
    
    1. Open the file `$PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/make-dbt3-db_innodb.sql` and edit the values for the call of the sql commands that look like this one:
    
    
    * They all look the same but operate with different tables.
    
    
    * Replace "/some/path/to/gen_data/" with the proper directory where the
     generated data load is stored. At the end the same command could look like
     this:
    
    
    1. Create an empty MySQL database into a folder that will be used for the benchmark
    
    
    * NOTE: For InnoDB change the defaults-file to `load_mysql_innodb_my.cnf`.
    
    
    1. Start the mysqld process`./bin/mysqld_safe --defaults-file=$PROJECT_HOME/mariadb-tools/dbt3_benchmark/config/s30/load_mysql_myisam_my.cnf --tmpdir=$PROJECT_HOME/temp/ --socket=$PROJECT_HOME/temp/mysql.sock --datadir=$PROJECT_HOME/db_data/myisam-s30/ &`
    
    
    * NOTE: For InnoDB change the defaults-file to `load_mysql_innodb_my.cnf`.
     Also make sure that you have enough space in the directory set by the
     parameter `--tmpdir`, since loading the database could take a
     lot of temporary space.
    
    
    1. Load the data into the database by executing the file `make-dbt3-db_pre_create_PK.sql` (for InnoDB) or `make-dbt3-db_post_create_PK.sql` (for MyISAM)`./bin/mysql -u root -S $PROJECT_HOME/temp/mysql.sock < $PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/make-dbt3-db_post_create_PK.sql`
    
    
    * NOTE: For faster creation, it is recommended to
     use `make-dbt3-db_pre_create_PK.sql` for loading InnoDB
     and `make-dbt3-db_post_create_PK.sql` for loading MyISAM databases.
    
    
    1. Shutdown the database server:
    
    
    Now you have a database loaded with scale 30. Its datadir is
    `$PROJECT_HOME/db_data/myisam-s30/`
    
    
    The same steps can be reproduced for different scale factors and for different
    storage engines.
    
    
    ### Download, build and prepare [PostgreSQL](https://www.postgresql.org/ftp/source/v9.1rc1/)
    
    
    1. Go to [](https://www.postgresql.org/ftp/source/v9.1rc1/)
    
    
    1. Download the file under the link postgresql-9.1rc1.tar.gz
    
    
    1. Unzip the archive to your project folder
    
    
1. Execute the following commands in the shell to install PostgreSQL:
    
    
    * NOTE: Configure script may not find the following libraries: readline
     and zlib. In that case you can run configure without these libraries by
     adding the following parameters to the command line:
     `--without-readline --without-zlib`
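Combining that note with the install commands above, the configure step without those libraries would look like this (the prefix is the one used elsewhere on this page):

./configure --prefix=$PROJECT_HOME/bin/PostgreSQL_bin --without-readline --without-zlib
make
make install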
    
    
    1. Prepare the database to test with:
    
    
    1. Start the server:
    
    
    1. Load the dataload into the DB
    
    
    * NOTE: Here under `{YOUR_USERNAME}` you should put the
     database owner.
    
    
    1. Stop the server:
    
    
    
    The steps for preparing the workload for the benchmark on facebook-maria1 are
    already made for MariaDB, MySQL and PostgreSQL. Here are the directories for
    the different DBMS, storage engines and scale factors that are prepared on
    facebook-maria1:
    
    
    * `~/benchmark/dbt3/db_data/myisam_s30`
    — datadir for MariaDB/MySQL + MyISAM with scale
     factor 30
    
    
    * `~/benchmark/dbt3/db_data/innodb_mariadb_s30`
    — datadir for MariaDB + InnoDB with scale factor 30
     (TODO)
    
    
    * `~/benchmark/dbt3/db_data/innodb_mysql_s30`
    — datadir for
     MySQL + InnoDB with scale factor 30 (TODO)
    
    
    * `~/benchmark/dbt3/db_data/postgre_s30`
    — datadir for
     PostgreSQL with scale factor 30 (TODO)
    
    
    ### Prepare the results database
    
    
    The results of the benchmark will be stored in a separate database that will be
    run by [MariaDB 5.3](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/changes-improvements-in-mariadb-5-3).x.
    
    
**NOTE:** The results database will be subject to change in future versions
    of the DBT3 benchmarking project.
    
    
    The database is created by the file
    `$PROJECT_HOME/mariadb-tools/dbt3_benchmark/dbt3_mysql/make-results-db.sql`.
    In that file you can find details about every table and column in the database.
    
    
    To prepare the database for work follow these steps:
    
    
    1. Go to [MariaDB 5.3](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/changes-improvements-in-mariadb-5-3).x installation directory
    
    
    1. Install the system database tables into the datadir for the results (for
    example `$PROJECT_HOME/db_data/dbt3_results_db`)
    
    
    1. Start mysqld for results db
    
    
    1. Install the database
    
    
    1. Shutdown the results db server:
    
    
    ## Automation script
    
    
    ### Configuring and running a benchmark
    
    
    In order to run a benchmark, one should have:
    
    
    * 5 configuration files:
    
      1. DBMS server configuration (see [#dbms-server-configuration](#dbms-server-configuration))
      1. Test configuration (see [#test-configuration](#test-configuration))
      1. Queries configuration (see [#queries-configuration](#queries-configuration))
      1. Results database configuration (see [#results-database-configuration](#results-database-configuration))
      1. Top-level configuration file that combines all of the above (see
     [#top-level-configuration](#top-level-configuration))
    * an automation script `launcher.pl` that could be found
     under `mariadb-tools/dbt3_benchmark/`
    * startup parameters that should be passed to the automation script (see
     [#script-startup-parameters](#script-startup-parameters)).
    
    
    Details about each of these is given in the following sections.
    
    
    Each benchmark is configured by a set of configuration files. One can find
    example (default) configuration files under the directory
    'mariadb-tools/dbt3_benchmark/tests'. Each configuration file has an 'ini'
    configuration syntax and is parsed by the perl automation script with the CPAN
    module [Config::Auto](https://search.cpan.org/~simon/Config-Auto-0.03/Auto.pm)
    
    
    #### Configuration keywords
    
    
Every configuration file can contain keywords that will be replaced by the
script with particular values. They are there for convenience, so that your
configuration files can stay generic with respect to the environment you have
prepared for the benchmark. These keywords are:
    
    
    * `$PROJECT_HOME` — used as the directory where the
     project '`mariadb-tools`' is located or as a base path for the whole project
     (e.g. "`DBMS_HOME = $PROJECT_HOME/bin/mariadb-5.3.x-beta-linux-x86_64`").
     It is replaced by the value set with the startup parameter 'project-home'
     passed to launcher.pl,
    
    
    * `$DATADIR_HOME` — used as the directory where the
     datadir folders are located for the benchmark (e.g.
     "`$DATADIR_HOME/myisam-s30`"). It is replaced by the value set
     with the startup parameter 'datadir-home' passed to launcher.pl.
    
    
    * `$QUERIES_HOME` — used as the directory where the
     queries are located (e.g. "`$QUERIES_HOME/s30-m`"
     — queries for MariaDB/MySQL for scale factor 30). It
     is replaced by the value set with the startup parameter 'queries-home'
     passed to launcher.pl.
    
    
    * `$SCALE_FACTOR` — the scale factor that will be
     used. It is usually a part of the name of the datadir directory (e.g.
     "`$DATADIR_HOME/myisam-s$SCALE_FACTOR`"), the queries directory (e.g.
     "`$QUERIES_HOME/s$SCALE_FACTOR-m`") or the database configuration directory
     (e.g. `$PROJECT_HOME/mariadb-tools/dbt3_benchmark/config/s$SCALE_FACTOR`).
     It is replaced by the value set with the startup parameter 'scale-factor'
     passed to launcher.pl.
    
    
Note that if any of the configuration files contains such a keyword, the
corresponding startup parameter passed to **launcher.pl** becomes required.
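To make the substitution concrete, here is an illustrative pairing (the paths are made up for this example): a queries-configuration line that uses two of the keywords, and the launcher.pl options that supply their values.

[queries_settings]
QUERIES_HOME = $QUERIES_HOME/s$SCALE_FACTOR-m

# with e.g. these launcher.pl options, QUERIES_HOME resolves to /home/user/dbt3/gen_query/s30-m:
perl launcher.pl --queries-home=/home/user/dbt3/gen_query --scale-factor=30 \
  --project-home=/home/user/dbt3 \
  --test=/home/user/dbt3/mariadb-tools/dbt3_benchmark/tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf \
  --results-output-dir=/home/user/dbt3/results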
    
    
    #### Top-level configuration
    
    
A top-level configuration file provides paths to the **Test, DBMS, Queries**
and **Results database** configuration files.


There are default configuration files in the directory
`mariadb-tools/dbt3_benchmark/tests/`; they contain the following settings:
    
    
    
    | Parameter | Description |
    | --- | --- |
    | RESULTS_DB_CONFIG | The configuration file for results DB settings |
    | TEST_CONFIG | The configuration file for the test settings |
    | QUERIES_CONFIG | The configuration file for the queries settings |
| DB_CONFIG | The configuration file for the DBMS server settings |
    
    
    
    This file has the following format:
    
    
    
    **NOTE:** The settings `RESULTS_DB_CONFIG` and `TEST_CONFIG` should be set
    under the `[common]` section. They are common for the whole test (although
    some settings from `TEST_CONFIG` could be overridden in the
    `QUERIES_CONFIG` file). All settings that combine `QUERIES_CONFIG` and
    `DB_CONFIG` should be in a separate section (e.g. `[mariadb_5_3]`).
    
    
    A test configuration is passed as an input parameter to the automation script
    with the parameter
    `--test=/path/to/some_test_configuration.conf` (see
    [#script-startup-parameters](#script-startup-parameters))
    
    
    #### DBMS server configuration
    
    
    These configuration files contain settings that describe the benchmarked DBMS.
    They are usually contained into the folder
    `mariadb-tools/dbt3_benchmark/tests/db_conf`.
    
    
    Here is the list of parameters that could be set into this configuration file:
    
    
    
    | Parameter | Description |
    | --- | --- |
| DBMS_HOME | Where the installation folder of MariaDB / MySQL / PostgreSQL is located. NOTE: The automation script uses "./bin/mysqld_safe" to start the mysqld process. So the versions of MariaDB and MySQL should be "binary distribution" ones. |
    | DBMS_USER | The database user that will be used. |
    | CONFIG_FILE | The config file that mysqld or postgres will use when starting |
    | SOCKET | The socket that will be used to start the server |
    | PORT | The port that the server will be started on |
    | HOST | The host where the server is located |
    | DATADIR | Where the datadir for mysqld or postgres is located |
    | TMPDIR | Where the temp tables will be created while sorting and grouping. |
    | DBNAME | The database (schema) name where the benchmark tables are located. |
    | KEYWORD | This text will be stored into the results database as a keyword. Also will be used as a name for a subfolder with results and statistics. |
    | DBMS | Database Management System that will be used. Possible values: "MySQL", "MariaDB" and "PostgreSQL" |
    | STORAGE_ENGINE | The storage engine that was used (MyISAM, InnoDB, etc.) |
    | STARTUP_PARAMS | Any startup parameters that will be used while starting the mysqld process or postgres process. Same format as given on the command line. |
    | GRAPH_HEADING | The heading of the graphic for that particular test. |
    | MYSQL_SYSTEM_DIR | See "MYSQL_SYSTEM_DIR note", below. |
    | READ_ONLY | If set to 1, mysqld process will be started with '--read-only' startup parameter |
| PRE_RUN_SQL | SQL commands that are run prior to each query run |
| POST_RUN_SQL | SQL commands that are run after each query run |
| PRE_TEST_SQL | SQL commands that are run prior to the whole test with those database settings |
| POST_TEST_SQL | SQL commands that are run after the whole test with those database settings |
    
    
    
    **`MYSQL_SYSTEM_DIR` note:**
    This option is added for convenience when you want to save time and disk space
    for generating databases for different DBMS (and different versions) and use a
    single data directory for all of them. When running different versions of
    MariaDB/MySQL over a single datadir, one should run
    [mysql-upgrade](https://dev.mysql.com/doc/refman/5.5/en/mysql-upgrade.html) in
    order to fix the system tables. So in one data directory, you could prepare the
    following directories for different MariaDB/MySQL system directories:
    
    
    * `mysql_mysql_5_5` — a copy of the system directory
     '`mysql`' upgraded by MySQL 5.5.x
    * `mysql_mariadb_5_3` — a copy of the system
     directory '`mysql`' upgraded by [MariaDB 5.3](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/changes-improvements-in-mariadb-5-3).x
    * `mysql_mariadb_5_5` — a copy of the system
     directory '`mysql`' upgraded by [MariaDB 5.5](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-5-series/changes-improvements-in-mariadb-5-5).x
    
    If `MYSQL_SYSTEM_DIR` is set to one of these directories, the automation
    script will unlink the current system directory 'mysql' and make a new symbolic
    link with that name to the one in the setting.
    Here is an example command that will be executed:
    
    
    
    **NOTE:** This approach is suitable for MyISAM tests.
    
    
    The configuration file looks like this:
    
    
    
    Note that the section `[db_settings]` is required for the file to be properly
    parsed by the automation script.
    
    
    #### Test configuration
    
    
    These configuration files contain settings describing the test. They are
    usually contained into the folder
    **mariadb-tools/dbt3_benchmark/tests/test_conf**.
    
    
    Here is the list of parameters that could be set into this configuration file:
    
    
    
    | Parameter | Description |
    | --- | --- |
    | QUERIES_AT_ONCE | If set to 1, then all the queries are executed sequentially without restarting the server or clearing the caches between queries. |
    | CLEAR_CACHES | If set to 1, the disk caches will be cleared before each query test. |
| WARMUP | Perform warm-up runs before running the query. |
| EXPLAIN | Run an EXPLAIN command prior to running the query. The explain results will be stored in a file under the results output directory. |
    | RUN | Perform the actual test |
    | ANALYZE_EXPLAIN | A result extraction mechanism where only the best execution plan (results from EXPLAIN select) will be measured. It is designed to be used when benchmarking InnoDB storage engine where execution plan is changing between server restarts (see [#results-extraction-mechanisms](#results-extraction-mechanisms)). |
    | MIN_MAX_OUT_OF_N | A result extraction mechanism where the minimal and maximal values out of N (set by the parameter NUM_TESTS) tests are taken as a result. This could be used when InnoDB storage engine is tested (see [#results-extraction-mechanisms](#results-extraction-mechanisms)). |
| SIMPLE_AVERAGE | A result extraction mechanism where the final result is the average time taken for the tests. The number of tests per query is set by the NUM_TESTS parameter. Note that if even one test has timed out, the result is 'time-out'. This is used when testing the MyISAM storage engine, since there the execution plan is constant (see [#results-extraction-mechanisms](#results-extraction-mechanisms)). |
    | NUM_TESTS | How many tests should be performed for each query. When ANALYZE_EXPLAIN is set, this value could be set to 0, meaning that the tests will continue until enough results are extracted (see setting CLUSTER_SIZE). This parameter is very important when MIN_MAX_OUT_OF_N or SIMPLE_AVERAGE is selected. |
    | MAX_SKIPPED_TESTS | When ANALYZE_EXPLAIN is set and an execution plan that is slower is selected, the execution of the query is skipped and the server is restarted in order to change the execution plan. If the server is restarted more than MAX_SKIPPED_TESTS, there are obviously no more different execution plans and the script continues to the next query benchmark. |
| WARMUPS_COUNT | How many warmup runs will be performed prior to the actual benchmark run. |
    | CLUSTER_SIZE | How big a cluster with results for a query should be in order to extract the final result. It is used when ANALYZE_EXPLAIN is selected as a result extraction method. |
    | MAX_QUERY_TIME | The maximum time that one query will be tested. Currently it is applicable only when ANALYZE_EXPLAIN is selected. |
    | TIMEOUT | The maximum time that one query could run. Currently timeout is applicable only for MySQL and MariaDB. |
    | OS_STATS_INTERVAL | What is the time interval between extraction of OS statistics for CPU, memory, etc. |
| PRE_RUN_OS | OS commands that should be executed prior to each query run |
| POST_RUN_OS | OS commands that should be executed after each query run |
| PRE_TEST_OS | OS commands that should be executed prior to the whole test |
    | POST_TEST_OS | OS commands that should be executed after the whole test is complete |
    
    
    
    The configuration file looks like this:
    
    
    
    #### Queries configuration
    
    
    These configuration files contain the list of all the queries that will be
    benchmarked against each database. Some settings
    from `DBMS server configuration` and `Test configuration` could be
    overridden into the `Queries configuration` files. The folder that contains
    such configurations is `mariadb-tools/dbt3_benchmark/tests/queries_conf`.
    
    
    Here is the list of parameters that could be set into this configuration file:
    
    
    
    | Parameter | Description |
    | --- | --- |
    | QUERIES_HOME | Where the queries are located on disk. This value is concatenated to the QUERY setting and this makes the path to the particular query. NOTE: This setting should be set under the section [queries_settings]. |
| CONFIG_FILE | This overrides the startup setting CONFIG_FILE from the DBMS server configuration file and sets the database configuration file that is used. It could be used if some configuration file without any optimizations should be set for this particular queries configuration file. NOTE: This setting should be set under the section [queries_settings]. |
    | QUERY | The name of the query located into QUERIES_HOME folder. E.g. "1.sql" |
    | EXPLAIN_QUERY | The name of the explain query into QUERIES_HOME folder. E.g. "1_explain.sql" |
| TMPDIR | This overrides the setting TMPDIR from the DBMS server configuration. |
| STARTUP_PARAMS | This overrides the setting STARTUP_PARAMS from the DBMS server configuration. Using this setting one could change the particular startup parameters (like optimizations and buffers) for the DB server. |
| PRE_RUN_SQL | This overrides the setting PRE_RUN_SQL from the DBMS server configuration. |
| POST_RUN_SQL | This overrides the setting POST_RUN_SQL from the DBMS server configuration. |
    | RUN | This overrides the setting RUN from the test configuration. |
    | EXPLAIN | This overrides the setting EXPLAIN from the test configuration. |
    | TIMEOUT | This overrides the setting TIMEOUT from the test configuration. |
    | NUM_TESTS | This overrides the setting NUM_TESTS from the test configuration. |
    | MAX_SKIPPED_TESTS | This overrides the setting MAX_SKIPPED_TESTS from the test configuration. |
    | WARMUP | This overrides the setting WARMUP from the test configuration. |
    | WARMUPS_COUNT | This overrides the setting WARMUPS_COUNT from the test configuration. |
    | MAX_QUERY_TIME | This overrides the setting MAX_QUERY_TIME from the test configuration. |
    | CLUSTER_SIZE | This overrides the setting CLUSTER_SIZE from the test configuration. |
    | PRE_RUN_OS | This overrides the setting PRE_RUN_OS from the test configuration. |
    | POST_RUN_OS | This overrides the setting POST_RUN_OS from the test configuration. |
    | OS_STATS_INTERVAL | This overrides the setting OS_STATS_INTERVAL from the test configuration. |
    
    
    
    The queries configuration file could look like this:
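
A sketch of such a file is shown below; the file and section names are illustrative, but the layout follows the rules described in this section: a `[queries_settings]` section plus one uniquely named section per query.

```ini
# Illustrative sketch only
[queries_settings]
QUERIES_HOME = /path/to/queries

[1.sql]
QUERY = 1.sql
EXPLAIN_QUERY = 1_explain.sql

[2.sql]
QUERY = 2.sql
EXPLAIN_QUERY = 2_explain.sql
TIMEOUT = 7200
```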
    
    
    
...where "`QUERIES_HOME = /path/to/queries`" could be replaced with, for example,
"`QUERIES_HOME = $QUERIES_HOME/s$SCALE_FACTOR-m`", in which case
`$QUERIES_HOME` and `$SCALE_FACTOR` will be replaced by the script startup
parameters passed to `launcher.pl` (see [#script-startup-parameters](#script-startup-parameters)).
    
    
**NOTE:** The section `[queries_settings]` is required for the configuration
file to be parsed correctly. Also, each query's settings should be set under a
uniquely named configuration section (e.g. `[query1]` or `[1.sql]`).
    
    
    #### Results database configuration
    
    
These configuration files contain settings for the database where the results
will be stored. They are usually located in the folder
`mariadb-tools/dbt3_benchmark/tests/results_db_conf`.
    
    
Here is the list of parameters that can be set in this configuration file:
    
    
    
    | Parameter | Description |
    | --- | --- |
    | DBMS_HOME | Where the database directory is located. E.g. "$PROJECT_HOME/mariadb-5.3.x-beta-linux-x86_64". This should be a binary distribution of MariaDB or MySQL. |
    | DBMS_USER | The user that will be used by the DBMS |
    | DATADIR | Where the data directory is located for the results database |
    | CONFIG_FILE | What the configuration file used by the database is. |
| SOCKET | The socket that will be used by the results database. This should be a different socket from the one provided for the testing databases. |
| PORT | The port that the results database will use. This should be a different port from the one provided for the testing databases. |
    | STARTUP_PARAMS | Any startup parameters that should be set to start the server. |
    | DBNAME | The database name to use. |
    | HOST | The host where the results database is. |
    
    
    
    The results database configuration could look like this:
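
For example (a sketch only; the `KEY = value` layout, paths, port and database name are illustrative assumptions):

```ini
# Illustrative sketch only
DBMS_HOME = $PROJECT_HOME/mariadb-5.3.x-beta-linux-x86_64
DBMS_USER = root
DATADIR = $PROJECT_HOME/results_db_data
CONFIG_FILE = $PROJECT_HOME/config/results_db_my.cnf
SOCKET = /tmp/mysql_results.sock
PORT = 3307
STARTUP_PARAMS =
DBNAME = dbt3_results
HOST = localhost
```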
    
    
    
    #### Script startup parameters
    
    
`launcher.pl` accepts startup parameters of the form
"`--some-param`". Note that these startup parameters are
case-sensitive. The upper-case ones are used to override a
setting in one of the configuration files.
    
    
    Here is a list of the startup parameters:
    
    
    
    | Parameter | Description |
    | --- | --- |
    | test | The top-level benchmark configuration file that will be run. This is a required startup parameter. |
    | results-output-dir | Where the results of the benchmark will be stored. A timestamp is automatically attached to the directory name so that it keeps track of time and date of the benchmark. This is a required parameter. |
    | dry-run | If set, no benchmark will be performed. Instead only messages will be displayed for the actions that were intended to be done. |
| project-home | Required if any configuration file uses the variable '$PROJECT_HOME'. If all configuration files use absolute paths, it is not required. |
| datadir-home | The value of this parameter will replace any occurrences of the string '$DATADIR_HOME' in the configuration files. If there are no such occurrences, it is not a required parameter. |
| queries-home | The value of this parameter will replace any occurrences of the string '$QUERIES_HOME' in the configuration files. If there are no such occurrences, it is not a required parameter. |
| scale-factor | The value of this parameter will replace any occurrences of the string '$SCALE_FACTOR' in the configuration files. If there are no such occurrences, it is not a required parameter. |
| CLEAR_CACHES | If set, this overrides the default setting from the test configuration file. |
| QUERIES_AT_ONCE | If set, this overrides the default setting from the test configuration file. |
| RUN | If set, this overrides the default setting from the test configuration file. |
| EXPLAIN | If set, this overrides the default setting from the test configuration file. |
| TIMEOUT | If set, this overrides the default setting from the test configuration file. |
| NUM_TESTS | If set, this overrides the default setting from the test configuration file. |
| MAX_SKIPPED_TESTS | If set, this overrides the default setting from the test configuration file. |
| WARMUP | If set, this overrides the default setting from the test configuration file. |
| WARMUPS_COUNT | If set, this overrides the default setting from the test configuration file. |
| MAX_QUERY_TIME | If set, this overrides the default setting from the test configuration file. |
| CLUSTER_SIZE | If set, this overrides the default setting from the test configuration file. |
| PRE_RUN_OS | If set, this overrides the default setting from the test configuration file. |
| POST_RUN_OS | If set, this overrides the default setting from the test configuration file. |
| OS_STATS_INTERVAL | If set, this overrides the default setting from the test configuration file. |
    
    
    
    ### Results extraction mechanisms
    
    
There are three possible result extraction mechanisms. They are selected by the
following parameters in the test configuration file:
    
    
    * `ANALYZE_EXPLAIN`
    * `MIN_MAX_OUT_OF_N`
    * `SIMPLE_AVERAGE`
    
    
    Only one of these should be set to true (1).
    
    
**`ANALYZE_EXPLAIN`** is used for benchmarking the InnoDB storage engine, where the
execution plan can change for the same query when the server is restarted. It
is designed to run the query only with the fastest execution plan. This means
that the server is restarted if the current execution plan is proven slower
than another one. The final result is the one for the query plan that turns
out to be fastest, provided there are at least `CLUSTER_SIZE` test runs with it
for that query. By setting the configuration parameter `NUM_TESTS` you can set a
maximum number of test runs; when it is reached, the best cluster's average time
is taken (even if the cluster is smaller than `CLUSTER_SIZE`). Also, when the
timeout for that query (`MAX_QUERY_TIME`) is reached, the scoring mechanism
returns the best available cluster result.
    
    
**`MIN_MAX_OUT_OF_N`** is also used for benchmarking the InnoDB storage engine.
The values stored as the result are those of the fastest and the slowest run. The
assumption is that a change in execution time indicates a different execution
plan, and that only the min and max times are of interest.
    
    
**`SIMPLE_AVERAGE`** is used for benchmarking storage engines, like MyISAM, that
do not change the execution plan between restarts. The final result is the
average execution time over all the test runs for the query.
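
In the test configuration this choice could be expressed as in the following sketch (values illustrative; exactly one mechanism is enabled):

```ini
ANALYZE_EXPLAIN = 0
MIN_MAX_OUT_OF_N = 0
SIMPLE_AVERAGE = 1
```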
    
    
    ### Results graphics
    
    
After each query test run, the result is stored in a file named
**results.dat** located in `{RESULTS_OUTPUT_DIR}`. This file
is designed to be easy to read with the plotting program Gnuplot 4.4. It is
divided into blocks, separated by several new lines. Each block starts with a
comment line containing details for the current block of results.
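
As a purely hypothetical illustration of that layout (the exact fields the script writes may differ), one block could look like:

```
# hypothetical example block: mariadb-5-3-2 1.sql
1 212.54


# hypothetical example block: mariadb-5-3-2 2.sql
2 46.02
```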
    
    
Queries that have timed out are given a value of `100000` so that they fall
outside the plotted range and are cut off. Other queries have their real times
(in seconds) starting from `0`. The graphic is cut off on the y-axis at the
longest time for a completed test + 20%. For example, if the longest time is
`100` seconds, the graphic is cut off at `120` seconds. The timed-out queries
are therefore truncated by this limit and clearly appear as timed out.
    
    
During the test run, a gnuplot script file is generated with the necessary
parameters for the graphics to be generated automatically. After each query
test run is complete, the graphic is regenerated, so that the user can see the
current results before the whole benchmark is complete. This file is called
`gnuplot_script.txt` and is located in `{RESULTS_OUTPUT_DIR}`. The user
can edit it to fine-tune the parameters or headings after the test is complete,
in order to get the desired look and feel for the final result.
    
    
    ### Script output
    
    
    #### Benchmark output
    
    
    In the directory set by the parameter {RESULTS_OUTPUT_DIR} (example:
    `/benchmark/dbt3/results/myisam_test_2011-12-08_191427/`) there are the
    following files/directories:
    
    
    * A directory for each test, named as the parameter {KEYWORD} from the test
     configuration (example: `mariadb-5-3-2`)
    * cpu_info.txt — the output
     of "`/bin/cat /proc/cpuinfo`" OS command
    * uname.txt — the output of "`uname -a`" OS
     command
    * results.dat — the results of each query
     execution in one file. This file will be used as a datafile for the gnuplot
     script. It also contains the ratio between the current test and the first
     one.
    * gnuplot_script.txt — the Gnuplot script that
     renders the graphics.
    * graphics.jpeg — the output graphics
    * A benchmark configuration file
     (example: `myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf`) copied
     from `mariadb-tools/dbt3_benchmark/tests/`
    * A results database configuration file (example: `results_db.conf`)
     copied from `mariadb-tools/dbt3_benchmark/tests/results_db_conf/`
    * A test configuration file (example: `test_myisam.conf`) copied
     from `mariadb-tools/dbt3_benchmark/tests/test_conf/`
    
    
    #### Test output
    
    
    In the subdirectory for the particular test, set by the parameter {KEYWORD}
    (example:
    `/benchmark/dbt3/results/myisam_test_2011-12-08_191427/mariadb-5-3-2/`),
    there are the following files:
    
    
* pre_test_os_results.txt - the output of the OS commands (if any) executed before the first query run for that test
* pre_test_sql_results.txt - the output of the SQL commands (if any) executed before the first query run for that test
* post_test_os_results.txt - the output of the OS commands (if any) executed after the last query run for that test
* post_test_sql_results.txt - the output of the SQL commands (if any) executed after the last query run for that test
* all_explains.txt - a file containing all the explain queries, their startup parameters for the benchmark and the explain results
* The config file (my.cnf) that was passed to mysqld or postgres (example: `mariadb_myisam_my.cnf`) copied from `mariadb-tools/dbt3_benchmark/config/s$SCALE_FACTOR/`
* The queries configuration file (example: `queries-mariadb.conf`) copied from `mariadb-tools/dbt3_benchmark/tests/queries_conf/`
* The database configuration file (example: `db_mariadb_5_3_myisam.conf`) copied from `mariadb-tools/dbt3_benchmark/tests/db_conf/`
    
    
    #### Query output
    
    
For each query execution there are several files output by the
automation script. They are all saved under the subdirectory set by the
parameter `{KEYWORD}`:
    
    
    * Explain result - a file named
     '{query_name}_{number_of_query_run}_results.txt' (example:
     '1_explain.sql_1_results.txt' — first test for 1_explain.sql)
    * Pre-run OS commands - OS commands, executed before the actual query run.
     Output is a file named
     'pre_run_os_q_{query_name}_no_{number_of_query_run}_results.txt' (example:
     'pre_run_os_q_1.sql_no_2_results.txt' — second test for query 1.sql)
    * Pre-run SQL commands - SQL commands executed before the actual query run.
     Output is a file named
     'pre_run_sql_q_{query_name}_no_{number_of_query_run}_results.txt'.
    * Post-run OS commands - OS commands, executed after the actual query run.
     Output is a file named
     'post_run_os_q_{query_name}_no_{number_of_query_run}_results.txt'.
    * Post-run SQL commands - SQL commands executed after the actual query run.
     Output is a file named
     'post_run_sql_q_{query_name}_no_{number_of_query_run}_results.txt'.
    * CPU utilization statistics:
     '{query_name}_no_{number_of_query_run}_sar_u.txt'
    * I/O and transfer rate statistics:
     '{query_name}_no_{number_of_query_run}_sar_b.txt'
    * Memory utilization statistics:
     '{query_name}_no_{number_of_query_run}_sar_r.txt'
    
    
    ### Hooks
    
    
The automation script provides hooks that allow the user to add both SQL and OS commands before and after each test. Here is a list of all possible hooks:
    
    
    * Pre-test SQL hook: it is set with the parameter `PRE_TEST_SQL`. Contains
     SQL commands that are run once for the whole test configuration before the
     first run.
     (Example: "`use dbt3; select version(); show variables; show engines; show table status;`")
    * Post-test SQL hook: it is set with the parameter `POST_TEST_SQL`. Contains
     SQL commands that are run once for the whole test configuration after the
     last run.
    * Pre-test OS hook: it is set with the parameter `PRE_TEST_OS`. Contains OS
     commands that are run once for the whole test configuration before the first
     run.
    * Post-test OS hook: it is set with the parameter `POST_TEST_OS`. Contains OS
     commands that are run once for the whole test configuration after the last
     run.
* Pre-run SQL hook: it is set with the parameter `PRE_RUN_SQL`. Contains SQL
 commands that are run prior to each query run.
 (Example: "`flush status; set global userstat=on;`")
* Post-run SQL hook: it is set with the parameter `POST_RUN_SQL`. Contains SQL
 commands that are run after each query run.
 (Example: "`show status; select * from information_schema.TABLE_STATISTICS;`")
* Pre-run OS hook: it is set with the parameter `PRE_RUN_OS`. Contains OS
 commands that are run once prior to each query run.
* Post-run OS hook: it is set with the parameter `POST_RUN_OS`. Contains OS
 commands that are run once after each query run.
    
    
The results of these commands are stored in the
`{RESULTS_OUTPUT_DIR}/{KEYWORD}` folder (see [#script-output](#script-output)).
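
In a test or queries configuration file these hooks could be set as in the sketch below; the `KEY = value` layout and the OS commands are illustrative assumptions, while the SQL strings are the examples quoted above.

```ini
# Illustrative sketch only
PRE_TEST_SQL = use dbt3; select version(); show variables; show engines; show table status;
PRE_RUN_SQL = flush status; set global userstat=on;
POST_RUN_SQL = show status; select * from information_schema.TABLE_STATISTICS;
PRE_RUN_OS = /bin/date
POST_RUN_OS = /bin/df -h
```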
    
    
    ### Activities
    
    
    Here are the main activities that this script does:
    
    
1. Parse the configuration files and check the input parameters - if any of the required parameters is missing, the script will stop with an error.
1. Collect hardware information - collect information about the hardware of the machine that the benchmark is run on. Currently it collects `cpuinfo` and `uname`. The results of these commands are stored in the results output directory set as an input parameter.
1. Loop through the passed test configurations. For each passed test configuration the script does the following:
    
      1. Start the results database server. The results of the test are stored into
     that database.
  1. Clear the caches on the server. Clearing the caches is done with an OS command (a typical command is sketched after this list).

    * NOTE: In order to clear the caches, the user is required to have sudo rights, and a corresponding line should be added to the `sudoers` file (edit it with the "`sudo visudo`" command); a sketch of such a line is also given after this list.
      1. Start the database server
      1. Perform pre-test SQL commands. The results are stored
     under `results_output_dir/{KEYWORD}` folder and are
     called `pre_test_sql_results.txt`. `{KEYWORD}` is a unique keyword for
     the current database configuration.
      1. Perform pre-test OS commands. The results are stored
     under `results_output_dir/{KEYWORD}` folder and are
     called `pre_test_os_results.txt`.
    
        * NOTE: If in the test configuration the setting QUERIES_AT_ONCE is
     set to 0, then the server is restarted between each query run. Thus the
     steps 3.2, 3.3, 3.4 and 3.5 are executed only once right before step 3.6.2.
      1. Read all query configurations and execute the following for each of them:
    
    1. Check the test configuration parameters. If something is wrong with a required parameter, the program will exit with an error.
        1. Get the server version if that's the first run of the query
    1. Perform pre-run OS commands in shell. The results of these commands are stored in a file named `pre_run_os_q_{QUERY}_no_{RUN_NO}_results.txt` under `results_output_dir/{KEYWORD}`, where `{QUERY}` is the query name (ex: `1.sql`), `{RUN_NO}` is the sequential run number and `{KEYWORD}` is a unique keyword for the particular test configuration.
        1. Perform pre-run SQL queries. The results of these queries are stored into a
     file named `pre_run_sql_q_{QUERY}_no_{RUN_NO}_results.txt`
     under `results_output_dir/{KEYWORD}` where `{QUERY}` is the query name
     (ex: `1.sql`), `{RUN_NO}` is the sequential run number and `{KEYWORD}` is
     a unique keyword for the particular test configuration.
    1. Perform warm-up runs if configured in the test configuration file.
        1. Perform actual test run and measure time.
    
          * During this step, a new child process is created in order to measure
     the statistics of the OS. Currently the statistics being collected are:
    
        * CPU utilization statistics
        * I/O and transfer rate statistics
        * Memory utilization statistics
      * These statistics are measured every N seconds, where `N` is set with the `OS_STATS_INTERVAL` test configuration parameter. The commands used to collect them are sketched after this list.
      * The test runs for MariaDB and MySQL have a cut-off mechanism for when the timeout is exceeded. It is controlled with the `TIMEOUT` test parameter. Currently there is no such functionality for PostgreSQL; it should be implemented in future versions.
        1. Execute the "explain" statement for that query.
    
      * NOTE: Running `EXPLAIN` queries with MySQL prior to version 5.6.3 can result in long-running queries, since MySQL has to execute the whole query when there are nested selects in it. For MariaDB and PostgreSQL there is no such problem. The long-running explain queries are those for queries #7, 8, 9, 13 and 15. For that reason, no `EXPLAIN` selects should be executed for these queries in MySQL prior to version 5.6.3.
        1. Perform post-run SQL queries
        1. Perform post-run OS commands in shell
        1. Log the results into the results database
    1. Graphics with the current results are generated using Gnuplot
      1. Shutdown the database server.
      1. Perform post-test SQL commands. The results are stored
     under `results_output_dir/{KEYWORD}` folder and are
     called `post_test_sql_results.txt`.
      1. Perform post-test OS commands. The results are stored
     under `results_output_dir/{KEYWORD}` folder and are
     called `post_test_os_results.txt`.
      1. Stop the results database server
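
The OS commands referred to in the steps above are not reproduced verbatim on this page, so the sketches below show typical Linux commands of the right shape; treat them as assumptions rather than the script's exact invocations.

Clearing the caches (step 3.2) is normally done by flushing dirty pages and dropping the page cache:

```bash
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
```

A hypothetical `sudoers` entry (added with `sudo visudo`) that would let a benchmark user run such a command without a password prompt:

```
benchmark ALL=(ALL) NOPASSWD: /usr/bin/tee /proc/sys/vm/drop_caches
```

The OS statistics are collected every `OS_STATS_INTERVAL` seconds; the result file suffixes `_sar_u.txt`, `_sar_b.txt` and `_sar_r.txt` suggest the sysstat `sar` tool with the following flags (a 10-second interval is shown only as an example):

```bash
sar -u 10   # CPU utilization
sar -b 10   # I/O and transfer rate
sar -r 10   # memory utilization
```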
    
    
    ### Script calling examples
    
    
    * Example call for MyISAM test for scale factor 30 and timeout 10 minutes:
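
A call of that shape could look like the following sketch; the paths are placeholders, `--TIMEOUT` is assumed to be given in seconds (600 = 10 minutes), and the test configuration file is the MyISAM one referenced elsewhere in this document:

```bash
perl launcher.pl \
  --test=./tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf \
  --results-output-dir=/path/to/results/myisam_test \
  --project-home=/path/to/project/home \
  --queries-home=/path/to/queries \
  --scale-factor=30 \
  --TIMEOUT=600
```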
    
    
    ...where `/path/to/project/home` is where the `mariadb-tools` project is
    located. This will replace all occurrences of the string "$PROJECT_HOME" in the
    configuration files (example: "`TMPDIR = $PROJECT_HOME/temp/`" will become
    "`TMPDIR = /path/to/project/home/temp/`").
    
    
`--TIMEOUT` overrides the timeout setting from the test
configuration file, setting it to 10 minutes.
    
    
    * Example for InnoDB test for scale factor 30 with 2 hours timeout per query
     and 3 runs for each query:
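
Again a sketch only; the InnoDB test configuration file name is a placeholder, and `--NUM_TESTS=3` assumes that NUM_TESTS controls the number of runs per query as described above:

```bash
perl launcher.pl \
  --test=./tests/innodb_test.conf \
  --results-output-dir=/path/to/results/innodb_test \
  --project-home=/path/to/project/home \
  --queries-home=/path/to/queries \
  --scale-factor=30 \
  --TIMEOUT=7200 \
  --NUM_TESTS=3
```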
    
    
    * If a newer version of [MariaDB 5.5](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-5-series/changes-improvements-in-mariadb-5-5) is available:
    
  * copy or edit the DBMS server configuration file `mariadb-tools/dbt3_benchmark/tests/db_conf/db_mariadb_5_5_myisam.conf` and change the parameter `DBMS_HOME` to the new binary distribution. You can also edit `KEYWORD` and `GRAPH_HEADING`.
    
    
* If you want to add an additional test to the MyISAM benchmark for [MariaDB 5.3](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/changes-improvements-in-mariadb-5-3), but with another defaults-file (my.cnf):

  * copy or edit the DBMS server configuration file `mariadb-tools/dbt3_benchmark/tests/db_conf/db_mariadb_5_3_myisam.conf` and change the parameter CONFIG_FILE to the new my.cnf
  * copy or edit the test configuration file `mariadb-tools/dbt3_benchmark/tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf` and add the new configuration settings
    
    
* If you want to add additional startup parameters for query 6 for MariaDB, for example (a sketch is given after this list):
    
      * copy or edit the
     file `mariadb-tools/dbt3_benchmark/tests/queries_conf/queries-mariadb.conf`
     and add a parameter
     "`STARTUP_PARAMS=--optimizer_switch='mrr=on' --mrr_buffer_size=96M`"
     for example for the section for query 6.
      * copy or edit the test configuration
     file `mariadb-tools/dbt3_benchmark/tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf`
     to include the new queries configuration file
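
Put together, the relevant part of the queries configuration could look like this sketch (the section name `[6.sql]` follows the naming rule described earlier and is otherwise an assumption):

```ini
[6.sql]
QUERY = 6.sql
EXPLAIN_QUERY = 6_explain.sql
STARTUP_PARAMS = --optimizer_switch='mrr=on' --mrr_buffer_size=96M
```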
    
    
    ## Results
    
    
    ### MyISAM test
    
    
    DBT3 benchmark for the following configuration:
    
    
    * [MariaDB 5.3.2](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/mariadb-532-release-notes) Beta + MyISAM
    * [MariaDB 5.5.18](https://mariadb.com/kb/en/mariadb-5518-release-notes/) + MyISAM
    * MySQL 5.5.19 + MyISAM
    * MySQL 5.6.4 + MyISAM
    
    
    Results page: [DBT3 benchmark results MyISAM](dbt3-benchmark-results-myisam.md)
    
    
    ### InnoDB test
    
    
    DBT3 benchmark for the following configuration:
    
    
    * [MariaDB 5.3.2](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/mariadb-532-release-notes) Beta + XtraDB
    * [MariaDB 5.5.18](/en/mariadb-5518-release-notes/) + XtraDB
    * MySQL 5.5.19 + InnoDB
    * MySQL 5.6.4 + InnoDB
    
    
    Results page: [DBT3 benchmark results InnoDB](dbt3-benchmark-results-innodb.md)
    
    
    ### PostgreSQL test
    
    
    DBT3 benchmark for the following configuration:
    
    
    * [MariaDB 5.3.2](https://app.gitbook.com/s/aEnK0ZXmUbJzqQrTjFyb/mariadb-community-server-release-notes/old-releases/release-notes-mariadb-5-3-series/mariadb-532-release-notes) Beta + XtraDB
    * MySQL 5.6.4 + InnoDB
    * PostgreSQL
    
    
    Results page: (TODO)
    
    
    <sub>_This page is licensed: CC BY-SA / Gnu FDL_</sub>
    

    DBT3 Benchmark Results MyISAM

    Introduction

    This page shows the results for benchmarking the following configuration:

• MariaDB 5.3.2 + MyISAM

    • MariaDB 5.5.18 + MyISAM

    • MySQL 5.5.19 + MyISAM

    • MySQL 5.6.4 + MyISAM

    The test is performed using the automation script /mariadb-tools/dbt3_benchmark/launcher.pl.

Details about this automation script can be found on the DBT3 automation scripts page.

    Hardware

    The tests were performed on our facebook-maria1 machine. It has the following parameters:

    • CPU: 16 Intel® Xeon® CPU L5520 @ 2.27GHz

    • Memory: Limited to 16 GB out of 72 by adding 'mem=16G' parameter to /boot/grub/menu.lst

    • Logical disk: HDD 2 TB

    • Operating system:

    Scale factor 30

    This test was performed with the following parameters:

    • Scale factor: 30

    • Query timeout: 2 hours

    • Number of tests per query: 1

    • Total DB size on disk: about 50GB

    NOTE: The available memory is controlled by a parameter mem=16G added to the file /boot/grub/menu.lst

    Steps to reproduce

Follow the instructions in DBT3 automation scripts to prepare the environment for the test.

Before you run the test, ensure that the settings in the test configuration files match your prepared environment. For more details on the test configuration, please refer to the Test configuration parameters section of DBT3 automation scripts.

After the environment is prepared, the benchmark is started with a shell command of the form shown in the listings at the end of this page.

    Compared configurations

    The following configurations have been compared in this test:

Case 1: MariaDB 5.3.2 + MyISAM

The common options that the mysqld server was started with are shown in the MariaDB options listing at the end of this page.

Case 2: MariaDB 5.5.18 + MyISAM

Uses the same configuration file as in Case 1.

Case 3: MySQL 5.5.19 + MyISAM

The common options that the mysqld server was started with are shown in the MySQL 5.5.19 options listing at the end of this page.

Case 4: MySQL 5.6.4 + MyISAM

The common options that the mysqld server was started with are shown in the MySQL 5.6.4 options listing at the end of this page.

The server was restarted and the caches were cleared between each query run.

    Results (without q20)

    Here is the graphics of the results:

    (Smaller bars are better)

NOTE: Queries that are cut off by the graphics have timed out (the timeout period was 2 hours).

    Here are the actual results in seconds (smaller is better):

| Configuration | MariaDB 5.3.2 + MyISAM | Ratio | MariaDB 5.5.18 + MyISAM | Ratio | MySQL 5.5.19 + MyISAM | Ratio | MySQL 5.6.4 + MyISAM | Ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1.sql | 261 | 1.00 | 308 | 1.18 | 259 | 0.99 | 277 | 1.06 |
| 2.sql | 47 | 1.00 | 48 | 1.02 | 499 | 10.62 | 49 | 1.04 |
| 2-opt.sql | 46 | 1.00 | 48 | 1.04 | - | - | - | - |
| 3.sql | 243 | 1.00 | 246 | 1.01 | >7200 | - | 1360 | 5.60 |
| 4.sql | 137 | 1.00 | 135 | 0.99 | 4117 | 30.05 | 137 | 1.00 |
| 5.sql | 181 | 1.00 | 187 | 1.03 | 6164 | 34.06 | 1254 | 6.93 |
| 6.sql | 198 | 1.00 | 205 | 1.04 | >7200 | - | 194 | 0.98 |
| 7.sql | 779 | 1.00 | 896 | 1.15 | 814 | 1.04 | 777 | 1.00 |
| 8.sql | 270 | 1.00 | 287 | 1.06 | 749 | 2.77 | 1512 | 5.60 |
| 9.sql | 252 | 1.00 | 254 | 1.01 | >7200 | - | 298 | 1.18 |
| 10.sql | 782 | 1.00 | 854 | 1.09 | >7200 | - | 1881 | 2.41 |
| 11.sql | 45 | 1.00 | 36 | 0.80 | 357 | 7.93 | 49 | 1.09 |
| 12.sql | 211 | 1.00 | 217 | 1.03 | >7200 | - | 213 | 1.01 |
| 13.sql | 251 | 1.00 | 236 | 0.94 | 1590 | 6.33 | 244 | 0.97 |
| 14.sql | 88 | 1.00 | 91 | 1.03 | 1590 | 18.07 | 94 | 1.07 |
| 15.sql | 162 | 1.00 | 164 | 1.01 | 4580 | 28.27 | 165 | 1.02 |
| 16.sql | 154 | 1.00 | 152 | 0.99 | 174 | 1.13 | 173 | 1.12 |
| 17.sql | 1493 | 1.00 | 1495 | 1.00 | 865 | 0.58 | 794 | 0.53 |
| 17-opt1.sql | 795 | 1.00 | 794 | 1.00 | 862 | 1.08 | 794 | 1.00 |
| 17-opt2.sql | 1482 | 1.00 | 1458 | 0.98 | 2167 | 1.46 | 1937 | 1.31 |
| 18.sql | 971 | 1.00 | 931 | 0.96 | >7200 | - | >7200 | - |
| 18-opt.sql | 121 | 1.00 | 125 | 1.03 | - | - | - | - |
| 19.sql | 212 | 1.00 | 212 | 1.00 | 2004 | 9.45 | 61 | 0.29 |
| 19-opt1.sql | 59 | 1.00 | 59 | 1.00 | 1999 | 33.88 | 61 | 1.03 |
| 19-opt2.sql | 260 | 1.00 | 216 | 0.83 | 443 | 1.70 | 236 | 0.91 |
| 20.sql | - | - | - | - | - | - | - | - |
| 21.sql | 173 | 1.00 | 179 | 1.03 | >7200 | - | 183 | 1.06 |
| 22.sql | 13 | 1.00 | 14 | 1.08 | 10 | 0.77 | 13 | 1.00 |
| Version | 5.3.2-MariaDB-beta | | 5.5.18-MariaDB | | 5.5.19 | | 5.6.4-m7 | |
| Query and explain details | Explain details | | Explain details | | Explain details | | Explain details | |

NOTE: The columns named "Ratio" contain the ratio between the current value and the value for the first test configuration. The formula is (current_value / value_in_first_configuration). For example, if MariaDB 5.3.2 + MyISAM (the first configuration column) handles a query in 100 seconds and MySQL 5.6.4 (the last configuration) handles the same query in 120 seconds, the ratio is 120/100 = 1.20. This means that it takes MySQL 5.6.4 20% more time to handle the same query.

    The archived folder with all the results and details for that benchmark can be downloaded from here:

Notes

Queries 2-opt.sql and 18-opt.sql are tested only for MariaDB 5.3.2 and MariaDB 5.5.18. The additional startup parameters and query modifications for the -opt variants are reproduced in the listings at the end of this page:

• Additional startup parameters for 2-opt

• Additional startup parameters for 18-opt

• Additional modifications for 17-opt1

• Additional modifications for 17-opt2

• Additional modifications for 19-opt1

• Additional modifications for 19-opt2

    Benchmark for q20

This benchmark covered only q20, with the same settings as described above for the other queries. The only difference is the timeout that was used: 30000 seconds (8 hours and 20 minutes).

    Compared cases

    The benchmark for q20 compares the following cases:

• q20.sql - the original query is run with the IN-TO-EXISTS strategy for all servers. The following optimizer switches were used for MariaDB: `--optimizer_switch='in_to_exists=on,materialization=off,semijoin=off'`

• q20-opt0.sql - the original query is changed so that the same join order is chosen as for the two subsequent variants that test materialization, where this order is optimal. The join order (`part straight_join partsupp`) is shown in the listings at the end of this page.

• Since the IN-TO-EXISTS strategy is essentially the same for both MariaDB and MySQL, this query was tested for MySQL only.

• q20-opt1.sql - modifies the original query in two ways:

  • enforces the MATERIALIZATION strategy, and

  • enforces an optimal JOIN order via straight_join (see the listings at the end of this page).

q20-opt1.sql uses the following optimizer switches for MariaDB: `--optimizer_switch='in_to_exists=off,materialization=on,semijoin=off'`

• q20-opt2.sql - the same as q20-opt1.sql, but allows the optimizer to choose the subquery strategy via the switch `--optimizer_switch='in_to_exists=on,materialization=on,semijoin=on'`.

• This switch results in the choice of SJ-MATERIALIZATION.

NOTE: For MySQL there are no such optimizer-switch parameters, and the tests were started without any additional startup parameters. The default algorithm in MySQL is in_to_exists.

    Results for q20

    Here is the graphics of the results of the benchmarked q20: (Smaller bars are better)

NOTE: Queries that are cut off by the graphics have timed out (the timeout period was 30000 seconds).

    Here are the actual results in seconds (smaller is better):

| Configuration | 20.sql | 20-opt0.sql | 20-opt1.sql | 20-opt2.sql | Version | Query and explain details |
| --- | --- | --- | --- | --- | --- | --- |
| MariaDB 5.3.2 + MyISAM | 20070 | - | 5560 | 5615 | 5.3.2-MariaDB-beta | Explain details |
| MariaDB 5.5.18 + MyISAM | 19922 | - | 5529 | 5572 | 5.5.18-MariaDB | Explain details |
| MySQL 5.5.19 + MyISAM | 17832 | >30000 | - | - | 5.5.19 | Explain details |
| MySQL 5.6.4 + MyISAM | 19845 | >30000 | - | - | 5.6.4-m7 | Explain details |

    This page is licensed: CC BY-SA / Gnu FDL


The command line, server options, and query modifications referenced in the sections above are reproduced below.

Benchmark invocation (Steps to reproduce):

```bash
perl launcher.pl \
--results-output-dir=/home/mariadb/benchmark/dbt3/results/myisam_test \
--project-home=/home/mariadb/benchmark/dbt3/ \
--datadir=/home/mariadb/benchmark/dbt3/db_data/ \
--test=./tests/myisam_test_mariadb_5_3_mysql_5_5_mysql_5_6.conf \
--queries-home=/home/mariadb/benchmark/dbt3/gen_query/ --scale-factor=30 \
--TIMEOUT=7200
```

Common mysqld options for MariaDB 5.3.2 and MariaDB 5.5.18 (Cases 1 and 2):

```ini
net_read_timeout = 300
net_write_timeout = 600
key_buffer_size = 3G
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
table_open_cache = 1024
thread_cache = 512
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M
max_connections = 256
query_cache_size = 0
query_cache_type = 0
sql-mode = NO_ENGINE_SUBSTITUTION

#Per-test optimizations
optimizer_switch='index_merge=on'
optimizer_switch='index_merge_union=on'
optimizer_switch='index_merge_sort_union=on'
optimizer_switch='index_merge_intersection=on'
optimizer_switch='index_merge_sort_intersection=off'
optimizer_switch='index_condition_pushdown=on'
optimizer_switch='derived_merge=on'
optimizer_switch='derived_with_keys=on'
optimizer_switch='firstmatch=off'
optimizer_switch='loosescan=off'
optimizer_switch='materialization=on'
optimizer_switch='in_to_exists=on'
optimizer_switch='semijoin=on'
optimizer_switch='partial_match_rowid_merge=on'
optimizer_switch='partial_match_table_scan=on'
optimizer_switch='subquery_cache=off'
optimizer_switch='mrr=on'
optimizer_switch='mrr_cost_based=off'
optimizer_switch='mrr_sort_keys=on'
optimizer_switch='outer_join_with_cache=on'
optimizer_switch='semijoin_with_cache=off'
optimizer_switch='join_cache_incremental=on'
optimizer_switch='join_cache_hashed=on'
optimizer_switch='join_cache_bka=on'
optimizer_switch='optimize_join_buffer_size=on'
optimizer_switch='table_elimination=on'

join_buffer_space_limit = 3072M
join_buffer_size = 1536M
join_cache_level = 6
mrr_buffer_size = 96M
tmp_table_size = 96M
max_heap_table_size = 96M
```

Common mysqld options for MySQL 5.5.19 (Case 3):

```ini
net_read_timeout = 300
net_write_timeout = 600
key_buffer_size = 3G
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
table_open_cache = 1024
thread_cache = 512
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
myisam_sort_buffer_size = 8M
max_connections = 256
query_cache_size = 0
query_cache_type = 0
sql-mode = NO_ENGINE_SUBSTITUTION

join_buffer_size = 1536M
tmp_table_size = 96M
max_heap_table_size = 96M
read_rnd_buffer_size = 96M
```

Common mysqld options for MySQL 5.6.4 (Case 4):

```ini
net_read_timeout = 300
net_write_timeout = 600
key_buffer_size = 3G
skip-external-locking
key_buffer = 16M
max_allowed_packet = 16M
table_open_cache = 1024
thread_cache = 512
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
myisam_sort_buffer_size = 8M
max_connections = 256
query_cache_size = 0
query_cache_type = 0
sql-mode = NO_ENGINE_SUBSTITUTION

optimizer_switch='mrr=on'
optimizer_switch='mrr_cost_based=off'
optimizer_switch='batched_key_access=on'
optimizer_switch='index_condition_pushdown=on'

join_buffer_size = 1536M
tmp_table_size = 96M
max_heap_table_size = 96M
read_rnd_buffer_size = 96M
```

Additional startup parameters for the 2-opt and 18-opt variants:

```
--optimizer_switch='mrr_sort_keys=off'
--optimizer_switch='semijoin=off' --optimizer_switch='index_condition_pushdown=on'
```

Modified join orders for the 17-opt1 and 17-opt2 variants (truncated):

```sql
SELECT
	SUM(l_extendedprice) / 7.0 AS avg_yearly
FROM
	part STRAIGHT_JOIN lineitem
WHERE
	p_partkey = l_partkey
...
SELECT
	SUM(l_extendedprice) / 7.0 AS avg_yearly
FROM
	lineitem STRAIGHT_JOIN part
WHERE
	p_partkey = l_partkey
...
```

Modified join orders for the 19-opt1 and 19-opt2 variants (truncated):

```sql
SELECT
	SUM(l_extendedprice * (1 - l_discount)) AS revenue
FROM
	part STRAIGHT_JOIN lineitem
WHERE
	(
		p_partkey = l_partkey
...
SELECT
	SUM(l_extendedprice * (1 - l_discount)) AS revenue
FROM
	lineitem STRAIGHT_JOIN part
WHERE
	(
		p_partkey = l_partkey
...
```

Join order used for the q20-opt0 and q20-opt1 variants (truncated):

```sql
SELECT s_name, s_address
FROM supplier, nation
WHERE s_suppkey IN (SELECT DISTINCT (ps_suppkey)
FROM part straight_join partsupp
WHERE ps_partkey = p_partkey ...
```
