Compiling & Debugging MariaDB (and MySQL) in Eclipse from scratch – Part 5: “Profiling in Eclipse with OProfile”

Section 6: “Profile a real case”


Profiling & debugging is a topic that would deserve an entire book; the aim of this post (and of the others in this series) is to give you the basic knowledge of how to work with these tools and techniques within Eclipse. If you want to learn profiling with OProfile in depth, you should study the abundant separate resources available; you may start from:


If you followed the previous post (Part 4) correctly you should now have a project in Eclipse that we will use to profile MariaDB (or MySQL, if you are working on that). Remember that we decided to prepare a nodebug build to avoid having the debug logging functions counted in our profiling: each debug logging call is quickly executed, but there are so many of them that they would skew our statistics. The case we will study is a bug affecting MariaDB, MySQL and Percona where an outer join becomes extremely slow when using the join cache; the bug is reported here:

If you remember, we named our first Eclipse project MariaDB10-test1. For this exercise we are going to use another project name, to allow multiple types of builds to coexist. This is particularly useful when, apart from compiling MariaDB with or without debugging enabled, we compile different MariaDB, MySQL or Percona versions. The Eclipse project name we will use in this part is: MariaDB10-test2-nodebug

Naming a project (and not only a project) is also an art, so find what is most suitable for your specific case, but make sure to include basic information such as:

  • Type of release: mariadb, mysql, percona
  • Version: 51, 55, 56, 10 or 5172, 5512, 10011, or even 10.0.11, 5.6.15-rel63.0, etc.
  • Type of compilation: that is, the most important parameters used with cmake; in our case we just need to distinguish between debug and nodebug builds.

I will also assume that you have followed all the steps of Parts 1, 2 and 3, so that this project is correctly configured, successfully compiled, and runs under Eclipse. I will not repeat those steps here: please refer to Parts 1, 2 and 3 and proceed only if you have a MariaDB (or MySQL) project correctly compiling and running under Eclipse as explained in the previous parts.
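The debug/nodebug distinction boils down to the cmake flags used at configure time. As a rough sketch (the source path here is a placeholder, and the exact flags used in this series are the ones from Parts 1–3):

```shell
# Hypothetical configure commands -- adapt the source path to your checkout.

# nodebug build: optimized code that still carries symbols, suitable for
# profiling (debug logging functions are not compiled in)
cmake /path/to/mariadb-source -DCMAKE_BUILD_TYPE=RelWithDebInfo

# debug build: compiles in the dbug trace machinery, whose many cheap
# calls would skew the profiling statistics discussed above
cmake /path/to/mariadb-source -DCMAKE_BUILD_TYPE=Debug
```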


We now have the project ready to run in Eclipse, and we will run it in profile mode. The first profiling we do will be an empty-run test, just to verify that we are able to profile at all. To do so:

  1. Right-click on the project (MariaDB10-test2-nodebug) and select: [Profile As] --> [Profile Configurations]. This screen should be familiar: it is the same as [Run] with an added [Profiler] tab.
  2. You should already have a Run configuration under [C/C++ Application]; if not, right-click --> [New] and give it a name.
  3. Select the [Profiler] tab. In the drop-down list of available profiling tools select: ‘OProfile‘.
  4. Below, in the [Global] tab, select ‘opcontrol‘ from the ‘Profile with‘ drop-down list. Note that every profiling tool has different configuration requirements; you will easily notice this by switching from OProfile to another tool.
  5. You have two checkboxes in the lower part of the OProfile pane: [ ] Include dependent shared libraries and [ ] Include dependent kernel modules. These are used when you want to profile not only your application but also the shared libraries and kernel modules it uses during the run. That can be interesting in some cases, but here we will profile just our application, so do not check these two boxes.
  6. Click on [Profile].

The project will be built and then run under OProfile. You will be asked for the root password (OProfile cannot run in user mode), at the beginning and possibly at the end of the profiling. At this point, following the instructions in the previous parts on how to connect to this instance, you can connect and run anything you like, or nothing.
Profiling ends when the mysqld process ends, so when you have finished your tests and want to collect the profiling results, shut down mysqld with:

$ bin/mysqladmin shutdown -uroot -h127.0.0.1 -P10011

(assuming mysqld in this project was configured to run on port 10011, as explained in Part 3, "4.1 Create a 'Run configuration' and Run"). You will see in Eclipse that OProfile detects that the mysqld process is gone, and it starts collecting data and creating the report for you. The results appear in the lower panel of Eclipse where all logs are shown: you will have an [OProfile] tab containing a tree view and, under ‘current’, the tree of the function calls ordered by presence, that is, by the number of samples. If all went well, you have just finished your first profiling in Eclipse. If you ran an empty-run test and left mysqld up for a few minutes without doing anything, you will probably see that most of the entries are time-related calls, used to execute timed background operations. The calls are grouped per function name and listed by line number; if you click on any row, Eclipse should take you to the actual source code line.

Tip: learn to rename the OProfile test results. Not very intuitively, at the top right of this lower panel there is a down-arrow: click on ‘Save Default Session‘ and give the session a name (like ’emptyrun’). This is important so that the results are not lost at the next execution; we will need to save and compare two different profilings.


Now that we are familiar with profiling in Eclipse we can move on to our real case. We have an extremely slow query and we want to find out why it is that slow; the aim of this little exercise, and of Parts 4 & 5 of this blog series, is to understand where we should look before just guessing where the time might be spent. This is a case where profiling is extremely useful: we would have little or no clue where to start debugging this general query slowness, while with profiling we can get an idea of where most of the time is spent. Moreover, this test case is particularly good for comparative testing, because we know how to inhibit the faulty behaviour by setting a session variable. All we need to do now is profile exactly as we just did, with the addition of executing the following test case:

SET join_cache_level=0; # First Test
#SET join_cache_level=2; # Second Test

drop table if exists t2,t1;

-- t1/t2 definitions reconstructed from the fragments that survived in
-- this post: the `id` columns, the SELECT list (count(*)) and the
-- `t.id` side of each join condition are assumptions.
CREATE TABLE `t1` (
  `id` smallint(3) NOT NULL AUTO_INCREMENT,
  `col1` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
);

CREATE TABLE `t2` (
  `id` smallint(3) NOT NULL AUTO_INCREMENT,
  `parent_id` smallint(3) NOT NULL DEFAULT '0',
  `col2` varchar(25) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
);

select now();
select count(*)
from
  t1 t
  LEFT JOIN t2 c1 ON c1.parent_id = t.id AND c1.col2 = "val"
  LEFT JOIN t2 c2 ON c2.parent_id = t.id AND c2.col2 = "val"
  LEFT JOIN t2 c3 ON c3.parent_id = t.id AND c3.col2 = "val"
  LEFT JOIN t2 c4 ON c4.parent_id = t.id AND c4.col2 = "val"
  LEFT JOIN t2 c5 ON c5.parent_id = t.id AND c5.col2 = "val"
  LEFT JOIN t2 c6 ON c6.parent_id = t.id AND c6.col2 = "val"
  LEFT JOIN t2 c7 ON c7.parent_id = t.id AND c7.col2 = "val"
  LEFT JOIN t2 c8 ON c8.parent_id = t.id AND c8.col2 = "val"
  LEFT JOIN t2 c9 ON c9.parent_id = t.id AND c9.col2 = "val"
  LEFT JOIN t2 c10 ON c10.parent_id = t.id AND c10.col2 = "val"
  LEFT JOIN t2 c11 ON c11.parent_id = t.id AND c11.col2 = "val"
  LEFT JOIN t2 c12 ON c12.parent_id = t.id AND c12.col2 = "val"
  LEFT JOIN t2 c13 ON c13.parent_id = t.id AND c13.col2 = "val"
  LEFT JOIN t2 c14 ON c14.parent_id = t.id AND c14.col2 = "val"
  LEFT JOIN t2 c15 ON c15.parent_id = t.id AND c15.col2 = "val"
  LEFT JOIN t2 c16 ON c16.parent_id = t.id AND c16.col2 = "val"
  LEFT JOIN t2 c17 ON c17.parent_id = t.id AND c17.col2 = "val"
  LEFT JOIN t2 c18 ON c18.parent_id = t.id AND c18.col2 = "val"
  LEFT JOIN t2 c19 ON c19.parent_id = t.id AND c19.col2 = "val"
  LEFT JOIN t2 c20 ON c20.parent_id = t.id AND c20.col2 = "val"
  LEFT JOIN t2 c21 ON c21.parent_id = t.id AND c21.col2 = "val"
  LEFT JOIN t2 c22 ON c22.parent_id = t.id AND c22.col2 = "val"
  LEFT JOIN t2 c23 ON c23.parent_id = t.id AND c23.col2 = "val"
  LEFT JOIN t2 c24 ON c24.parent_id = t.id AND c24.col2 = "val"
  LEFT JOIN t2 c25 ON c25.parent_id = t.id AND c25.col2 = "val"
  LEFT JOIN t2 c26 ON c26.parent_id = t.id AND c26.col2 = "val"
  LEFT JOIN t2 c27 ON c27.parent_id = t.id AND c27.col2 = "val"
  LEFT JOIN t2 c28 ON c28.parent_id = t.id AND c28.col2 = "val"
  LEFT JOIN t2 c29 ON c29.parent_id = t.id AND c29.col2 = "val"
  LEFT JOIN t2 c30 ON c30.parent_id = t.id AND c30.col2 = "val"
  LEFT JOIN t2 c31 ON c31.parent_id = t.id AND c31.col2 = "val"
  LEFT JOIN t2 c32 ON c32.parent_id = t.id AND c32.col2 = "val"
  LEFT JOIN t2 c33 ON c33.parent_id = t.id AND c33.col2 = "val";
select now();

We will do two profilings:

  • one by setting SET join_cache_level=0 before running the test case # Runs fine
  • one by setting SET join_cache_level=2 before running the test case # Runs extremely slow and note, the dataset is empty!
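To keep the two runs identical apart from the session variable, you can generate the two variants of the script mechanically. A minimal sketch; the file names are made up for illustration, and the one-line placeholder stands in for the full DDL and the 33-way LEFT JOIN query shown above:

```shell
# Placeholder body; in practice this file would hold the full test case
# (the DDL and the big LEFT JOIN query). File names are hypothetical.
printf 'select now();\n' > testcase_body.sql

# Prepend the session variable, one file per join_cache_level under test.
for LEVEL in 0 2; do
  { echo "SET join_cache_level=$LEVEL;"; cat testcase_body.sql; } \
    > "testcase_level$LEVEL.sql"
done

head -1 testcase_level2.sql   # -> SET join_cache_level=2;
```

Each file can then be fed to the server during its own profiling run, e.g. `bin/mysql -uroot -h127.0.0.1 -P10011 < testcase_level0.sql`.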

Remember to save the profile results as explained above before running the second test (something like ‘joincachelevel0‘ and ‘joincachelevel2‘). What you should do now is:

  1. Start profiling
  2. Execute the test case by setting join_cache_level=0
  3. Shutdown MariaDB(or MySQL)
  4. Collect the OProfile results
  5. Rename the OProfile results from ‘current’ to ‘joincachelevel0’

Repeat, using join_cache_level=2 at step 2 and ‘joincachelevel2’ at step 5. If you run the two profile tests above you should immediately spot what happens when you set the session variable join_cache_level=2: most of the time is spent in one spot of the code, in particular in the method join_records() of the class JOIN_CACHE. Double-clicking on any line will teleport you to the source code line. Now that we know that an unjustifiable amount of time is spent in that part of the code, we can use this information to proceed to the next part, Part 6: Debug in Eclipse (available soon).
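If you also want to compare the two saved sessions outside Eclipse, OProfile's command-line `opreport -l` prints a flat per-symbol listing for a session, and a small script can diff the sample shares. A minimal sketch; the listings below are made-up placeholders in an opreport-like format (samples, %, image, symbol), not real measurements:

```shell
# Fake symbol listings for the two sessions (figures are illustrative only).
cat > /tmp/level0.txt <<'EOF'
1200 40.0 mysqld do_select
900 30.0 mysqld evaluate_join_record
EOF
cat > /tmp/level2.txt <<'EOF'
90000 90.0 mysqld JOIN_CACHE::join_records
1200 1.2 mysqld do_select
EOF

# Join on symbol name (4th column) and print how much each symbol's
# sample share changed from the first session to the second; symbols
# present only in the first session are not listed.
awk '
  NR==FNR { a[$4] = $2; next }          # remember shares from session A
  { printf "%s %+.1f\n", $4, $2 - a[$4] }
' /tmp/level0.txt /tmp/level2.txt
# prints:
#   JOIN_CACHE::join_records +90.0
#   do_select -38.8
```

A symbol jumping from nowhere to the top of the list, as JOIN_CACHE::join_records does here, is exactly the kind of signal the two-session comparison is meant to surface.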