Another Look at MySQL 5.1’s SQL Diagnostic Tools

By: Robin Schumacher

No matter what they say, I don't think it's true. Database 'experts' say that 80-90% of a database's overall performance comes from the SQL code that's running against it. And I just don't buy it. Instead, I think that most of a database's performance comes from optimal design, with SQL code coming in second. If you properly design your database, then you're way ahead of the game in terms of performance issues. That's when well-written SQL code can glide along your design like butter on a hot skillet.

For example, most DBAs and developers know that large table scans typically cause performance issues, with the remedy many times being: (1) well-placed indexes, (2) horizontal partitioning, or (3) column-oriented tables where data warehouse use cases are in play. No matter how much you tweak your SQL code, failing to put one of these designs in place against big tables will more often than not stymie your database's performance. This is why I firmly believe that design is the main contributor to a well-running database, and that when you monitor your database's performance, you're really monitoring the design you've given it.
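As a quick aside, horizontal partitioning is itself new in MySQL 5.1. Here's a minimal sketch of what remedy (2) might look like (the sales table and its columns are hypothetical):

mysql> create table sales (
    -> sale_id int not null,
    -> sale_date date not null,
    -> amount decimal(10,2))
    -> partition by range (year(sale_date)) (
    -> partition p2006 values less than (2007),
    -> partition p2007 values less than (2008),
    -> partition pmax values less than maxvalue);

With a scheme like this, queries that filter on sale_date can be pruned down to the relevant partitions instead of scanning the whole table.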

Now – that said, I also believe that badly written SQL code is the second main culprit behind a lot of database sluggishness. The problem is, once a system enters production, how does a DBA locate and correct poor code that's clogging the arteries of the databases they have responsibility for?

Prior to MySQL 5.1, it wasn't all that easy to find the worst running SQL on a MySQL instance. And truth be told, MySQL still has a ways to go before it's an effortless process. But, the good news is there are some new aids in MySQL 5.1 and MySQL Enterprise that do make it more efficient to find and troubleshoot bad SQL code that's hiding inside a MySQL instance. Let's take a quick tour through these new features and see what help they provide.

Starting at the Top
Database performance analysts typically use three forms of analysis to interrogate a database that's ailing:
1. Bottleneck analysis – what is my database, user community, and SQL code waiting on?
2. Workload analysis – who's logged on and what work are they performing?
3. Ratio analysis – what do rules-of-thumb statistics say about my performance?


The above list is ordered from most to least useful. Eliminating bottlenecks is job #1, analyzing and massaging workloads is #2, and ratios come last in importance, although they aren't completely without merit. To perform each style of analysis, you have to first capture information and then evaluate it for the keys that will help you determine what's going on and what (if anything) needs to change to make your system run faster.

In MySQL 5.1, there are new performance objects that you can query – vs. issuing show commands – that yield data on overall MySQL performance metrics (which you can transform into ratios), as well as show you who's currently on your system and what they're running.

Starting with who's logged on, the new processlist table in the information_schema database lets you query data about who's connected and if they're currently running anything:

mysql> select *
-> from information_schema.processlist\G

*************************** 1. row ***************************
ID: 33
USER: root
HOST: 154.231.0.3:1833
DB: gim2
COMMAND: Sleep
TIME: 17
STATE:
INFO: NULL
*************************** 2. row ***************************
ID: 31
USER: root
HOST: 154.231.0.3:1751
DB: NULL
COMMAND: Query
TIME: 0
STATE: executing
INFO: select *
from information_schema.processlist
2 rows in set (0.00 sec)


Before MySQL 5.1, you had the show processlist command, but you couldn't filter it via SQL; now you can look at only active or idle processes and more.
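For instance, a filter like this (one possibility among many) shows only the connections that are actively running something, longest-running first:

mysql> select id, user, host, db, time, info
    -> from information_schema.processlist
    -> where command <> 'Sleep'
    -> order by time desc;

You can also query all the various MySQL global statistics to build a few ratios: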

mysql> select * from information_schema.global_status;

+------------------------+----------------+
| VARIABLE_NAME          | VARIABLE_VALUE |
+------------------------+----------------+
| ABORTED_CLIENTS        | 7              |
| ABORTED_CONNECTS       | 2              |
| BINLOG_CACHE_DISK_USE  | 0              |
| BINLOG_CACHE_USE       | 0              |
| BYTES_RECEIVED         | 14912630       |
| BYTES_SENT             | 569470         |
| COM_ADMIN_COMMANDS     | 58             |
| COM_ASSIGN_TO_KEYCACHE | 0              |
| COM_ALTER_DB           | 0              |
| COM_ALTER_DB_UPGRADE   | 0              |
| COM_ALTER_EVENT        | 0              |
| COM_ALTER_FUNCTION     | 0              |
| COM_ALTER_PROCEDURE    | 0              |
| COM_ALTER_SERVER       | 0              |
| COM_ALTER_TABLE        | 14             |
| COM_ALTER_TABLESPACE   | 0              |
| COM_ANALYZE            | 0              |
| COM_BACKUP_TABLE       | 0              |
| COM_BEGIN              | 0              |
| COM_BINLOG             | 0              |
. 
.
.
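
From those counters you can compute rule-of-thumb ratios. As one sketch, this derives the classic key cache miss rate (how often an index read had to go to disk) right from the table:

mysql> select a.variable_value / b.variable_value as key_read_miss_rate
    -> from information_schema.global_status a,
    -> information_schema.global_status b
    -> where a.variable_name = 'KEY_READS'
    -> and b.variable_name = 'KEY_READ_REQUESTS';

A rate that creeps up toward 0.01 or beyond is a common (if rough) signal that the key cache is undersized.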

These are good for a start, but then you'll need to go a little deeper and do some more work to ferret out problem SQL – let's look at that now.

Finding Bad SQL
MySQL supplies two logs that let you see the SQL code sent against a server: the general query log gets all SQL, while the slow query log only collects SQL with certain performance characteristics (e.g., SQL that runs longer than a specified amount of time). Prior to MySQL 5.1, these logs weren't that simple to use – you had to set an option in the MySQL configuration file, stop and restart the server, and, when you were finished collecting what you wanted, reverse the process.

Now in 5.1, two things have changed: (1) you can dynamically start and stop tracing SQL into both logs by issuing a command from a query prompt, and (2) you have the option of tracing to a file or a table, with file (the behavior of earlier MySQL versions) being the default. Being able to dynamically start/stop SQL tracing gives you much more control over how much data you collect, and being able to query the traced information gives you more flexibility in analyzing and working with what you collect.
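For example, assuming your 5.1 build includes the dynamic log variables (they arrived early in the 5.1 series) and you have the SUPER privilege, turning tracing on is just:

mysql> set global general_log = 'ON';
Query OK, 0 rows affected (0.00 sec)

mysql> set global slow_query_log = 'ON';
Query OK, 0 rows affected (0.00 sec)

Setting them back to 'OFF' stops the collection whenever you've captured enough.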

To trace issued SQL into tables vs. files, issue the following command:
mysql> set global log_output='table';
Query OK, 0 rows affected (0.00 sec)

Note that this tells MySQL to push all traced SQL into the appropriate tables (general_log and slow_log in the mysql database), which are based on the CSV (comma-separated values) engine. One nice aspect of the CSV tables is that you can take the underlying files and import them directly into Microsoft Excel or OpenOffice Calc to prepare reports and the like.
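With table logging in place, finding bad SQL becomes a querying exercise. As a sketch, something like this pulls the ten slowest statements captured so far:

mysql> select start_time, query_time, rows_examined, sql_text
    -> from mysql.slow_log
    -> order by query_time desc
    -> limit 10;

A similar query against mysql.general_log (keyed on event_time and argument) lets you review everything that was sent to the server.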
