
MySQL Developer, meet “Quan” (aka the MySQL Query Analyzer)

By: Rob Young

The MySQL Query Analyzer ("Quan") is designed to save development time on query coding and tuning by expanding on all of the good things found in the Slow Query Log, SHOW PROCESSLIST, EXPLAIN plans, and the 5.1 profiler, without depending on any one of those pieces. To this end, we integrated Quan into the MySQL Enterprise Monitor so developers can monitor security, performance, availability AND all of their queries across all of their MySQL servers from a single, consolidated view.

During my years as an application developer I was most in my element when working on projects that involved building applications from the ground up. Somehow, taking user and customer pain points and requirements and turning them into a working solution was much more satisfying than upgrading an existing application, fixing bugs, or tracking down and tuning a slowdown, especially when those tasks involved code I had not created. Along the way, I developed and supported applications that ran on SQL Server, Oracle, DB2 (z390!) and MySQL, each of which posed some interesting twists when it came to proactively tuning code as it was being developed and tested. Early versions of Visual Studio and SQL Server 7.0 provided pretty good application and stored procedure debuggers, and the Microsoft Enterprise Manager provided a decent query profiler, although it only worked for a single server and was a little difficult to use if your queries spanned multiple data sources. Oracle 8i and later had the DBMS_PROFILER package and API, which allow for the profiling of embedded PL/SQL statements in things like stored procedures and packages and are good for finding bottlenecks in shared code objects. DB2, specifically on z390, is probably the most cryptic in providing query and SQL code tuning help and diagnostics, especially when coding in, gulp, COBOL and CICS (can you say “READY TRACE”?).

When I first started working with MySQL back in 2001, I appreciated its simplicity, data handling capabilities and overall performance. At the time I was the Product Manager at Embarcadero Technologies over DBArtisan and Rapid SQL, writing a spec that would add support for MySQL 4.x to both products. After working on similar specs for Oracle-, SQL Server-, DB2- and Sybase-specific features, I could not believe how easy it was to use SHOW commands and simple queries to return MySQL performance-related variables and metrics. Everything was great! Then I tested some SQL code I had developed that would be passed on to my Engineering team for inclusion in the next version of the products. It was then I realized that my options for collecting MySQL query diagnostics for a running application were somewhat limited: they involved restarting my servers to enable this thing called the Slow Query Log, and manually running SHOW PROCESSLIST from the MySQL command line to determine which queries were running and how long each was taking. Further, I then had to run the queries again to generate an EXPLAIN plan to understand which access path the MySQL optimizer had chosen for a specific execution. I thought: if it was this painful to write good queries for an app I was creating from scratch, how difficult would it be if I were under pressure, trying to find “good queries gone bad” in an app I had inherited or was babysitting in production support?
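Concretely, that manual loop looked something like the following (the table and query here are hypothetical stand-ins, not from the original spec):

  -- See what is running right now and how long each statement has been going
  SHOW FULL PROCESSLIST;

  -- Then re-run a suspect query by hand to see which access path the optimizer chose
  EXPLAIN SELECT p.product_id, p.name
  FROM products p
  WHERE p.category_id = 42;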

As Product Managers with MySQL, Robin Schumacher and I have heard loud and clear from customers and users that they love MySQL’s ease of use and reliability, but see it as a “black box”, specifically when it comes to identifying problematic SQL code. Polling of our Community and Enterprise users also reveals that finding problem SQL is commonly the #1 challenge they cite when deploying and scaling apps on MySQL. Unfortunately, the MySQL server is not well-instrumented enough to supply the query metrics needed to build a compelling tool or solution that helps with this. With this in mind, a cross-functional technical team from MySQL Engineering and Product Management met in Amsterdam in early 2007 to explore how to provide low-level query activity stats outside of the MySQL server release schedule. Robin presented the solutions other DBMS vendors provide, with an emphasis on the query analytics available in Oracle 10g Enterprise Manager. He and I defined the “what” behind the problem our combined team needed to solve: provide DBAs and developers with a quick, easy way to find their most expensive, resource-hogging queries while working within the constraints of the MySQL server. We had built similar solutions around Oracle, SQL Server, DB2 and Sybase while at Embarcadero Technologies, but MySQL presented a bigger challenge because it provided no true data dictionary or performance interface from which to gather the needed statistics. We discussed things like pulling the relevant MySQL logs and SHOW PROCESSLIST result sets into the Enterprise Monitor for formatting and correlation analysis. That would have been OK, but not much more beneficial than what many MySQL users are already doing on their own.

I will never forget the moment the conversation shifted from Product Management’s “what” to Engineering’s “how” (ideally, this is how the product development life cycle truly begins, but I digress…). Jan Kneschke of the MySQL Enterprise Tools engineering team, and the author of MySQL Proxy, said something like, “We could use the Proxy to collect and aggregate queries as they run and send them back to the Monitor for analysis…” In that moment, the MySQL Query Analyzer, or “Quan”, was born.

Traditional Query Monitoring

Before jumping into details around Quan, it may first help to cover some of the customer feedback and use cases we have gotten on the challenges of monitoring queries as apps are implemented on a scale-out/up architecture with MySQL. The most popular and commonly used feature for developing and scaling apps on MySQL is replication, which ships with every version of the MySQL server. It is very easy to set up and use, very reliable, offers high transfer speeds, and is used primarily to “scale out” an application by using different physical servers to make up one logical, horizontally cloned database. In a scale-out deployment, apps are developed and deployed with separate connection handlers for read/write splitting, allowing them to direct read activity to dedicated read slaves and freeing up the master to service only writes.
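For context, pointing a read slave at its master takes only a few statements; a minimal sketch using the 4.x/5.0-era syntax, where the host, credentials, and binlog coordinates are all placeholders:

  -- On each read slave
  CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_pass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
  START SLAVE;

  -- Confirm the slave is connected and applying the master's writes
  SHOW SLAVE STATUS\G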

This common deployment adds complexity to the tuning of the underlying queries during development and QA, mainly because (in a load-balanced environment) queries execute across multiple data sources and may perform differently depending on the load or configuration of the target server. It is also difficult to determine the total expense of a specific query across a replicated database, because query stats (via the General Log or an enabled Slow Query Log) are kept locally on each server and are not readily available for aggregation.

Enabling the Slow Query Log

A common method for monitoring queries in the above deployment is to enable the Slow Query Log on each of the read slaves to capture queries as they run. The Slow Query Log is enabled by starting the monitored MySQL server (a read slave in this case) with the --log-slow-queries[=file_name] option. When enabled, the Slow Query Log lists queries that took longer than long_query_time seconds (a user-defined value, typically set between 1 and 10). Entries in the Slow Query Log are formatted to include the client IP address, the date/time the query started, the execution time, rows sent to the client, rows read, and finally the query text. A typical entry (with illustrative values) looks like this:
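  # Time: 081109 11:11:27
  # User@Host: app_user[app_user] @ webhost1 [10.0.0.12]
  # Query_time: 12  Lock_time: 0  Rows_sent: 4  Rows_examined: 1500000
  SELECT o.order_id, c.name
  FROM orders o JOIN customers c ON o.customer_id = c.id
  WHERE o.status = 'OPEN';

For reference, the same logging can also be switched on permanently in my.cnf rather than on the command line; a minimal sketch, with an illustrative log path and threshold:

  [mysqld]
  log-slow-queries = /var/log/mysql/slow-query.log
  long_query_time  = 2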
