
Using Partitioning and Event Scheduler to Prune Archive Tables

By: Gregory J. Haase

First in our series of Use Case reports on new MySQL 5.1 features, we have Greg Haase of Lotame describing his innovative use of partitioning. Usually, the creators of new applications are unaware of the various ways users may tweak their features to achieve surprising results. We in the community team are usually on the tweaking side, and we like to surprise developers with (positive) side effects of the existing features. This time, we were caught by surprise. Greg's usage of partitioning and events is really cool!

Using Partitioning and Event Scheduler to Prune Archive Tables

At Lotame, we have been using the MySQL 5.1 event scheduler and partitioning to regularly prune data stored in the archive engine since June 25, 2008. We're now on MySQL 5.1.24, and we started development using partitioning and events with MySQL 5.1.23.

Purpose of our application
Lotame's Crowd Control is a revolutionary technology that creates demographic and behavioral profiles, based on consumer engagement and interests within social and community networks. This behavioral targeting system is designed to help publishers reach and gain a better understanding of their audience, increasing the value of their ad inventory.

Our Environment
Crowd Control is built using Java, PHP, MySQL, and other technologies. We run CentOS 5.1 x86_64, primarily on Dell hardware, and use the MySQL-provided RPMs from the community downloads page. We also make extensive use of MySQL replication.

Some metrics about our application
Our application database exceeds 200GB, and we are currently averaging over 3,000 queries per second. We commonly see more than 4GB of binary logs in a single hour.

The reason we needed partitioning and event scheduling
Our application processes tens of millions of rows of raw data every hour. After we aggregate that raw data into a useful format, we move it into archive tables for storage. Even in compressed format, these tables grow by more than 1GB a day. Although we do need some current data around for troubleshooting and analysis, it isn't worth the cost of storage to keep the raw data around indefinitely. We decided to keep seven days' worth of data. Unfortunately for us, the archive storage engine does not allow deletes.

Prior to the release of 5.1, we implemented a stored procedure that creates a new table like the current one, does an atomic rename of the tables, copies 7 days of data from the old table to the new via INSERT...SELECT, and then drops the old table (see rotate archive table). We wrote a Perl script to call this procedure and scheduled it to run once a week via a cron job.
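
The linked rotate archive table procedure isn't reproduced here, but the pattern looks roughly like the sketch below (the raw_events table and event_date column are illustrative names, not our actual schema):

-- Build an empty copy of the current table.
CREATE TABLE raw_events_new LIKE raw_events;

-- Atomic swap: the application always sees a table named raw_events.
RENAME TABLE raw_events TO raw_events_old,
             raw_events_new TO raw_events;

-- Copy the last 7 days of data back into the fresh table.
INSERT INTO raw_events
SELECT * FROM raw_events_old
WHERE event_date >= CURRENT_DATE - INTERVAL 7 DAY;

-- Discard everything older.
DROP TABLE raw_events_old;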

This implementation presented several challenges:
1. Copying a week's worth of data (roughly 7GB) in archive tables was both CPU- and I/O-intensive
2. It took a long time to complete
3. Replication of the rotate table commands caused a significant lag in replication
4. Data volume fluctuated considerably: since we rotated only once a week, the archive tables held between seven and fourteen days of data.
5. Multiple technologies were required, creating many moving parts (Perl, cron).

Using partitioning and event scheduling in 5.1, we were able to address all of these issues. We partitioned each of the archive tables by day according to a date string, naming each partition pYYYYMMDD (e.g. p20080717) and including one additional partition, pEOW, for VALUES LESS THAN MAXVALUE. We created a procedure that drops partitions older than 7 days and reorganizes pEOW into partitions for the coming days (see rotate archive partitions). Finally, we created an event and scheduled it to run once a day (same script).
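
A rough sketch of the scheme follows; the table, column, and procedure names are illustrative assumptions, the example partitions on a DATE column via TO_DAYS, and real dates stand in for the pYYYYMMDD placeholders:

-- Illustrative partitioned archive table (schema and names are assumptions).
CREATE TABLE raw_events_archive (
    event_date DATE NOT NULL,
    user_id    BIGINT NOT NULL,
    payload    VARCHAR(255)
) ENGINE=ARCHIVE
PARTITION BY RANGE (TO_DAYS(event_date)) (
    PARTITION p20080717 VALUES LESS THAN (TO_DAYS('2008-07-18')),
    PARTITION p20080718 VALUES LESS THAN (TO_DAYS('2008-07-19')),
    -- ... one partition per retained day ...
    PARTITION pEOW VALUES LESS THAN MAXVALUE
);

-- Inside the daily procedure: drop the oldest day...
ALTER TABLE raw_events_archive DROP PARTITION p20080717;

-- ...and split pEOW so a named partition exists for the next day.
ALTER TABLE raw_events_archive
    REORGANIZE PARTITION pEOW INTO (
        PARTITION p20080725 VALUES LESS THAN (TO_DAYS('2008-07-26')),
        PARTITION pEOW VALUES LESS THAN MAXVALUE
    );

-- Schedule the procedure once a day (requires event_scheduler = ON).
CREATE EVENT rotate_archive_partitions_evt
    ON SCHEDULE EVERY 1 DAY
    DO CALL rotate_archive_partitions();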

With this approach, all 5 of our challenges were met:
1. No longer copying data between tables relieves CPU and I/O pressure
2. Dropping existing partitions and reorganizing empty partitions is extremely fast
3. Replication no longer lags
4. Data volume in the archive tables fluctuates by one day instead of seven.
5. Everything needed to run is found within the database itself

Usability
The implementation of partitioning is fairly straightforward and usable enough. In typical MySQL fashion, it's almost easier to implement than replication. Event scheduling does present some issues. Logging general event information into the error log is problematic; we believe the error log should be reserved for error messages and startup/shutdown information only. Another issue is not being able to effectively trap SQL exceptions inside of stored procedures or events.

In our Perl script, we can capture a generic SQL exception and send it to a separate log or even email. With our event, we'd have to write a separate handler for every known exception. We already had a log scraper running on the MySQL error log, which we modified to ignore non-errors from the event scheduler.
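
For illustration, the best we could do inside a 5.1 stored program is a blanket handler like the hypothetical sketch below (event_error_log and the procedure names are assumptions); the handler cannot report which error actually fired, so distinguishing errors would mean declaring one handler per error code:

-- Hypothetical sketch: a generic handler logs that something failed,
-- but cannot say what; event_error_log is an assumed logging table.
DELIMITER //
CREATE PROCEDURE rotate_with_logging()
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
        INSERT INTO event_error_log (logged_at, msg)
        VALUES (NOW(), 'rotate_archive_partitions failed');

    CALL rotate_archive_partitions();
END //
DELIMITER ;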

Documentation
During the development phase, we relied almost exclusively on chapters 21 (Partitioning), 25 (Event Scheduler), and 27 (INFORMATION_SCHEMA tables) of the MySQL 5.1 Reference Manual online. The documentation is quite clear and the examples are thorough. It did help that we were already familiar with the general principle of partitioning from working in other databases.
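
As an example of the kind of INFORMATION_SCHEMA queries that chapter covers, checks along these lines (using the illustrative names from the sketches above and an assumed schema name) confirm the partitions and the scheduled event from within the database:

-- List the current partitions and their row counts.
SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = 'our_schema'
  AND TABLE_NAME = 'raw_events_archive';

-- Confirm the event is enabled and when it last ran.
SELECT EVENT_NAME, STATUS, LAST_EXECUTED
FROM INFORMATION_SCHEMA.EVENTS
WHERE EVENT_SCHEMA = 'our_schema';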
