Tuning Up for OLTP and Data Warehousing - Part 3

By: Scott Hayes and Philip Gunning
To read more DBA articles, visit http://dba.fyicenter.com/article/

SQL Performance

We mentioned that 20 to 25 percent of performance tuning improvements result from configuring the database manager, database, and tablespace container and parameter settings. But 75 percent or more of performance improvements come from properly tuning SQL.

Tuning is a bit of a misnomer because SQL tuning actually involves physical design changes such as providing optimum indexes; using indexes and clustering to avoid or reduce sort costs; implementing Automatic Summary Tables (ASTs); deploying replicated tables; and properly using generated columns, stored procedures, triggers, and more. (In fact, many vendor OLTP applications prohibit modifications to the SQL and primary key/unique indexes. So making physical design changes by adding, changing, or dropping non-unique indexes may be the only option for such handcuffed DBAs.) Here’s how to get the most out of these changes.

Clustering
You can often get huge performance improvements by defining a handful of clustering indexes or changing existing indexes to add clustering, so that sort time and CPU consumption is reduced. We often tell people, “Give us four hours with your OLTP database, and we’ll cut CPU consumption in half.” Cutting CPU costs in half (or more) means big savings when you consider the price of CPUs and the software license costs that are tied to the number of CPUs. At one client, in just two hours, correcting index defects and setting proper clustering sequences avoided $2 million of hardware upgrades.

DB2 can maintain data in its tables sorted in a particular sequence. The sorted order is controlled by the clustered index, of which there can be only one. When an SQL statement requires data to be retrieved in ordered sequence, DB2 can avoid performing a sort by using the clustered index to retrieve the data in presorted sequence. Sorts, even of just a few rows, can be very expensive in terms of overall CPU cost. At one client site, 70 percent of total CPU time on an eight-CPU machine was consumed by a six-row sort from a table with only 350 total rows. After we created a clustered index and executed the REORG and RUNSTATS utilities, the relative cost of this SQL statement plummeted to less than 1 percent of total CPU time, freeing up approximately six of eight CPUs and providing faster transaction rates and growth capacity. Listing 1 shows the SQL statement and full solution.
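Listing 1 isn't reproduced in this excerpt, but the general pattern of the fix described above can be sketched as follows. The table and column names here are hypothetical, stand-ins for the client's actual objects:

```sql
-- Hypothetical table whose six-row sorted result was burning CPU.
-- Create the index with the CLUSTER attribute (one per table).
CREATE INDEX PROD.STATUS_CL_IX
    ON PROD.ORDER_STATUS (STATUS_CD, STATUS_DT)
    CLUSTER;

-- Physically re-sequence the table rows along the clustering index.
REORG TABLE PROD.ORDER_STATUS INDEX PROD.STATUS_CL_IX;

-- Refresh statistics so the optimizer recognizes the clustering.
RUNSTATS ON TABLE PROD.ORDER_STATUS
    WITH DISTRIBUTION AND DETAILED INDEXES ALL;

-- The problem query can now read rows in presorted order,
-- letting DB2 skip the sort step entirely:
SELECT STATUS_CD, STATUS_DT
    FROM PROD.ORDER_STATUS
    WHERE STATUS_CD = 'OPEN'
    ORDER BY STATUS_CD, STATUS_DT;
```

The REORG step matters: defining the clustering index only tells DB2 the desired order; the table's existing rows stay where they are until a reorganization physically resequences them.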

Clustering is also important for DW databases because optimum clustering sequences will improve the performance of large sorts (or let you avoid them) and the efficiency of joins, groupings, and aggregations. How do you determine which clustering indexes to employ? An application might execute millions of transactions in a day, but there are a finite number of unique SQL statements that make up these transactions. (A robust, feature-rich application may run 1,000 to 2,000 different SQL statements during the day.) The trick, whether in an OLTP or DW database, is to identify the SQL statements with the highest sort costs relative to all other SQL statements, then define clustering indexes to eliminate or reduce these costs. For any given table, there could be a handful to a few dozen different SQL statements that access its data. Of these statements, determine the one that has the highest percentage of total sort time cost against that table, and then define a clustering index that will eliminate or reduce the sort costs. Every table should have a clustering index (ideally created as part of the physical design process).

You can find the high-sort-cost statements the easy way or the hard way. The hard way involves using the DB2 event formatter (db2evmon) to format SQL event records. To ensure an optimum choice of clustering index, analyze event records from several hours of application execution. During analysis, you'll need to identify unique statements independent of host variable values, then track execution counts and the sums of CPU, sort, and elapsed times for each statement individually and for the database as a whole. Divide the sums by the execution counts to arrive at average CPU, sort, and elapsed time costs, and compare the sums to the database totals to determine each statement's relative cost percentage for CPU, sort, and elapsed time. It's a painstaking, laborious process. Listing 2 shows the procedure for using the IBM-supplied db2evmon program.
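Listing 2 is not shown in this excerpt; a minimal version of the procedure might look like the following sketch. The event monitor name SQLMON and the output path /tmp/sqlmon are assumptions for illustration:

```sql
-- Create a statement event monitor that writes binary event
-- records to flat files, then turn it on.
CREATE EVENT MONITOR SQLMON FOR STATEMENTS
    WRITE TO FILE '/tmp/sqlmon';
SET EVENT MONITOR SQLMON STATE 1;

-- ...let the application workload run for several hours...

-- Turn the monitor off before formatting its output.
SET EVENT MONITOR SQLMON STATE 0;

-- Then, from the operating system command line, format the
-- binary event files into readable text for analysis:
--   db2evmon -path /tmp/sqlmon > sqlmon_report.txt
```

The resulting report contains one entry per statement execution, which is exactly why the manual approach is so laborious: aggregating thousands of entries into per-statement totals is left to you.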

The easy way is to buy a SQL analysis tool from a third-party vendor. Such tools can accurately find unique SQL statements with high sort, CPU, or elapsed time costs in a matter of minutes. The equivalent manual process could take days or weeks. And, after defining just a handful of the optimum clustering indexes, performance and capacity should improve so dramatically that the return on investment for such tools is virtually instantaneous.

Unique index with INCLUDE
Any given table can have only one clustered index and one primary key, but any number of unique and non-unique indexes can be created on a table in support of high-speed, highly efficient data retrieval for SQL. DBAs often overlook the ability to achieve index-only access with the INCLUDE feature of unique indexes. With INCLUDE, DB2 can store data columns in the index structure without contaminating the business rules for the primary key. Because the data exists within the index, DB2 can avoid accessing data pages for the table, resulting in substantial CPU savings. To create a unique index with the INCLUDE clause, use this syntax:

CREATE UNIQUE INDEX PROD.UNIQ_IX ON PROD.TABLENAME (PKCOL1) INCLUDE (DATACOL1, DATACOL2)

A word of caution: Don’t add dozens and dozens of data columns to a unique index via INCLUDE. Limit additional columns to a handful of carefully chosen columns so indexes won’t become so large that the value of INCLUDE is mitigated by a higher cost of maintaining the index and processing more index pages. In a DW database, using INCLUDE with indexes on lookup tables can greatly improve the efficiency of nested loop joins.

Generated columns
Another often-overlooked performance technique, generated columns have data values that are automatically maintained by DB2. Most often, these values are used to generate sequence numbers for customer IDs, contact numbers, or other identifiers. They can also be used to maintain shadow columns to which certain functions have been applied, and this is where generated columns can really shine. Many applications use SQL that contains WHERE clauses similar to WHERE UCASE(LASTNAME) LIKE ‘HAY%’. Ordinarily, the UCASE function would prevent DB2 from using an index created on the LASTNAME column, resulting in an expensive scan instead of indexed retrieval of data. But if you maintain an uppercase copy of LASTNAME in a generated column and index that generated column, the DB2 optimizer can choose efficient indexed retrieval for the SQL query. Listing 3 shows an example of this generated columns technique.
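Listing 3 is not reproduced in this excerpt, but the technique can be sketched as follows. The table and column names are hypothetical:

```sql
-- Hypothetical customer table with a shadow column that DB2
-- automatically maintains as the uppercase form of LASTNAME.
CREATE TABLE PROD.CUSTOMER (
    CUST_ID    INTEGER     NOT NULL PRIMARY KEY,
    LASTNAME   VARCHAR(40) NOT NULL,
    LASTNAME_U VARCHAR(40) GENERATED ALWAYS AS (UCASE(LASTNAME))
);

-- Index the generated column so equality and LIKE predicates
-- on the uppercase value can use indexed retrieval.
CREATE INDEX PROD.CUST_LNU_IX ON PROD.CUSTOMER (LASTNAME_U);

-- The optimizer can satisfy this predicate via LASTNAME_U and
-- its index instead of scanning and applying UCASE to every row:
SELECT CUST_ID, LASTNAME
    FROM PROD.CUSTOMER
    WHERE UCASE(LASTNAME) LIKE 'HAY%';
```

The application SQL does not need to change: the shadow column exists purely so the optimizer has an indexed access path for the function-wrapped predicate.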
