ABSTRACT

This presentation assumes that the System Administrator has installed DB2 and provided an adequate subsystem with respect to basic performance parameters. Thus, the starting point is the DB2 Performance Monitor’s Systems Statistics and Application Accounting Reports. These are used to identify peak periods and the plans that used excessive system resources during these periods. A standardized approach is used for analyzing the class one elapsed time for the plans. This approach then drills down into the use of application logic, SQL options and database design alternatives that contribute most to class one time.

The presentation also attempts to answer the question of when to approach OS/390 systems programmers, the disk subsystem configurators, the network team, the database administrator or application owners for help in resolving performance issues.

The methods illustrated and opinions expressed in this paper are those of the author based on his experience with tuning database systems. The information contained in this document has not been submitted to any formal review and is distributed on an "AS IS" basis without any warranty either expressed or implied. The use of this information or the implementation of any of these techniques or suggestions is a DB2 installation's responsibility and depends on that installation's ability to evaluate and integrate them into its operational environment. While the material may have been reviewed for accuracy in a specific situation, there is no guarantee that the same or similar results can or will be obtained elsewhere. Installations attempting to adapt this material to their own environments do so at their own risk.

A few words about the general philosophy of tuning are appropriate. You can spend the rest of your career tuning a single DB2 system if you let yourself get bogged down in minutiae. Don’t waste valuable time worrying about everything you’ve ever heard about DB2 performance options. Instead, remember a variation on the title of one of the most popular recent books--“Don’t Sweat the Small Stuff, and It’s All Small Stuff”. When tuning, most details are small stuff, and chasing all of them can kill you if you strive for the 100% perfect system. Perfection is expensive and elusive. Therefore, adopt a goal of excellence.

Also remember the 80-20 principle. From an elapsed time perspective, 20% of your application programs typically consume 80% or more of your CPU, storage and disk accessing load. Within those programs, 20% of the SQL statements consume 80% or more of the critical resources, which means that about 4% of the SQL statements in your shop are worth an effort toward excellence. The other 96% should be written in a clean, easy-to-read style with an eye toward easy understanding in case modifications are required in the future. A clean writing style ranks second only to producing the correct results.

Actually, there is some “big stuff” in any DB2 subsystem and good tuners succeed by first spotting it and then eliminating or reducing it. Effective tuners don’t waste precious time with out-of-tune elements that don’t make a substantial contribution to costly resource and time consumption.

Last, do not believe in black magic. Application threads only do three things in a DB2 system: (1) use a CPU to execute logic, (2) wait for a resource they can’t currently have (for example, a CPU or a locked database element) or (3) wait for an external device to complete an input/output operation. Any time spent outside these areas is delving into black magic and there is no such thing!

A previous presentation entitled “Tuning DB2 From The Bottom Up” approached tuning from the perspective of the DB2 system administrator. At the very least, a system administrator installs DB2 and specifies the parameters for storage pools and for governing the execution time characteristics of the DB2 subsystem.

“From the top down” begins with the applications--including the logic of the programs, their use of SQL and the physical implementation of the database. Application designers and administrators may participate actively or in an advisory capacity along with DB2 system administrators, MVS systems programmers, VTAM and network administrators, CICS and IMS system administrators as well as with configuration specialists. Every aspect of the use of hardware, network and operating system can contribute to performance problems. All-in-all, application design can be a formidable task. It becomes even more formidable when performance doesn’t match expectations because many people have a tendency to point to DB2 as the main cause of their problems. This presentation is designed to help application administrators identify those areas where tuning, inside or outside DB2, has the highest probability of true benefit.

Following delivery of a functional system, the next challenge is to provide and maintain satisfactory performance. Performance almost always follows function when using the design and implementation practices in use today. This is also true when the design step was skipped because a prewritten packaged application was used. As always, the sooner a performance problem is identified, the less costly it is to correct. Well designed, “pretuned” systems are rarer than they should be.

Some programs in need of performance improvements are quickly spotted during the early stages of testing. An example is a batch program performing inserts to a table with four indexes. Its first trial run with live data was shut off after ten hours of highly I/O-bound processing. Based on the work completed, it was forecast that nearly two days would be required for the entire workload. This revelation prompted reconsideration of the value of each index. Table partitioning and multiple parallel inserting threads were evaluated and implemented as the best approach for accomplishing the work within a reasonable time.
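
As a sketch of that approach (all object names here are hypothetical, and the syntax follows the index-controlled partitioning available through DB2 Version 5), the table space is split into partitions and the clustering index defines the key ranges, so that one insert job can be run against each partition concurrently:

    -- Hypothetical database, table space and table for the insert-heavy workload
    CREATE TABLESPACE CLAIMTS IN CLAIMDB
           NUMPARTS 4;

    CREATE TABLE CLAIM
           (CLAIM_NO    INTEGER      NOT NULL,
            CLAIM_DATE  DATE         NOT NULL,
            CLAIM_TEXT  VARCHAR(254) NOT NULL)
           IN CLAIMDB.CLAIMTS;

    -- The partitioning (clustering) index sets the partition boundaries; each
    -- batch insert job is then driven with only the keys belonging to "its"
    -- partition, so four inserting threads can run in parallel.
    CREATE INDEX CLAIMIX ON CLAIM (CLAIM_NO)
           CLUSTER (PART 1 VALUES(2500000),
                    PART 2 VALUES(5000000),
                    PART 3 VALUES(7500000),
                    PART 4 VALUES(9999999));

Each index that survives the reconsideration still adds its own page updates to every insert, so the fewer the indexes, the shorter the run.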

Production systems become “out of tune” because of changes in the workload and peak periods of the day, week, month or year. Therefore, it is important to track workloads and identify trends that strain resources and affect performance.

Programs that don’t run for long periods of time, such as on-line transactions, are often overlooked. Their response time may be perceived as relatively short; the workstation user considers two to three seconds fairly quick. Yet, two to three seconds is an enormous amount of time to a modern CPU. The response time of other transactions may vary widely, or a transaction may be executed such a large number of times that its total resource utilization is worth examining. How should an installation prepare for top-down tuning?

The best method for providing a foundation for tuning is to add a fundamental tracking system to the installation by using DB2 instrumentation and a reporting package such as the DB2 Performance Monitor (DB2 PM). Whether approaching tuning from the ground-up or the top-down, the DB2 System Statistics and Application Accounting reports are as meaningful to the DB2 tuning team as blood pressure, heart rate and body temperature are to a physician.

Installation panel DSNTIPN is provided to establish the basic tracing specifications for a DB2 subsystem. It’s strongly recommended that both the SMF accounting and statistics trace types be started automatically within DB2’s start-up procedure. Further, classes 1, 3 and 4 should be specified for the statistics type and a minimum of classes 1 and 3 for the accounting type. IMS shops should include class 2 accounting unless extremely limited in CPU resources. Class 2 is questionable in CICS shops because of the way that CICS handles multi-tasking relative to the DB2 attachment.
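
If the traces were not started automatically in the start-up procedure, the same coverage can be obtained with the commands below (shown as a sketch; choose the classes according to the guidelines above and add accounting class 2 where appropriate):

    -START TRACE(STAT)  CLASS(1,3,4) DEST(SMF)
    -START TRACE(ACCTG) CLASS(1,3)   DEST(SMF)

Starting them through the installation parameters remains preferable, since a trace started by command does not survive the next recycle of DB2.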

Thirty minutes is a reasonable cycle for the statistics reports. Accounting records are written for each execution of a DB2 application plan, utility or command. Higher accounting classes (7 and 8) include package-level accounting and latch wait times.

This chart shows the instrumentation facility component identifiers (IFCIDs) that are included as a result of this specification. For a complete and current list of the trace classes and IFCIDs, check the DB2 sample library DSNxxx.DSNSAMP member DSNWMSGS where xxx is the release number of your DB2 system. This extensive list includes all of the fields within the individual IFCID records.

To verify that these trace types and classes are started, issue the DB2 command “-DISPLAY TRACE”; you should receive output as shown.

The next step can be to produce the standard DB2 Performance Monitor reports. Another approach is to tailor a data reduction system to fit your needs. As an example, this is the flow established at the Louisiana Department of Public Safety in Baton Rouge. Mr. John C. Berry refers to himself as a “busy” database administrator who has little time to spend examining details that may or may not aid in his performance objectives. Along with his practical approach, Mr. Berry is a better-than-average designer and still knows how to write COBOL code and JCL. Mr. Berry’s system began with an examination of the DB2 statistics reports generated by his installation. He studied both the “short” and “long” formats of these reports and concluded that neither was short enough. He identified a total of 110 fields (out of more than 400) that he considered critical. He used the DB2 Performance Monitor option to write the data in the “FILE” format so that he could collect it for the DB2 loader to load into tracking tables. That step was easy.

Preparing an extensive collection of DSNUTILB loader control cards for the next step was much more time consuming. IBM does provide some help with basic DDL and LOAD DATA definitions in the DSNxxx.DSNSAMP member DSNWMSGS.
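
A minimal sketch of one such control statement is shown below; the table, columns and field positions are placeholders, and the real layouts must be taken from the FILE-format record descriptions in DSNWMSGS:

    LOAD DATA INDDN SYSREC LOG NO RESUME YES
      INTO TABLE STATS.GENERAL_STATS
           (INTERVAL_TS  POSITION(7)  TIMESTAMP EXTERNAL,
            GETPAGES     POSITION(33) INTEGER,
            SYNC_READS   POSITION(37) INTEGER)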

The result of Mr. Berry’s efforts is a table containing general statistics and another containing buffer pool statistics. This permits individual pools within a multi-pool configuration to be tracked and analyzed. Mr. Berry then wrote his own tailored COBOL report program that summarizes his 110 key fields and ratios for the 48 half-hour statistical reports that were produced during the day. This is Mr. Berry’s base for tuning and trend analysis.

For example, it is easy to extract data from the statistics reports and chart the logical workload performed by the DB2 subsystem. An excellent measurement of the overall workload is the GETPAGE rate. This is the workload in terms of pages accessed in support of the applications. This chart from another installation shows the profile of one week’s worth of work during all three shifts, and it provokes thought on several issues.
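
Assuming the interval statistics have been loaded into a tracking table like the one sketched earlier (the names remain hypothetical, and the grouping expressions assume a current level of DB2 SQL), the GETPAGE profile for charting can be produced with a simple query:

    -- GETPAGE totals by date and hour, ready to feed a charting tool
    SELECT DATE(INTERVAL_TS) AS STAT_DATE,
           HOUR(INTERVAL_TS) AS STAT_HOUR,
           SUM(GETPAGES)     AS GETPAGE_COUNT
      FROM STATS.GENERAL_STATS
     GROUP BY DATE(INTERVAL_TS), HOUR(INTERVAL_TS)
     ORDER BY 1, 2;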

Examining the prime shift of 8 a.m. to 4 p.m., it is clear that the week-end is light, Monday is slow and the peak day is Friday. Take an in-depth look at the make-up of prime shift on Friday and don’t waste time studying prime shift on the other days of the week.

The non-prime shift periods and the evening shift on Sunday, coupled with the midnight shift on Monday, show extremely heavy processing. Questions: Is this the result of normal week-end processing? Or, is this the result of week-end processing coupled with month-end operations? Is the work getting done on time or is there a growing risk that it will start to affect the normal shift on Monday mornings? Are the application owners aware of any especially critical batch programs being run during this period? If so, might these programs benefit from being “tightened-up?”

Therefore, there are two targets for more in-depth analysis: prime shift on Friday and the period from 4 p.m. on Sunday through 8 a.m. on Monday.

Whether looking deeper at on-line transaction processing or batch, the basic source of information is the DB2 Application Accounting Reports. For on-line work, the IMS Monitor or the CICS Monitoring Facility contributes useful measurements.

Once focused on specific programs and their use of SQL, the next information source is the output from EXPLAIN and DB2 catalog statistics. If your installation has migrated to DB2 Version 5 or beyond, your performance specialists should install the newly offered Visual EXPLAIN and familiarize themselves with its very useful features.
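
As a reminder of the mechanics (the statement, query number and table names below are only placeholders), a candidate statement can be explained and its access path read straight from PLAN_TABLE:

    -- Explain one candidate statement under a chosen query number
    EXPLAIN PLAN SET QUERYNO = 101 FOR
      SELECT CLAIM_NO, CLAIM_DATE
        FROM CLAIM
       WHERE CLAIM_NO BETWEEN 2500001 AND 5000000;

    -- Was a table space scan chosen (ACCESSTYPE = 'R'), or matching index
    -- access (ACCESSTYPE = 'I' with MATCHCOLS > 0)?
    SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
           ACCESSTYPE, MATCHCOLS, ACCESSNAME, INDEXONLY, PREFETCH
      FROM PLAN_TABLE
     WHERE QUERYNO = 101
     ORDER BY QBLOCKNO, PLANNO;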

Systems statistics are of most help to the DB2 system administrator. The statistics should be made available to administrators because they give insight into how well DB2 components such as the buffer pools, EDM pool, lock manager and so forth are supporting the applications.

The standard DB2 PM Accounting Summary Report provides insight into the application mix during a user-specified time period. Based on our example, summary reports pertaining to Friday, the evenings of both weekends and the midnight shifts should be produced. The new User-Tailored Reporting (UTR) feature in DB2 PM Version 5 is useful in tailoring a report that will help derive the key values in addition to sorting the data into a more meaningful sequence. This sample report has multiplied the number of times a plan was executed by (1) the average class one elapsed time, (2) the sum of the average TCB and SRB CPU time used, (3) the average class two elapsed time and (4) the sum of the average TCB and SRB time used within DB2. The results are sorted into descending sequence by total class one time. The objective is to identify those plans that consume the most total elapsed time and, therefore, should offer the most payback in return for tuning efforts.
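
If the accounting averages are also kept in a table (the PERF.PLAN_ACCT table and its columns below are assumptions, with one row per plan holding the execution count and the average times), the same derivation can be written as a query sorted by total class one time:

    -- Rank plans by total class 1 elapsed time; the class 2 columns show how
    -- much of that total is spent inside DB2.
    SELECT PLAN_NAME,
           EXEC_COUNT,
           EXEC_COUNT * AVG_CL1_ELAPSED             AS TOT_CL1_ELAPSED,
           EXEC_COUNT * (AVG_CL1_TCB + AVG_CL1_SRB) AS TOT_CL1_CPU,
           EXEC_COUNT * AVG_CL2_ELAPSED             AS TOT_CL2_ELAPSED,
           EXEC_COUNT * (AVG_CL2_TCB + AVG_CL2_SRB) AS TOT_CL2_CPU
      FROM PERF.PLAN_ACCT
     ORDER BY 3 DESC;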

As usual, this report shows an interesting mix. First, the list is topped by a batch program that is very intensive in its use of DB2 resources, as shown by the ratio of the class two to the class one time. It is also very I/O-bound based on the ratio of CPU time to elapsed time.

Next, there are two on-line transactions with vastly different profiles. P18DD1A1 uses DB2 extensively while P18DD300 barely touches DB2.

By sorting the statistics in descending sequence by total class one time, the tuning effort can be focused on those plans that contribute most to the competition for system resources. Plotting the contribution of the plans to the total elapsed time produces a very common curve--a curve that illustrates the application of the 80-20 rule.

In this specific instance, 46 plans (13.7% of all plans captured in the accounting summary) account for 80% of the elapsed time. The fertile area for in-depth analysis is now localized to a relatively small percentage of all of the plans. When tuning from the top-down, the next documents are the DB2 PM accounting reports for these 46 plans.

The long form of the DB2 PM Accounting Report lists 12 key measurements relating to the use of time while executing the plans of interest. Remember, there is no black magic. An application spends its time in one of three ways: [a] executing logic, [b] waiting for a resource it cannot currently have or [c] waiting for an external device to complete an I/O operation. This example shows a batch program that ran for two hours and ten minutes. The 12 key measurements are as follows:

A. Total elapsed time from thread creation (or sign-on) to termination (class one time, or the sum of [a], [b] and [c])

B. Total CPU time for the task (TCB) from creation to termination (contributor to [a])

C. Total CPU time for special subtasks (SRB) from creation to termination (contributor to [a])

D. That portion of A that is spent in DB2

E. That portion of B that is spent using DB2

F. That portion of C that is spent using DB2

G. Total elapsed time spent queued waiting for locks and latches (contributor to [b])

H. Total elapsed time spent waiting for synchronous disk reads to complete (contributor to [c])

I. Total elapsed time spent waiting for other read operations to complete (contributor to [c])

J. Total elapsed time spent waiting for disk write operations to complete (contributor to [c])

K. Total elapsed time spent waiting for serialization events other than locking (contributor to [b])

L. Total elapsed time spent waiting because of serialization of log functions (contributor to [b])

For the resource-intensive plans, record the twelve key measurements in a simple spreadsheet to help determine where the elapsed time is spent during plan execution. The “Factor” column in this illustration shows how to derive the values that reflect whether the application or DB2 is the majority user of CPU cycles and elapsed time. You can also determine where delays occur: contention for CPU cycles in those instances when the thread is not dispatched by MVS, as well as I/O and locking waits.
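
A sketch of those derivations is given below, written as a query over a hypothetical ACCT_DETAIL table whose columns A through L carry the twelve measurements listed above for one plan execution; the arithmetic, not the table, is the point:

    -- Where did the class 1 elapsed time go for this execution of P18DD1A1?
    SELECT PLAN_NAME,
           (B + C) - (E + F)  AS CPU_OUTSIDE_DB2,      -- application logic [a]
           E + F              AS CPU_INSIDE_DB2,       -- DB2 logic [a]
           A - D              AS ELAPSED_OUTSIDE_DB2,
           H + I + J          AS DB2_IO_WAIT,          -- contributor [c]
           G + K + L          AS DB2_OTHER_WAIT,       -- contributor [b]
           D - (E + F) - (G + H + I + J + K + L)
                              AS NOT_ACCOUNTED         -- thread not dispatched
      FROM PERF.ACCT_DETAIL
     WHERE PLAN_NAME = 'P18DD1A1';

A large NOT_ACCOUNTED value points at competition for CPU cycles or paging rather than at DB2 itself.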


Pie charts are a great way to present DB2 performance information to other specialists and to provoke discussion for improvement. After charting, the basic approach is to look for large numbers, unusual ratios and any “dead time” when the application is not dispatched. These issues represent the “big stuff”--not insignificant trivia. This pie chart makes it obvious that application logic, lock/latch waits and unoverlapped DB2 I/O wait are not critical factors for tuning this plan. They are, indeed, “small stuff” with respect to this program.