1/31/2016

Performance tuning tools and methodology

Benchmark testing

Benchmark testing is a normal part of the application development life cycle. It is a team effort that involves both application developers and database administrators (DBAs).

Benchmark testing is performed against a system to determine current performance and can be used to improve application performance. If the application code has been written as efficiently as possible, additional performance gains might be realized by tuning database and database manager configuration parameters.

Different types of benchmark tests are used to discover specific kinds of information. For example:
  • An infrastructure benchmark determines the throughput capabilities of the database manager under certain limited laboratory conditions.
  • An application benchmark determines the throughput capabilities of the database manager under conditions that more closely reflect a production environment.
Benchmark testing to tune configuration parameters is based upon controlled conditions. Such testing involves repeatedly running SQL from your application and changing the system configuration (and perhaps the SQL) until the application runs as efficiently as possible.

The same approach can be used to tune other factors that affect performance, such as indexes, table space configuration, and hardware configuration, to name a few.

Benchmark testing helps you to understand how the database manager responds to different conditions. You can create scenarios that test deadlock handling, utility performance, different methods of loading data, transaction rate characteristics as more users are added, and even the effect on the application of using a new release of the database product.

Characteristics of good benchmarks include:
  • The tests are repeatable.
  • Each iteration of a test starts in the same system state (see the reset sketch after this list).
  • No other functions or applications are unintentionally active in the system.
  • The hardware and software used for benchmark testing match your production environment.
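
One way to start each iteration in the same system state is to reset the instance and, if the workload changes data, the database itself between runs. The Korn shell sketch below illustrates the idea; the database name SAMPLE, the /backups path, and the backup timestamp are placeholders, and which steps you actually need depends on what the test changes.

#!/bin/ksh
# Hypothetical reset script run between benchmark iterations.
# SAMPLE, the /backups path, and the backup timestamp are placeholders.

db2 force application all        # disconnect any leftover sessions
db2stop                          # recycle the instance so memory allocations start clean
db2start

# If the workload modifies data, restore the starting image so every run sees the same data.
db2 restore database SAMPLE from /backups taken at 20160131120000 without prompting

db2 connect to SAMPLE
db2 flush package cache dynamic  # clear access plans cached during the previous run
db2 connect reset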

Benchmark preparation

There are certain prerequisites that must be satisfied before performance benchmark testing can be initiated.

Before you start performance benchmark testing:

  • Complete both the logical and physical design of the database against which your application will run.
  • Create tables, views, and indexes.
  • Normalize tables, bind application packages, and populate tables with realistic data; ensure that appropriate statistics are available (a preparation sketch follows this list).
  • Plan to run against a production-size database, so that the application can test representative memory requirements; if this is not possible, try to ensure that the proportions of available system resources to data in the test and production systems are the same (for example, if the test system has 10% of the data, use 10% of the processor time and 10% of the memory that is available to the production system).
  • Place database objects in their final disk locations, size log files, determine the location of work files and backup images, and test backup procedures.
  • Check packages to ensure that performance options, such as row blocking, are enabled when possible.
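
The statistics and resource-scaling points above can be scripted. The sketch below assumes a database named SAMPLE, a placeholder table APP.ORDERS, and a test system that should use roughly 10% of the production system's memory; adjust all three for your environment.

#!/bin/ksh
# Hypothetical preparation steps; object names and percentages are placeholders.

db2 connect to SAMPLE

# Give the optimizer current statistics once the tables are populated.
db2 "runstats on table APP.ORDERS with distribution and detailed indexes all"

# Ask DB2 to suggest configuration values scaled to the resources this test
# system may use (here 10% of memory, mirroring a 10%-sized data set).
# APPLY NONE only reports the recommendations; review them before applying.
db2 "autoconfigure using mem_percent 10 workload_type mixed apply none"

db2 connect reset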

Benchmark test creation

You will need to consider a variety of factors when designing and implementing a benchmark testing program.

Because the main purpose of the testing program is to simulate a user application, the overall structure of the program will vary. You might use the entire application as the benchmark and simply introduce a means for timing the SQL statements that are to be analyzed. For large or complex applications, it might be more practical to include only blocks that contain the important statements. To test the performance of specific SQL statements, you can include only those statements in the benchmark testing program, along with the necessary CONNECT, PREPARE, OPEN, and other statements, as well as a timing mechanism.
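
For example, the statements to be analyzed can be collected in a flat file that a benchmark driver (such as db2batch, described below) prepares, executes, and times. The file name, table, and columns in this sketch are placeholders.

#!/bin/ksh
# Build a hypothetical queries.sql holding only the statements to be timed;
# the table and column names are placeholders.
cat > queries.sql <<'EOF'
SELECT COUNT(*) FROM APP.ORDERS WHERE ORDER_DATE >= CURRENT DATE - 30 DAYS;
SELECT CUST_ID, SUM(AMOUNT) FROM APP.ORDERS GROUP BY CUST_ID;
EOF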

Another factor to consider is the type of benchmark to use. One option is to run a set of SQL statements repeatedly over a certain time interval. The number of statements executed over this time interval is a measure of the throughput for the application. Another option is to simply determine the time required to execute individual SQL statements.
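
As a rough sketch of the two approaches, the script below first counts how many complete passes over the statement file finish in a fixed 60-second interval (a throughput measure), and then times a single pass so that db2batch reports per-statement elapsed times. The database name and the queries.sql file carry over from the earlier sketches and remain assumptions.

#!/bin/ksh
# Throughput: count complete passes over queries.sql in a fixed 60-second interval.
passes=0
SECONDS=0                      # built-in shell timer, reset to zero
while (( SECONDS < 60 )); do
    db2batch -d SAMPLE -f queries.sql > /dev/null 2>&1
    passes=$((passes + 1))
done
echo "Passes completed in 60 seconds: $passes"

# Response time: one timed pass; -i complete asks db2batch for detailed
# per-statement timing, written to single_pass.out.
db2batch -d SAMPLE -f queries.sql -i complete -r single_pass.out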

Although the elapsed time for each query is an important factor in performance analysis, it might not necessarily reveal bottlenecks. For example, information on CPU usage, locking, and buffer pool I/O might show that the application is I/O bound and not using the CPU at full capacity. A benchmark testing program should enable you to obtain this kind of data for a more detailed analysis, if needed.
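
One way to capture that additional data, assuming snapshot monitoring is acceptable on the test system, is to turn on the relevant monitor switches before the run and take snapshots afterward; the database name and output file names below are placeholders.

#!/bin/ksh
# Hypothetical monitoring wrapper around a benchmark run.

db2 connect to SAMPLE
db2 update monitor switches using bufferpool on lock on statement on

# ... run the benchmark here (for example, the db2batch invocation shown later) ...

db2 get snapshot for database on SAMPLE    > db_snapshot.txt
db2 get snapshot for bufferpools on SAMPLE > bufferpool_snapshot.txt
db2 connect reset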

Not all applications send the entire set of rows retrieved from a query to some output device. For example, the result set might be input for another application.
Formatting data for screen output usually has a high CPU cost and might not reflect user needs. To provide an accurate simulation, a benchmark testing program should reflect the specific row handling activities of the application. If rows are sent to an output device, inefficient formatting could consume the majority of CPU time and misrepresent the actual performance of the SQL statement itself.

Although it is very easy to use, the DB2 command line processor (CLP) is not suited to benchmarking because of the processing overhead that it adds. A benchmark tool (db2batch) is provided in the bin subdirectory of your instance sqllib directory. This tool can read SQL statements from either a flat file or from standard input, dynamically prepare and execute the statements, and return a result set. It also enables you to control the number of rows that are returned to db2batch and the number of rows that are displayed. You can specify the level of performance-related information that is returned, including elapsed time, processor time, buffer pool usage, locking, and other statistics collected from the database monitor. If you are timing a set of SQL statements, db2batch also summarizes the performance results and provides both arithmetic and geometric means. By wrapping db2batch invocations in a Perl or Korn shell script, you can easily simulate a multiuser environment. Ensure that connection attributes, such as the isolation level, are the same by selecting the appropriate db2batch options.
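
A minimal Korn shell sketch of such a multiuser wrapper follows. The database name, statement file, number of simulated users, isolation level, and performance-detail level are all assumptions to adjust for your own test.

#!/bin/ksh
# Hypothetical multiuser driver: start several db2batch "users" in parallel,
# each with the same isolation level and performance detail options.

DB=SAMPLE          # placeholder database name
FILE=queries.sql   # placeholder statement file
USERS=5            # simulated concurrent users

i=1
while (( i <= USERS )); do
    db2batch -d "$DB" -f "$FILE" -iso CS -i complete -o p 5 -r "user${i}.out" &
    i=$((i + 1))
done
wait   # block until every simulated user has finished
echo "All $USERS db2batch runs completed; see user*.out for the results."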

Note that in partitioned database environments, db2batch is suitable only for measuring elapsed time; other information that is returned pertains only to activity on the coordinator database partition.

Benchmark test execution

Follow these steps to benchmark test a database application:
Step 1 Leave the DB2 registry, database and database manager configuration parameters, and buffer pools at their standard recommended values, which can include:
  • Values that are known to be required for proper and error-free application execution
  • Values that provided performance improvements during prior tuning
  • Values that were suggested by the AUTOCONFIGURE command
  • Default values; however, these might not be appropriate:
    – For parameters that are significant to the workload and to the objectives of the test
    – For log sizes, which should be determined during unit and system testing of your application
    – For any parameters that must be changed to enable your application to run
Run your set of iterations for this initial case and calculate the average elapsed time, throughput, or processor time. The results should be as consistent as possible, ideally differing by no more than a few percentage points from run to run. Performance measurements that vary significantly from run to run can make tuning very difficult.
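
A sketch of that measurement loop is shown below. It times each iteration from the shell rather than parsing db2batch output, so the numbers are coarse (whole invocations, in seconds); the iteration count and file names are assumptions. For finer-grained figures, use the summary that db2batch itself produces.

#!/bin/ksh
# Run the baseline case several times and report the average elapsed seconds.
ITERATIONS=5
total=0

i=1
while (( i <= ITERATIONS )); do
    SECONDS=0                          # reset the shell's built-in timer
    db2batch -d SAMPLE -f queries.sql -r "baseline_run${i}.out" > /dev/null 2>&1
    elapsed=$SECONDS
    echo "Iteration $i: ${elapsed}s"
    total=$((total + elapsed))
    i=$((i + 1))
done

echo "Average elapsed time: $((total / ITERATIONS))s"
echo "If the per-iteration times above differ by more than a few percent, investigate before tuning."
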
Step 2 Select one and only one method or tuning parameter to be tested, and change its value.
Step 3 Run another set of iterations and calculate the average elapsed time or processor time.
Step 4 Depending on the results of the benchmark test, do one of the following:
  • If performance improves, change the value of the same parameter and return to Step 3. Keep changing this parameter until the maximum benefit is shown.
  • If performance degrades or remains unchanged, return the parameter to its previous value, return to Step 2, and select a new parameter. Repeat this procedure until all parameters have been tested (a single-parameter sweep is sketched after this procedure).
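
As an illustration of Steps 2 through 4 for one parameter, the sketch below varies a single database configuration parameter and reruns the same measurement for each candidate value. SORTHEAP and its candidate values are examples only, not a recommendation, and judging the results remains a manual step.

#!/bin/ksh
# Hypothetical single-parameter sweep; SORTHEAP and its candidate values are examples only.
DB=SAMPLE

for value in 256 512 1024 2048; do
    db2 "update db cfg for $DB using SORTHEAP $value"

    # Recycle the instance so the new value is certainly in effect, then re-measure.
    db2 force application all
    db2stop
    db2start

    db2batch -d "$DB" -f queries.sql -i complete -r "sortheap_${value}.out"
done
echo "Compare sortheap_*.out, keep the best value, and move on to the next parameter."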
