Microsoft SQL Server 2000 Scalability Project – Server Consolidation

Author: Man Xiong
Microsoft Corporation

March 2002

Abstract: In this paper we focus on a common scenario where a single system hosts many databases with a relatively small number of users per database. The effects of several configuration parameters are studied on an actual application called PACE, a financial application from Microsoft® bCentral™ running on Microsoft SQL Server™ 2000 Enterprise Edition and Microsoft Windows® 2000 Datacenter™ Server on Dell PowerEdge 8450 servers.


The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, email address, logo, person, place or event is intended or should be inferred.

© 2002 Microsoft Corporation. All rights reserved.

Microsoft, bCentral, and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.


Table of Contents

Executive Summary

Introduction

PACE Application

Use Multiple Instances

Memory Configuration for Multiple Instances

Processor Affinity

Disk Layout

Recovery Models

Conclusion

Appendix A: A Dell Solution for Scalable Enterprise Computing

Appendix B: Hardware/Software Configuration

Appendix C: PACE Workload Simulation Configuration

Appendix D: Test Configuration

Appendix E: Disk Configuration

Appendix F: Optimal System Configuration for 500 databases on each of 8 instances


Executive Summary

This white paper is a joint effort by Microsoft and Dell to demonstrate the scalability of Microsoft SQL Server™ 2000 and Dell hardware. SQL Server 2000 running on a Dell enterprise eight-way server can support thousands of databases and their users while providing the performance necessary to allow for centralization. SQL Server 2000 maximizes return on investments in symmetric multiprocessing (SMP) systems, enabling users to add processors, memory, and disks to build large, centrally managed enterprise servers.

More companies are outsourcing database services to Application Service Providers (ASPs). Internal information technology (IT) organizations are consolidating their database services, justified by total cost of ownership (TCO) and manageability benefits. This paper focuses on a common ASP scenario, where a single system hosts many databases with a relatively small number of users per database. This scenario is also applicable to companies looking to consolidate databases from servers across an enterprise onto a centralized server. The goal of this paper is therefore to demonstrate an approach to successfully scaling increased workloads on a single server using multiple instances. An application from Microsoft® bCentral™ is used to study the effects of different configurations on workload performance as measured by transactions per minute (TPM).

The following list summarizes the benefits of using multiple instances on a single server:

·  Ability to support larger workloads on a single server.

·  Flexibility to separate databases based on meeting the requirements of different Service Level Agreements (SLA).

·  Ability to separate databases with different performance requirements.

·  Ability to separate databases with different backup and recovery requirements.

·  Ability to separate databases with different security requirements.

·  Ability to separate databases based on change control, operational, and maintenance requirements.

In this case study, the results show that:

·  Using multiple instances versus a single instance to support a large number of databases increased the workload supported by a single server by a factor of eight.

·  Setting processor affinity increased the workload supported by 80 percent over the default setting when used with multiple instances.

·  Separating transaction logs from data files increased the workload by 10 percent. This provides additional support for the best practice of not placing logs and data on the same device.

·  Best results are achieved when a server is dedicated to SQL Server.

This paper provides general guidelines for understanding the criteria involved in successfully configuring multiple instances for optimal throughput.

Introduction

As more and more customers move to server consolidation, we expect ASP and corporate IT to deploy multiple instances of SQL Server 2000 for very large numbers of databases.

The ability to separate databases allows an ASP or corporate IT provider more flexibility in providing different service levels to customers without requiring separate machines. Some of the challenges of this approach include determining when to use multiple instances and how to configure them for optimal performance.

Optimal configuration is determined by studying the impact of several parameters on TPM: multiple instances, memory configuration, CPU affinity, disk layout, and recovery models.

In this paper we focus on a common scenario where a single system hosts many databases with a relatively small number of users per database. The effects of the parameters mentioned above are studied on an actual application called PACE, a financial application from Microsoft bCentral running on Microsoft SQL Server 2000 Enterprise Edition and Microsoft Windows® 2000 Datacenter™ Server on Dell PowerEdge 8450 servers.

PACE Application

The PACE application is an accounting and financial management product deployed as one of the service offerings available to customers of Microsoft bCentral. Designed to help small businesses complete everyday tasks more efficiently, PACE includes financial, banking, payroll, sales, and purchasing capabilities, as well as quick and easy reporting.

To offer every customer entity-secure accounting controls and reliable multiuser access, the application places a large number of small financial databases on a single server, one database per customer. This also allows more granular control over security, backup and recovery, change control, and maintenance operations. The application has more than 200 stored procedures per database to support the Web service.

This non-traditional design brings new challenges for system management and performance tuning in SQL Server. The biggest challenge is the amount of memory required to support a large number of stored procedures multiplied by the number of databases. SQL Server requires virtual memory space to compile an execution plan for each procedure on each database, with that plan being kept in the procedure cache. For the PACE application, the number of execution plans cached for 500 databases is 200 * 500, or 100,000 cache entries. The greater the number of PACE databases, the larger the required server procedure cache. When there are more execution plans than the procedure cache can keep in memory, recompilations occur, which can reduce query-processing throughput. Traditional methods, such as parameterization of the stored procedures, do not address the problem.
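One way to observe this pressure on an instance (a diagnostic sketch, not part of the paper's test procedure) is to count cached objects through the syscacheobjects system table, which exposes procedure cache entries in SQL Server 2000:

```sql
-- Count objects currently held in this instance's procedure cache,
-- grouped by type (compiled plans, executable plans, and so on).
-- For 500 PACE databases, compiled-plan entries approach 200 * 500.
SELECT cacheobjtype, COUNT(*) AS cached_objects
FROM master.dbo.syscacheobjects
GROUP BY cacheobjtype
ORDER BY cached_objects DESC
```

Running this query against an instance hosting PACE databases shows directly how the plan count grows with the number of databases.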

Effective tuning requires specific configurations to increase effective memory space for the procedure cache. Other challenges include how to efficiently use CPUs for the high degree of concurrent activity due to the number of databases and users, the optimal disk layout, and the optimal recovery model.

Use Multiple Instances

Multiple instances can scale up the number of databases and workload per server.

When the number of databases and the resultant workload reach a certain level, it is good practice to group the databases into multiple SQL Server instances. Doing so relieves pressure on memory, because more memory is available for each instance's server procedure cache, and it also provides better operational and security isolation.

Our tests demonstrate that using multiple instances allows for scale up of the total number of databases and resultant workload hosted on the system while maintaining good throughput per database.

Figure 1 shows the performance degradation on a single instance when the number of PACE databases increases from 500 to 4,000. It also shows that running 500 databases on each of 8 instances gives 8 times the transactions per minute of running 500 databases on a single instance. Running 500 databases on each of 8 instances with a heavy workload comes close to utilizing the total CPU capacity of this hardware configuration, so further increasing the total number of databases by running 500 databases on each of 16 instances cannot increase total throughput (although the system still demonstrates acceptable throughput).

Figure 2 shows that workload throughput per database decreases as the number of databases (and therefore the number of client connections) increases on a single SQL Server instance. When the number of databases is kept at 500 per instance, throughput per database stays the same on a single instance and on 8 instances, but decreases at 16 instances due to CPU constraints.

Figure 1 Total workload throughput on the system versus total number of PACE databases on the system.


Figure 2 Workload throughput on each database versus the number of PACE databases on the system.

Why we need multiple instances for this scenario

Performance degradation for many thousands of PACE databases on a single instance.

SQL Server 2000 can address up to 2 gigabytes (GB) of virtual memory (or 3 GB if the /3GB switch is set in Boot.ini), which the procedure cache shares with other memory consumers. When the number of databases on a single instance increases from 500 to 1,000, there is not enough virtual memory for the procedure cache to hold all of the query execution plans in memory. Some execution plans are dropped from the procedure cache to make room for the plans of other stored procedures, and the dropped plans must be recompiled when they are next needed. The resulting frequent recompilations can hurt workload performance.

Using multiple instances relieves the pressure on memory

As shown in Figure 3, with more than 4 GB of physical memory, running the databases on multiple instances makes more memory available for the procedure cache (each instance has its own virtual address space and procedure cache).

When the number of PACE databases on a single instance increases to many thousands, the sheer number of database objects consumes so much of the memory available to the procedure cache that performance degrades. The procedure cache is overwhelmed and recompilation occurs. Therefore, we recommend multiple instances.
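The memory actually devoted to the procedure cache on each instance can be watched through the sysperfinfo system table (a monitoring sketch; note that for named instances the object name carries the instance name, for example MSSQL$Instance1 rather than SQLServer):

```sql
-- Pages (8 KB each) currently devoted to this instance's
-- procedure cache, as reported by the Buffer Manager counters.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Procedure cache pages'
```

Comparing this value across instances, and against total buffer pages, indicates whether the procedure cache is approaching its effective limit.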

Figure 3 Effective memory for procedure cache in different configurations

Same challenge for other similar designs

The key determinant of whether multiple instances benefit performance is the total amount of memory needed for all the execution plans, which is a function of the average size of an execution plan, the number of stored procedures per database, and the number of databases. The same challenge can therefore occur with fewer databases if each contains more procedures, or with a smaller number of databases and fewer stored procedures per database if the procedures are very complicated.

Memory Configuration for Multiple Instances

To achieve good performance with multiple instances, we only needed to specify a reasonable minimum server memory, with no additional memory tuning. By reserving a 1-GB minimum of server memory for each instance and leaving the maximum server memory open, we observed a 25 percent performance gain over the default dynamic memory allocation. The performance is as good as with optimal static memory allocation, without the high cost of constant recalibration. Customers using this method should be aware that other applications competing for memory on the same server will affect dynamic memory allocation. For this reason, Microsoft recommends dedicating the system to SQL Server.
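The reservation described above can be sketched per instance with the sp_configure system stored procedure ('min server memory (MB)' is an advanced option in SQL Server 2000, so advanced options are enabled first; the value 1024 matches the 1-GB floor used in these tests):

```sql
-- Enable advanced options so the memory settings are visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Reserve a 1-GB floor for this instance; leave
-- 'max server memory (MB)' at its default (open).
EXEC sp_configure 'min server memory (MB)', 1024
RECONFIGURE
```

This is run once on each instance; because the maximum is left open, the instances still share remaining memory dynamically above their floors.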

Another advantage of dedicating the system to SQL Server is that the configuration can support differing workloads on each instance without special testing and tuning to determine the optimum memory configuration. This reduces the need for outages to reconfigure memory to meet performance goals when workloads vary.

Simple memory configuration gives a performance boost

Setting a minimum memory floor for each instance achieved the same performance as an ideal static allocation, reducing the burden of testing to determine that ideal allocation for each instance.

Tests were run on 8-instance and 16-instance configurations using both static and dynamic memory allocation settings to determine whether one method provides better performance. Because every database has the same workload and every instance has the same number of databases, the workload is even among the instances.

When running 16 instances, however, static memory allocation provided 25 percent better performance than the default dynamic allocation.

For the 16-instance configuration, reserving a 1-GB minimum of server memory for each instance and leaving the maximum server memory open achieved the same performance as optimal static memory allocation.

Why this memory configuration is desired

When multiple instances of SQL Server are running on the same computer, each instance independently uses the standard algorithm for dynamic memory management. The amount of memory allocated to each specific SQL Server instance is driven by the relative workload of each instance. This is designed to ensure that the instances with higher workloads acquire more memory while instances processing lighter workloads acquire less memory.