Authors: Mike Ruthruff, Sanjay Mishra
Contributor: Steve Schmidt
Technical Reviewers: Kevin Farlee, Sunil Agarwal, Burzin Patel, Lindsey Allen, Mark Souza, Mark Tomlinson, Denny Lee, Prem Mehra, Alexey Yeltsov, Yuriy Toropov, Glenn Berry (SQL Server MVP)
Backup compression is a new feature in SQL Server 2008 that can produce smaller backups and reduce backup time. This document provides guidance on tuning options for backup performance. All of the test results presented here were obtained using the backup compression feature of SQL Server 2008; however, the guidance applies broadly to any backup scenario, whether or not backup compression is used. It also applies to restore operations, although restore is not covered in depth in this document. For an introduction to the backup compression feature, see Backup Compression in SQL Server Books Online.
One major benefit of backup compression is space saving. The size of the compressed backup is smaller than that of the uncompressed backup, which results not only in space savings, but also in fewer overall I/O operations during backup and restore operations. The amount of space you save depends upon the data in the database, and a few other factors, such as whether the tables and indexes in the database are compressed, and whether the data in the database is encrypted. To determine the effectiveness of backup compression, the following query can be used:
SELECT b.database_name 'Database Name',
       CONVERT (BIGINT, b.backup_size / 1048576) 'UnCompressed Backup Size (MB)',
       CONVERT (BIGINT, b.compressed_backup_size / 1048576) 'Compressed Backup Size (MB)',
       CONVERT (NUMERIC (20,2), (CONVERT (FLOAT, b.backup_size) /
           CONVERT (FLOAT, b.compressed_backup_size))) 'Compression Ratio',
       DATEDIFF (SECOND, b.backup_start_date, b.backup_finish_date) 'Backup Elapsed Time (sec)'
FROM msdb.dbo.backupset b
WHERE DATEDIFF (SECOND, b.backup_start_date, b.backup_finish_date) > 0
  AND b.backup_size > 0
ORDER BY b.backup_finish_date DESC
Table 1 shows the output of the above query after taking an uncompressed and a compressed backup.
Table 1: Comparing compressed and uncompressed backup
For an uncompressed backup, the compression ratio is 1 and the Compressed Backup Size reports the same value as the Uncompressed Backup Size. The higher the compression ratio, the greater the space saving. For our test database, we achieved a compression ratio of 3.05 and saved approximately 67% of the backup space. Figure 1 illustrates the benefits of backup compression in terms of space and time. Databases using the Transparent Data Encryption (TDE) feature of SQL Server 2008 may not achieve high backup compression ratios (in most cases, the ratio will be close to 1) because encrypted data does not compress well.
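The space saving implied by a given compression ratio can be computed directly: a ratio of r means the compressed backup is 1/r the size of the uncompressed one. A small illustrative sketch (the 3.05 ratio comes from our test database above; the function name is our own):

```python
def space_savings_pct(compression_ratio: float) -> float:
    """Percent of backup space saved for a given compression ratio
    (ratio = backup_size / compressed_backup_size, as reported by
    msdb.dbo.backupset)."""
    return (1.0 - 1.0 / compression_ratio) * 100.0

# A ratio of 1 means no savings (an uncompressed backup);
# the 3.05 ratio from the test database saves roughly 67%.
print(round(space_savings_pct(3.05)))  # -> 67
```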
Figure 1: Benefits of backup compression in terms of space and time
As illustrated in Figure 1, a compressed backup is smaller in size and hence requires fewer write I/Os to the backup media, which reduces backup time. Backup is an I/O-intensive operation, so any reduction in I/O benefits performance.
Our test environment for these results consisted of a database representing an OLTP stock trading application with a database size (excluding free space) of approximately 300 GB. The hardware used for the testing was a 4-socket dual core DELL 6950 server and an EMC Clariion (CX700) storage array with 4GB of cache allocated 80% to write operations. The volume/LUN layout is shown in Table 2.
Table 2: Disk volume layout for backup compression tests
None of the disk volumes shown in Table 2 shared physical disks with any other volume. There were 32 disks in total for the data files, 8 for the log, and 16 for the backup files.
SQL Server data files were striped such that each filegroup contained four data files—each file of a filegroup was placed on a separate data volume.
Whenever you attempt to tune the performance of backup compression, it is important to first understand the theoretical throughput achievable by the given hardware configuration, specifically the storage. The throughput of backup compression is bound by whichever is exhausted first: CPU resources, the throughput of the input (read) devices, or the throughput of the output (write) devices.
One method of determining the throughput of a given configuration is to perform backup to a NUL device multiple times by varying the BUFFERCOUNT setting. Use the default BUFFERCOUNT first, and then explicitly specify BUFFERCOUNT in subsequent tests—each time increasing BUFFERCOUNT to a higher value. For example:
BACKUP DATABASE [BCTEST] TO DISK = 'NUL' WITH COMPRESSION
BACKUP DATABASE [BCTEST] TO DISK = 'NUL' WITH COMPRESSION, BUFFERCOUNT = 50
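To compare throughput across these runs, a query over the same msdb.dbo.backupset columns used earlier can report MB/sec per run. A sketch (it assumes the NUL-device runs appear in the backup history like any other backup):

```sql
-- Approximate throughput (MB/sec) of recent backup runs,
-- most recent first
SELECT b.database_name,
       b.backup_start_date,
       CONVERT (BIGINT, b.backup_size / 1048576) AS [Backup Size (MB)],
       CONVERT (BIGINT, (b.backup_size / 1048576)
           / DATEDIFF (SECOND, b.backup_start_date, b.backup_finish_date))
           AS [Throughput (MB/sec)]
FROM msdb.dbo.backupset b
WHERE DATEDIFF (SECOND, b.backup_start_date, b.backup_finish_date) > 0
ORDER BY b.backup_finish_date DESC
```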
The default BUFFERCOUNT is determined dynamically by SQL Server based on the number of database volumes and output devices. The default values are chosen to suit a broad range of environments, which is why tuning them on high-end systems may be necessary to obtain optimal performance.
More information on tuning the BUFFERCOUNT parameter will be given later in this document. The BACKUP section in SQL Server Books Online also contains related information.
At some point during the above tests, you will observe one of the following, which will determine the limits of the hardware:
If total CPU utilization nears 100% across all cores, CPU is your bounding resource. If total CPU stays below 100%, the observed read throughput is your maximum achievable throughput. Once you have this value, perform a backup to a real output device; if you do not achieve the throughput attained with BACKUP to NUL, write throughput to the output device is your bottleneck.
Figure 2 illustrates the throughput we achieved as we increased BUFFERCOUNT during our BACKUP to NUL tests.
Figure 2: Backup throughput to NUL device with varying BUFFERCOUNT
Similarly, the performance monitor graph in Figure 3 illustrates that I/O on the read devices was our bounding resource. This performance monitor log was run continuously across all iterations of our tests. Our achieved throughput was approximately 400 MB/sec before the read operations became I/O bound.
Figure 3: Disk throughput and latency during backup compression
Notice that near the intersection of the black straight lines there is a leveling off in the throughput (Disk Read Bytes/sec counter), while the latency (Avg. Disk sec/Read counter) continues to increase. This indicates a bottleneck on the I/O resources.
Ultimately, in our tests with a real output device this was the throughput we achieved, which means the bottleneck was on the I/O performance of our input devices.
Several considerations come into play when you attempt to increase the performance of compressed backup operations. The goal of our testing was to determine the highest throughput obtainable with the simplest configuration. Performance of backup operations in SQL Server is influenced by the following:
Database volume and device layout have an impact on backup performance, whether or not the backup is compressed. For backup operations, there is a single reader thread per database volume (drive letter or mount point volume) and a single writer thread per backup device. In this article, a backup device is synonymous with a file. Figure 4 illustrates the backup compression performance differences between the tested configurations.
Figure 4: Backup compression performance versus number of backup devices
In Figure 4, BC refers to BUFFERCOUNT and MTS refers to MAXTRANSFERSIZE. These settings are discussed in more detail later in this document.
As shown in Figure 4, having multiple backup devices improves backup performance. We observed very slight improvement by having the multiple backup devices on separate volumes. This is likely because our test was bound by read throughput as illustrated in Figure 3.
BUFFERCOUNT and MAXTRANSFERSIZE were left at the default values for the test results in Figure 4. By default we observed MAXTRANSFERSIZE to be an average of 512 KB, and BUFFERCOUNT was 13 and 28 for 1 device and 4 devices respectively. SQL Server determines the default setting for BUFFERCOUNT based on the number of reader volumes and output devices. Using multiple backup devices results in an implicit increase in the BUFFERCOUNT value.
The increase in BUFFERCOUNT, in addition to the increased number of threads writing to the backup files, explains why performance improves with multiple devices on a single volume (described in more detail later). Results using other values are shown later in Figure 6.
Figure 5: Parallelism influence on CPU utilization
The number of threads used for compression operations is determined dynamically and comes from the SQLOS thread pool. In our case, either 1) an increased BUFFERCOUNT with a single output device, or 2) multiple output devices, resulted in greater throughput and more concurrent compression operations. As shown in Figure 5, either of these increased parallelism and overall CPU usage across all CPUs in our server. A greater number of backup devices improves backup performance, but it adds management complexity: you now have multiple backup files to track, and if you lose any one of them, the entire backup is unusable.
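Striping a backup across multiple devices is expressed by listing more than one device in the BACKUP statement; all of the files are needed for a restore. A minimal sketch (paths are illustrative):

```sql
-- Compressed backup striped across four devices (files),
-- one per backup volume
BACKUP DATABASE [BCTEST]
TO DISK = 'G:\backup\BCTEST_1.bak',
   DISK = 'H:\backup\BCTEST_2.bak',
   DISK = 'I:\backup\BCTEST_3.bak',
   DISK = 'J:\backup\BCTEST_4.bak'
WITH COMPRESSION
```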
Using backup compression results in higher CPU utilization when compared to uncompressed backups; this is due to the compression of the data. This cost is generally offset by the benefits of backup compression discussed earlier in this paper.
For backups that are executed concurrently with user workload, reducing the number of backup devices or BUFFERCOUNT may reduce the overall CPU used by the backup operation, leaving more CPU resources for the workload. This trade-off can be made to accommodate scenarios where CPU is a constrained resource that must be preserved for user workload.
In some scenarios, tuning the size of the I/O issued by backup / restore operations as well as the number of internal buffers used for the transfer of data may help increase throughput independently of the underlying volume / device layout. These are controlled by the MAXTRANSFERSIZE and BUFFERCOUNT options of the backup and restore commands. When tuning these, keep in mind the following:
Figure 6 extends the data in Figure 4 by including tuning of MAXTRANSFERSIZE and BUFFERCOUNT. Figure 6 shows backup time and CPU utilization for six different configurations. As illustrated in Figure 6, throughput comparable to that attained by using multiple devices can be achieved when using a single device on a single LUN by adjusting the MAXTRANSFERSIZE and BUFFERCOUNT parameters.
Figure 6: Backup time and CPU utilization (average for all CPU’s) versus device/volume configuration, MAXTRANSFERSIZE and BUFFERCOUNT
Although tuning MAXTRANSFERSIZE may result in slightly higher throughput, it also results in much larger I/O sizes. Larger I/O sizes of the backup operation when mixed with a concurrent OLTP workload may result in slower overall I/O response times. The recommended tuning approach is to adjust BUFFERCOUNT first, and then determine if tuning MAXTRANSFERSIZE will provide any additional performance advantage. Keep in mind when tuning this parameter that increasing the value may increase the size of the I/O with limited increase in overall throughput. Using multiple output devices as opposed to tuning these parameters may be a simpler approach to achieving greater throughput. However, this also may introduce the additional management overhead of having to maintain multiple backup files. Due to time constraints, we did not test all configurations in our scenario.
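Following that approach, both parameters can be set explicitly on a single-device backup. A sketch with illustrative values (MAXTRANSFERSIZE must be a multiple of 64 KB, up to 4 MB):

```sql
-- Single device; tune BUFFERCOUNT first, then MAXTRANSFERSIZE.
-- Approximate buffer memory used = BUFFERCOUNT * MAXTRANSFERSIZE
--   = 50 * 2 MB = 100 MB in this example.
BACKUP DATABASE [BCTEST]
TO DISK = 'G:\backup\BCTEST.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 2097152  -- 2 MB
```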
Figure 7 shows the read and write throughput attained during these tests. The write throughput is less than the read throughput because of compression.
Figure 7: Read and write throughput during backup operations
Backup compression operations are parallelized dynamically up to the number of CPUs on the machine so there is a potential performance benefit to a machine with more physical CPUs. Similarly, the physical characteristics of the I/O configuration including the number of disks, cache memory available for I/O operations, and the throughput attainable on the I/O path, have a significant effect on performance. I/O operations related to backup / restore may ultimately be bound by the limits of the hardware. It is important to understand the potential throughput of your specific configuration in order to know when you are exhausting the capabilities of the hardware.
This article provides a good amount of detail about the internals of backup compression. Our recommendations for tuning backup compression and restore are summarized as follows:
Keep in mind that the results described here indicate the performance that is attainable in one specific configuration. Performance may be much higher or lower in different configurations. As an additional data point, Backup More Than 1GB per Second Using SQL2008 Backup Compression on the SQLCAT blog illustrates the performance attainable on very high-end systems.
There are many approaches to tuning the performance of backup in SQL Server 2008. Understanding how volume and device layout and the number of CPUs influence parallelism, as well as the other tuning options, can enable you to significantly increase the performance of backup and restore. Which approach to use depends largely on what is optimal for your given environment.