Building and Deploying Large Scale SQL Server Reporting Services Environments Technical Note Series
Authors: Denny Lee, Lukasz Pawlowski Contributors: David Reinhold, Robert Bruckner Technical Reviewers: Burzin Patel
This technical note introduces the Building and Deploying Large Scale SQL Server Reporting Services Environments Technical Note Series, which provides general guidance on how to set up, implement, and optimize an enterprise scale-out architecture for your SQL Server Reporting Services (SSRS) environment. The guidance applies to both SQL Server 2005 and SQL Server 2008 Reporting Services. The focus of this note is the Reporting Services scale-out architecture, which is referenced throughout the series.
Figure 1 represents a typical scale-out Reporting Services environment.
Figure 1: Reporting Services Scale-Out Architecture
From this architecture, you will notice the following:
To ensure a successful enterprise implementation of your Reporting Services environment, note that extensive documentation is already available online. Many mistakes in enterprise customer implementations can be avoided by reviewing the following white papers.
Below is an example customer scenario, distilled to its core components.
Figure 2: Customer Scenario
This customer currently has:
To help handle the high query load, the customer also made use of:
With this setup, they were able to use Visual Studio Team System to verify that the above configuration could handle 1,800 concurrent users with a 10-second think time (the delay between each user's report executions) and an average transaction time of 35 seconds.
Another key component of a successful enterprise implementation is determining the query load before going into production. It is important to understand your reporting scenarios and the data that needs to be delivered so that you can better plan capacity (hardware, resources, and so on). The customer above ran tests to verify that they could handle their expected maximum workload of 1,800 concurrent users. This allowed them to plan hardware capacity more accurately; in fact, they found that they did not need to build out more SSRS servers to handle their workload.
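As a rough capacity-planning illustration, the customer's figures (1,800 concurrent users, 35-second average transaction time, 10-second think time) imply a required sustained throughput via Little's Law. This sketch assumes each user's cycle time is the transaction time plus the think time; it is a back-of-envelope estimate, not part of the original test methodology.

```python
# Back-of-envelope throughput estimate using Little's Law:
# concurrency = throughput * cycle time, so throughput = concurrency / cycle time.
# Assumption: each simulated user's cycle is one report execution plus one think pause.

def required_throughput(concurrent_users, avg_transaction_sec, think_time_sec):
    """Reports per second the farm must sustain to support the given concurrency."""
    cycle_time_sec = avg_transaction_sec + think_time_sec
    return concurrent_users / cycle_time_sec

# Figures from the customer scenario above:
rps = required_throughput(1800, avg_transaction_sec=35, think_time_sec=10)
print(f"Required throughput: {rps:.0f} reports/sec")  # 1800 / 45 = 40 reports/sec
```

This kind of estimate is useful for a first sizing pass before any load testing is run.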
Your report scenarios are defined by user personas and usage patterns. For user personas, you need to understand the types of users running your reports, whether it is the director who typically wants high-level summary statistics or the analyst who needs detailed reports. Understanding usage patterns tells you whether your users typically review a few sets of reports or interact with them heavily (drilling down, up, or through). Once you understand your typical report scenarios, it is important for performance testing to use actual reports. If you do not have these, your test reports should closely reflect what users will actually be querying. In addition, any tests you run should isolate Reporting Services from other systems. This is implied by the architecture diagrams above, but a common issue is that customers run performance tests on shared systems, which both degrades performance and makes it difficult to isolate problems.
To run performance tests, a good tool is Visual Studio Team System (VSTS); a useful reference is Using Visual Studio 2005 to Perform Load Testing on a SQL Server 2005 Reporting Services Report Server. Also refer to the technical note Scaling Up Reporting Services 2008 vs. Reporting Services 2005: Lessons Learned.
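For readers without access to VSTS, the core mechanics of such a load test (many simulated users, each rendering a report, pausing for a think time, and recording latency) can be sketched in a few lines. This is a minimal, hypothetical sketch, not a substitute for a proper load-testing tool; the report URL, user counts, and timings are illustrative assumptions.

```python
import threading
import time
import urllib.request

# Hypothetical SSRS URL-access request; adjust to your own report path.
REPORT_URL = "http://ssrs-server/ReportServer?/Sales/Summary&rs:Format=HTML4.0"

def run_user(fetch, iterations, think_time_sec, results):
    """One simulated user: render a report, record its latency, pause, repeat."""
    for _ in range(iterations):
        start = time.perf_counter()
        fetch()  # issue the report request
        results.append(time.perf_counter() - start)
        time.sleep(think_time_sec)  # think time between report executions

def load_test(fetch, users=10, iterations=3, think_time_sec=0.1):
    """Drive `users` concurrent simulated users and return all request latencies."""
    results = []  # list.append is atomic in CPython, so threads may share it
    threads = [threading.Thread(target=run_user,
                                args=(fetch, iterations, think_time_sec, results))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Example against a real server (assumed URL and parameters):
# latencies = load_test(lambda: urllib.request.urlopen(REPORT_URL).read(),
#                       users=50, iterations=5, think_time_sec=10)
```

A real test harness (like the VSTS setup described below) adds ramp-up, agent distribution, counters, and pass/fail criteria on top of this basic loop.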
Within VSTS, the customer was able to define the following for their tests:
The test configuration used a single test controller (where VSTS executed and controlled the tests) driving four 32-bit agents against the SSRS servers. The SSRS servers all used identical hardware: four cores and 8 GB of RAM.
To define the report query workflow, they used a predefined workload in which 75% of the reports were simple and 25% were complex, with a randomized selection of report parameters. An example of a workflow is:
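As a rough illustration of such a mix, the following sketch picks the next report to execute with the same 75% simple / 25% complex split and randomizes its parameters. The report paths and parameter names are hypothetical, not taken from the customer's actual workload.

```python
import random

# Hypothetical report catalog; replace with your own report paths.
SIMPLE_REPORTS = ["/Sales/DailySummary", "/Sales/RegionTotals"]
COMPLEX_REPORTS = ["/Sales/DetailDrillthrough"]

def next_report(rng=random):
    """Pick the next report: simple 75% of the time, complex 25% of the time."""
    if rng.random() < 0.75:
        return rng.choice(SIMPLE_REPORTS), "simple"
    return rng.choice(COMPLEX_REPORTS), "complex"

def random_parameters(rng=random):
    """Randomize report parameters so requests don't all hit the same cached copy."""
    return {"Region": rng.choice(["East", "West", "Central"]),
            "Month": f"2008-{rng.randint(1, 12):02d}"}
```

Randomizing parameters matters: if every simulated user requests the identical report, cached renderings can make the server look far faster than it will be in production.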
Figure 3: VSTS workload results showing the maximum number of concurrent users that each number of SSRS servers could sustain for >= 15 minutes
For this specific customer’s tests (i.e., their workload and their data), you can see from Figure 3 that one SSRS server could handle a maximum of 608 concurrent users for a sustained time period (>=15 minutes). By scaling out the SSRS servers, they also found that they could handle 1,218 concurrent users with two servers and approximately 2,300 concurrent users with four servers.
From this specific test scenario (four servers with this specific workload), they also noted that above 2,000 concurrent users there was a higher rate of failed tests (SSRS queries that executed with no results or returned errors). At 1,800 concurrent users (the expected maximum workload), however, the failure rate was acceptable. To handle a larger number of concurrent users, they would only need to scale out to more SSRS servers. Also of interest: doubling the memory and CPU of their hardware (to 16 GB of RAM and eight cores) resulted in only a one-third increase in load capacity. (Note that SQL Server 2008 Reporting Services should make better use of additional cores and memory.) It therefore made sense for them to scale out (add more servers) rather than scale up (add more memory and CPUs to existing servers) to handle higher workloads.
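As a quick sanity check on the scale-out figures quoted above (608 users on one server, 1,218 on two, roughly 2,300 on four), one can compute how close each configuration comes to perfectly linear scaling. This is just arithmetic on the customer's measurements, shown as a sketch:

```python
# Scale-out efficiency relative to linear scaling from the single-server baseline.
# Figures are the customer's measurements quoted in the text above.
measured = {1: 608, 2: 1218, 4: 2300}
baseline = measured[1]

efficiency = {n: users / (n * baseline) for n, users in measured.items()}
for n, eff in sorted(efficiency.items()):
    print(f"{n} server(s): {measured[n]:5d} users, {eff:.0%} of linear scaling")
```

Two servers track linear scaling almost exactly, and four servers still achieve roughly 95% of it, which is why adding servers was the more attractive option for this customer.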
Note that the results of these tests may not apply to your environment; they are specific to this customer's workload, hardware, and scenarios. Nevertheless, they illustrate the importance of performance testing within your own Reporting Services environment so that you can more accurately predict the load it can handle.
As noted above, this is the first technical note of the Building and Deploying Large Scale SQL Server Reporting Services Environments Technical Note Series. While this note provides the high-level architecture, we also provide additional guidance in the following notes:
We’d also like to thank Craig Utley, Brian Welcker, Jim Carroll, Bill Emmert, and Nicholas Dritsas for their original contributions.