Authors: Denny Lee, Edward Melomed, Mike Vovchik
Contributors: Duy Vuong, Greg Washburn
Reviewers: Prem Mehra, Stuart Ozer, Thomas Kejser, Nicholas Dritsas, Kevin Cox, Wayne Robertson
When administering your enterprise Analysis Services environment, it is generally a best practice to create separate querying, processing, and synchronization windows to ensure that these processes do not interfere with each other. This is a common approach for enterprise SQL environments where techniques such as clustered servers and read-only databases are used for querying and the various replication techniques or custom SQL Server Integration Services (SSIS) packages are used to perform SQL database synchronization. The key to having dedicated servers for querying and processing is the ability to effectively synchronize data between different servers.
As noted in the Analysis Services Processing Best Practices white paper, long-running Analysis Services queries that are executed concurrently with processing can prevent processing from completing on the Analysis Services database. Long-running queries prevent Analysis Services from taking an exclusive lock on the database; therefore, processing must wait until the queries complete.
To prevent processing and querying from interfering with each other, one tactic is to create separate querying and processing windows on your Analysis Services server. A common approach is to reserve the server for querying during the business day (such as 8 A.M. – 6 P.M.); after 6 P.M., processing and other operational tasks such as backups and patching can be done. The problem with this methodology is that many customers require processing to occur multiple times throughout the day. As well, with global customers and/or users, there may be very few opportunities for a processing window. An effective solution to this problem is to have a separate querying and processing architecture, as shown in the following figure (derived from the above linked paper).
Figure 1: Querying/processing Analysis Services server architecture
The basic idea is that you set up your regular Analysis Services server, which acts as your processing server and connects to your relational data source (in this case a SQL Server database). After processing has completed, you replicate your database from the processing server to a separate Analysis Services server that acts as a query server. User queries hit the Analysis Services query server while processing can occur concurrently on the Analysis Services processing server. This approach is described in detail in the Scale-Out Querying with Analysis Services white paper. Another white paper, Scale-Out Querying with Analysis Services Using SAN Snapshots, achieves the same separation of query and processing tasks by using SAN snapshots.
Whichever approach you decide to use, it is important to separate the querying and processing tasks so that you will not encounter maintenance issues associated with long-running queries, incomplete queries, and incomplete processing tasks.
There are four main techniques for synchronizing an Analysis Services database from one server to another. In general, the attach/detach, backup/restore, and synchronize methods are well suited to synchronizing individual databases. The robocopy method is best for environments with one database per Analysis Services instance because you must stop and restart the service.
Included as part of Analysis Services, the Analysis Services Synch method is a common way to synchronize an Analysis Services database. What is great about this particular method is that it is easy to operate and does not require a lot of development to get it working in your production environment. As well, you do not need multiple versions of your database (more on this later) because this method synchronizes with an existing database. The process scans for differences between the databases on the two servers and transfers only the files that have been modified. A past disadvantage of this method is that it could take a long time to complete. For example, in one scenario, a 10‑GB database with about 250,000 files took around 11 hours to complete (when it completed at all) in Analysis Services 2005. Using the same database and hardware on Analysis Services 2008 CTP5, the synchronization now takes around 40 minutes.
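The Synch method is driven by the XMLA Synchronize command, which you execute against the target instance so that it pulls changed files from the source. A minimal sketch is shown below; the server name, database ID, and the choice to skip role membership are placeholders and assumptions, not recommendations, and you would submit the resulting command through whatever XMLA client you use (for example, ascmd.exe or an SSIS Analysis Services Execute DDL task).

```python
# Minimal sketch: build the XMLA Synchronize command that the TARGET
# instance executes to pull changed files from the source server.
# Server and database names below are placeholders.

def build_synchronize_xmla(source_server: str, database_id: str) -> str:
    """Return an XMLA Synchronize command as a string."""
    return f"""
<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP;Data Source={source_server};Integrated Security=SSPI</ConnectionString>
    <Object>
      <DatabaseID>{database_id}</DatabaseID>
    </Object>
  </Source>
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>""".strip()

cmd = build_synchronize_xmla("ProcessingServer", "Foodmart")
```

You would then submit `cmd` to the target instance with your XMLA client of choice; the command itself is what SQL Server Management Studio generates when you script the Synchronize Database wizard.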
The performance of this new synchronization is quite impressive and is due to the changes in the underlying file system as part of Analysis Services 2008. But by digging into the details of this synchronization, we discovered that when we ran the synchronization between the servers with no database on the target server, the synchronization finished in less than 10 minutes. That is, there is an approximately 30-minute difference in synchronization time when a database exists on the target as compared to when it does not. There is overhead to scan and verify the metadata when the database exists on both source and target servers, but we had not expected it to be this large.
When we ran the synchronization under SQL Server Profiler, we discovered a large number of OBJECT DELETE statements to delete the metadata objects, which occurs when deleting existing database objects as well as when instantiating new database objects. Deleting these temporary files took about 25 minutes. When we deleted the database on the target server separately, that deletion also took about 25 minutes, confirming that the act of deletion consumes most of the time. The good news is that, based on these results, it took only five minutes for Analysis Services to scan and verify the metadata of the source and target databases. But the question remains why the deletion tasks took so long to complete. While there were 250,000 files within the database and deletion is a single-threaded operating system process, we still did not expect the process to take this long; we have synchronized very large databases with even more files and did not see such an extreme difference in time. On further analysis, this issue appeared to be specific to this particular database and was an indicator that the underlying file system needed to be optimized (in this case, the SAN needed to be reconfigured) so that deletion could run faster. For more information on optimizing SQL Server disk I/O, see the Predeployment I/O Best Practices white paper.
During synchronization, a number of locks are placed on the databases to ensure validity. A write lock is applied to the target database before the transfer of the files, and a read commit lock is applied to the source database at almost the same moment. The write lock is released from the target Analysis Services database once the metadata is validated and the transaction is committed. The read commit lock on the source is released at about the same moment as the write lock, because both are taken as part of the same distributed transaction. This means that during the synchronization process a write lock is applied on the target server (when a target database exists), preventing users from querying and/or writing to the database. There is only a read commit lock on the source server, which prevents processing from committing new data but allows queries to run, so multiple target servers can be synchronized at the same time. In the end, this means that while synchronization has improved quite a bit in SQL Server 2008 Analysis Services (and the performance degradations seen were specific to this database and its disk configuration issues), you still need a maintenance window because users cannot query the target database for a short while. This maintenance window must be a bit longer if you must synchronize from one source to multiple target servers.
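Because the source server holds only a read commit lock during synchronization, several target servers can pull from it concurrently, which shortens the overall maintenance window. The sketch below illustrates this fan-out with a thread pool; `send_sync_command` is a stand-in for whatever mechanism actually submits the Synchronize command to each target (ascmd.exe, an SSIS task, an XMLA endpoint, and so on), not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

# send_sync_command is a placeholder for your real XMLA submission
# mechanism; here it only returns a status string for illustration.
def send_sync_command(target_server: str, database_id: str) -> str:
    # In a real deployment, issue the Synchronize command against
    # target_server here and wait for it to complete.
    return f"synchronized {database_id} on {target_server}"

def sync_all_targets(targets, database_id):
    """Synchronize one source database to many targets in parallel."""
    with ThreadPoolExecutor(max_workers=len(targets)) as pool:
        futures = [pool.submit(send_sync_command, t, database_id) for t in targets]
        return [f.result() for f in futures]

results = sync_all_targets(["QueryServer1", "QueryServer2"], "Foodmart")
```

Running the targets in parallel rather than serially keeps the source's read commit lock (and the blocked processing commits) held for one synchronization's duration instead of the sum of all of them.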
The backup/restore method is the most common data recovery method for an Analysis Services database and can be adapted as a synchronization technique. After you back up your database on your source server, you can copy it to multiple target servers and then execute the restore on each target server. In addition, when you restore your database, you can rename it on your target servers so that you can keep two different versions of the database. The synchronization process takes the following form:
1. Back up the database [Foodmart] on the source server.
2. Point the Report UI to the [Foodmart] database on the target server.
3. Copy the database to your target servers.
4. Restore the database with a new name, [Foodmart V2].
5. Point the Report UI to the [Foodmart V2] database on the target server.
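Steps 1 and 4 above correspond to the XMLA Backup and Restore commands; the sketch below builds both, using the sample [Foodmart] database from the text. The backup file path is a placeholder, and the key detail is the DatabaseName element on Restore, which is what lets you restore under the new name while the original stays queryable.

```python
# Sketch of the XMLA behind the backup (step 1) and the renamed
# restore (step 4). File paths and names are placeholders.

def build_backup_xmla(database_id: str, backup_file: str) -> str:
    """Back up a database to an .abf file on the source server."""
    return f"""
<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object><DatabaseID>{database_id}</DatabaseID></Object>
  <File>{backup_file}</File>
  <AllowOverwrite>true</AllowOverwrite>
</Backup>""".strip()

def build_restore_xmla(backup_file: str, new_database_name: str) -> str:
    # DatabaseName restores under a new name ("Foodmart V2"),
    # keeping the original [Foodmart] queryable during the restore.
    return f"""
<Restore xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <File>{backup_file}</File>
  <DatabaseName>{new_database_name}</DatabaseName>
  <AllowOverwrite>true</AllowOverwrite>
</Restore>""".strip()
```

As with Synchronize, these are the same commands SQL Server Management Studio emits when you script the Backup and Restore dialogs, so you can cross-check the element names against a scripted example from your own server.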
The advantage of this approach is that if the restoration of the [Foodmart V2] database fails, users can still query the original [Foodmart] database. As well, your backup doubles as your data redundancy and synchronization mechanism, so it is slightly easier to maintain. A disadvantage of this approach is that you need two copies of the database—this requires twice the disk space you originally required. For enterprise environments, the main disadvantages involve time—the time required to robocopy the full database backup from one server to another (versus copying just the delta) and the time to restore the database.
Note that when you are restoring the database on the target server, a read lock is first initiated and then released for validation, a write lock is initiated if the database exists, restoration of the files occurs, the write lock is released, and then the read lock is released. This mechanism of locks ensures the integrity of the database as it is being restored. At the same time, it also means that the database cannot be queried during the restore. This is the reason we suggest using two databases with different names: it minimizes query downtime. If you have a wide query downtime window (or your UI cannot change connection strings easily), you can always restore over the existing database instead of maintaining two copies.
As part of SQL Server 2008 Analysis Services (recently introduced in the SQL Server 2008 CTP6 release), you have the option to attach/detach your Analysis Services database and set it to read-only. This makes it safer for your Analysis Services databases because you can ensure that no one can make changes to your database. The great thing about attach/detach (analogous to the SQL attach/detach feature) is that you can detach your database from your processing server, copy it over to your query server, and then simply attach it. The synchronization process for attach/detach involves:
1) Complete processing the [Foodmart] database on the source server.
2) Detach the [Foodmart] database on the source server.
3) Robocopy the database from the source server to target server(s).
4) Re-attach the [Foodmart] database on the source server.
5) Detach the old [Foodmart] database on the target server.
6) Attach the new [Foodmart] database on the target server; you can specify the read-only usage mode within the Attach command.
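Steps 2 and 6 above map to the XMLA Detach and Attach commands; a minimal sketch follows. The data folder path is a placeholder, and the ReadWriteMode element (which carries the SQL Server 2008 engine/100 namespace) is what attaches the database read-only on the query server.

```python
# Sketch of the Detach/Attach XMLA used in the steps above.
# The data folder path is a placeholder for your server's data directory.

def build_detach_xmla(database_id: str) -> str:
    """Detach a database so its folder can be copied elsewhere."""
    return f"""
<Detach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object><DatabaseID>{database_id}</DatabaseID></Object>
</Detach>""".strip()

def build_attach_xmla(folder: str, read_only: bool = True) -> str:
    """Attach a database folder, optionally in read-only mode."""
    mode = "readOnly" if read_only else "readWrite"
    return f"""
<Attach xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Folder>{folder}</Folder>
  <ReadWriteMode xmlns="http://schemas.microsoft.com/analysisservices/2008/engine/100">{mode}</ReadWriteMode>
</Attach>""".strip()
```

Attaching read-only on the query server enforces the query/processing split at the engine level: no processing or writeback can accidentally run against the query copy.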
The advantage of this approach is that while you have to robocopy the full database (versus the delta files), this involves simply attaching a database. Since the attached database on the target server is read-only, you can also set your SAN mounts to be optimized for reads so your storage engine queries will execute faster.
Similar to the backup/restore technique, the attach/detach method requires more disk space because you need two copies of the database. As well, you must detach the database on the target server before attaching the new one on the target server (you cannot attach with a different name) because of the database ID issue noted in Renaming Olap Databases Issues within Analysis Services.
Note that in the near future as we complete tests (including performance numbers) with real customer implementations, we will have more on the read-only feature, which allows multiple query servers to query a single database on a shared SAN disk space.
There is already a bit written on the robocopy method in other articles.
A general summary is that this method enables you to copy only the delta files (that is, the changed files) of your Analysis Services database from your source to your target servers (similar to the Synch method). In enterprise environments this is especially handy, because you want to copy only the changed files and you want to do it quickly. Robocopy is an especially fast and effective tool for copying files from one server to another, and it can also be run in parallel if you want to upload to multiple query servers concurrently. Disadvantages of this method include the need to stop and restart your Analysis Services service, the fact that the Synch and robocopy methods do not intermix, and the loss of some functionality (such as writeback, ROLAP, and real-time updates). This methodology is especially effective for query/processing architectures that involve only one database per server.
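The core of the method is a mirroring copy of the database's data folder. The sketch below assembles one such robocopy invocation; the flag set and UNC paths are illustrative assumptions, not a tested recommendation, so adjust them against the robocopy articles referenced above before using them in production.

```python
# Sketch: assemble a robocopy invocation that mirrors only changed
# files of a database folder to a target server (service stopped on
# the target). The flags and UNC paths here are illustrative only.

def build_robocopy_cmd(source_dir: str, target_dir: str) -> list:
    return [
        "robocopy",
        source_dir,
        target_dir,
        "/MIR",   # mirror the tree, deleting files absent from the source
        "/NP",    # no per-file progress output, keeps logs small
        "/R:3",   # retry a few times on locked files
        "/W:5",   # wait 5 seconds between retries
    ]

cmd = build_robocopy_cmd(r"\\SourceAS\Data\Foodmart.0.db",
                         r"\\TargetAS\Data\Foodmart.0.db")
# On Windows, execute with subprocess.run(cmd); launch one copy per
# target server in parallel if you have multiple query servers.
```

Because /MIR copies only files whose size or timestamp changed, repeat runs move just the delta, which is what makes this method fast for large databases.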
As you can see, all of the above techniques are quite useful and have their advantages and disadvantages. But regardless of the technique you use to synchronize your database, all of these methods require some operational maintenance window to allow the synchronization to complete. An effective way to get around this is to have two instances (as noted in Scale-Out Querying with Analysis Services), but this requires two copies of the same database and control of the UI connection strings.
Figure 2: Queries to active instances (AS1, AS3) while synchronization occurs to the idle instances (AS2, AS4)
Regardless of the technique you choose, it is important to remember that synchronization requires clear cut-off times between processing, querying, and synchronization so that each of these tasks can be performed without interference from the others. One important note: ensure that lazy processing is disabled and use ProcessIndexes after your ProcessUpdate. ProcessIndexes completes the index and aggregation build much faster than lazy aggregations, and it provides a definitive cut-off time so you know when you can start your synchronization or allow queries to run.
As you can see, there are a number of ways to synchronize your Analysis Services database from your source to target servers. The robocopy method is extremely fast and you can transfer files to multiple servers concurrently but has its own set of disadvantages; it is most effective for the scenario where there is a single database on an Analysis Services server. For SQL Server 2008 Analysis Services, the backup/restore and synchronization methods have great improvements in performance but require space and/or maintenance windows during which queries cannot be executed.
Note: The numbers quoted in this technical note are specific to a single customer database; we expect your system to see improved performance though not necessarily to the magnitude seen here. As well, the SQL Server 2008 Analysis Services read-only (attach/detach) feature introduces a new and intriguing way to synchronize and an optimized way to query as well. The read-only feature allows multiple servers to query the same single Analysis Services database to allow for more concurrent queries—more on this topic in a later document. All of these techniques involve some form of a maintenance window; hence it is important for you to create clear-cut windows for querying, processing, and synchronization processes for your enterprise Analysis Services environment. This will minimize the conflicts for your enterprise Analysis Services environments and make your system a little easier to manage.
We are calling out lazy processing in the context of synchronization because while lazy processing is occurring in the background, it may interfere with synchronization processes, and this could result in a non-queryable database on the target server. Lazy processing performs the task of building indexes and aggregations for dimensions and measure group partitions at a lower priority to reduce foreground processing time and to allow users to query the cube sooner. For lazy processing to occur, you must set ProcessingMode = LazyAggregations on your measure group partitions; by default this value is Regular (lazy processing is turned off). When you process a dimension with flexible aggregations (such as parent-child or virtual dimensions) by using the ProcessUpdate processing enumeration (for example, to take into account member name or hierarchy changes), lazy processing is initiated to ensure that the aggregations are rebuilt on the associated measure group partitions. The problem, as noted above, is that if synchronization occurs concurrently with lazy processing, the result may be a non-queryable database due to file lock conflicts. While you can monitor lazy processing by using SQL Server Profiler, there is no AMO method that allows you to automatically monitor the creation of lazy aggregations. It is also quite difficult to estimate the time it takes for lazy processing to complete because it depends on the size and cardinality of your cube. As well, since lazy processing is a background process, its duration depends on what is executing in the foreground at the time. Therefore, to ensure that synchronization does not interfere with processing, it may make sense to shut off lazy processing completely (ProcessingMode = Regular).
To ensure that the aggregations of your measure group partitions are rebuilt when you process a dimension with flexible aggregations, use the ProcessIndexes processing enumeration after processing your dimensions with ProcessUpdate. While this processing occurs in the foreground, it gives you a clear cut-off time for when your dimensions and measure group partitions have completed processing. After this is done, you can synchronize your cube without conflicting with lazy processing.
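The ProcessUpdate-then-ProcessIndexes sequence can be expressed as a single XMLA Batch, which gives the definite completion point described above. The sketch below builds such a batch; the dimension and cube IDs are placeholder names, not objects from the sample database.

```python
# Sketch of an XMLA Batch: ProcessUpdate on a dimension, then
# ProcessIndexes on the cube to rebuild flexible aggregations in the
# foreground. DimensionID and CubeID below are placeholder names.

def build_process_batch_xmla(database_id: str, dimension_id: str, cube_id: str) -> str:
    return f"""
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>{database_id}</DatabaseID>
      <DimensionID>{dimension_id}</DimensionID>
    </Object>
    <Type>ProcessUpdate</Type>
  </Process>
  <Process>
    <Object>
      <DatabaseID>{database_id}</DatabaseID>
      <CubeID>{cube_id}</CubeID>
    </Object>
    <Type>ProcessIndexes</Type>
  </Process>
</Batch>""".strip()
```

When the batch returns, indexes and aggregations are fully built, so synchronization (or queries) can safely begin with no lazy aggregation work left running in the background.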