SQL Server recommendations for System Center Configuration Manager environment


The goal of this White Paper is to provide guidance to ConfigMgr administrators and IT professionals on how to configure and administer the SQL Server instances related to System Center Configuration Manager. This guide deals with instance settings, database settings, global settings, and maintenance plan considerations. Finally, some tips and guidance for monitoring and performance considerations are provided at the end of this document.

The recommendations in this White Paper cover the main SQL Server versions, in this case from SQL Server 2008 to SQL Server 2016. Most of the time, the topics are general and apply regardless of the SQL Server version. However, when a version-specific exception exists, it is called out with a note (for some SQL Server enhancements, for example).

SQL Instance settings

The purpose of this section is to deal with the best practices applied to the global settings of the SQL Instance hosting the Configuration Manager data. Most of these recommendations are common to any application, but we will highlight recommendations specific to ConfigMgr as well.

Maximum Degree of Parallelism (aka MaxDOP)

The Maximum Degree of Parallelism parameter sets the maximum number of processors used to execute a parallelized query. In this context, "processor" means a processor core.

By default, the value is set to 0, implying that the SQL engine will use ALL the available cores.


There is no single best-practice MaxDOP value for ConfigMgr, but it is usually better to configure a value other than 0.
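As a starting point, you can inspect the core and NUMA layout of the server before choosing a value. Note that the rule of thumb in the comment below is a common general guideline, not a recommendation stated by this paper:

```sql
-- Core layout of the server; a frequent rule of thumb caps MaxDOP at the
-- number of cores per NUMA node, and at 8 on larger machines
SELECT cpu_count,
       hyperthread_ratio,
       cpu_count / hyperthread_ratio AS sockets
FROM sys.dm_os_sys_info;
```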

How to check it

You can use the system view sys.configurations to check if your MaxDOP parameter is properly set:

select name, value
from sys.configurations
where name = 'max degree of parallelism'

Why is that important

If we consider an example of a parallelized query, the request is divided into several tasks, each of which executes a piece of the required work. Because some tasks complete more quickly than others, this mechanism implies some waits, called CXPACKET, for thread synchronization.

Yet, the more threads you use, the more CXPACKET waits you will get. So, the recommended value is around the threshold at which the performance gain stops increasing, or worse, may decrease.

If you leave the MaxDOP value at zero on a server with a high number of cores, you risk accumulating huge CXPACKET waits, impacting your ConfigMgr platform performance.

Advanced considerations - Optional

CXPACKET is not evil, but setting a correct value for the MaxDOP parameter is a good way to mitigate excessive parallelism. To measure CXPACKET waits, you can check the global wait stats with this query:

select TOP 10 * from sys.dm_os_wait_stats
order by wait_time_ms DESC

If CXPACKET is the first wait type, with a huge wait_time_ms value compared to the other wait types, you can decrease the MaxDOP value further, to 4, then 2 or 1. This can especially be the case if you have a very large Configuration Manager environment.
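To put the CXPACKET figure in perspective, you can compute each wait type's share of the total wait time, a variant of the query above:

```sql
-- Share of total wait time per wait type; look at CXPACKET's percentage
SELECT TOP 10 wait_type,
       wait_time_ms,
       CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS DECIMAL(5,2)) AS pct_of_total
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```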

How to configure it

To set the MaxDOP value to 2, for example, by script (note that 'max degree of parallelism' is an advanced option, so 'show advanced options' must already be enabled):

EXEC sp_configure 'max degree of parallelism', 2
RECONFIGURE




To set the MaxDOP value to 2 through Management Studio:

  • Right-click on the SQL instance
  • Properties
  • "Advanced" page
  • "Parallelism" section

Technical Reference


Maximum Server Memory

The Maximum Server Memory parameter sets the maximum Buffer Pool size of the SQL instance.

By default, the value is set to 2147483647, implying that the SQL engine will consume all the memory it can. The recommendation is to leave an amount of memory for the system and for any other applications hosted on the same server (if there are any, which is not recommended).

How to check it

You can use the system view sys.configurations to check if your Maximum Server Memory parameter is properly set:

select name, value
from sys.configurations
where name = 'max server memory (MB)'

Why is that important

By "system", we mean here the operating system kernel, but also all kinds of agents such as antivirus agents, monitoring agents, specific drivers, and so on. If SQL Server is allowed to consume the whole amount of memory, and if it does, you may encounter external memory pressure issues. Consequently, it may indirectly impact the performance of your ConfigMgr platform and the overall health of your system.

So, regarding the total amount of physical memory, it is recommended on a dedicated server to leave 2-4 GB to the system, granting it enough memory to work without this kind of risk. For example, if your server has 32 GB of physical memory, set the Maximum Server Memory to 28 GB.
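As an illustration of that rule, you can derive a candidate value directly from the server's physical memory. The 4 GB reserve below is the example figure from the text; adjust it to your context:

```sql
-- Candidate 'max server memory (MB)': total physical memory minus a 4 GB OS reserve
SELECT total_physical_memory_kb / 1024 AS physical_memory_MB,
       total_physical_memory_kb / 1024 - 4096 AS candidate_max_server_memory_MB
FROM sys.dm_os_sys_memory;
```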

Advanced considerations - Optional

Additionally, note that there is no specific advantage in applying the rule "keep 10% of memory for the system". If your server has a huge amount of physical memory (say, 256 GB), there is no benefit in leaving 25 GB for the system. That is too much, and does not make sense for a dedicated SQL server.

Recommended configuration for ConfigMgr

  1. If SQL Server is installed on the same server as the ConfigMgr binaries, then configure max server memory to 50-60% of total memory.
  2. If SQL Server is installed on a different server than the ConfigMgr binaries, then configure max server memory to 70-80% of total memory.

Note: the % of total memory might change depending on the total amount of memory available on the server.

How to configure it

Referring to the example above, to set the Maximum Server Memory value by script:

EXEC sp_configure 'max server memory (MB)', 28672
RECONFIGURE




To set the Maximum Server Memory value through Management Studio:

  • Right-click on the SQL instance
  • Properties
  • "Memory" page
  • "max server memory (MB)" setting

Technical Reference

Database Collation

The database collation defines a combination of character set and sort order.

The SQL collation is defined during the instance installation phase, which is why it is important to keep it in mind from the very beginning.

How to check it

You can use the following statement to check which collation is defined once the installation is completed:

SELECT SERVERPROPERTY('Collation') AS ServerCollation
Why is that important

At each site, both the instance of SQL Server that is used for the site and the site database must use the following collation: SQL_Latin1_General_CP1_CI_AS.

Configuration Manager supports two exceptions to this collation to meet standards that are defined in GB18030 for use in China. Chinese exceptions are described here: https://technet.microsoft.com/library/hh738544.aspx

If this collation is not respected, your platform risks being unsupported, and the mismatch may also increase CPU consumption.
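To spot any deviation quickly, the following query (an additional check, not from the original text) lists the databases whose collation differs from the required one:

```sql
-- Databases not using the collation required by ConfigMgr
SELECT name, collation_name
FROM sys.databases
WHERE collation_name <> 'SQL_Latin1_General_CP1_CI_AS';
```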

How to configure it

As mentioned, the collation name is defined during the installation setup:

Additional instance settings

A couple of SQL instance settings have to be enabled as well in a System Center Configuration Manager context.

Normally they are enabled by default with the installation of ConfigMgr. You can find their description below.


CLR Enabled

The Management Point Control Manager uses CLR to connect to the site database.

So it has to be turned on.

Max text repl size

The max text repl size setting is especially required in a "replica Management Point" context.

If that is implemented, set max text repl size to 2 GB on both the site database server and the replica server.
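Setting both options can be sketched as follows (2 GB expressed in bytes; run it on both servers):

```sql
-- Enable CLR and raise max text repl size to 2 GB (2147483647 bytes)
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
EXEC sp_configure 'max text repl size (B)', 2147483647;
RECONFIGURE;
```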

How to check it

You can use the system view sys.configurations to check if these settings are properly configured:

select name, value
from sys.configurations
where name in ('clr enabled', 'max text repl size (B)')

TraceFlags consideration

A trace flag is a SQL Server startup parameter. Generally, its role is to bring a specific modification to the SQL engine's behavior. Even if the two following trace flags are especially significant for the TempDB database, please note that when you enable a trace flag, it always impacts the whole SQL instance and therefore all the databases.


Referring to the paragraph below dealing with the TempDB specifications (§2.1), it is recommended to apply the trace flags 1117 and 1118 on the SQL instance.

Important SQL Server 2016 consideration

Beginning with SQL Server 2016, the behavior of trace flags 1117 and 1118 is superseded by the AUTOGROW_ALL_FILES and MIXED_PAGE_ALLOCATION options of the ALTER DATABASE statement, and they are no longer required.

How to check it

You can use the following DBCC statement to check if these two trace flags are enabled.

DBCC TRACESTATUS (-1)

The expected result should return two records if these trace flags are enabled.

Note that you may have another additional record, because ConfigMgr requires the Trace flag 8295 by design for change tracking reason.

If you do not see this kind of result, and the result set comes back empty instead, it means that no trace flag is enabled on your SQL environment.

Why is that important

Trace Flag 1117: When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow. This is especially useful for the TempDB database in order to apply the recommendation to keep all the files at the same size.

Trace Flag 1118: When enabled, this trace flag allocates all eight pages from the same extent when creating new objects, avoiding mixed extents, and thereby PFS and SGAM contention in a TempDB context. See the "Advanced considerations - Optional" part below for further information.

Advanced considerations - Optional

When a new object is created, by default, the first eight pages are allocated from different extents (mixed extents). Afterwards, when more pages are needed, those are allocated from that same extent (uniform extent). The SGAM page is used to track these mixed extents, so can quickly become a bottleneck when numerous mixed page allocations are occurring.

As part of allocating a page from the mixed extent, SQL Server must scan the Page Free Space (PFS) page to find out which mixed page is free to be allocated. The PFS page keeps track of free space available on every page, and each PFS page tracks about 8000 pages. Appropriate synchronization is maintained to make changes to the PFS and SGAM pages; and that can stall other modifiers for short periods.

When SQL Server searches for a mixed page to allocate, it always starts the scan on the same file and SGAM page. This results in intense contention on the SGAM page when several mixed page allocations are underway, which can cause some PFS and SGAM contentions, globally identified as "TempDB bottleneck".

How to configure it

The trace flags are defined through SQL Server Configuration Manager.

The setting is done in the properties of the SQL Server engine service, as shown below:

In the property window, select the "Startup Parameters" tab, and add both the -T1117 and -T1118 parameters in the top section.

This action requires a restart of the SQL Server service.
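If you cannot restart the service immediately, the flags can also be enabled globally at runtime. Note that they are lost at the next restart, so the startup parameters above remain necessary:

```sql
-- Enable the flags globally until the next service restart
DBCC TRACEON (1117, -1);
DBCC TRACEON (1118, -1);
```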

Technical Reference

Local Security Policies

Some local security policies can have performance impacts, and two of them are recommended to be applied to the SQL Server engine service account. Note that this can be done automatically with a GPO.


It's highly recommended to grant the "Lock Pages in Memory" and "Perform volume maintenance tasks" privileges to the SQL Server engine service account.

Why is that important

The Lock Pages In Memory (LPIM) privilege has historically been used in 3 scenarios:

  • To enable the AWE feature on 32-bit platforms... let's forget it, the past is the past.
  • Previous versions of Windows had some memory-management issues that could cause large amounts of unnecessary paging, requiring the use of LPIM to prevent a SQL Server instance from having a large percentage of its working set paged out to disk. If you are running the latest version of SQL Server on the latest version of Windows Server, LPIM should not be necessary for this reason.
  • LPIM is also required in order to support Large Page Allocation in SQL Server. Essentially, this setting allows for more efficient memory mapping, as it leads to a smaller total number of allocations.

The Perform volume maintenance tasks privilege gives the Instant File Initialization (IFI) capability to SQL Server when enabled. The benefit of IFI is that the pages of the data files of all databases are not zero-filled in the following cases:

  • Creating a database.
  • Adding files to an existing database.
  • Increasing the size of an existing file (including auto growth operations).
  • Restoring a database or filegroup.

Note that it does not apply to log files, because virtual log files have to be formatted.

How to check it / configure it

These functionalities are available by applying local policies to SQL service accounts.

In the Local Security Policy management console, check and ensure that your SQL Server service account is associated with the two privileges "Lock Pages in Memory" and "Perform volume maintenance tasks". If not, add it in the right-click Properties window.
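On recent builds (SQL Server 2012 SP4, SQL Server 2016 SP1 and later; the columns below do not exist on older builds), you can also verify from T-SQL that both privileges are effective:

```sql
-- LOCK_PAGES (or LARGE_PAGES) means Lock Pages in Memory is in use
SELECT sql_memory_model_desc FROM sys.dm_os_sys_info;

-- instant_file_initialization_enabled = 'Y' means IFI is active
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```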

Technical Reference

Database Settings

The database settings relate both to system databases and to ConfigMgr databases. The specific scope will be mentioned for each case.

TempDB and autogrowth settings

The size and physical placement of the tempdb database can affect the performance of a system. For example, if the size that is defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server. Multiple tempdb files can reduce contention on various global allocation structures by spreading activity over multiple physical files. This is particularly useful for those transactions using tempdb database, which spend a large percentage of time allocating and de-allocating tables.



How to check it

You can use the system view sys.master_files to check if your TempDB file parameter is properly set:

USE master;
GO
SELECT MF.database_id, MF.name, MF.type_desc as File_type,
MF.size*8/1024 AS Initial_Size_MB,
DF.size*8/1024 AS Current_Size_MB, MF.is_percent_growth,
case MF.is_percent_growth when 1 then DF.size*8/1024*MF.growth/100
else MF.growth*8/1024 end as Next_growth_MB
FROM sys.master_files MF
INNER JOIN tempdb.sys.database_files DF
ON MF.name=DF.name
WHERE MF.database_id=2 and MF.type=0

Why is that important

When the tempdb database is heavily used, SQL Server may experience contention when it tries to allocate pages. Depending on the degree of contention, this may cause queries and requests that involve tempdb to be unresponsive for short periods of time.

Advanced considerations - Optional

When you examine the Dynamic Management Views sys.dm_exec_requests or sys.dm_os_waiting_tasks, you may observe that these requests or tasks are waiting for tempdb resources. You will notice that the wait type and wait resource point to LATCH waits on pages in tempdb. These pages might be of the format 2:1:1, 2:1:3, etc.
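A live snapshot of that symptom can be taken as follows (resources of the form 2:x:1 and 2:x:3 being the PFS and SGAM pages of tempdb):

```sql
-- Tasks currently latch-waiting on tempdb pages
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```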

How to configure it

To set the correct value by script:

-- declaring variables
DECLARE
@database_file_name sysname,
@sqlcmd nvarchar(4000),
@tempdbfilesize INT,
@tempdbnumberofFile INT,
@tempdbfilegrowthinMB INT,
@FileCount INT,
@tempdbFilePath NVARCHAR(500),
@tempnewdatafilename nvarchar(500)

--specifying values to be used
SET @tempdbfilesize=[value]
SET @tempdbfilegrowthinMB=[value]
SET @tempdbnumberofFile=[value]
SET @tempnewdatafilename='[value]'

-- counting the existing data files
SELECT @FileCount=count(1) FROM tempdb.sys.database_files df
WHERE df.type=0

--getting current path for tempdb data files based on the first tempdb datafile
SELECT TOP 1 @tempdbFilePath=replace(physical_name,'.mdf','') FROM tempdb.sys.database_files df
WHERE df.type=0 and file_id=1
SET @tempdbFilePath=Replace(@tempdbFilePath, reverse(left(reverse(@tempdbFilePath), charindex('\',reverse(@tempdbFilePath))-1)),'')

--Adding additional files
WHILE @FileCount<@tempdbnumberofFile
BEGIN
    SET @FileCount=@FileCount+1
    SET @sqlcmd = N'ALTER DATABASE TempDB ADD FILE ( NAME = N''' + @tempnewdatafilename + cast(@FileCount as nvarchar(5)) + ''',
    FILENAME = N''' + @tempdbFilePath + @tempnewdatafilename + cast(@FileCount as nvarchar(5)) + '.ndf'', SIZE = ' + CAST(@tempdbfilesize as NVARCHAR(30)) + 'MB, FILEGROWTH = ' + CAST(@tempdbfilegrowthinMB as NVARCHAR(30)) + 'MB)'
    EXEC sp_executesql @sqlcmd
END

--making sure to have equal size and AUTOGROWTH settings
DECLARE database_file_name_cursor CURSOR FOR
SELECT name
FROM tempdb.sys.database_files df
WHERE df.type=0

OPEN database_file_name_cursor;
FETCH NEXT FROM database_file_name_cursor INTO @database_file_name;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sqlcmd = N'ALTER DATABASE TempDB MODIFY FILE
    (NAME = ' + @database_file_name + ', SIZE = ' + CAST(@tempdbfilesize as NVARCHAR(30)) + 'MB, FILEGROWTH = ' + CAST(@tempdbfilegrowthinMB as NVARCHAR(30)) + 'MB );'
    EXEC sp_executesql @sqlcmd
    FETCH NEXT FROM database_file_name_cursor INTO @database_file_name;
END
CLOSE database_file_name_cursor;
DEALLOCATE database_file_name_cursor;

Then replace all the [value] placeholders with your specific values.

To configure the TempDB files and autogrowth through Management Studio:

  a. Expand Databases (by clicking the + (plus) sign).
  b. Expand System Databases (by clicking the + (plus) sign).
  c. Right-click the database tempdb, and in the pop-up menu, click Properties.
  d. On the Database Properties screen, in the Select a page section, click Files.
  e. Adjust the initial file size for the existing data files: select the column Initial Size (MB) for the tempdb data file and adjust it to the desired initial size value.
  f. Adjust the AUTOGROWTH settings for the existing data files: click the button at Autogrowth / Maxsize.
     • On the Change Autogrowth screen, click In Megabytes.
     • Change the value for megabytes to grow.
     • Click OK on the Change Autogrowth screen.
  g. Repeat steps e. and f. for each existing data file of the tempdb.
  h. To add additional data files:
     1. Click the Add button.
     2. Enter the Logical name for the new data file.
     3. Adjust the initial file size to the value used in step e.
     4. Adjust the AUTOGROWTH settings as in step f.
     5. Check the path and adjust it if required.
     6. Add a file name for the new data file.
  i. Repeat step h. for each data file which needs to be added.
  j. Click OK.

Technical Reference

Database compatibility level

This section applies only if your SQL instance is on version 2014 or 2016.


Apply the right compatibility level to the databases regarding the SQL Server version.

SQL Server version | Supported compatibility level values | Recommended compatibility level for ConfigMgr database
-------------------|--------------------------------------|-------------------------------------------------------
SQL Server 2016    | 130, 120, 110, 100                   | 110
SQL Server 2014    | 120, 110, 100                        | 110

Why is that important

Since SQL Server 2014 introduced an engine enhancement based on a new Cardinality Estimator, you may encounter performance issues with certain Configuration Manager queries in some environments, depending on variables such as site configuration, environment complexity, and the general state of load and performance of the SQL server.

Therefore, if you upgrade or migrate a ConfigMgr database from a version lower than 2014 / 2016 to a newer one, you have to ensure that after the upgrade or migration your database is well adapted to the new environment. To do that, check the compatibility level of the upgraded / migrated database against the recommendations mentioned above.

How to check it

You can use the following SQL statement to check if the database compatibility level is well aligned.

select name, compatibility_level from sys.databases

How to configure it

To set the database compatibility level by script (replace CM_XXX with your site database name, and 110 with the desired level):

ALTER DATABASE CM_XXX SET COMPATIBILITY_LEVEL = 110

To set the database compatibility level on the SSMS console:

Technical Reference

Read committed snapshot isolation level


If the SQL application is suffering from multiple locking problems, the recommended solution is to enable the RCSI option on the database.

This requires exclusive access to the CM_DB database, but it does not require a restart of the service.

Why is that important

By default, all queries are executed under the read committed isolation level. It ensures atomic transactions and consistent reads. Consequently, when a session tries to access a data page while another session holds exclusive access, the first one is blocked by a lock. The solution to bypass the exclusive lock is to enable the read committed snapshot isolation level on the database. Then, any data page used in exclusive mode is copied into a dedicated area of TempDB named the "Version Store". Any query will then read the stored version instead of being blocked on the exclusively locked original resource, and will return a consistent result immediately.
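Since the version store lives in TempDB, you may want to watch its footprint once RCSI is enabled. This is an additional check, not part of the original guidance:

```sql
-- Version store footprint in tempdb, in MB
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_MB
FROM tempdb.sys.dm_db_file_space_usage;
```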

How to check it

To know if the read committed snapshot isolation level is already enabled, you can query like following:

select name, is_read_committed_snapshot_on from sys.databases

Note that since SQL Server 2012 it is also possible to check it with SSMS, in the database properties window, Options tab:

How to configure it

To enable the read committed snapshot isolation level, first ensure that you have exclusive access to the database. Then you can alter it (replace CM_XXX with your site database name):

ALTER DATABASE CM_XXX SET READ_COMMITTED_SNAPSHOT ON
Technical Reference

Global Settings

SQL Maintenance plan

The maintenance plan is crucial both for the availability and consistency of the databases, and for global SQL performance.


There's no strict recommendation about the whole sequence of maintenance tasks, because it depends on the available maintenance timeframe, especially if the ConfigMgr databases are hosted on a shared instance. However, if possible, you can consider these ones:

  • Check Integrity
  • Rebuild indexes
  • Update Statistics
  • Full backup
  • Cleanup backups retention

These apply both on ConfigMgr databases and system databases (model, msdb, master).

The "when", "how" and "how often" will depend on the volume of the overall data and the available daily window.

Integrity Check

An integrity check task will verify logical and physical integrity of all the objects of a specified database. It is typically operated by a DBCC CheckDB statement.

If any corruption occurs on the database, it will be identified. If it is, enough full backups have to be available to make a restore possible if the corruption cannot be fixed.

The integrity check task can be done either via the maintenance plan task or via a script scheduled as a job. You can refer, for example, to the following kind of script to apply it to all the databases of the SQL instance:

exec sp_MSforeachdb 'Use ?; DBCC CheckDB (?); Print ''DB '' + db_name() + '' Checked'' + char (10) + char (13)'

Rebuild Index

The role of the rebuild index task is to avoid as much as possible the page fragmentation from indexes.

Note that a rebuild index task from the integrated SQL maintenance plan rebuilds ALL the indexes of a specified database, whatever their size and whatever their fragmentation rate. This has little impact on small and medium databases, but on big ones it may take significant time.

It is possible to run a rebuild index task with a custom T-SQL script, for example to rebuild only the most fragmented indexes. Be aware that scripts available on internet blogs are supported by the authors of those scripts; it is not Microsoft's role to troubleshoot any "bug" in them.

SQL Server 2016 enhancement

SQL Server 2016 introduces the possibility to define a fragmentation rate threshold. It then becomes possible to rebuild only sufficiently fragmented indexes, and thus only spend time on the necessary objects. Additionally, it is also possible to specify a page count value to consider only sufficiently large indexes.

Note that the proposed values are default values and may be changed.
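On versions prior to SQL Server 2016, the same filtering can be approximated by script. The sketch below generates REBUILD statements only for indexes above illustrative thresholds (30% fragmentation, 1000 pages; adjust to your context):

```sql
-- Generate rebuild statements only for sufficiently fragmented, large indexes
SELECT 'ALTER INDEX [' + i.name + '] ON [' + SCHEMA_NAME(o.schema_id) + '].[' + o.name + '] REBUILD;'
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'limited') ips
INNER JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
INNER JOIN sys.objects o ON i.object_id = o.object_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 1000
  AND i.name IS NOT NULL;
```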

Even if the Max degree of parallelism is recommended to be set to 2, for example, for ConfigMgr databases, you can define here a value of 8 for the MaxDOP dedicated to the Rebuild Index task.

Finally, keep in mind that the Rebuild Index task applies only to clustered and nonclustered indexes, not to heaps. It is one of the many reasons to have a clustered index on every table.
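To identify tables that the Rebuild Index task would therefore skip, you can list the heaps of the database (a complementary check):

```sql
-- User tables without a clustered index (heaps)
SELECT t.name AS TableName
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id
WHERE i.index_id = 0;
```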

The Rebuild Index task can be done either via the maintenance plan task or via a script scheduled as a job. You can refer, for example, to the scripts proposed by Pedro Lopes on his blog for a customized index rebuild script: https://blogs.msdn.microsoft.com/blogdoezequiel/2011/07/03/adaptive-index-defrag/

Update Statistics

It's crucial to have up-to-date statistics to keep accurate execution plans and good performance.

Even if an index's statistics are automatically updated when it is rebuilt, this does not cover auto-created statistics and column statistics. Moreover, if the rebuild operation is done with a minimum fragmentation threshold, it is still important to consider the non-rebuilt indexes in the update statistics phase.

The Update Statistics task can be done either via the maintenance plan task or via a script scheduled as a job. You can refer, for example, to the following script to update statistics on all the databases of the SQL instance:

exec sp_MSforeachdb 'Use ?; exec sp_updatestats; Print ''Statistics updated on DB '' + db_name() + char (10) + char (13)'

Full Backup Database

Once you have ensured that your database is not corrupted, that the indexes are properly rebuilt and that the statistics are up-to-date, you can safely back it up. ConfigMgr databases are expected to use the simple recovery model, so full backups alone should be enough.

Ensure that your backup retention is aligned with your integrity check frequency to be able to restore a valid database.

Backup your ConfigMgr Site with SQL maintenance plan

Configuration Manager can recover the site database from the Configuration Manager backup maintenance task, or from a site database backup that you created with another process. For example, you can restore the site database from a backup that was created as part of a Microsoft SQL Server maintenance plan.

In the same spirit, ensure that the "verify backup integrity" option is checked.

Finally, it's recommended to enable backup compression.
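The recommendations above (full backup, verification, compression) translate by script to something like the following sketch (the database name and path are illustrative):

```sql
-- Full backup with compression and page checksums, then verification
BACKUP DATABASE CM_ABC
TO DISK = N'D:\Backups\CM_ABC.bak'
WITH COMPRESSION, CHECKSUM, INIT;

RESTORE VERIFYONLY FROM DISK = N'D:\Backups\CM_ABC.bak';
```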

SQL Backup to Azure Storage

Since SQL Server 2012 SP1 CU2, and more commonly on SQL Server 2014 and SQL Server 2016, it is possible to back up a database directly to the Windows Azure Blob service. It can be done in the maintenance plan context by specifying URL as the backup destination.
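A backup to Azure Blob storage looks like the following sketch (the storage account, container, and credential names are illustrative, and the SQL credential must be created beforehand):

```sql
-- Backup directly to the Windows Azure Blob service
BACKUP DATABASE CM_ABC
TO URL = N'https://mystorageaccount.blob.core.windows.net/backups/CM_ABC.bak'
WITH CREDENTIAL = N'MyAzureCredential', COMPRESSION;
```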

Cleanup backups retention

A cleanup retention task is simply a step that helps you delete old backup files beyond a defined retention period. It is necessary to avoid volume saturation.

The most reliable way to apply it is still the cleanup task from the maintenance plan.

How to check it

You will find existing maintenance plans in SQL Server Management Studio, in the Maintenance Plans node or amongst the list of existing jobs. Additionally, in the Jobs node you can get the details of each execution by requesting the Job History.

Alternatively, you can check the impact and the accuracy of the maintenance plan components detailed above.

Integrity Check

The last execution of the DBCC CheckDB statement can be retrieved as shown below:

DBCC DBINFO('NameOfTheDB') with TableResults

You can search the result set for the value of the dbccLastKnownGood indicator. If the value refers to January 1, 1900, it means that the database has never been checked.









The recommendation is to apply it at least every 14 days, ideally daily.

Rebuild Index

As explained in the section above, the rebuild index task's role is to avoid index fragmentation. So, you can check it with the following statement (per database):

select TOP 20 OBJECT_NAME(object_id) as TableName, Name as indexName, index_id, index_type_desc, avg_fragmentation_in_percent, page_count
from sys.dm_db_index_physical_stats (db_id(), NULL, NULL, NULL, 'limited') IPS
inner join sys.sysindexes I
on IPS.object_id = I.id AND IPS.index_id = I.indid
order by avg_fragmentation_in_percent DESC

Update Statistics

Index statistics and column statistics are both registered in the sys.stats system view.

It's possible to retrieve update information through the sys.dm_db_stats_properties function:

SELECT OBJECT_NAME(stat.object_id) as TableName,
stat.object_id, stat.name, stat.stats_id, last_updated, rows, rows_sampled, modification_counter
FROM sys.stats stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
Order by TableName

The result indicates, for each statistic of each table, the size of the object (rows and rows sampled), the number of modifications since the last statistics update, and the last update date.

Note that _WA_SYS_* objects are auto created statistics.

Database backups

As mentioned previously, the classical way to check whether the backups have run is to right-click the Jobs node and choose "Job History". But querying the backupset table in the msdb database directly can bring more accurate information, such as the last backup date, the backup size, and so on:

select TOP 50
database_name, type, backup_start_date, backup_size, compressed_backup_size, user_name
from msdb..backupset
order by backup_start_date DESC

The backupset table registers only successful backups. So, if the result set indicates that there have been no backups for several days, it is definitely related to a backup issue.

Why is that important

Maintenance plans are crucial in SQL Server, for a couple of obvious reasons mentioned above. Database integrity checks and full backups ensure the integrity and the recoverability of the databases, and the Rebuild Index and Update Statistics tasks guarantee stable performance.

Advanced considerations - Optional

You may also involve a reorganize index task, but it is not mandatory. Indeed, the rebuild index task impacts both the data page fullness and the page allocation order, while the reorganize index task only impacts the page allocation order.

Most of the time, if a rebuild index task is scheduled, it is enough to maintain a globally acceptable fragmentation level on indexes. Consider a reorganize operation only on very large SCCM data platforms.

Technical Reference

Windows operating system configuration: power saving settings


It's highly recommended to set the operating system power saving plan to High Performance.

Why it is important

In Windows Server 2008 and later operating systems, the default power saving setting is Balanced, which means that components such as CPU and storage will be scaled back if the system is not busy. In some cases, this may result in performance degradation for SQL Server.

How to configure it

The settings are located in the path Control Panel > Hardware > Power options

If the Balanced setting is shown as recommended, it is meant from an energy consumption point of view. For global SQL performance, the recommendation is to set it to High performance.

Note that you can manage power saving with GPO.

Technical references

Antivirus exclusion


It's highly recommended to properly set the antivirus exclusions if an antivirus is installed (which is recommended as well) on your SQL Server.

Why it is important

If no exclusion is set in the antivirus software, any SQL activity may be caught by the antivirus, and the disk activity will then be highly impacted, with significant performance deterioration.

How to configure it

Globally, there are two ways to set the antivirus exclusions:

  • By folder
  • By file extension

The main targets have to be related to the following topics:

SQL Server service binary file

This is the sqlservr.exe file, located in the Binn folder of the instance.
SQL Server data files

These files usually have one of the following file-name extensions:

  • *.mdf
  • *.ndf
  • *.ldf
SQL Server backup files

These files frequently have one of the following file-name extensions:

  • *.bak
  • *.trn
Full-Text catalog files

Default instance: Program Files\Microsoft SQL Server\MSSQL\FTDATA

Named instance: Program Files\Microsoft SQL Server\MSSQL$instancename\FTDATA

Trace files

These files usually have the *.trc file-name extension

In a failover cluster environment

Q:\ (Quorum drive)
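To build the folder exclusion list, you can enumerate the physical location of every data and log file hosted by the instance with the sys.master_files catalog view:

```sql
-- Physical path of every data (ROWS) and log (LOG) file on the instance,
-- useful as input for the antivirus folder exclusion list.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,
       physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;
```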


Technical references

Disks configuration and sizing

Microsoft published some insights regarding VM configuration for high performance. The recommendations for Azure VMs also apply to physical or virtual machines on-premises.

Configuring optimal disk performance is often viewed as an obscure topic by database administrators, yet an understanding of disk performance best practices can result in significant improvements. One essential but often overlooked best practice is disk partition alignment. Some of the many factors that affect disk I/O performance include the number, size, and speed of disks.

Volume alignment, commonly referred to as sector alignment, should be performed on the file system (NTFS) whenever a volume is created on a RAID device. Failure to do so can lead to significant performance degradation, most commonly the result of partition misalignment with stripe unit boundaries. It can also lead to hardware cache misalignment, resulting in inefficient utilization of the array cache. When formatting the partitions that will be used for SQL Server files, it is recommended that you use a 64-KB allocation unit size (that is, 65,536 bytes) for data, logs, and tempdb. Be aware, however, that using allocation unit sizes greater than 4 KB makes it impossible to use NTFS compression on the volume. While SQL Server does support read-only data on compressed volumes, this is not recommended.
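Once the volumes are in place, their capacity and remaining free space can be monitored from within SQL Server with the sys.dm_os_volume_stats function (available from SQL Server 2008 R2 SP1 onwards):

```sql
-- Volumes hosting SQL Server files, with total capacity and free space in GB.
SELECT DISTINCT
       vs.volume_mount_point,
       vs.file_system_type,
       vs.total_bytes     / 1073741824 AS total_gb,
       vs.available_bytes / 1073741824 AS free_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```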

Technical references


Azure VM size and type, Azure VM disks (premium storage is recommended, especially for SQL Server), networking latency, and speed are the most important areas.

So, tell me more about Azure virtual machines; what size VMs should I use?

In general, your compute power (CPU and memory) needs to meet the recommended hardware for System Center Configuration Manager. But there are some differences between regular computer hardware and Azure VMs, especially when it comes to the disks these VMs use. What size VMs you use depends on the size of your environment, but here are some recommendations:

  • For production deployments of any significant size we recommend "S" class Azure VMs. This is because they can leverage Premium Storage disks. Non "S" class VMs use blob storage and in general will not meet the performance requirements necessary for an acceptable production experience.
  • Multiple Premium Storage disks should be used for higher scale, and striped in the Windows Disk Management console for maximum IOPS.
  • We recommend using better or multiple premium disks during your initial site deployment (like P30 instead of P20, and 2xP30 in a striped volume instead of 1xP30). Then, if your site later needs to ramp up in VM size due to additional load, you can take advantage of the additional CPU and memory that a larger VM size provides. You will also have disks already in place that can take advantage of the additional IOPS throughput that the larger VM size allows.

The following tables list the initial suggested disk counts to utilize at primary and central administration sites for various size installations:

Co-located site database

Primary or central administration site with the site database on the site server:

Desktop Clients | Recommended VM size | Recommended Disks
Up to 25k       |                     | 2xP30 (striped)
25k to 50k      |                     | 2xP30 (striped)
50k to 100k     |                     | 3xP30 (striped)

Remote site database

Primary or central administration site with the site database on a remote server:

Desktop Clients | Recommended VM size                        | Recommended Disks
Up to 25k       | Site server: F4S; Database server: DS12_V2 | Site server: 1xP30; Database server: 2xP30 (striped)
25k to 50k      | Site server: F4S; Database server: DS13_V2 | Site server: 1xP30; Database server: 2xP30 (striped)
50k to 100k     | Site server: F8S; Database server: DS14_V2 | Site server: 2xP30 (striped); Database server: 3xP30 (striped)

The following shows an example configuration for 50k to 100k clients on DS14_V2 with 3xP30 disks in a striped volume with separate logical volumes for the Configuration Manager install and database files:

Monitoring topics


The original monitoring pack will allow you to monitor the health of Microsoft System Center Configuration Manager by monitoring general health; data replication between Configuration Manager sites; server and service availability; SQL Server configurations; Backup and recovery; backlog monitoring; software update synchronization; and other server role configuration. This update will extend the capabilities to monitor availability, performance, and health of the Microsoft Intune connector site system role for companies who integrate Configuration Manager and Microsoft Intune in a hybrid environment.

Updated Feature Summary

This release of the monitoring pack delivers improved capabilities including the following:

  • Monitors availability status of the Intune connector
    • SMS_DMP_DOWNLOADER – SMS_Executive component
    • SMS_DMP_UPLOADER – SMS_Executive component
    • SMS_CLOUD_USERSYNC – SMS_Executive component
    • SMS_OUTGOING_CONTENT_MANAGER – SMS_Executive component
  • Monitors hman.box for backlogs that exceed the threshold
    • Monitor inbox - hman.box performance counter
  • Monitors availability status of the Microsoft Intune service
    • Microsoft Intune service offline
    • Local network issue prevents DMP connector from reaching Intune service
    • DMP connector doesn't sync with the Intune service

Link to download Management Pack for Configuration Manager: https://www.microsoft.com/en-us/download/details.aspx?id=34709&WT.mc_id=rss_alldownloads_all

ConfigMgr Custom Reports

Built-in Reports in Configuration Manager

Reporting in System Center Configuration Manager provides a set of tools and resources that help you use the advanced reporting capabilities of SQL Server Reporting Services (SSRS) and the rich authoring experience that Reporting Services Report Builder provides. Reporting helps you gather, organize, and present information about users, hardware and software inventory, software updates, applications, site status, and other Configuration Manager operations in your organization. Reporting provides you with a number of predefined reports that you can use without changes, or that you can modify to meet your requirements, and you can create custom reports. Use the following sections to help you manage reporting in Configuration Manager.

Configuration Manager provides report definitions for over 400 reports in over 50 report folders, which are copied to the root report folder in SQL Server Reporting Services during the reporting services point installation process. The reports are displayed in the Configuration Manager console and organized in subfolders based on the report category. Reports are not propagated up or down the Configuration Manager hierarchy; they run only against the database of the site in which they are created.
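The reports deployed by the reporting services point can also be listed directly from the SSRS catalog. The query below assumes the default ReportServer database name; querying this internal database is a common field practice but is not officially supported:

```sql
-- List deployed reports (Type = 2) from the Reporting Services catalog.
SELECT Name, Path, CreationDate
FROM ReportServer.dbo.Catalog
WHERE Type = 2
ORDER BY Path;
```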


Built-in reports in Configuration Manager help you follow deployments, monitor your Configuration Manager infrastructure, and more.

Custom Reports in Configuration Manager

Configuration Manager uses Microsoft SQL Server Reporting Services Report Builder as the exclusive authoring and editing tool for both model-based and SQL-based reports. When you initiate the action to create or edit a report in the Configuration Manager console, Report Builder opens. When you create or modify a report for the first time, Report Builder is installed automatically. The version of Report Builder associated with the installed version of SQL Server opens when you run or edit reports.

The Report Builder installation adds support for over 20 languages. When you run Report Builder, it displays data in the language of the operating system that is running on the local computer. If Report Builder does not support the language, the data is displayed in English. Report Builder supports the full capabilities of SQL Server 2008 Reporting Services, which includes the following capabilities:

  • Delivers an intuitive report authoring environment with an appearance similar to Microsoft Office.
  • Offers the flexible report layout of SQL Server 2008 Report Definition Language (RDL).
  • Provides various forms of data visualization including charts and gauges.
  • Provides richly formatted text boxes.
  • Exports to Microsoft Word format.
  • You can also open Report Builder from SQL Server Reporting Services.

If the built-in reports in Configuration Manager do not provide the information you need, you can use Report Builder to create custom reports and design them the way you need.

POP – System Center Configuration Manager Advanced Dashboards

The CMAD solution (Configuration Manager Advanced Dashboards) delivers a data-driven reporting overview of the System Center Configuration Manager environment.

This solution consists of a rich set of dashboards designed to deliver real-time reporting of ongoing activity in your Configuration Manager environment.

Native Configuration Manager Reports are not replaced with this solution, the CMAD solution amplifies the data they show by providing additional data insights.

The dashboards in this solution were created based on field experience and on customers' needs to provide an overall view of various Configuration Manager functionality. The embedded charts and graphics provide details across the entire infrastructure.

Key Features and Benefits

The CMAD solution consists of 160+ dashboards covering the following Configuration Manager topics:

  • Asset Inventory
  • Software Update Management
  • Application Deployment
  • Compliance Settings
  • Infrastructure Monitoring:
    • Site Replica
    • Content replication
  • Software Distribution
  • Clients Health
  • Servers Health
  • SCEP

POP - PowerBI Dashboard Integration for System Center Configuration Manager and Intune

Microsoft System Center Configuration Manager is widely used as a management platform for the whole datacenter, thus managing business-critical systems of all types (and platforms).

Beginning with version 1702, you can use the Data Warehouse service point to store and report on long-term historical data for your Configuration Manager deployment. You can also consume this data with Power BI; one of the goals of this offering is to show the value of the data warehouse and how to view its data using Power BI.

Maximize Your Microsoft Investment

The POP - PowerBI Dashboard Integration for System Center Configuration Manager and Intune has been developed to help your organization consume the existing Configuration Manager data by connecting the existing database to Power BI; with the new data warehouse, this information can be stored for a longer period of time.

The data warehouse supports up to 2 TB of data, with timestamps for change tracking. Storage of data is accomplished by automated synchronizations from the Configuration Manager site database to the data warehouse database. This information is then accessible from your Reporting Services point.

Data that is synchronized includes the following from the Global Data and Site Data groups:

  • Infrastructure health
  • Security
  • Compliance
  • Malware
  • Software deployments
  • Inventory details (however, inventory history is not synchronized)

When the site system role installs, it installs and configures the data warehouse database. It also installs several reports so you can easily search for and report on this data. Once this is completed, we can also connect this database to PowerBI and have easier to consume reports.

SQL Custom Report

Built-in SQL reports are available through SQL Server Management Studio.

To access those reports, follow the instructions below:

Reports Examples

Management Data Warehouse

The management data warehouse is a relational database that contains the data that is collected from a server that is a data collection target. This data is used to generate the reports for the System Data collection sets, and can also be used to create custom reports.

The data collector infrastructure defines the jobs and maintenance plans that are needed to implement the retention policies defined by the database administrator.

This feature allows you to collect information specific to SQL Server. This information can subsequently be viewed through three types of reports:

  • Disk use
  • Server Activity
  • Query Statistics

The MDW, also called Data Collector, is based on the principle of collecting metadata that is stored, consolidated, and aggregated in a dedicated database. To control resource consumption, it is highly recommended in production environments to host this database on a SQL instance separate from the one being monitored. The objective of this data storage is to timestamp the metadata and thereby enable retrospective monitoring: being able to retrieve a piece of information or a measurement back through time.

The 3 types of reports available are as follows.

Disk Usage

This report differs from the other two in that it fits into the context of capacity planning. Via daily or weekly collections, it is possible to measure the evolution of the data file sizes and of their fill level; therefore, it can serve as a basis for projections. This report can also be useful to gain a better understanding of the behavior of the TempDB database for a given application.

Server Activity & Query Statistics

The Server Activity and Query Statistics reports, in contrast, are oriented toward performance monitoring. They provide information on the state and health of the platform (both system and SQL), and they also shed light on SQL latch consumption and wait types. This makes it possible, for example, to identify a potential bottleneck over a period, such as a lock, a page IO latch, and so on. Finally, these reports offer drill-down and drill-up capabilities, allowing you to move from a macroscopic view in the report down to the query related to an event, or even to its execution plan.


The configuration of an MDW solution involves two steps.

The first step is to launch the "Configure Management Data Warehouse" wizard to create the MDW database.

The second step is to configure the MDW itself, via the "Set up data collection" option. This wizard defines the roles that are mapped to the users authorized to administer the MDW, then creates the jobs responsible for collecting and loading metadata into the database dedicated to this purpose. The collection and upload frequencies can later be modified for each collection set via right-click > Properties.


After you run the "Configure Management Data Warehouse" wizard, you will have three default collection sets using the two collection modes. The "Disk Usage" collection set is in non-cached mode (because the goal is a capacity-planning style report, collecting data once per day is sufficient and caching is not useful), while the "Server Activity" and "Query Statistics" collection sets are in cached mode. The caching mode determines the relationship between data collection and uploading data to the MDW database.


Non-Cached Mode

In non-cached mode, collection and upload are set on the same schedule. The packages start, collect, and upload data at their configured frequency, and run until they finish. Only one job is created for a non-cached collection set. This mode also supports on-demand collection and upload of data, in addition to the intervals specified in the job.

Cached Mode

In cached mode, collection and data upload are attached to two schedules in two different jobs, so two jobs are created for each collection set. Collections are generally scheduled relatively close together, while uploads are scheduled less frequently. For example, for the "Server Activity" collection set, the upload runs by default every 15 minutes while the collection runs every 60 seconds. These are the default frequencies, but they are completely adjustable for each collection set.
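The collection sets and their modes can be inspected with the syscollector views in msdb. A minimal sketch, assuming the standard Data Collector schema (collection_mode 0 = cached, 1 = non-cached):

```sql
-- List the configured collection sets with their collection mode and state.
SELECT name,
       collection_mode,   -- 0 = cached, 1 = non-cached
       is_running,
       days_until_expiration
FROM msdb.dbo.syscollector_collection_sets;
```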

Summary of the default settings:

Technical Reference

Custom queries

System settings and resources

From a performance perspective, a couple of counters and indicators can also be monitored directly through SQL Server queries. Some of them have already been mentioned in the custom reports sections. The purpose of this chapter is to allow point-in-time checkups.

As explained previously, some configuration settings (like Max Degree of Parallelism, for example) are related to the global settings and resources of the server. Typically, it can be interesting to get the CPU core count or the global available memory to define these settings. This information can be retrieved with the sys.dm_os_sys_info view:

select sqlserver_start_time, cpu_count, hyperthread_ratio, physical_memory_kb, virtual_memory_kb, virtual_machine_type_desc
from sys.dm_os_sys_info


This query also returns when the SQL Server service was started and whether the server is physical or virtual. Note that on SQL Server 2008 and 2008 R2, the memory columns are named physical_memory_in_bytes and virtual_memory_in_bytes instead.
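The sys.dm_os_sys_info view reports the total physical memory; the memory actually available at the operating system level can be checked with a companion view:

```sql
-- Current memory state as seen by the operating system.
SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```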

Performance counters

In the same scope, we saw a couple of useful counters in the SCOM Management Pack chapter. Some of them can be queried through SQL as well; the global rule is that any Perfmon counter specifically related to SQL Server is exposed in the sys.dm_os_performance_counters view.

It's then possible, still with system views, to easily get the value of the Page Life Expectancy counter, or of the free space available in the TempDB database:

select * from sys.dm_os_performance_counters
where counter_name in
('Page Life Expectancy',
'Free Space in tempdb (KB)')

Wait types and waiting sessions

select TOP 20 *
from sys.dm_os_wait_stats
order by wait_time_ms DESC

select session_id, database_id, DB_NAME(database_id) as DBName, blocking_session_id, last_wait_type, cpu_time, total_elapsed_time
from sys.dm_exec_requests
where session_id > 50
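Note that sys.dm_os_wait_stats also accumulates many benign system waits that can dominate the top of the list. A common refinement is to exclude the well-known harmless wait types; the exclusion list below is illustrative, not exhaustive:

```sql
-- Top waits excluding a few well-known benign system wait types.
SELECT TOP 20 wait_type, wait_time_ms, waiting_tasks_count, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN
      ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
       'BROKER_TO_FLUSH', 'XE_TIMER_EVENT', 'XE_DISPATCHER_WAIT',
       'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', 'CHECKPOINT_QUEUE',
       'REQUEST_FOR_DEADLOCK_SEARCH', 'LOGMGR_QUEUE', 'DIRTY_PAGE_POLL')
ORDER BY wait_time_ms DESC;
```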

Expensive queries

select TOP 10
total_elapsed_time / execution_count / 1000 as "Avg duration(ms)",
total_worker_time / execution_count / 1000 as "Avg CPU (ms)",
total_logical_reads / execution_count as "AVG Reads",
text, query_plan
from sys.dm_exec_query_stats QS
cross apply sys.dm_exec_sql_text (QS.sql_handle)
cross apply sys.dm_exec_query_plan (QS.plan_handle)
order by "Avg duration(ms)" DESC