Configure storage and infrastructure resources
In this unit, you'll learn how to configure storage and infrastructure resources.
Storage configuration for SQL Server VMs
When you configure a SQL Server virtual machine (VM) image in Azure, the Azure portal helps automate your storage configuration. This includes attaching storage to the VM, making that storage accessible to SQL Server, and configuring it to optimize for your specific performance requirements.
New VMs
The following sections describe how to configure storage for new SQL Server virtual machines.
Azure portal
When provisioning an Azure VM using a SQL Server gallery image, select Change configuration on the SQL Server Settings tab to open the Performance Optimized Storage Configuration page.
Under Storage optimization, select the type of workload you're deploying SQL Server for. With the General optimization option, you get one data disk with 5,000 max IOPS by default, and the same drive is used for your data, transaction log, and tempdb storage. Selecting either Transactional processing (OLTP) or Data warehousing creates a separate disk for data and a separate disk for the transaction log, and uses the local SSD for tempdb.
Additionally, you can set the caching for the disks. When used with premium disks, Azure VMs have a multi-tier caching technology called Blob Cache.
Disk caching for a Premium SSD can be ReadOnly, ReadWrite, or None.
- ReadOnly caching is highly beneficial for SQL Server data files stored on Premium Storage. It brings low read latency and high read IOPS and throughput because reads are performed from the cache, which is within the VM memory and local SSD. These reads are much faster than reads from the data disk, which come from Azure Blob storage.
- Use the None cache configuration for disks hosting the SQL Server log file: the log file is written sequentially and doesn't benefit from ReadOnly caching.
- Don't use ReadWrite caching to host SQL Server files, because SQL Server doesn't support data consistency with the ReadWrite cache. Writes also waste the capacity of the ReadOnly blob cache, and latencies increase slightly if writes go through the ReadOnly blob cache layers.
tempdb database
The tempdb system database is a global resource that’s available to all users connected to the instance of SQL Server or connected to Azure SQL Database. tempdb holds:
- Temporary user objects that are explicitly created. They include global or local temporary tables and indexes, temporary stored procedures, table variables, tables returned in table-valued functions, and cursors.
- Internal objects that the database engine creates. They include:
- Work tables to store intermediate results for spools, cursors, sorts, and temporary large object (LOB) storage.
- Work files for hash join or hash aggregate operations.
- Intermediate sort results for operations such as creating or rebuilding indexes (if SORT_IN_TEMPDB is specified), or certain GROUP BY, ORDER BY, or UNION queries.
- Version stores, which are collections of data pages that hold the data rows supporting row versioning features. There are two types: a common version store and an online-index-build version store. The version stores contain:
- Row versions that are generated by data modification transactions in a database that uses READ COMMITTED using row versioning isolation or snapshot isolation transactions.
- Row versions that are generated by data modification transactions for features such as online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
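As a quick illustration of the explicitly created temporary user objects above, the following T-SQL sketch (object names are hypothetical) creates a local temporary table and a table variable, both of which are allocated in tempdb:

```sql
-- Both of these objects live in tempdb, not in the user database.
CREATE TABLE #OrderStaging        -- local temporary table
(
    OrderID int PRIMARY KEY,
    Total   money
);

DECLARE @RecentOrders TABLE       -- table variable
(
    OrderID   int,
    OrderDate date
);

DROP TABLE #OrderStaging;         -- or let it drop automatically when the session ends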
Optimizing tempdb performance in SQL Server
- The size and physical placement of the tempdb database can affect the performance of a system. For example, if the size that’s defined for tempdb is too small, part of the system-processing load might be taken up with autogrowing tempdb to the size required to support the workload every time you restart the instance of SQL Server.
- Preallocate space for all tempdb files by setting the file size to a value large enough to accommodate the typical workload in the environment. Preallocation prevents tempdb from expanding too often, which affects performance. The tempdb database should still be set to autogrow so it can gain disk space for unplanned exceptions.
- Data files should be of equal size within each filegroup, because SQL Server uses a proportional-fill algorithm that favors allocations in files with more free space.
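For example, preallocating equally sized tempdb data files can be done with ALTER DATABASE ... MODIFY FILE. This sketch assumes the default logical file names (tempdev, temp2) and illustrative sizes; each statement changes one property at a time:

```sql
-- Preallocate two equally sized tempdb data files, with a modest
-- autogrow increment left in place for unplanned exceptions.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 512MB);

ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, SIZE = 4GB);
ALTER DATABASE tempdb MODIFY FILE (NAME = temp2, FILEGROWTH = 512MB);
```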
To check current size and growth parameters for tempdb, use the following query:
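One typical form of this check reads file metadata from tempdb.sys.database_files; the column formatting shown here is illustrative:

```sql
SELECT name AS FileName,
       size * 8 / 1024 AS CurrentSizeMB,    -- size is stored in 8-KB pages
       CASE
           WHEN growth = 0 THEN 'Fixed size'
           WHEN is_percent_growth = 1 THEN CAST(growth AS varchar(10)) + '%'
           ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS GrowthIncrement,
       CASE max_size
           WHEN -1 THEN 'Unlimited'
           ELSE CAST(max_size * 8 / 1024 AS varchar(10)) + ' MB'
       END AS MaximumSize
FROM tempdb.sys.database_files;
```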

Performance improvements in tempdb for SQL Server
Starting with SQL Server 2016 (13.x), tempdb performance is further optimized in the following ways:
- Temporary tables and table variables are cached. Caching allows operations that drop and create the temporary objects to run very quickly.
- The allocation page latching protocol is improved to reduce the number of UP (update) latches that are used.
- Logging overhead for tempdb is reduced, which lowers disk I/O bandwidth consumption on the tempdb log file.
- Setup adds multiple tempdb data files during a new instance installation. You can configure the file count by using the new UI input control in the Database Engine Configuration section or the command-line parameter /SQLTEMPDBFILECOUNT.
- When there are multiple tempdb data files, all files autogrow at the same time and by the same amount, depending on growth settings. Trace flag 1117 is no longer required.
- All allocations in tempdb use uniform extents. Trace flag 1118 is no longer required.
- For the primary filegroup, the AUTOGROW_ALL_FILES property is turned on and can't be modified.
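To verify how many tempdb data files an instance has (for example, after setup configured them through /SQLTEMPDBFILECOUNT), a quick check such as the following can be used:

```sql
-- Count the data (ROWS) files in tempdb; log files are excluded.
SELECT COUNT(*) AS DataFileCount
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';
```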
Manage the size of the transaction log file
Monitor log space use by using sys.dm_db_log_space_usage. This DMV returns information about the amount of log space currently used, and indicates when the transaction log needs truncation.
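A minimal monitoring query against this DMV might look like the following; the column aliases are illustrative:

```sql
-- Run in the context of the database whose log you want to inspect.
SELECT total_log_size_in_bytes / 1048576.0 AS TotalLogSizeMB,
       used_log_space_in_bytes  / 1048576.0 AS UsedLogSpaceMB,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```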
Shrink log file size
To reduce the physical size of the transaction log, you must shrink the log file. Shrinking is useful when you know that a transaction log file contains unused space. You can shrink a log file only while the database is online and at least one virtual log file (VLF) is free. In some cases, shrinking the log may not be possible until after the next log truncation.
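Shrinking is done with DBCC SHRINKFILE. In this sketch, the database and logical log file names are hypothetical, and the target size is specified in MB:

```sql
USE SalesDb;  -- hypothetical database
GO
-- Shrink the log file to a target size of 1024 MB.
DBCC SHRINKFILE (SalesDb_log, 1024);
GO
```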
Add or enlarge a log file
You can gain space by enlarging the existing log file (if disk space permits) or by adding a log file to the database, typically on a different disk. One transaction log file is sufficient unless log space is running out and disk space is also running out on the volume that holds the log file.
- To add a log file to the database, use the ADD LOG FILE clause of the ALTER DATABASE statement. Adding a log file allows the log to grow.
- To enlarge the log file, use the MODIFY FILE clause of the ALTER DATABASE statement, specifying the SIZE and MAXSIZE syntax.
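The two options can be sketched as follows; the database name, logical file names, path, and sizes are all hypothetical:

```sql
-- Add a second log file on a different disk.
ALTER DATABASE SalesDb
ADD LOG FILE
(
    NAME = SalesDb_log2,
    FILENAME = 'E:\SQLLogs\SalesDb_log2.ldf',
    SIZE = 4GB,
    MAXSIZE = 50GB,
    FILEGROWTH = 512MB
);

-- Or enlarge the existing log file in place, one property per statement.
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_log, SIZE = 8GB);
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_log, MAXSIZE = 100GB);
```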
Control transaction log file growth
Use the ALTER DATABASE (Transact-SQL) File and Filegroup options statement to manage the growth of a transaction log file. Note the following:
- To change the current file size in KB, MB, GB, or TB units, use the SIZE option.
- To change the growth increment, use the FILEGROWTH option. A value of 0 indicates that automatic growth is off and no additional space is permitted.
- To control the maximum size of a log file in KB, MB, GB, or TB units, or to set growth to UNLIMITED, use the MAXSIZE option.
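Putting the three options together, a sketch with a hypothetical database and logical file name, changing one property per statement:

```sql
-- Set the current file size.
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_log, SIZE = 2GB);

-- Set the growth increment (0 disables automatic growth).
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_log, FILEGROWTH = 256MB);

-- Remove the upper limit on log size (or specify a cap such as 100GB).
ALTER DATABASE SalesDb MODIFY FILE (NAME = SalesDb_log, MAXSIZE = UNLIMITED);
```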