Cost-Effective Reliable Storage

Overview

Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged or hyper-converged architecture radically simplifies procurement and deployment, while features like caching, storage tiers, and erasure coding, together with the latest hardware innovation like RDMA networking and NVMe drives, deliver unrivaled efficiency and performance. Storage Spaces Direct is part of Windows Server 2016 Datacenter.

Storage Spaces Direct enables service providers and enterprises to use industry-standard servers with local storage to build highly available and scalable software-defined storage. Using servers with local storage decreases complexity, increases scalability, and enables the use of storage devices that were not previously possible, such as SATA solid-state disks to lower the cost of flash storage, or NVMe solid-state disks for better performance.

Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment and configuration. Instead it uses the network as a storage fabric, leveraging industry standards like SMB3 and SMB Direct (RDMA) for high-speed, low-latency CPU efficient storage. To scale out, simply add more servers to increase storage capacity and I/O performance.

Considerations

For the Cost-Effective Reliable Storage scenario, it is important to have a moderate understanding of:

  • PowerShell
  • Networking
  • Hyper-V
  • Storage
  • Clustering

Cost-Effective Reliable Storage Features

Storage Spaces Direct removes the need for a shared SAS fabric, simplifying deployment and configuration. Instead it uses the network as a storage fabric, leveraging industry standards like SMB3 and SMB Direct (RDMA) for high-speed, low-latency CPU efficient storage. To scale out, you can simply add more servers to increase storage capacity and I/O performance.

Additionally, Storage Spaces Direct can be enhanced with features like Storage Replica and Storage QoS controls.

Simple to Setup

Go from industry-standard servers running Windows Server 2016 to your first Storage Spaces Direct cluster in under 15 minutes with several PowerShell commands.
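The labs later in this section walk through each step in detail, but as a hedged sketch, the end-to-end command sequence looks roughly like the following (the node names, cluster name, and volume size are the example values used later in this document):

# Minimal sketch; run the first command on every node, the rest from a management machine
Install-WindowsFeature -Name File-Services, Failover-Clustering, Hyper-V -IncludeManagementTools -Restart
# Validate, create the cluster, enable Storage Spaces Direct, and carve out a volume
Test-Cluster -Node S2D01, S2D02, S2D03, S2D04 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
New-Cluster -Name S2D-CLU -Node S2D01, S2D02, S2D03, S2D04 -NoStorage -StaticAddress <cluster IP address>
Enable-ClusterStorageSpacesDirect -CimSession S2D-CLU
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName Volume1 -FileSystem CSVFS_ReFS -Size 1TB -CimSession S2D-CLU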

Unrivaled Performance

Whether all-flash or hybrid, Storage Spaces Direct easily exceeds 150,000 mixed 4k random IOPS per server with consistent, low latency thanks to its hypervisor-embedded architecture, its built-in read/write cache, and support for cutting-edge NVMe drives mounted directly on the PCIe bus.

For more information see: Storage IOPS update with Storage Spaces Direct

Fault Tolerance

Built-in resiliency handles drive, server, or component failures with continuous availability. Larger deployments can also be configured for chassis and rack fault tolerance. When hardware fails, just swap it out; the software heals itself, with no complicated management steps. Windows Server 2016 Storage Spaces Direct enhances the resiliency of virtual disks to enable resiliency to node failures. This is in addition to the existing disk and enclosure resiliency.

Resource Efficiency

Erasure coding delivers up to 2.4x greater storage efficiency, with unique innovations like Local Reconstruction Codes and real-time tiering to extend these gains to hard disk drives and mixed hot/cold workloads, all while minimizing CPU consumption to give resources back to where they're needed most - the VMs. Windows Server 2016 Storage Spaces Direct can optimize a storage pool to balance data equally across the set of physical disks that comprise the pool.

Manageability

Use Storage QoS controls to keep overly busy VMs in check with minimum and maximum per-VM IOPS limits. Storage Quality of Service (QoS) in Windows Server 2016 provides a way to centrally monitor and manage storage performance for virtual machines using Hyper-V and the Scale-Out File Server roles. The feature automatically improves storage resource fairness between multiple virtual machines using the same file server cluster and allows policy-based minimum and maximum performance goals to be configured in units of normalized IOPS.

You can use Storage QoS in Windows Server 2016 to accomplish the following:

  • Mitigate noisy neighbor issues - By default, Storage QoS ensures that a single virtual machine cannot consume all storage resources and starve other virtual machines of storage bandwidth.
  • Monitor end-to-end storage performance - As soon as virtual machines stored on a Scale-Out File Server are started, their performance is monitored. Performance details of all running virtual machines and the configuration of the Scale-Out File Server cluster can be viewed from a single location.
  • Manage storage I/O per workload business needs - Storage QoS policies define performance minimums and maximums for virtual machines and ensure that they are met. This provides consistent performance to virtual machines, even in dense and overprovisioned environments. If policies cannot be met, alerts are available to track when VMs are out of policy or have invalid policies assigned. A minimal policy sketch follows this list.
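As an illustration, a minimal policy sketch might look like the following, run on one of the cluster nodes; the policy name, VM name, and IOPS values are placeholders rather than values from this lab:

# Create an example Dedicated policy with illustrative minimum/maximum normalized IOPS
$policy = New-StorageQosPolicy -Name Gold -PolicyType Dedicated -MinimumIops 100 -MaximumIops 1000;
# Apply the policy to every virtual hard disk of an example VM
Get-VM -Name VM01 | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId;
# Review observed per-flow performance and policy status
Get-StorageQosFlow | FT InitiatorName, Status, MinimumIops, MaximumIops, InitiatorIOPS;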

The Health Service provides continuous built-in monitoring and alerting, and new APIs make it easy to collect rich, cluster-wide performance and capacity metrics. It is a new feature in Windows Server 2016 that improves the day-to-day monitoring, operations, and maintenance experience of cluster resources on a Storage Spaces Direct cluster. It helps reduce the work required to get live performance and capacity information from your Storage Spaces Direct cluster. One new cmdlet provides a curated list of essential metrics, which are collected efficiently and aggregated dynamically across nodes, with built-in logic to detect cluster membership. All values are real-time and point-in-time only.
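For example, assuming the cluster name S2D-CLU used later in this section, you can pull the curated metrics and any current faults from the Management node with a sketch like this:

# Curated cluster-wide performance and capacity metrics from the Health Service
Get-StorageSubSystem Cluster* -CimSession S2D-CLU | Get-StorageHealthReport;
# Current faults detected by the Health Service
Get-StorageSubSystem Cluster* -CimSession S2D-CLU | Debug-StorageSubSystem;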

Scalability

Go up to 16 servers and over 400 drives, for multiple petabytes of storage per cluster. To scale out, simply add drives or add more servers; Storage Spaces Direct will automatically onboard new drives and begin using them. Storage efficiency and performance improve predictably at scale.

Disaster Recovery

Storage Replica is Windows Server technology that enables synchronous replication of volumes between servers or clusters for disaster recovery. It also enables you to use asynchronous replication to create failover clusters that span two sites, with all nodes staying in sync.

Storage Replica supports synchronous and asynchronous replication:

  • Synchronous replication mirrors data within a low-latency network site with crash-consistent volumes to ensure zero data loss at the file-system level during a failure.
  • Asynchronous replication mirrors data across sites beyond metropolitan ranges over network links with higher latencies, but without a guarantee that both sites have identical copies of the data at the time of a failure.

Storage Replica offers new disaster recovery and preparedness capabilities in Windows Server 2016. For the first time, Windows Server offers the peace of mind of zero data loss, with the ability to synchronously protect data on different racks, floors, buildings, campuses, counties, and cities. After a disaster strikes, all data will exist elsewhere without any possibility of loss. The same applies before a disaster strikes; Storage Replica offers you the ability to switch workloads to safe locations prior to catastrophes when granted a few moments warning - again, with no data loss.

Storage Replica allows more efficient use of multiple datacenters. By stretching clusters or replicating clusters, workloads can be run in multiple datacenters for quicker data access by local proximity users and applications, as well as better load distribution and use of compute resources. If a disaster takes one datacenter offline, you can move its typical workloads to the other site temporarily.

Storage Replica may allow you to decommission existing file replication systems such as DFS Replication that were pressed into duty as low-end disaster recovery solutions. While DFS Replication works well over extremely low bandwidth networks, its latency is very high - often measured in hours or days. This is caused by its requirement for files to close and its artificial throttles meant to prevent network congestion. With those design characteristics, the newest and hottest files in a DFS Replication replica are the least likely to replicate. Storage Replica operates below the file level and has none of these restrictions.

Storage Replica also supports asynchronous replication for longer ranges and higher latency networks. Because it is not checkpoint-based, and instead continuously replicates, the delta of changes will tend to be far lower than snapshot-based products. Furthermore, Storage Replica operates at the partition layer and therefore replicates all VSS snapshots created by Windows Server or backup software; this allows use of application-consistent data snapshots for point in time recovery, especially unstructured user data replicated asynchronously.
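As a hedged illustration only (Storage Replica itself is not configured in these labs), a server-to-server partnership between two example servers with a data volume and a dedicated log volume might be created like this; the server names, replication group names, and drive letters are placeholders, and the Storage-Replica feature must already be installed on both servers:

# Example server-to-server partnership; adjust names, volumes, and mode to your environment
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 `
-SourceVolumeName E: -SourceLogVolumeName F: `
-DestinationComputerName SR-SRV02 -DestinationRGName RG02 `
-DestinationVolumeName E: -DestinationLogVolumeName F: `
-ReplicationMode Asynchronous;
# Check replication state
(Get-SRGroup).Replicas | FL DataVolume, ReplicationStatus;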

Cost-Effective Reliable Storage Deployment Options

Storage Spaces Direct has two distinct deployment options that you can choose from.

Converged

The converged option, also known as disaggregated, layers a Scale-Out File Server (SoFS) atop Storage Spaces Direct to provide network-attached storage over SMB3 file shares. This allows for scaling compute/workload independently from the storage cluster.

Hyper-Converged

The hyper-converged option runs Hyper-V virtual machines or SQL Server databases directly on the servers providing the storage, storing their files on the local volumes. This eliminates the need to configure file server access and permissions, and reduces hardware costs.

Cost-Effective Reliable Storage Components

Storage Spaces Direct is the evolution of Storage Spaces, first introduced in Windows Server 2012. It leverages many of the features you know today in Windows Server, such as Failover Clustering, the Cluster Shared Volume (CSV) file system, Server Message Block (SMB) 3, and of course Storage Spaces. It also introduces new technology, most notably the Software Storage Bus. The Storage Spaces Direct Stack consists of the following components.

Network Hardware

Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. The recommended setup is 10 GbE or faster with remote direct memory access (RDMA), using either iWARP or RoCE.

Storage Hardware

From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives and at least 4 additional drives. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander. It is strongly recommended to deploy Storage Spaces Direct on hardware certified for it.

Failover Clustering

The built-in clustering feature of Windows Server is used to connect the servers.

Software Storage Bus

The Software Storage Bus is new in Storage Spaces Direct. It spans the cluster and establishes a software-defined storage fabric whereby all the servers can see all of each other's local drives. You can think of it as replacing costly and restrictive Fibre Channel or shared SAS cabling.

Storage Bus Layer Cache

The Software Storage Bus dynamically binds the fastest drives present (e.g. SSD) to slower drives (e.g. HDDs) to provide server-side read/write caching that accelerates IO and boosts throughput.

Storage Pool

The collection of drives that will form the basis of Storage Spaces is called the storage pool. It is automatically created, and all eligible drives are automatically discovered and added to it. We strongly recommend you use one pool per cluster, with the default settings.

Storage Spaces

Storage Spaces provides fault tolerance to virtual "disks" using mirroring, erasure coding, or both. You can think of it as distributed, software-defined RAID using the drives in the pool. In Storage Spaces Direct, these virtual disks typically have resiliency to two simultaneous drive or server failures (e.g. 3-way mirroring, with each data copy in a different server) though chassis and rack fault tolerance is also available.

Resilient File System (ReFS)

ReFS is the premier filesystem purpose-built for virtualization. It includes dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging, and built-in checksums to detect and correct bit errors. It also introduces real-time tiers that rotate data between so-called "hot" and "cold" storage tiers in real-time based on usage.

Cluster Shared Volumes

The CSV file system unifies all the ReFS volumes into a single namespace accessible through any server, so that to each server, every volume looks and acts like it's mounted locally.

Scale-Out File Server

This final layer is necessary in converged deployments only. It provides remote file access using the SMB3 access protocol to clients, such as another cluster running Hyper-V, over the network, effectively turning Storage Spaces Direct into network-attached storage (NAS).

Example Scenario

Contoso Hosting is a managed service provider with an Infrastructure as a Service offering. Contoso Hosting is looking to replace its SAN storage with a modern, cost-effective, reliable storage solution that will host VM data. The solution must provide the following capabilities:

  • Software Defined Storage / Software Defined Data Center
  • Predictable performance for tenants
  • Prevent the noisy neighbor problem
  • Fault tolerance on all levels – disk, server, chassis, rack and site
  • Easy management
  • Easy scale-out and scale-up
  • Flexible tiering
  • Automatic health remediation
  • Health aggregation
  • Disaster Recovery capabilities with low Recovery Time Objective (RTO) and Recovery Point Objective (RPO)

Lab Requirements

In order to successfully execute the scenarios in the next sections, the following requirements need to be met:

  • Physical server with a minimum of 8 processors, 80 GB RAM, and 4 TB of storage; nested-virtualization capable; Hyper-V role installed and vSwitch created
  • Windows Server 2016 Datacenter RTM ISO
  • Existing or new Windows Server Active Directory domain
  • Server running Windows Server 2016 (Desktop Experience) with the same updates as the servers it's managing, joined to the same domain or a fully trusted domain, and with the Remote Server Administration Tools (RSAT) and PowerShell modules for Hyper-V and Failover Clustering installed. This server will serve as the Management node.

Note that for production environments we recommend acquiring a Windows Server Software-Defined hardware/software offering, which includes production deployment tools and procedures. The labs in the next sections use nested virtualization in order to show Software Defined Storage features and capabilities with minimum hardware requirements.

Install and Setup Storage Spaces Direct (Hyper-Converged)

You can use Storage Spaces Direct to deploy software-defined storage (SDS) for virtual machines, and host the virtual machines on the same cluster in a hyper-converged solution.

In the hyper-converged configuration described in these labs, Storage Spaces Direct seamlessly integrates with the features you know today that make up the Windows Server software defined storage stack, including Clustered Shared Volume File System (CSVFS), Storage Spaces and Failover Clustering.

The hyper-converged deployment scenario has the Hyper-V (compute) and Storage Spaces Direct (storage) components on the same cluster. Virtual machine files are stored on local CSVs. This allows for scaling Hyper-V compute clusters together with the storage it is using. Once Storage Spaces Direct is configured and the CSV volumes are available, configuring and provisioning Hyper-V is the same process and uses the same tools that you would use with any other Hyper-V deployment on a failover cluster. The figure below illustrates the hyper-converged deployment scenario.

Deploy and Configure Storage Spaces Nodes

Create VMs and deploy Windows Server 2016 Servers

Create 4 Virtual Machines. These VMs will serve the purpose of being Storage Spaces Direct nodes in a cluster. The script is executed on the Hyper-V host.

# Executed on the Hyper-V host

# VM names

$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');

# Prompt for vSwitch Name

$vSwitchName = Read-Host `

-Prompt 'Enter vSwitchName.';

# Prompt for storage path where the VMs will be stored

$StoragePath = Read-Host `

-Prompt "Enter storage Path. Example 'C:\ClusterStorage\Volume1'";

Foreach($VMName in $VMNames )

{

# Create VM

New-VM -Name $VMName `

-Path "$StoragePath\$VMName" `

-Generation 2 `

-SwitchName $vSwitchName.ToString() `

-MemoryStartupBytes 16GB ;

# Set Proc number

Set-VM -Name $VMName `

-ProcessorCount 4;

# Create OS VHDx

New-VHD -Dynamic `

-SizeBytes 127GB `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Add VHDx to VM

Add-VMHardDiskDrive -VMName $VMName `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Add DVD drive without configuration

Add-VMDvdDrive -VMName $VMName;

};

The VMs will be created without an OS installed. Use the Windows Server 2016 RTM ISO to install the Windows Server 2016 Datacenter edition with either the Desktop Experience or Server Core option. It is recommended to create a sysprepped VHDX image and use it for easier deployment.
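If you prefer to stay in PowerShell, a sketch of attaching the installation media and booting a node from it might look like the following; the ISO path is a placeholder:

# Executed on the Hyper-V host; the ISO path is an example
Set-VMDvdDrive -VMName S2D01 -Path 'C:\ISO\WindowsServer2016.iso';
# Boot the Generation 2 VM from the DVD drive first
Set-VMFirmware -VMName S2D01 -FirstBootDevice (Get-VMDvdDrive -VMName S2D01);
Start-VM -Name S2D01;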

If you need to set VLAN ID to these VMs use the cmdlet below:

# Set Vlan on a VM

Set-VMNetworkAdapterVlan -VMName <VMName> `

-VlanId <VlanID> `

-Access;

Configure IP Settings

In this step, you need to configure IP settings on the Storage Spaces Direct servers. You will set up a static IP address, subnet mask, gateway, and DNS server. Log in to each server through the Virtual Machine Connection console with the local administrator account created during deployment of Windows Server 2016. Execute the next commands on each Storage Spaces Direct node.

# Execute on Storage Spaces Direct servers

# Get Available interfaces

Get-NetIPAddress `

-ErrorAction Stop | select InterfaceAlias, InterfaceIndex, IPAddress;

# Prompt for the interface alias to configure

$Interface = Read-Host `

-Prompt "Enter Interface Alias to configure:";

# Remove Interface configuration

Remove-NetIPAddress -InterfaceAlias $Interface.ToString() `

-Confirm:$false ;

# Prompt for new IP Address. Example: 192.168.1.20

$IPAddress = Read-Host `

-Prompt "Enter new IP address for $($Interface.ToString())";

# Prompt for IP Address Prefix (mask). Example: 24

$IPAddressPrefix = Read-Host `

-Prompt "Enter IP address prefix for $($Interface.ToString())";

# Prompt for Gateway IP Address. Example: 192.168.1.1

$Gateway = Read-Host `

-Prompt "Enter Gateway for $($Interface.ToString())";

# Prompt for DNS IP Address. Example: 192.168.1.4

$DNSServer = Read-Host `

-Prompt "Enter DNS Server for $($Interface.ToString())";

# Configure IP settings on interface

New-NetIPAddress –InterfaceAlias $Interface.ToString() `

–IPAddress $IPAddress `

–PrefixLength $IPAddressPrefix `

-DefaultGateway $Gateway;

# Set DNS

Set-DnsClientServerAddress -InterfaceAlias $Interface.ToString() `

-ServerAddresses $DNSServer;

Set Time Zone

Setting the correct time zone is good practice to avoid any issues. Use the command below to set the correct time zone in your case:

# List Time Zones

tzutil /l | more

# Set Time Zone

tzutil /s "<your time zone>"

Join servers to domain

Log in to each server through the Virtual Machine Connection console with the local administrator account created during deployment of Windows Server 2016, and join each of the Storage Spaces Direct nodes to the domain by using the PowerShell commands below. When prompted for credentials, provide a domain user account that has privileges to join servers to the domain. The servers will be automatically rebooted when joined to the domain.

# Execute on Storage Spaces Direct servers

# Prompt for new computer name

$NewName = Read-Host `

-Prompt 'Enter new Computer Name';

# Prompt for domain name. Example: contoso.local

$DomainName = Read-Host `

-Prompt 'Enter domain Name';

# Join computer to domain and restart

Add-Computer -DomainName $DomainName `

-Force `

-Restart `

-NewName $NewName;

Add Domain Group to Local Administrators

In order to be able to log in to all servers with a PowerShell session, a domain group needs to be added to the local Administrators group on all servers. That domain group will give administrator access on the Storage Spaces Direct nodes. Log in to each server through the Virtual Machine Connection console with the local administrator account created during deployment of Windows Server 2016. Execute the PowerShell code on each server to add the domain group to the local Administrators group:

# Execute on Storage Spaces Direct servers

# Prompt for domain group

$DomainGroup = Read-Host `

-Prompt 'Enter domain group for local administrators on the server in the following format <Domain\Group>';

# Add domain group to local administrators group

Net localgroup Administrators $DomainGroup.ToString() /add

Enable Nested Virtualization

In order to be able to install Hyper-V on the Storage Spaces Direct VMs and test the hyper-converged scenario, the VMs need to be enabled for nested virtualization. Turning on MAC address spoofing is also needed for creating a vSwitch in each VM. Use the code below to stop the VMs, enable nested virtualization, and start them again.

# Execute on Hyper-V host

# VM names

$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');

Foreach($VMName in $VMNames )

{

# Shut Down the VM

Stop-VM –Name $VMName;

# Enable Nested Virtualization

Set-VMProcessor -VMName $VMName `

-ExposeVirtualizationExtensions $true;

# Enable MAC Address Spoofing

Get-VMNetworkAdapter -VMName $VMName | Set-VMNetworkAdapter -MacAddressSpoofing On;

# Start VM

Start-VM -Name $VMName;

};

Add Disks

In this step, we will add 5 disks (VHDX) to each Storage Spaces Direct server. In a later step, we will change the media type to SSD on two disks on each node in order to simulate having two different tiers for Storage Spaces Direct. Execute the script below on the Hyper-V host to create the disks on the servers:

# Execute on Hyper-V host

$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');

Foreach ($VMName in $VMNames){

# SSD Disk Names

$SSDDiskNames = ("SSD01.vhdx", "SSD02.vhdx");

# Create and attach SDD disks

foreach ($SSDDiskName in $SSDDiskNames )

{

$diskName = $SSDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 256;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

# HDD Disk Names

$HDDDiskNames = ("HDD01.vhdx", "HDD02.vhdx", "HDD03.vhdx");

# Create and attach HDD disks

foreach ($HDDDiskName in $HDDDiskNames )

{

$diskName = $HDDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 512;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

};

Configuring Prerequisites

Update Windows Server 2016

Before proceeding further, make sure you have the latest Windows Server 2016 updates installed. If you are using an image that already has all the updates applied, you can skip this step. Use the script below to apply updates with Windows Update. This script requires that the servers have a direct Internet connection.

# Execute on Storage Spaces Direct Servers

# Check for Available Updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `

-ClassName MSFT_WUOperationsSession;

# Scan for Updates

$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `

-Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true};

# Show Updates found for install

$result.Updates;

# Initiate Update and restart

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `

-ClassName MSFT_WUOperationsSession;

# Apply Updates

Invoke-CimMethod -InputObject $ci `

-MethodName ApplyApplicableUpdates;

# Restart Server

Restart-Computer; exit;

# Show Installed Updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `

-ClassName MSFT_WUOperationsSession;

$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `

-Arguments @{SearchCriteria="IsInstalled=1";OnlineScan=$true};

$result.Updates;

Install Roles

The hyper-converged deployment requires the File Services, Failover Clustering, and Hyper-V roles. Use the script below to install them. After installing Hyper-V, a reboot is required and will be performed automatically.

Install-WindowsFeature -Name File-Services

Install-WindowsFeature -Name Failover-clustering -IncludeManagementTools

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Configure Network QoS

Configuring Network QoS is required when you are doing a production deployment with RDMA-capable adapters (either RoCE or iWARP). The PowerShell code below is for reference and does not need to be executed for the purposes of this lab.

# Execute on Storage Spaces Direct Servers

# Turn on DCB in case you are using RDMA RoCE type

Install-WindowsFeature Data-Center-Bridging

# QoS policy for SMB-Direct

New-NetQosPolicy -Name "SMB" `

–NetDirectPortMatchCondition 445 `

–PriorityValue8021Action 3;

# If you are using RoCEv2 turn on Flow Control for SMB as follows (not required for iWarp)

Enable-NetQosFlowControl –Priority 3;

# Disable flow control for other traffic as follows (optional for iWarp)

Disable-NetQosFlowControl –Priority 0,1,2,4,5,6,7;

# Get a list of the network adapters to identify the target adapters (RDMA adapters) as follows

Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed;

# Apply network QoS policy to the target adapters

Enable-NetAdapterQos –InterfaceAlias "<adapter1>","<adapter2>";

# Create a Traffic class and give SMB Direct 30% of the bandwidth minimum.

# The name of the class will be "SMB"

New-NetQosTrafficClass -Name "SMB" `

–Priority 3 `

–BandwidthPercentage 30 `

–Algorithm ETS;

Along with the Network QoS configuration on the Storage Spaces Direct servers, configuration on the top-of-rack (TOR) switches may be needed. Network QoS and reliable flow of data for RoCEv2-type RDMA require that the TOR switches have specific capabilities set for the network ports that the NICs are connected to. If you are deploying with iWARP, the TOR switches may not need any configuration.

Create Hyper-V virtual switch

Storage Spaces Direct incorporates storage and compute on the same node, so a Hyper-V vSwitch needs to be created. Execute the steps below to create the vSwitch on each of the Storage Spaces Direct nodes. You can connect to each node by establishing a PowerShell session to it. For example:

Enter-PSSession -ComputerName S2D01

Identify the adapters that will be part of the vSwitch creation.

# Identify the network adapters

Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed;

In our case, we have only one adapter and we will use it to create the vSwitch. Replace the <adapter1> value with the name of the adapter.

# Create the virtual switch connected to the adapters of your choice,

# and enable the Switch Embedded Teaming (SET). You may notice a message that

# your PSSession lost connection. This is expected and your session will reconnect

New-VMSwitch –Name SETswitch `

–NetAdapterName "<adapter1>" `

–EnableEmbeddedTeaming $true;

In a scenario where you are using RDMA, there is additional networking configuration that has to be implemented. The code below should not be executed for the purposes of this lab, but it is available for reference.

Create two virtual network adapters where the RDMA traffic will pass through.

# Add host vNICs to the virtual switch. This configures a virtual NIC (vNIC)

# from the virtual switch that you just configured for the management OS to use

Add-VMNetworkAdapter –SwitchName SETswitch `

–Name SMB_1 `

–managementOS;

Add-VMNetworkAdapter –SwitchName SETswitch `

–Name SMB_2 `

–managementOS;

If these networks are located in specific VLAN(s), configure them.

# Configure the host vNIC to use a Vlan. They can be on the same or different Vlans.

Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" `

-VlanId <vlan number> `

-Access `

-ManagementOS;

Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" `

-VlanId <vlan number> `

-Access `

-ManagementOS;

The configuration can be verified with the below command:

# Verify that the VLANID is set

Get-VMNetworkAdapterVlan –ManagementOS;

Reset the created virtual network adapters to activate the VLAN:

# Restart each host vNIC adapter so that the Vlan is active.

Restart-NetAdapter -Name "vEthernet (SMB_1)";

Restart-NetAdapter -Name "vEthernet (SMB_2)";

Enable RDMA on the RDMA designated adapters:

# Enable RDMA on the host vNIC adapters

Enable-NetAdapterRDMA -Name "vEthernet (SMB_1)","vEthernet (SMB_2)";

The commands below associate the RDMA virtual network adapters with the physical network adapters. The example is for a vSwitch that was created on top of two physical adapters.

# Associate each of the vNICs configured for RDMA to a physical adapter that is connected to the virtual switch

Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName 'SMB_1' `

–ManagementOS `

–PhysicalNetAdapterName '<physical adapter name 1>';

Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName 'SMB_2' `

–ManagementOS `

–PhysicalNetAdapterName '<physical adapter name 2>';

The RDMA capabilities can be verified with the command below:

# Verify RDMA capabilities

Get-SmbClientNetworkInterface;

Create Cluster

All Storage Spaces Direct nodes are now set up with the same configuration and can be put into a cluster.

Validate Cluster Configuration

Before proceeding with cluster creation, we will first validate it. On the Management node execute the commands below to validate the cluster configuration:

$nodes = ("S2D01", "S2D02", "S2D03", "S2D04");

# Validate Cluster Configuration

Test-Cluster –Node $nodes `

–Include "Storage Spaces Direct","Inventory","Network","System Configuration";

Create Cluster

You can proceed with cluster creation if there are no failed tests. When you create the cluster, you will need to provide an IP address for the cluster name:

$nodes = ("S2D01", "S2D02", "S2D03", "S2D04");

# Create Cluster with no storage

New-Cluster –Name S2D-CLU `

–Node $nodes `

–NoStorage `

–StaticAddress <cluster IP Address>;

Setup Quorum

We will use Cloud Witness as quorum for our Storage Spaces Direct cluster. Cloud Witness is a new type of Failover Cluster quorum witness in Windows Server 2016 that leverages Microsoft Azure as the arbitration point. The Cloud Witness, like any other quorum witness, gets a vote and can participate in the quorum calculations. You can configure cloud witness as a quorum witness using the Configure a Cluster Quorum Wizard.

Using Cloud Witness as a Failover Cluster quorum witness provides the following advantages:

  • Leverages Microsoft Azure and eliminates the need for a third separate datacenter.
  • Uses the standard publicly available Microsoft Azure Blob Storage which eliminates the extra maintenance overhead of VMs hosted in a public cloud.
  • Same Microsoft Azure Storage Account can be used for multiple clusters (one blob file per cluster; cluster unique id used as blob file name).
  • Provides a very low on-going cost to the Storage Account (very small data written per blob file, blob file updated only once when cluster nodes' state changes).

First, you will need to create an Azure Storage Account. Log in to the Azure Portal with credentials that have access to a subscription where you can create resources. Create a Storage Account with a name of your choice:
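If you prefer to script the Storage Account creation, a sketch using the AzureRM PowerShell module (assuming it is installed and you are already logged in with Login-AzureRmAccount) might look like this; the resource group, account name, and region are placeholders:

# Example only; names and region are placeholders
New-AzureRmResourceGroup -Name S2DWitnessRG -Location 'West Europe';
New-AzureRmStorageAccount -ResourceGroupName S2DWitnessRG -Name s2dcloudwitness01 -SkuName Standard_LRS -Location 'West Europe';
# Retrieve an access key to use with Set-ClusterQuorum below
(Get-AzureRmStorageAccountKey -ResourceGroupName S2DWitnessRG -Name s2dcloudwitness01)[0].Value;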

Once created, browse to the Storage Account resource and copy the storage account name and the storage account access key. On the Management node, execute the following command, changing the values to reflect your setup:

# Set Cloud Witness

Set-ClusterQuorum -CloudWitness `

-AccountName <StorageAccountName> `

-AccessKey <StorageAccountAccessKey> `

-Cluster S2D-CLU;

In case you do not have access to an Azure subscription, you can configure a file share on the Management node and configure a File Share Witness.

# Set File Share Witness

Set-ClusterQuorum –FileShareWitness <File Share Witness Path>

Note: Permissions may need to be set on the file share.
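For reference, a sketch of creating such a share on the Management node and granting the cluster computer account access might look like this; the folder path, share name, and domain are placeholders:

# Example only; run on the Management node and adjust path, share name, and domain
New-Item -Path C:\Witness -ItemType Directory;
New-SmbShare -Name Witness -Path C:\Witness -FullAccess 'CONTOSO\S2D-CLU$';
# Point the cluster at the new share
Set-ClusterQuorum -FileShareWitness \\<ManagementNode>\Witness -Cluster S2D-CLU;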

Setup Storage Spaces Direct

Enable Storage Spaces Direct

When the cluster is created, Storage Spaces Direct can be enabled. This is done through a single command, which you can execute from the Management node remotely against the cluster:

# Enable Storage Spaces Direct

Enable-ClusterStorageSpacesDirect -CimSession S2D-CLU `

-Confirm:$false;

By default, Storage Spaces Direct will create tier(s) based on the physical disks that are on the nodes. In our case, we will remove those and create new ones in a later step.

# Remove all Storage Tiers

Get-StorageTier -CimSession S2D-CLU | Remove-StorageTier -Confirm:$false `

-CimSession S2D-CLU;

Change Media Type

In order to create two different storage tiers, we will change the media type from HDD to SSD on the physical disks that are below 300 GB. All commands are executed on the Management node remotely against the cluster.

# List Physical Disks

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU

# Change Media Type

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | `

? Size -lt 300GB | `

Set-PhysicalDisk -CimSession S2D-CLU `

–MediaType SSD;

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | `

? Size -gt 300GB | `

Set-PhysicalDisk -CimSession S2D-CLU `

–MediaType HDD;

# List Physical Disks

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | `

FT FriendlyName,CanPool,OperationalStatus,HealthStatus,Usage,Size,MediaType;

Create Storage Tiers

In this step, we will create two tiers, one called Performance and the other Capacity. Capacity will have Parity resiliency and Performance will have Mirror. Performance will be made of disks with media type SSD and Capacity of disks with media type HDD.

# Create Tiers

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

New-StorageTier –FriendlyName Performance `

–MediaType SSD `

-ResiliencySettingName Mirror `

-CimSession S2D-CLU;

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

New-StorageTier –FriendlyName Capacity `

–MediaType HDD `

-ResiliencySettingName Parity `

-CimSession S2D-CLU;

Depending on the number of nodes and the type and number of disks, different storage tier configurations can be made. These different configurations provide different levels of performance, resiliency, and capacity, which fit different scenarios. For more information, take a look at NVMe, SSD and HDD storage configurations in Storage Spaces Direct. We recommend using the configurations provided by the vendor of your choice who has supplied you with a certified Storage Spaces Direct solution. The table below summarizes the different volume types and when to use each one.

 

|               | Mirror          | Parity           | Multi-Resilient                     |
|---------------|-----------------|------------------|-------------------------------------|
| Optimized for | Performance     | Efficiency       | Balanced performance and efficiency |
| Use case      | All data is hot | All data is cold | Mix of hot and cold data            |
| Efficiency    | Least (33%)     | Most (50+%)      | Medium (~50%)                       |
| File system   | ReFS or NTFS    | ReFS or NTFS     | ReFS only                           |
| Minimum nodes | 3+              | 4+               | 4+                                  |

Create Volume

The last step of the Storage Spaces Direct setup is to create a volume based on the storage tiers created in the previous step. We will create a volume with 1,500 GB on the Capacity tier and 500 GB on the Performance tier where VMs can be stored. ReFS will automatically move data between the two tiers. The command below is executed on the Management node by remotely connecting to the Storage Spaces Direct cluster.

# Create Volume

New-Volume -StoragePoolFriendlyName "S2D*" `

-FriendlyName Volume1 `

-FileSystem CSVFS_ReFS `

-StorageTierfriendlyNames Capacity,Performance `

-StorageTierSizes 1500GB,500GB `

-CimSession S2D-CLU;

If we connect remotely with the Failover Cluster console to the cluster, we can see the new Volume:

Test Storage Spaces Direct Fault Tolerance

Windows Server 2016 Storage Spaces Direct enhances the resiliency of virtual disks to enable resiliency to node failures. This is in addition to the existing disk and enclosure resiliency.

When using Storage Spaces Direct, storage pools and virtual disks will, by default, be resilient to node failures. When a storage pool is created the "FaultDomainAwarenessDefault" property is set to "StorageScaleUnit". This controls the default for virtual disk creation. You can inspect the storage pool property by running the following command on the Management node:

# Inspect Storage Pool FaultDomainAwarenessDefault

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | FL FriendlyName, Size, FaultDomainAwarenessDefault

Subsequently, when a virtual disk is created in a pool, it inherits the default value of the pool.

Let us examine some basics about virtual disks. A virtual disk consists of extents, each of which is 1 GB in size. A 100 GB virtual disk will therefore consist of 100 1 GB extents. If the virtual disk is mirrored (using ResiliencySettingName), there will be multiple copies of each extent. The number of copies of each extent (using NumberOfDataCopies) can be two or three. All in all, a 100 GB mirrored virtual disk with three data copies will consume 300 extents. The placement of extents is governed by the fault domain, which in Storage Spaces Direct is nodes (StorageScaleUnit), so the three copies of an extent (A) will be placed on three different storage nodes, e.g. nodes 1, 2, and 3 in the diagram below. Another extent (B) of the same virtual disk might have its three copies placed on different nodes, e.g. 1, 3, and 4, and so on. This means that a virtual disk might have its extents distributed across all storage nodes, and the copies of each extent are placed on different nodes. The figure below illustrates a four-node deployment with a mirrored virtual disk with 3 copies and an example layout of extents:
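You can confirm how the tiers and the virtual disk created earlier are laid out by inspecting their resiliency properties from the Management node; note that a tiered volume reports its resiliency per tier:

# Inspect resiliency-related properties of the tiers and of Volume1
Get-StorageTier -CimSession S2D-CLU | FL FriendlyName, ResiliencySettingName, NumberOfDataCopies, FaultDomainAwareness;
Get-VirtualDisk -FriendlyName Volume1 -CimSession S2D-CLU | FL FriendlyName, FaultDomainAwareness, Size;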

Test node failure

In this lab, we will test node failure by stopping two of the nodes.

Stop First Node

Simulate node failure by stopping one node. Execute the command on the Hyper-V host:

# Execute on Hyper-V host

# Stop VM

Stop-VM -Name S2D04

After the node is down, access the volume by opening the admin share from the Management node:

\\s2d01\c$\ClusterStorage\Volume1

Create and copy a couple of files to see that the volume is still functioning properly:
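For example, from the Management node you could write a new file and copy an existing one over the admin share:

# Verify that the volume still accepts I/O while a node is down
Set-Content -Path '\\s2d01\c$\ClusterStorage\Volume1\WriteTest.txt' -Value 'Volume is still writable';
Copy-Item -Path 'C:\Windows\System32\notepad.exe' -Destination '\\s2d01\c$\ClusterStorage\Volume1\';
Get-ChildItem '\\s2d01\c$\ClusterStorage\Volume1';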

Stop Second Node

Simulate second node failure by stopping another node. Execute the command on the Hyper-V host:

# Execute on Hyper-V host

# Stop VM

Stop-VM -Name S2D02

After the node is down, access the volume by opening the admin share from the Management node:

\\s2d01\c$\ClusterStorage\Volume1

Create and copy a couple of files to see that the volume is still functioning properly:

As you can see, a 4-node cluster can withstand a 2-node failure, and the Cloud Witness prevents split-brain. Start the nodes again to return Storage Spaces Direct to full health.

# Execute on Hyper-V host

# Start VMs

Start-VM -Name S2D04

Start-VM -Name S2D02

Test Disk failure

In this lab, we will remove two disks, one from each tier and inspect how Storage Spaces Direct handles these kinds of failures.

Remove disk from Capacity tier

On the Management node start copying some files:

Execute the command below on the Hyper-V host to remove hard disk from Capacity tier:

# Get, Save and List HDD drive information

$RemovedHDD = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"}

$RemovedHDD

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive

On the Management node, start the Failover Cluster console, expand Storage, and click Pools. Select the pool and click the Physical Disks tab below. You will notice one of the 512 GB disks has a status of Missing.
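The same check can be done from PowerShell on the Management node:

# List any physical disks in the pool that are no longer reporting OK
Get-StoragePool -FriendlyName "S2D*" -CimSession S2D-CLU | Get-PhysicalDisk -CimSession S2D-CLU | Where-Object OperationalStatus -ne 'OK' | FT FriendlyName, OperationalStatus, HealthStatus, Usage, Size;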

You will also notice that the copy operation hasn't been interrupted and it is still running:

Remove disk from Performance tier

Execute the command below on the Hyper-V host to remove hard disk from Performance tier:

# Get, Save and List SSD drive information

$RemovedSSD = Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"}

$RemovedSSD

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive

On the Management node, start the Failover Cluster console, expand Storage, and click Pools. Select the pool and click the Physical Disks tab below. You will notice one of the 256 GB disks has a status of Missing.

You will also notice that the copy operation hasn't been interrupted and it is still running:

Add Back Drives

Use the commands below to add the drives back to the Storage Spaces Direct nodes by executing it on the Hyper-V host:

# Attach Hard drives

Add-VMHardDiskDrive -VMName $RemovedHDD.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedHDD.ControllerNumber `

-ControllerLocation $RemovedHDD.ControllerLocation `

-Path $RemovedHDD.Path

Add-VMHardDiskDrive -VMName $RemovedSSD.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedSSD.ControllerNumber `

-ControllerLocation $RemovedSSD.ControllerLocation `

-Path $RemovedSSD.Path

After they execute successfully, Failover Cluster should report the status of the drives as OK.

Additional Tests

Try executing the same steps as before, but removing 3 disks from each tier. Watch what happens with the virtual disk after removing the third disk from each tier. The commands below will help you run those additional tests:

# Get, Save and List HDD drive information

$RemovedHDD1 = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"}

$RemovedHDD1

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive

# Get, Save and List HDD drive information

$RemovedHDD2 = Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*HDD01*"}

$RemovedHDD2

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive

# Get, Save and List HDD drive information

$RemovedHDD3 = Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*HDD01*"}

$RemovedHDD3

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive

# Attach Hard drives

Add-VMHardDiskDrive -VMName $RemovedHDD1.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedHDD1.ControllerNumber `

-ControllerLocation $RemovedHDD1.ControllerLocation `

-Path $RemovedHDD1.Path

# Attach Hard drives

Add-VMHardDiskDrive -VMName $RemovedHDD2.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedHDD2.ControllerNumber `

-ControllerLocation $RemovedHDD2.ControllerLocation `

-Path $RemovedHDD2.Path

# Attach Hard drives

Add-VMHardDiskDrive -VMName $RemovedHDD3.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedHDD3.ControllerNumber `

-ControllerLocation $RemovedHDD3.ControllerLocation `

-Path $RemovedHDD3.Path

# Get, Save and List SSD drive information

$RemovedSSD1 = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*SSD01*"}

$RemovedSSD1

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive

# Get, Save and List SSD drive information

$RemovedSSD2 = Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*SSD01*"}

$RemovedSSD2

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive

# Get, Save and List SSD drive information

$RemovedSSD3 = Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"}

$RemovedSSD3

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive

Add-VMHardDiskDrive -VMName $RemovedSSD1.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedSSD1.ControllerNumber `

-ControllerLocation $RemovedSSD1.ControllerLocation `

-Path $RemovedSSD1.Path

Add-VMHardDiskDrive -VMName $RemovedSSD2.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedSSD2.ControllerNumber `

-ControllerLocation $RemovedSSD2.ControllerLocation `

-Path $RemovedSSD2.Path

Add-VMHardDiskDrive -VMName $RemovedSSD3.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedSSD3.ControllerNumber `

-ControllerLocation $RemovedSSD3.ControllerLocation `

-Path $RemovedSSD3.Path

Managing Storage Spaces Direct

Adding Nodes to Storage Spaces Direct Cluster

Adding nodes (also known as scaling out) adds storage capacity, unlocks greater storage efficiency, and improves storage performance. If you're running a hyper-converged cluster, adding nodes also provides more compute resources for your workload. In this lab, we will add an additional node to the cluster and see how this changes the overall capacity of the tiers.

Get Storage Spaces Direct Tier Sizes

After creating the virtual disk Volume1, let's see how much available free space we have on each tier by executing the following commands on the Management node.

# Get Capacity Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Capacity `

-ResiliencySettingName Parity `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}}

# Get Performance Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Performance `

-ResiliencySettingName Mirror `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}}

Note that the maximum tier size for the Capacity tier is 1,563 GB and for the Performance tier is 176 GB.

Create Fifth Storage Spaces Direct Node

In this step, we will create a fifth node for our Storage Spaces Direct cluster with the same specifications as the other four. Use the commands below to create another node:

# Executed on the Hyper-V host

# Create fifth node VM

# VM names

$VMNames = ('S2D05');

# Prompt for vSwitch Name

$vSwitchName = Read-Host `

-Prompt 'Enter vSwitchName.';

# Prompt for storage path where the VMs will be stored

$StoragePath = Read-Host `

-Prompt "Enter storage Path. Example 'C:\ClusterStorage\Volume1'";

Foreach($VMName in $VMNames )

{

# Create VM

New-VM -Name $VMName `

-Path "$StoragePath\$VMName" `

-Generation 2 `

-SwitchName $vSwitchName.ToString() `

-MemoryStartupBytes 16GB ;

# Set Proc number

Set-VM -Name $VMName `

-ProcessorCount 4;

# Create OS VHDx

New-VHD -Dynamic `

-SizeBytes 127GB `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Add VHDx to VM

Add-VMHardDiskDrive -VMName $VMName `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Add DVD drive without configuration

Add-VMDvdDrive -VMName $VMName;

};

# Execute on Storage Spaces Direct servers

# Configure IP Settings

# Get Available interfaces

Get-NetIPAddress `

-ErrorAction Stop | select InterfaceAlias, InterfaceIndex, IPAddress;

# Prompt for the interface alias to configure

$Interface = Read-Host `

-Prompt "Enter Interface Alias to configure:";

# Remove Interface configuration

Remove-NetIPAddress -InterfaceAlias $Interface.ToString() `

-Confirm:$false ;

# Prompt for new IP Address. Example: 192.168.1.20

$IPAddress = Read-Host `

-Prompt "Enter new IP address for $($Interface.ToString())";

# Prompt for IP Address Prefix (mask). Example: 24

$IPAddressPrefix = Read-Host `

-Prompt "Enter IP address prefix for $($Interface.ToString())";

# Prompt for Gateway IP Address. Example: 192.168.1.1

$Gateway = Read-Host `

-Prompt "Enter Gateway for $($Interface.ToString())";

# Prompt for DNS IP Address. Example: 192.168.1.4

$DNSServer = Read-Host `

-Prompt "Enter DNS Server for $($Interface.ToString())";

# Configure IP settings on interface

New-NetIPAddress –InterfaceAlias $Interface.ToString() `

–IPAddress $IPAddress `

–PrefixLength $IPAddressPrefix `

-DefaultGateway $Gateway;

# Set DNS

Set-DnsClientServerAddress -InterfaceAlias $Interface.ToString() `

-ServerAddresses $DNSServer;

# Configure Time zone

# List Time Zones

tzutil /l | more

# Set Time Zone

tzutil /s "<time zone>"

# Add to domain

# Prompt for new computer name

$NewName = Read-Host `

-Prompt 'Enter new Computer Name';

# Prompt for domain name. Example: contoso.local

$DomainName = Read-Host `

-Prompt 'Enter domain Name';

# Join computer to domain and restart

Add-Computer -DomainName $DomainName `

-Force `

-Restart `

-NewName $NewName;

# Execute on Storage Spaces Direct servers

# Prompt for domain group

$DomainGroup = Read-Host `

-Prompt 'Enter domain group for local administrators on the server in the following format <Domain\Group>';

# Add domain group to local administrators group

Net localgroup Administrators $DomainGroup.ToString() /add

# Execute on Hyper-V host

# Configure Nested virtualization

# VM names

$VMNames = ('S2D05');

Foreach($VMName in $VMNames )

{

# Shut Down the VM

Stop-VM –Name $VMName;

# Enable Nested Virtualization

Set-VMProcessor -VMName $VMName `

-ExposeVirtualizationExtensions $true;

# Enable MAC Address Spoofing

Get-VMNetworkAdapter -VMName $VMName | Set-VMNetworkAdapter -MacAddressSpoofing On;

# Start VM

Start-VM -Name $VMName;

};

# Execute on Hyper-V host

# Add disks to the fifth node

$VMNames = ('S2D05');

Foreach ($VMName in $VMNames){

# SSD Disk Names

$SSDDiskNames = ("SSD01.vhdx", "SSD02.vhdx");

# Create and attach SDD disks

foreach ($SSDDiskName in $SSDDiskNames )

{

$diskName = $SSDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 256;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

# HDD Disk Names

$HDDDiskNames = ("HDD01.vhdx", "HDD02.vhdx", "HDD03.vhdx");

# Create and attach HDD disks

foreach ($HDDDiskName in $HDDDiskNames )

{

$diskName = $HDDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 512;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

};

# Execute on Storage Spaces direct nodes

# Install roles

Install-WindowsFeature -Name File-Services;

Install-WindowsFeature -Name Failover-clustering -IncludeManagementTools;

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart;

# Execute on Storage Spaces direct nodes

# Create vSwitch

New-VMSwitch –Name SETswitch `

–NetAdapterName "Ethernet" `

–EnableEmbeddedTeaming $true;

Validate Fifth Node in a Cluster

Before proceeding with adding the node, we need to test its configuration in the cluster. Use the commands below to validate the cluster with the new node by executing them on the Management node:

$nodes = ("S2D01", "S2D02", "S2D03", "S2D04", "S2D05");

# Validate Cluster Configuration

Test-Cluster –Node $nodes `

–Include "Storage Spaces Direct","Inventory","Network","System Configuration";

Add Node to Storage Spaces Direct Cluster

When a node is added to a cluster, Storage Spaces Direct will automatically add the physical disk to the pool.

Note that automatic pooling depends on you having only one pool. If you manually created multiple pools, add new drives to your preferred pool manually by using the Add-PhysicalDisk cmdlet.
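A sketch of that manual addition, assuming the default pool name used elsewhere in this lab, might look like this:

# Manually add any drives eligible for pooling to the existing pool
Add-PhysicalDisk -StoragePoolFriendlyName "S2D on S2D-CLU" -PhysicalDisks (Get-PhysicalDisk -CanPool $true -CimSession S2D-CLU) -CimSession S2D-CLU;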

Use the commands below on the Management node to add the node to the cluster:

# Add Node

Add-ClusterNode -Cluster S2D-CLU `

-Name S2D05 `

-NoStorage

After adding the node to the cluster, wait a couple of minutes before changing the media type of the disks below 300 GB. This time is needed so that all new physical disks are added to the Storage Spaces Direct pool.
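You can watch the pool absorb the new disks and track any background jobs from the Management node with a quick check like this:

# Count the physical disks currently in the pool (25 are expected once the fifth node's disks are added)
(Get-StoragePool -FriendlyName "S2D*" -CimSession S2D-CLU | Get-PhysicalDisk -CimSession S2D-CLU).Count;
# Watch background storage jobs, such as rebalancing, until they complete
Get-StorageJob -CimSession S2D-CLU;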

# Change Media Type

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk `

-CimSession S2D-CLU | `

? Size -lt 300GB | `

Set-PhysicalDisk -CimSession S2D-CLU `

–MediaType SSD;

# List Physical Disks

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | `

FT FriendlyName,CanPool,OperationalStatus,HealthStatus,Usage,Size,MediaType;

When Media Type is changed, there will be 15 HDDs and 10 SSDs.

# SSD and HDD Count

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | Group-Object MediaType

In the end, we can see that the Capacity tier's maximum size has increased to 2,078 GB and the Performance tier's to 225 GB.

# Get Capacity Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Capacity `

-ResiliencySettingName Parity `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}}

# Get Performance Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Performance `

-ResiliencySettingName Mirror `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}}

As you scale beyond four nodes, new volumes can benefit from ever-greater parity encoding efficiency. For example, between six and seven nodes, efficiency improves from 50.0% to 66.7% as it becomes possible to use Reed-Solomon 4+2 (rather than 2+2). There are no steps you need to take to begin enjoying this new efficiency; the best possible encoding is determined automatically each time you create a volume.

However, any pre-existing volumes will not be "converted" to the new, wider encoding. One good reason is that to do so would require a massive calculation affecting literally every single bit in the entire deployment. If you would like pre-existing data to become encoded at the higher efficiency, you can migrate it to new volume(s).

Remove Node from Cluster

In this step, we will safely remove the node that we added. Execute the commands below on the Management node.

First, we will retire the disks on node S2D05.

# Retire Disks on node S2D05

Get-PhysicalDisk -CimSession S2D-CLU `

-StorageEnclosure (Get-StorageEnclosure -CimSession S2D-CLU `

-SerialNumber `

(Get-StorageNode -Name S2D05* `

-CimSession S2D-CLU).SerialNumber) | `

Set-PhysicalDisk -CimSession S2D-CLU `

-Usage Retired;

The second step is to repair the virtual disk so the data can be redistributed to healthy disks:

# Repair Volume1

Repair-VirtualDisk -FriendlyName Volume1 `

-CimSession S2D-CLU;

Let's list all physical disks on node S2D05:

# Get Physical disks on node S2D05

$PhysicalDisks = Get-PhysicalDisk -CimSession S2D-CLU `

-StorageEnclosure `

(Get-StorageEnclosure -CimSession S2D-CLU `

-SerialNumber `

(Get-StorageNode -Name S2D05* `

-CimSession S2D-CLU).SerialNumber);

$PhysicalDisks;

Remove the physical disks on node S2D05 from the Storage Spaces Direct pool. Confirm the command with Yes to All when prompted.

# Remove physical disks on node S2D05 from the Storage Spaces Direct pool

Remove-PhysicalDisk -PhysicalDisks $PhysicalDisks `

-StoragePoolFriendlyName "S2D on S2D-CLU" `

-CimSession S2D-CLU;

Now that the disks are removed from the pool, we can evict the node from the cluster as well; the -Confirm:$false parameter suppresses the confirmation prompt.

# Remove Node from cluster

Remove-ClusterNode -Name S2D05 `

-Confirm:$false `

-Cluster S2D-CLU;

Additionally, you may want to clean up the data on the disks.

# Reset Physical Disk

Get-PhysicalDisk -CimSession S2D05 | Reset-PhysicalDisk -ErrorAction SilentlyContinue `

-CimSession S2D05;

# Remove Disk Data

Get-Disk -CimSession S2D05 | ? Number -ne $null | `

? IsBoot -ne $true | `

? IsSystem -ne $true | `

? PartitionStyle -ne RAW | `

% {

$_ | Set-Disk -CimSession S2D05 `

-isoffline:$false;

$_ | Set-Disk -CimSession S2D05 `

-isreadonly:$false

$_ | Clear-Disk -CimSession S2D05 `

-RemoveData `

-RemoveOEM `

-Confirm:$false;

$_ | Set-Disk -CimSession S2D05 `

-isreadonly:$true;

$_ | Set-Disk -CimSession S2D05 `

-isoffline:$true;

};

List the disks after data removal.

# List disks

Get-Disk -CimSession S2D05 | ? Number -ne $null | `

? IsBoot -ne $true | `

? IsSystem -ne $true | `

? PartitionStyle -eq RAW

Adding Drives to Storage Spaces Direct Cluster

Adding drives (also known as scaling up) adds storage capacity and can also improve performance. For example, adding more SSDs to the Performance tier will result in more space for hot data, thus improving the performance of the virtual disk. In this lab, we will add additional HDDs to the Capacity tier to increase the tier and pool capacity.

Get Storage Spaces Direct Tier Sizes

Execute the commands below on the Management node to get the free space available on each of the tiers:

# Get Capacity Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Capacity `

-ResiliencySettingName Parity `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

# Get Performance Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Performance `

-ResiliencySettingName Mirror `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

Add Disk on each Storage Spaces Direct Node

In this step, we will add one 512 GB HDD to each Storage Spaces Direct node. Execute the commands below on the Hyper-V hosts to add disks:

# Execute on Hyper-V host

$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');

Foreach ($VMName in $VMNames){

# HDD Disk Names

$HDDDiskNames = ("HDD04.vhdx");

# Create and attach HDD disks

foreach ($HDDDiskName in $HDDDiskNames )

{

$diskName = $HDDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 512;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

};

It may take a few minutes until the newly added HDDs appear in the Storage Spaces Direct pool. Execute the commands below on the Management node.

List physical disks in the pool:

# List Physical Disks

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | `

FT FriendlyName,CanPool,OperationalStatus,HealthStatus,Usage,Size,MediaType;

Count SSDs and HDDs.

# SSD and HDD Count

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU | `

Get-PhysicalDisk -CimSession S2D-CLU | Group-Object MediaType;

There are now 4 more HDDs in the pool. Investigating the tier sizes shows that the Capacity tier has increased by around 1000 GB. The Performance tier stays unchanged because we haven't added disks to it.

# Get Capacity Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Capacity `

-ResiliencySettingName Parity `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

# Get Performance Tier Max and Min Size

Get-StorageTierSupportedSize -FriendlyName Performance `

-ResiliencySettingName Mirror `

-CimSession S2D-CLU | `

Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

Optimize Storage Spaces Pool

Windows Server 2016 Storage Spaces Direct can optimize a storage pool to balance data equally across the set of physical disks that comprise the pool.

Over time, as physical disks are added or removed or as data is written or deleted, the distribution of data among the set of physical disks that comprise the pool may become uneven. In some cases, this may result in certain physical disks becoming full while other disks in the same pool have much lower consumption.

Similarly, if new storage is added to the pool, optimizing the existing data to utilize the new storage will result in better storage efficiency across the pool and, potentially, improved performance from the newly available additional physical storage throughput. Optimizing the pool is a maintenance task which is performed by the administrator.

Execute the command below on the Management Node to start optimization job:

# Start Optimization Job

Optimize-StoragePool -FriendlyName "S2D on S2D-CLU" `

-CimSession S2D-CLU;

To get the output of the job execute the following command:

# Get Storage Optimization Job result

Get-StorageJob -CimSession S2D-CLU `

-Name Optimize `

-StoragePool (Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU )

Use Storage QoS

Storage Quality of Service is built into the Microsoft software-defined storage solution provided by Scale-Out File Server and Hyper-V. The Scale-Out File Server exposes file shares to the Hyper-V servers using the SMB3 protocol. A new Policy Manager has been added to the File Server cluster, which provides the central storage performance monitoring.

As Hyper-V servers launch virtual machines, they are monitored by the Policy Manager. The Policy Manager communicates the Storage QoS policy and any limits or reservations back to the Hyper-V server, which controls the performance of the virtual machine as appropriate.

When there are changes to Storage QoS policies or to the performance demands by virtual machines, the Policy Manager notifies the Hyper-V servers to adjust their behavior. This feedback loop ensures that all virtual machine VHDs perform consistently according to the Storage QoS policies as defined. Before proceeding further, there are a few terms related to Storage QoS that need to be explained:

  • Normalized IOPs - All of the storage usage is measured in "Normalized IOPs." This is a count of the storage input/output operations per second. Any IO that is 8KB or smaller counts as one normalized IO. Any IO that is larger than 8KB is treated as multiple normalized IOs. For example, a 256KB request is treated as 32 normalized IOPs. Windows Server 2016 includes the ability to specify the size used to normalize IOs. On the storage cluster, the normalization size can be specified and takes effect on the normalization calculations cluster-wide. The default remains 8KB (a sketch of changing it follows this list).
  • Flow - Each file handle opened by a Hyper-V server to a VHD or VHDX file is considered a "flow". If a virtual machine has two virtual hard disks attached, it will have 1 flow to the file server cluster per file. If a VHDX is shared with multiple virtual machines, it will have 1 flow per virtual machine.
  • InitiatorName - Name of the virtual machine that is reported to the Scale-Out File Server for each flow.
  • InitiatorID - An identifier matching the virtual machine ID. This can always be used to uniquely identify flows from individual virtual machines, even if the virtual machines have the same InitiatorName.
  • Policy - Storage QoS policies are stored in the cluster database, and have the following properties: PolicyId, MinimumIOPS, MaximumIOPS, ParentPolicy, and PolicyType.
  • PolicyId - Unique identifier for a policy. It is generated by default, but can be specified if desired.
  • MinimumIOPS - Minimum normalized IOPS that will be provided by a policy. Also known as the "Reservation".
  • MaximumIOPS - Maximum normalized IOPS that will be limited by a policy. Also known as the "Limit".
  • Aggregated - A policy type where the specified MinimumIOPS, MaximumIOPS, and Bandwidth are shared among all flows assigned to the policy. All VHDs assigned the policy on that storage system share a single allocation of I/O bandwidth.
  • Dedicated - A policy type where the specified MinimumIOPS, MaximumIOPS, and Bandwidth are managed for each individual VHD/VHDX.
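
As noted in the Normalized IOPs item above, the normalization size can be changed cluster-wide. A minimal sketch follows; the Get-StorageQosPolicyStore/Set-StorageQosPolicyStore cmdlets and the -IOPSNormalizationSize parameter are assumed to be available on the Windows Server 2016 storage cluster, so verify them in your environment before relying on this:

# View the current cluster-wide IO normalization size (default 8KB)

Get-StorageQosPolicyStore -CimSession S2D-CLU;

# Example only: normalize on 32KB instead of the default 8KB

Set-StorageQosPolicyStore -IOPSNormalizationSize 32KB -CimSession S2D-CLU;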

Setup VMs for Storage QoS

In order to be able to use Storage QoS policies we will need to create several VMs on the Storage Spaces Direct cluster.

Sysprep VHD

Create a VM manually (Generation 2) and install Windows Server 2016 Datacenter (core). Sysprep the VM by executing the following command on it:

C:\Windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown

When the VM is sysprepped and shut down, copy the VHDX manually to Volume1 (for example \\s2d01\c$\ClusterStorage\Volume1) on the Storage Spaces Direct cluster. Rename the VHDX to OSDisk.vhdx.

Copy the unattend.xml file to Volume1 as well.

Download Diskspd and copy diskspd.exe from the amd64fre folder in the archive to Volume1 as well.
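
If you prefer to script these copy steps from the Management node, here is a minimal sketch; the source paths are assumptions for illustration, so adjust them to where your sysprepped VHDX, answer file, and Diskspd download actually reside:

# Source paths below are illustrative - adjust before running

$Dest = "\\s2d01\c$\ClusterStorage\Volume1";

Copy-Item -Path "D:\Images\WS2016-Sysprepped.vhdx" -Destination "$Dest\OSDisk.vhdx";

Copy-Item -Path "D:\Files\unattend.xml" -Destination "$Dest\unattend.xml";

Copy-Item -Path "D:\Files\Diskspd\amd64fre\diskspd.exe" -Destination "$Dest\diskspd.exe";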

Configure Answer File

We will use OSDisk.vhdx to create multiple VMs, but to achieve unattended deployment we need to place the unattend.xml file inside the VHD so it is used during the VM OOBE setup process. The answer file (unattend.xml) contains a password for the local administrator account. Execute the commands below on the Management node.

# Prompt for Syspreped VHD path

$VHDPath = Read-Host `

-Prompt 'Enter VHD path (example \\s2d01\c$\ClusterStorage\Volume1\OSDisk.vhdx) ';

# Prompt for Answer File path

$AFilePath = Read-Host `

-Prompt 'Enter Answer file path (example \\s2d01\c$\ClusterStorage\Volume1\unattend.xml) ';

# Create dir if not exists

$destPath = "C:\img"

If(!(Test-Path $destPath))

{

New-Item -Path $destPath `

-ItemType Directory

}

# Mount VHD

Mount-WindowsImage -ImagePath $VHDPath.ToString() `

-Path $destPath `

-Index 1

# Copy answer file

$AnswerFileDest = $destPath + "\Windows\Panther"

Copy-Item -Path $AFilePath.ToString() `

-Destination $AnswerFileDest

# Save VHD

Dismount-WindowsImage -Path $destPath `

-Save

Create and Setup VMs

In this step, we will create 4 VMs, each with one additional disk besides the OS disk. Those additional disks will be formatted and used later for generating IOPS. The diskspd.exe file will be copied to each VM so it can be used later. To connect to the VMs we will use a new technology in Windows Server 2016 named PowerShell Direct. Execute the commands below on the Management node.

Create PowerShell session to S2D01:

Enter-PSSession -ComputerName S2D01;

Create and start the VMs:

# VM names

$VMNames = ('VM01', 'VM02', 'VM03', 'VM04');

# Storage Path

$StoragePath = "C:\ClusterStorage\Volume1";

# vSwitch Name

$vSwitchName = "SETswitch";

# OS VHD location

$osVHDPath = "C:\ClusterStorage\Volume1\OSDisk.vhdx";

Foreach($VMName in $VMNames)

{

# Create VM

New-VM -Name $VMName `

-Path "$StoragePath\" `

-Generation 2 `

-SwitchName $vSwitchName.ToString() `

-MemoryStartupBytes 2GB;

# Set Proc number

Set-VM -Name $VMName `

-ProcessorCount 2;

# Copy OS VHD

Copy-Item -Path $osVHDPath `

-Destination "$StoragePath\$VMName\" `

-Force;

# Attach OS VHDx to VM

Add-VMHardDiskDrive -VMName $VMName `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Get OS VHD

$vhd = Get-VMHardDiskDrive -VMName $VMName;

# Set boot order

Set-VMFirmware -VMName $VMName `

-FirstBootDevice $vhd;

# Data disk name

$diskName = "HDD01.vhdx";

# Create Disk

New-VHD -Path "$StoragePath\$VMName\$diskName" `

-Dynamic `

-SizeBytes 512GB | Out-Null;

# Attach the data disk

Add-VMHardDiskDrive -VMName $VMName `

-Path "$StoragePath\$VMName\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0 | Out-Null;

# Add VM to Cluster

Get-VM -Name $VMName | Add-ClusterVirtualMachineRole -WarningAction SilentlyContinue;

# Start VM

Start-VM -Name $VMName;

};

After they are created, you should be able to see them in the Failover Cluster Manager console by connecting to the S2D-CLU cluster.

Copy diskspd.exe and format drive on each VM:

# VM names

$VMNames = ('VM01', 'VM02', 'VM03', 'VM04');

# Create credentials for local account

$LocalPassword = "P@ssw0rd123!";

$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;

$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

# Path to diskspd

$DiskspdPath = "C:\ClusterStorage\Volume1\diskspd.exe";

Foreach($VMName in $VMNames)

{

# Waiting for VM to be running

while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")

{

Sleep -Seconds 1

}

# Create PS session to the VM

$VMSession = New-PSSession -VMName $VMName `

-Credential $LocalCreds;

# Format Disk

Invoke-Command -Session $VMSession `

-ScriptBlock {

$RawDisks = Get-Disk | where partitionstyle -eq 'raw'| Sort-Object Number;

$InitializedDisk = Initialize-Disk -PartitionStyle GPT `

-Number $RawDisks[0].Number `

-ErrorAction Stop;

$FormatedVol = New-Partition -DiskNumber $RawDisks[0].Number `

-UseMaximumSize `

-DriveLetter T `

-ErrorAction Stop | `

Format-Volume -FileSystem NTFS `

-AllocationUnitSize 65536 `

-NewFileSystemLabel "Test" `

-Force `

-Confirm:$false `

-ErrorAction Stop;

# Create Directory for diskspd

If(!(Test-Path "C:\test"))

{

New-Item -Path "C:\test" `

-ItemType Directory

}

} `

-ArgumentList $VMName `

-ErrorAction Stop;

# Copy Diskspd to VM

Copy-Item -ToSession $VMSession `

-Path $DiskspdPath `

-Destination "C:\test\"

# Remove PSSession

Remove-PSSession -Session $VMSession

};

Verify Storage QoS installation

After you have created a Storage Spaces Direct cluster and virtual disk, the Storage QoS Resource is displayed as a Cluster Core Resource and is visible in both Failover Cluster Manager and Windows PowerShell. The intent is that the failover cluster system will manage this resource and you should not have to perform any actions against it. It is displayed in both Failover Cluster Manager and PowerShell to be consistent with the other failover cluster system resources like the new Health Service. Connect to the cluster with the Failover Cluster Manager console from the Management node.

On the management node execute the following command:

Get-ClusterResource -Name "Storage Qos Resource" `

-Cluster S2D-CLU;

Create Storage QoS Policies

Storage QoS policies are defined and managed in the Scale-Out File Server (in this case Storage Spaces Direct) cluster. You can create as many policies as needed for flexible deployments (up to 10,000 per storage cluster).

Each VHD/VHDX file assigned to a virtual machine may be configured with a policy. Different files and virtual machines can use the same policy or they can each be configured with separate policies. If multiple VHD/VHDX files or multiple virtual machines are configured with the same policy, they will be aggregated together and will share the MinimumIOPS and MaximumIOPS fairly. If you use separate policies for multiple VHD/VHDX files or virtual machines, the minimum and maximums are tracked separately for each.

If you create multiple similar policies for different virtual machines and the virtual machines have equal storage demand, they will receive a similar share of IOPs. If one VM demands more and the other less, then IOPs will follow that demand.

There are two types of policies: Aggregated and Dedicated. Aggregated policies apply maximums and minimum for the combined set of VHD/VHDX files and virtual machines where they apply. In effect, they share a specified set of IOPS and bandwidth. Dedicated policies apply the minimum and maximum values for each VHD/VHDx, separately. This makes it easy to create a single policy that applies similar limits to multiple VHD/VHDx files.

For instance, suppose you create an Aggregated policy with a minimum of 300 IOPs and a maximum of 500 IOPs and apply it to 5 different VHD/VHDX files. You are making sure that the 5 VHD/VHDX files combined will be guaranteed at least 300 IOPs (if there is demand and the storage system can provide that performance) and will not exceed 500 IOPs. If the VHD/VHDX files have similarly high demand for IOPs and the storage system can keep up, each VHD/VHDX file will get about 100 IOPs.

However, if you create a Dedicated policy with similar limits and apply it to VHD/VHDX files on 5 different virtual machines, each virtual machine will get at least 300 IOPs and no more than 500 IOPs. If the virtual machines have similarly high demand for IOPs and the storage system can keep up, each virtual machine will get about 500 IOPs. If one of the virtual machines has multiple VHD/VHDX files with the same Dedicated policy configured, they will share the limit so that the total IO from the VM for files with that policy will not exceed the limits.

Hence, if you have a group of VHD/VHDX files that you want to exhibit the same performance characteristics and you don't want the trouble of creating multiple similar policies, you can use a single Dedicated policy and apply it to the files of each virtual machine.

Create and Apply Dedicated Policy

In this step, we will create a Dedicated policy and see how it works. Execute the next commands on the Management node.

Get all Storage QoS policies:

# Get Storage QoS Policies

Get-StorageQosPolicy -CimSession S2D-CLU;

There is only one Default policy; it does not impose any limits because MinimumIops and MaximumIops are both zero.

Create a new policy named DedicatedPolicy1 that has MinimumIops of 300 and MaximumIops of 400.
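
A sketch of that command, following the same New-StorageQosPolicy pattern used for the aggregated policy later in this section; the $DedicatedPolicy1 variable is referenced when the policy is assigned below:

# Create a Dedicated Storage QoS policy

$DedicatedPolicy1 = New-StorageQosPolicy -Name DedicatedPolicy1 `
-PolicyType Dedicated `
-MinimumIops 300 `
-MaximumIops 400 `
-CimSession S2D-CLU;

# Show policy object

$DedicatedPolicy1 | select *;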

Get Storage QoS policies.

# Get Storage QoS Policies

Get-StorageQosPolicy -CimSession S2D-CLU;

Status can change over time based on how the system is performing.

  • Ok - All flows using that policy are receiving their requested MinimumIOPs.
  • InsufficientThroughput - One or more of the flows using this policy are not receiving the Minimum IOPs

Assign the newly created dedicated policy to the VHDs on VM01 and VM02.

# Assign the Policy to all vhd(x) files on VM01

Get-VMHardDiskDrive -CimSession S2D-CLU `

-VMName VM01 | `

Set-VMHardDiskDrive `

-QoSPolicyID $DedicatedPolicy1.PolicyId `

-CimSession S2D-CLU;

# Assign the Policy to all vhd(x) files on VM02

Get-VMHardDiskDrive -CimSession S2D-CLU `

-VMName VM02 | `

Set-VMHardDiskDrive `

-QoSPolicyID $DedicatedPolicy1.PolicyId `

-CimSession S2D-CLU;

Show Policies assigned to VM01 and VM02 VHDs.

# Get VMName, VHD and QoS Policy ID for VM01 and VM02

Get-VMHardDiskDrive -CimSession S2D-CLU `

-VMName VM01,VM02 | select VMName,Path,QoSPolicyID

Get all current flows for all policies.

# Get all current flows - Virtual machine name, status, IOPs, and VHD file name sorted by VM

Get-StorageQoSflow -CimSession S2d-CLU | `

Sort-Object InitiatorName | `

ft InitiatorName, Status, MinimumIOPs, MaximumIOPs, StorageNodeIOPs, Status, `

@{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `

-AutoSize

Get current flows for policy DedicatedPolicy1.

# Get current flows for DedicatedPolicy1

Get-StorageQosPolicy -Name DedicatedPolicy1 `

-CimSession S2D-CLU | Get-StorageQosFlow `

-CimSession S2D-CLU | `

ft InitiatorName, *IOPS, Status, FilePath -AutoSize

To fully test the dedicated Storage QoS policy, we will need to generate some storage load on VM01 and VM02.

On the management node create PowerShell Session to S2D01 where VM01 and VM02 are located:

Enter-PSSession -ComputerName S2D01;

Execute the script below to start diskspd tests on VM01 and VM02. The tests will run for around 20 minutes.

# VM names

$VMNames = ('VM01', 'VM02');

# Create credentials for local account

$LocalPassword = "P@ssw0rd123!";

$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;

$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)

{

# Waiting for VM to be running

while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")

{

Sleep -Seconds 1

}

# Create PS session to the VM

$VMSession = New-PSSession -VMName $VMName `

-Credential $LocalCreds;

# Run diskspd inside the VM

Invoke-Command -Session $VMSession `

-ScriptBlock {

# Kill diskspd process if exists

Stop-Process -Name diskspd `

-ErrorAction SilentlyContinue `

-WarningAction SilentlyContinue;

# Start Diskspd tests

$diskspdarguments = "-r -w30 -d1200 -W120 -b8k -t2 -o6 -h -L -Z1M -c64G T:\testfile.dat"

Start-Process `

-FilePath "C:\test\diskspd.exe" `

-ArgumentList $diskspdarguments `

-NoNewWindow `

-PassThru `

-ErrorAction Stop | Out-Null

} `

-ArgumentList $VMName `

-ErrorAction Stop;

# Remove PSSession

Remove-PSSession -Session $VMSession;

};

By executing the command below on the Management node, we can see the IOPS for the HDD01 disks on VM01 and VM02 increasing:

# Get all current flows - Virtual machine name, Hyper-V host name, IOPs, and VHD file name, sorted by IOPS.

Get-StorageQosFlow -CimSession S2D-CLU | `

Sort-Object StorageNodeIOP -Descending |`

ft InitiatorName, `

@{Expression={$_.InitiatorNodeName.Substring(0,$_.InitiatorNodeName.IndexOf('.'))};Label="InitiatorNodeName"}, `

StorageNodeIOPs, `

Status, `

@{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `

-AutoSize

If we execute the command again after a couple of minutes, we will see that both VMs get around 400 IOPS for their HDD01 disk. StorageNodeIOPs shows the average IOPS generated over the last 5 minutes, so the value should not be used as a real-time metric.

We can also list the flow information for a specific VM.

# Get current flows for VM01

Get-StorageQosFlow -InitiatorName VM01 `

-CimSession S2d-CLU | Format-List;

The data returned by the Get-StorageQosFlow cmdlet includes:

  • The Hyper-V hostname (InitiatorNodeName).
  • The virtual machine's name and its Id (InitiatorName and InitiatorId)
  • Recent average performance as observed by the Hyper-V host for the virtual disk (InitiatorIOPS, InitiatorLatency)
  • Recent average performance as observed by the Storage cluster for the virtual disk (StorageNodeIOPS, StorageNodeLatency)
  • Current policy being applied to the file, if any, and the resulting configuration (PolicyId, Reservation, Limit)
  • Status of the policy
    • Ok - No issues exist.
    • InsufficientThroughput - A policy is applied, but the Minimum IOPs cannot be delivered. This can happen if the minimum for a VM, or all VMs together, are more than the storage volume can deliver.
    • UnknownPolicyId - A policy was assigned to the virtual machine on the Hyper-V host, but is missing from the file server. This policy should be removed from the virtual machine configuration, or a matching policy should be created on the file server cluster (a sketch of the first option follows this list).
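
For the first option, a minimal sketch of clearing a stale policy from a VM's virtual hard disks; VM01 is used for illustration, and assigning $null to -QoSPolicyID is assumed to clear the assignment:

# Clear the stale QoS policy assignment from all vhd(x) files on VM01

Get-VMHardDiskDrive -CimSession S2D-CLU -VMName VM01 | `
Set-VMHardDiskDrive -QoSPolicyID $null -CimSession S2D-CLU;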

Storage performance metrics are also collected on a per-storage volume level, in addition to the per-flow performance metrics. This makes it easy to see the average total utilization in normalized IOPs, latency, and aggregate limits and reservations applied to a volume. Execute the command below on the Management node.

# View volume performance data

Get-StorageQosVolume -CimSession S2D-CLU | Format-List

Storage load generator can be stopped by executing the commands below.

Create PowerShell session to S2D01.

Enter-PSSession -ComputerName S2D01;

Stop the diskspd process.

# VM names

$VMNames = ('VM01', 'VM02');

# Create credentials for local account

$LocalPassword = "P@ssw0rd123!";

$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;

$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)

{

# Waiting for VM to be running

while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")

{

Sleep -Seconds 1

}

# Create PS session to the VM

$VMSession = New-PSSession -VMName $VMName `

-Credential $LocalCreds;

# Stop diskspd inside the VM

Invoke-Command -Session $VMSession `

-ScriptBlock {

# Kill diskspd process if exists

Stop-Process -Name diskspd `

-ErrorAction SilentlyContinue `

-WarningAction SilentlyContinue;

} `

-ArgumentList $VMName `

-ErrorAction Stop;

# Remove PSSession

Remove-PSSession -Session $VMSession;

};

Modify Storage QoS Policy

The properties Name, MinimumIOPs, MaximumIOPs, and MaximumIoBandwidth can be changed after a policy is created. However, the Policy Type (Aggregated/Dedicated) cannot be changed once the policy is created.

Execute the commands below on the Management node to change policy DedicatedPolicy1.

# Modify Policy DedicatedPolicy1

Get-StorageQosPolicy -Name DedicatedPolicy1 `

-CimSession S2D-CLU | `

Set-StorageQosPolicy -MaximumIops 500 `

-CimSession S2D-CLU;

Get-StorageQosPolicy -Name DedicatedPolicy1 `

-CimSession S2D-CLU;

Delete Storage QoS Policy

If a policy is deleted from the file server before it's removed from a virtual machine, the virtual machine will keep running as if no policy were applied. Execute the command below on the Management node to delete a policy.

# Delete Policy DedicatedPolicy1

Get-StorageQosPolicy -Name DedicatedPolicy1 `

-CimSession S2D-CLU | `

Remove-StorageQosPolicy -CimSession S2D-CLU `

-Confirm:$false

Get the flow information to see that the status is "UnknownPolicyId" for the flows that were using the deleted policy.

Get-StorageQoSflow -CimSession S2D-CLU | `

Sort-Object InitiatorName | `

ft InitiatorName, `

Status, `

MinimumIOPs, `

MaximumIOPs, `

StorageNodeIOPs, `

Status, `

@{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `

-AutoSize;

Virtual machines with invalid Storage QoS policies will also be reported when we use the Health Service.

# Check health

Get-StorageSubSystem -FriendlyName Clustered* `

-CimSession S2D-CLU | Debug-StorageSubSystem -CimSession S2D-CLU;

If a policy was unintentionally removed, you can create a new one using the old PolicyId. First, get the needed PolicyId.

# Get missing policies

Get-StorageQosFlow -Status UnknownPolicyId `

-CimSession S2D-CLU | `

ft InitiatorName, PolicyId -AutoSize

Next, create a new policy using that PolicyId.

# Recreate Policy

New-StorageQosPolicy -PolicyId <policy id> `

-PolicyType Dedicated `

-Name DedicatedPolicy1 `

-MinimumIops 300 `

-MaximumIops 500 `

-CimSession S2D-CLU;

Get-StorageQoSflow -CimSession S2D-CLU | `

Sort-Object InitiatorName | `

ft InitiatorName, `

Status, `

MinimumIOPs, `

MaximumIOPs, `

StorageNodeIOPs, `

Status, `

@{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `

-AutoSize;

Create and Apply Aggregated Policy

Aggregated policies may be used if you want multiple virtual hard disks to share a single pool of IOPs and bandwidth. For example, if you apply the same Aggregated policy to hard disks from two virtual machines, the minimum will be split between them according to demand. Both disks will be guaranteed a combined minimum, and together they will not exceed the specified maximum IOPs or bandwidth.

The same approach could also be used to provide a single allocation to all VHD/VHDx files for the virtual machines comprising a service or belonging to a tenant in a multihosted environment.

There is no difference in the process to create Dedicated and Aggregated policies other than the PolicyType that is specified.

Create the aggregated policy by executing the commands below on the Management node.

# Create an aggregated Storage QoS policy

$AggregatedPolicy1 = New-StorageQosPolicy -Name AggregatedPolicy1 `

-PolicyType Aggregated `

-MinimumIops 300 `

-MaximumIops 400 `

-CimSession S2D-CLU;

# Show policy object

$AggregatedPolicy1 | select * ;

Assign the policy to VM03 and VM04.

# Assign the Policy to all vhd(x) files on VM03 and VM04

Get-VMHardDiskDrive -CimSession S2D-CLU `

-VMName VM03,VM04 | `

Set-VMHardDiskDrive `

-QoSPolicyID $AggregatedPolicy1.PolicyId `

-CimSession S2D-CLU;

# Get VMName, VHD and QoS Policy ID for VM03 and VM04

Get-VMHardDiskDrive -CimSession S2D-CLU `

-VMName VM03,VM04 | select VMName,Path,QoSPolicyID

To fully test the aggregated Storage QoS policy, we will need to generate some storage load on VM03 and VM04.

On the management node create PowerShell Session to S2D01 where VM03 and VM04 are located:

Enter-PSSession -ComputerName S2D01;

Execute the script below to start diskspd tests on VM03 and VM04. The tests will run for around 20 minutes.

# VM names

$VMNames = ('VM03', 'VM04');

# Create credentials for local account

$LocalPassword = "P@ssw0rd123!";

$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;

$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)

{

# Waiting for VM to be running

while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")

{

Sleep -Seconds 1

}

# Create PS session to the VM

$VMSession = New-PSSession -VMName $VMName `

-Credential $LocalCreds;

# Run diskspd inside the VM

Invoke-Command -Session $VMSession `

-ScriptBlock {

# Kill diskspd process if exists

Stop-Process -Name diskspd `

-ErrorAction SilentlyContinue `

-WarningAction SilentlyContinue;

# Start Diskspd tests

$diskspdarguments = "-r -w30 -d1200 -W120 -b8k -t2 -o6 -h -L -Z1M -c64G T:\testfile.dat"

Start-Process `

-FilePath "C:\test\diskspd.exe" `

-ArgumentList $diskspdarguments `

-NoNewWindow `

-PassThru `

-ErrorAction Stop | Out-Null

} `

-ArgumentList $VMName `

-ErrorAction Stop;

# Remove PSSession

Remove-PSSession -Session $VMSession;

};

After several minutes, executing the command below on the Management node will show that both VMs get around 200 IOPS for their HDD01 VHDX file.

# Get all current flows - Virtual machine name, Hyper-V host name, IOPs, and VHD file name, sorted by IOPS.

Get-StorageQosFlow -CimSession S2D-CLU | `

Sort-Object StorageNodeIOP -Descending |`

ft InitiatorName, `

@{Expression={$_.InitiatorNodeName.Substring(0,$_.InitiatorNodeName.IndexOf('.'))};Label="InitiatorNodeName"}, `

StorageNodeIOPs, `

Status, `

@{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `

-AutoSize

If more VM VHDs are added to that policy, each one's share of IOPS will be smaller, as the total is spread between all of them.

We can also get the flow information for VM03.

# Get current flows for VM03

Get-StorageQosFlow -InitiatorName VM03 `

-CimSession S2d-CLU | Format-List;

Volume information is also shown.

# View volume performance data

Get-StorageQosVolume -CimSession S2D-CLU | Format-List

If you want to stop the storage load tests on VM03 and VM04, connect with a PowerShell session to S2D01.

Enter-PSSession -ComputerName S2D01;

And execute the code below.

# VM names

$VMNames = ('VM03', 'VM04');

# Create credentials for local account

$LocalPassword = "P@ssw0rd123!";

$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;

$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)

{

# Waiting for VM to be running

while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")

{

Sleep -Seconds 1

}

# Create PS session to the VM

$VMSession = New-PSSession -VMName $VMName `

-Credential $LocalCreds;

# Stop diskspd inside the VM

Invoke-Command -Session $VMSession `

-ScriptBlock {

# Kill diskspd process if exists

Stop-Process -Name diskspd `

-ErrorAction SilentlyContinue `

-WarningAction SilentlyContinue;

} `

-ArgumentList $VMName `

-ErrorAction Stop;

# Remove PSSession

Remove-PSSession -Session $VMSession;

};

Storage Spaces Direct Health

The Health Service is a new feature in Windows Server 2016 that improves the day-to-day monitoring, operations, and maintenance experience of cluster resources on a Storage Spaces Direct cluster. It is enabled by default with Storage Spaces Direct. No additional action is required to set it up or start it. The Health Service reduces the work required to get live performance and capacity information from your Storage Spaces Direct cluster.

Get Storage Spaces Direct Metrics

One new cmdlet provides a curated list of essential metrics, which are collected efficiently and aggregated dynamically across nodes, with built-in logic to detect cluster membership. All values are real-time and point-in-time only. In Windows Server 2016, the Health Service provides the following metrics:

  • IOPS (Read, Write, Total)
  • IO Throughput (Read, Write, Total)
  • IO Latency (Read, Write)
  • Physical Capacity (Total, Remaining)
  • Pool Capacity (Total, Remaining)
  • Volume Capacity (Total, Remaining)
  • CPU Utilization %, All Machines Average
  • Memory, All Machines (Total, Available)

On the management node execute the command below to get Storage Spaces Direct cluster metrics:

# Get Storage Spaces Direct metrics

Get-StorageSubSystem -FriendlyName clus* `

-CimSession S2D-CLU | `

Get-StorageHealthReport -Count 1 `

-CimSession S2d-CLU;

The -Count parameter indicates how many sets of values to return, at one-second intervals.

You can also get metrics for one specific volume.

# Get Storage Spaces Direct metrics for Volume

Get-Volume -FileSystemLabel Volume1 `

-CimSession S2D-CLU | `

Get-StorageHealthReport -Count 1 `

-CimSession S2D-CLU;

Or for a specific node.

# Get Storage Spaces Direct metrics for node S2D01

Get-StorageNode -CimSession S2D-CLU | `

Where-Object {$_.Name -like "S2D01*"} | `

Get-StorageHealthReport -Count 1 `

-CimSession S2D-CLU;

Note that the metrics returned in each case will be the subset applicable to that scope.

The notion of available capacity in Storage Spaces is nuanced. To help you plan effectively, the Health Service provides six distinct metrics for capacity. Here is what each represents:

  • Physical Capacity Total (CapacityPhysicalTotal) - The sum of the raw capacity of all physical storage devices managed by the cluster.
  • Physical Capacity Available (CapacityPhysicalUnpooled) - The physical capacity which is not in any non-primordial storage pool.
  • Pool Capacity Total (CapacityPhysicalPooledTotal) - The amount of raw capacity in storage pools.
  • Pool Capacity Available (CapacityPhysicalPooledAvailable) - The pool capacity which is not allocated to the footprint of volumes.
  • Volume Capacity Total (CapacityVolumesTotal) - The total usable ("inside") capacity of existing volumes.
  • Volume Capacity Available (CapacityVolumesAvailable) - The amount of additional data which can be stored in existing volumes.

The following diagram illustrates the relationship between these quantities.

Get Storage Spaces Direct Faults

The Health Service constantly monitors your Storage Spaces Direct cluster to detect problems and generate "Faults". One new cmdlet displays any current Faults, allowing you to easily verify the health of your deployment without looking at every entity or feature in turn. Faults are designed to be precise, easy to understand, and actionable.

Each Fault contains five important fields:

  • Severity
  • Description of the problem
  • Recommended next step(s) to address the problem
  • Identifying information for the faulting entity
  • Its physical location (if applicable)

On the Hyper-V host, execute the following commands to detach a VHD from the S2D01 VM and simulate a failure.

# Get, save, and list HDD drive information

$RemovedHDD = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"}

$RemovedHDD

# Detach Hard drive from VM

Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive

The next command returns any Faults which affect the overall Storage Spaces Direct cluster. Most often, these Faults relate to hardware or configuration. If there are no Faults, the cmdlet will return nothing.

On the Management node execute the command below to get Storage Spaces Direct Cluster Faults:

# Return S2D Cluster Faults

Get-StorageSubSystem -FriendlyName clus* `

-CimSession S2D-CLU | `

Debug-StorageSubSystem -CimSession S2D-CLU;

We can also get Faults for a specific volume.

# Return S2D Volume Faults

Get-Volume -FileSystemLabel Volume1 `

-CimSession S2D-CLU | `

Debug-Volume -CimSession S2D-CLU;

Faults can be retrieved for file shares as well, but in our case we do not have any.

# Return S2D Share Faults

Get-FileShare -Name <File Share Name> `

-CimSession S2D-CLU | `

Debug-FileShare -CimSession S2D-CLU

Most often, Volume or File share faults relate to data resiliency or features like Storage QoS or Storage Replica.

In Windows Server 2016, the Health Service provides the following Fault coverage:

  • Essential cluster hardware:
    • Node down, quarantined, or isolated
    • Node network adapter failure, disabled, or disconnected
    • Node missing one or more cluster networks
    • Node temperature sensor
  • Essential storage hardware:
    • Physical disk media failure, lost connectivity, or unresponsive
    • Storage enclosure lost connectivity
    • Storage enclosure fan failure or power supply failure
    • Storage enclosure current, voltage, or temperature sensors triggered
  • The Storage Spaces software stack:
    • Storage pool unrecognized metadata
    • Data not fully resilient, or detached
    • Volume low capacity - Indicates the volume has reached 80% full (minor severity) or 90% full (major severity).
  • Storage Quality of Service (Storage QoS)
    • Storage QoS malformed policy
    • Storage QoS policy breach - Indicates some .vhd(s) on the volume have not met their Minimum IOPS for over 10% (minor), 30% (major), or 50% (critical) of rolling 24-hour window.
  • Storage Replica
    • Replication failed to sync, write, start, or stop
    • Target or source replication group failure or lost communication
    • Unable to meet configured recovery point objective
    • Log or metadata corruption
  • Health Service
    • Any issues with automation, described in later sections
    • Quarantined physical disk device

The health of storage enclosure components such as fans, power supplies, and sensors is derived from SCSI Enclosure Services (SES). If your vendor does not provide this information, the Health Service cannot display it.

The Health Service can assess the potential causality among faulting entities to identify and combine faults which are consequences of the same underlying problem. By recognizing chains of effect, this makes for less chatty reporting. For now, this functionality is limited to nodes, enclosures, and physical disks in the event of lost connectivity.

For example, if an enclosure has lost connectivity, it follows that those physical disk devices within the enclosure will also be without connectivity. Therefore, only one Fault will be raised for the root cause - in this case, the enclosure.

Attach the VHD back to the S2D01 node by executing the command below on the Hyper-V host:

# Attach Hard drives

Add-VMHardDiskDrive -VMName $RemovedHDD.VMName `

-ControllerType SCSI `

-ControllerNumber $RemovedHDD.ControllerNumber `

-ControllerLocation $RemovedHDD.ControllerLocation `

-Path $RemovedHDD.Path

Get Storage Spaces Direct Health Actions

Actions are workflows which are automated by the Health Service. To verify that an action is indeed being taken autonomously, or to track its progress or outcome, the Health Service generates "Actions". Unlike logs, Actions disappear shortly after they have completed, and are intended primarily to provide insight into ongoing activity which may impact performance or capacity (e.g. restoring resiliency or rebalancing data).

Execute the command below on the Management node.

# Get S2D Health Actions

Get-StorageHealthAction -CimSession S2D-CLU

In Windows Server 2016, the Get-StorageHealthAction cmdlet can return any of the following information:

  • Retiring failed, lost connectivity, or unresponsive physical disk
  • Switching storage pool to use replacement physical disk
  • Restoring full resiliency to data
  • Rebalancing storage pool

Disaster Recovery with Storage Replica

Storage Replica can be deployed in 3 configurations:

  • Stretch Cluster - allows configuration of computers and storage in a single cluster, where some nodes share one set of asymmetric storage and some nodes share another, then synchronously or asynchronously replicate with site awareness. This scenario can utilize Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs. It is managed with PowerShell and the Failover Cluster Manager graphical tool, and allows for automated workload failover.

  • Cluster to Cluster - allows replication between two separate clusters, where one cluster synchronously or asynchronously replicates with another cluster. This scenario can utilize Storage Spaces Direct, Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs. It is managed with PowerShell and Azure Site Recovery, and requires manual intervention for failover. This will be the scenario used in our example.

  • Server to server - allows synchronous and asynchronous replication between two standalone servers, using Storage Spaces with shared SAS storage, SAN and iSCSI-attached LUNs, and local drives. It is managed with PowerShell and the Server Manager Tool, and requires manual intervention for failover.

For our scenario, we will create another Storage Spaces Direct Cluster, enable Storage Replica replication between those S2D clusters and failover our VMs.

Setup Prerequisites

Create VMs and deploy Windows Server 2016 Servers

Create 4 more VMs. These VMs will serve as Storage Spaces Direct nodes in a new cluster. You can use the script below by entering the additional details needed when prompted:

# Executed on the Hyper-V host

# VM names

$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');

# Prompt for vSwitch Name

$vSwitchName = Read-Host `

-Prompt 'Enter vSwitchName.';

# Prompt for storage path where the VMs will be stored

$StoragePath = Read-Host `

-Prompt "Enter storage Path. Example 'C:\ClusterStorage\Volume1'";

Foreach($VMName in $VMNames )

{

# Create VM

New-VM -Name $VMName `

-Path "$StoragePath\$VMName" `

-Generation 2 `

-SwitchName $vSwitchName.ToString() `

-MemoryStartupBytes 16GB ;

# Set Proc number

Set-VM -Name $VMName `

-ProcessorCount 4;

# Create OS VHDx

New-VHD -Dynamic `

-SizeBytes 127GB `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Add VHDx to VM

Add-VMHardDiskDrive -VMName $VMName `

-Path "$StoragePath\$VMName\OSDisk.vhdx";

# Add DVD drive without configuration

Add-VMDvdDrive -VMName $VMName;

};

The VMs will be created without an OS installed. Use the Windows Server 2016 RTM ISO to install Windows Server 2016 Datacenter edition with either the Desktop Experience or Server Core option. It is recommended to create a sysprepped VHDX image and use it for easier deployment.
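
For example, a minimal sketch of attaching the installation ISO to the DVD drive created above so setup can be started on each VM; the ISO path is an assumption:

# Execute on the Hyper-V host - attach the installation ISO to each VM (ISO path is illustrative)

$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');

Foreach($VMName in $VMNames){ Set-VMDvdDrive -VMName $VMName -Path "D:\ISO\WindowsServer2016.iso"; };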

If you need to set a VLAN ID on these VMs, use the cmdlet below:

# Set Vlan on a VM

Set-VMNetworkAdapterVlan -VMName <VMName> `

-VlanId <VlanID> `

-Access;

Configure IP Settings

In this step, you need to configure IP settings on the Storage Spaces Direct servers. You will set up a static IP address, mask, gateway, and DNS server. Log in on each server through the Virtual Machine Connection console with the local administrator account created during deployment of Windows Server 2016. Execute the next commands on each Storage Spaces Direct node.

# Execute on Storage Spaces Direct servers

# Get Available interfaces

Get-NetIPAddress `

-ErrorAction Stop | select InterfaceAlias, InterfaceIndex, IPAddress;

# Prompt for the interface alias to configure

$Interface = Read-Host `

-Prompt "Enter Interface Alias to configure:";

# Remove Interface configuration

Remove-NetIPAddress -InterfaceAlias $Interface.ToString() `

-Confirm:$false ;

# Prompt for new IP Address. Example: 192.168.1.20

$IPAddress = Read-Host `

-Prompt "Enter new IP address for $($Interface.ToString())";

# Prompt for IP Address Prefix (mask). Example: 24

$IPAddressPrefix = Read-Host `

-Prompt "Enter IP address prefix for $($Interface.ToString())";

# Prompt for Gateway IP Address. Example: 192.168.1.1

$Gateway = Read-Host `

-Prompt "Enter Gateway for $($Interface.ToString())";

# Prompt for DNS IP Address. Example: 192.168.1.4

$DNSServer = Read-Host `

-Prompt "Enter DNS Server for $($Interface.ToString())";

# Configure IP settings on interface

New-NetIPAddress -InterfaceAlias $Interface.ToString() `

-IPAddress $IPAddress `

-PrefixLength $IPAddressPrefix `

-DefaultGateway $Gateway;

# Set DNS

Set-DnsClientServerAddress -InterfaceAlias $Interface.ToString() `

-ServerAddresses $DNSServer;

Set Time Zone

Setting the correct time zone is good practice to avoid any issues. Use the command below to set the correct time zone in your case:

# List Time Zones

tzutil /l | more

# Set Time Zone

tzutil /s "<your time zone>"

Join servers to domain

Log in on each server through the Virtual Machine Connection console with the local administrator account created during deployment of Windows Server 2016, and join each of the Storage Spaces Direct nodes to the domain by using the PowerShell commands below. When prompted for credentials, provide a domain user account that has privileges to join servers to the domain. The servers will reboot automatically when joined to the domain.

# Execute on Storage Spaces Direct servers

# Prompt for new computer name

$NewName = Read-Host `

-Prompt 'Enter new Computer Name';

# Prompt for domain name. Example: contoso.local

$DomainName = Read-Host `

-Prompt 'Enter domain Name';

# Join computer to domain and restart

Add-Computer -DomainName $DomainName `

-Force `

-Restart `

-NewName $NewName;

Add Domain Group to Local Administrators

In order to be able to log in on all servers with a PowerShell session, a domain group needs to be added to the local Administrators group on all servers. That domain group will give administrator access on the Storage Spaces Direct nodes. Log in on each server through the Virtual Machine Connection console with the local administrator account created during deployment of Windows Server 2016. Execute the PowerShell code on each server to add the domain group to the local Administrators group:

# Execute on Storage Spaces Direct servers

# Prompt for domain group

$DomainGroup = Read-Host `

-Prompt 'Enter domain group for local administrators on the server in the following format <Domain\Account>';

# Add domain group to local administrators group

Net localgroup Administrators $DomainGroup.ToString() /add

Enable Nested Virtualization

In order to be able to install Hyper-V on the Storage Spaces Direct VMs and test the hyper-converged scenario, the VMs need to be enabled for nested virtualization. Also, turning on MAC address spoofing is needed for creating a vSwitch on each of the VMs. Use the code below to stop the VMs, enable nested virtualization and MAC address spoofing, and start them again.

# Execute on Hyper-V host

# VM names

$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');

Foreach($VMName in $VMNames )

{

# Shut Down the VM

Stop-VM -Name $VMName;

# Enable Nested Virtualization

Set-VMProcessor -VMName $VMName `

-ExposeVirtualizationExtensions $true;

# Enable MAC Address Spoofing

Get-VMNetworkAdapter -VMName $VMName | Set-VMNetworkAdapter -MacAddressSpoofing On;

# Start VM

Start-VM -Name $VMName;

};

Add Disks

In this step, we will add 5 disks (vhdx) to each Storage Spaces Direct server. Execute the script below on the Hyper-V host to create the disks on the servers:

# Execute on Hyper-V host

$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');

Foreach ($VMName in $VMNames){

# SSD Disk Names

$SSDDiskNames = ("SSD01.vhdx", "SSD02.vhdx");

# Create and attach SSD disks

foreach ($SSDDiskName in $SSDDiskNames )

{

$diskName = $SSDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 256;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

# HDD Disk Names

$HDDDiskNames = ("HDD01.vhdx", "HDD02.vhdx", "HDD03.vhdx");

# Create and attach HDD disks

foreach ($HDDDiskName in $HDDDiskNames )

{

$diskName = $HDDDiskName;

# Get the VM

$VM = Get-VM -Name $VMName;

# Get VM Location

$VMLocation = $VM.Path;

# Set Disk Size in GB

$Disksize = 512;

$DisksizeinBytes = $Disksize*1024*1024*1024;

# Create Disk

$VHD = New-VHD -Path "$VMLocation\$diskName" `

-Dynamic `

-SizeBytes $DisksizeinBytes;

# Attach the disk

$AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `

-Path "$VMLocation\$diskName" `

-ControllerType SCSI `

-ControllerNumber 0;

};

};

Update Windows Server 2016

Before proceeding further, make sure you have the latest Windows Server 2016 updates installed. If you were using an image that already has all the updates applied, you can skip this step. Use the script below to apply updates with Windows Update. This script requires that the servers have a direct Internet connection.

# Execute on Storage Spaces Direct Servers

# Check for Available Updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `

-ClassName MSFT_WUOperationsSession;

# Scan for Updates

$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `

-Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true};

# Show Updates found for install

$result.Updates;

# Initiate Update and restart

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `

-ClassName MSFT_WUOperationsSession;

# Apply Updates

Invoke-CimMethod -InputObject $ci `

-MethodName ApplyApplicableUpdates;

# Restart Server

Restart-Computer; exit;

# Show Installed Updates

$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `

-ClassName MSFT_WUOperationsSession;

$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `

-Arguments @{SearchCriteria="IsInstalled=1";OnlineScan=$true};

$result.Updates;

Install Roles

The hyper-converged deployment requires the File Services, Failover Clustering, and Hyper-V roles to be installed. Use the script below to install them. After installing Hyper-V a reboot is required and will be performed automatically.

Install-WindowsFeature -Name File-Services

Install-WindowsFeature -Name Failover-clustering -IncludeManagementTools

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Create Hyper-V virtual switch

Storage Spaces Direct incorporates storage and compute on the same node, so a Hyper-V vSwitch needs to be created. Execute the steps below to create a vSwitch on each of the Storage Spaces Direct nodes. You can connect to each node by establishing a PowerShell session to it. For example:

Enter-PSSession -ComputerName S2D11

Create Hyper-V Switch.

# Create the virtual switch connected to the adapters of your choice,

# and enable the Switch Embedded Teaming (SET). You may notice a message that

# your PSSession lost connection. This is expected and your session will reconnect

New-VMSwitch -Name SETswitch `

-NetAdapterName "Ethernet" `

-EnableEmbeddedTeaming $true;

Create Cluster

You can proceed with cluster creation. When you are creating the cluster, you will need to provide an IP address for the cluster name:

$nodes = ("S2D11", "S2D12", "S2D13", "S2D14");

# Create Cluster with no storage

New-Cluster -Name S2D-CLU2 `

-Node $nodes `

-NoStorage `

-StaticAddress <cluster IP Address>;

Setup Quorum

On the Management node execute the following command to set Cloud Witness by changing the values that reflect to your setup:

# Set Cloud Witness

Set-ClusterQuorum -CloudWitness `

-AccountName <StorageAccountName> `

-AccessKey <StorageAccountAccessKey> `

-Cluster S2D-CLU2;

In case you do not have access to Azure Subscription you can configure file share on the Management node and configure File Share Witness.

# Set File Share Witness

Set-ClusterQuorum -FileShareWitness <File Share Witness Path> -Cluster S2D-CLU2;

Note: Permissions may need to be set on the file share.
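
For example, a minimal sketch of creating such a share on the Management node and granting the second cluster's computer account full access; the share name, path, and the contoso domain are assumptions for illustration:

# Create the witness folder and share it, granting the S2D-CLU2 cluster name object full access

New-Item -Path "C:\FSW" -ItemType Directory -Force | Out-Null;

New-SmbShare -Name "FSW" -Path "C:\FSW" -FullAccess 'contoso\S2D-CLU2$';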

Enable Storage Spaces Direct

When the cluster is created, Storage Spaces Direct can be enabled. This is done through a single command which you can execute from the Management node remotely against the cluster:

# Enable Storage Spaces Direct

Enable-ClusterStorageSpacesDirect -CimSession S2D-CLU2 `

-Confirm:$false;

By default, Storage Spaces Direct will create tier(s) based on the physical disks that are on the nodes. In our case, we will remove those and create new ones in a later step.

# Remove all Storage Tiers

Get-StorageTier -CimSession S2D-CLU2 | Remove-StorageTier -Confirm:$false `

-CimSession S2D-CLU2;

Change Media Type

In order to create two different storage tiers, we will change the Media Type from HDD to SSD for the physical disks that are below 300 GB. All commands are executed on the Management node remotely against the cluster.

# Change Media Type

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU2 | `

Get-PhysicalDisk -CimSession S2D-CLU2 | `

? Size -lt 300GB | `

Set-PhysicalDisk -CimSession S2D-CLU2 `

-MediaType SSD;

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU2 | `

Get-PhysicalDisk -CimSession S2D-CLU2 | `

? Size -gt 300GB | `

Set-PhysicalDisk -CimSession S2D-CLU2 `

-MediaType HDD;

Create Storage Tiers

In this step, we will create two tiers, one called Performance and the other Capacity. Capacity will have Parity resiliency and Performance will have Mirror. Performance will be made of disks with Media Type SSD and Capacity of disks with Media Type HDD.

# Create Tiers

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU2 | `

New-StorageTier -FriendlyName Performance `

-MediaType SSD `

-ResiliencySettingName Mirror `

-CimSession S2D-CLU2;

Get-StoragePool -FriendlyName "S2D*" `

-CimSession S2D-CLU2 | `

New-StorageTier -FriendlyName Capacity `

-MediaType HDD `

-ResiliencySettingName Parity `

-CimSession S2D-CLU2;

Create Volume

The last step of the Storage Spaces Direct setup is to create a volume based on the created storage tiers from the previous step.

# Create Volume

New-Volume -StoragePoolFriendlyName "S2D*" `

-FriendlyName Volume1 `

-FileSystem CSVFS_ReFS `

-StorageTierfriendlyNames Capacity,Performance `

-StorageTierSizes 1500GB,500GB `

-CimSession S2D-CLU2;

Setup Storage Replica on Storage Spaces Direct Clusters

In this section, we will set Storage Replica between cluster S2D-CLU and S2D-CLU2.

Install Storage Replica Role

Install the Storage Replica feature on every node of the S2D-CLU and S2D-CLU2 clusters by executing the commands below on the Management node.

# Install Storage Replica

$S2DServers = 'S2D01','S2D02','S2D03','S2D04','S2D11','S2D12','S2D13','S2D14'

$S2DServers | ForEach `

{

Install-WindowsFeature -ComputerName $_ `

-Name Storage-Replica `

-IncludeManagementTools `

-Restart;

# Wait for restart before proceeding to next node

Start-Sleep -Seconds 120

};

Create Volumes for Storage Replica Logs

Storage Replica relies on storing its logs on dedicated volumes. Create volumes for the Storage Replica logs on both clusters by executing the commands below on the Management node:

# Create Volume for Storage Replica Logs on S2D-CLU

New-Volume -StoragePoolFriendlyName "S2D*" `

-FriendlyName ReplicaLogs `

-FileSystem ReFS `

-StorageTierFriendlyNames Performance `

-StorageTierSizes 100GB `

-DriveLetter L `

-CimSession S2D-CLU;

# Create Volume for Storage Replica Logs on S2D-CLU2

New-Volume -StoragePoolFriendlyName "S2D*" `

-FriendlyName ReplicaLogs `

-FileSystem ReFS `

-StorageTierfriendlyNames Performance `

-DriveLetter L `

-StorageTierSizes 100GB `

-CimSession S2D-CLU2;

Notice that the file system for these volumes is ReFS, unlike Volume1 which was CSVFS_ReFS. Also, Storage Replica logs should be located on fast SSDs.

Test Storage Replica Setup

Before creating the replication between the two clusters, we can do initial tests to verify that all requirements are met.

Create a temporary cluster group on both clusters with the ReplicaLogs volume attached by executing the commands below on the Management node.

# Create Empty Cluster Role
Add-ClusterGroup -Cluster S2D-CLU `
                 -Name ReplicaLogs;

# Move ReplicaLogs volume to the cluster Group
Move-ClusterResource -Cluster S2D-CLU `
                     –Name "Cluster Virtual Disk (ReplicaLogs)" `
                     –Group ReplicaLogs;

# Move the role to S2D01
Move-ClusterGroup -Cluster S2D-CLU `
                  -Name ReplicaLogs `
                  -Node S2D01;

# Create Empty Cluster Role
Add-ClusterGroup -Cluster S2D-CLU2 `
                 -Name ReplicaLogs;

# Move ReplicaLogs volume to the cluster Group
Move-ClusterResource -Cluster S2D-CLU2 `
                     –Name "Cluster Virtual Disk (ReplicaLogs)" `
                     –Group ReplicaLogs;

# Move the role to S2D11
Move-ClusterGroup -Cluster S2D-CLU2 `
                  -Name ReplicaLogs `
                  -Node S2D11;

Connect to node S2D01 with Virtual Machine Connection from the Hyper-V host. Log in with domain credentials that have administrator rights on node S2D01.
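Alternatively, a PowerShell Direct session from the Hyper-V host can be used instead of the Virtual Machine Connection console; the sketch below assumes you supply credentials that are valid inside the guest.

# Optional: open a PowerShell Direct session to S2D01 from the Hyper-V host
$GuestCreds = Get-Credential;
Enter-PSSession -VMName S2D01 `
                -Credential $GuestCreds;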

Execute the commands below in PowerShell on S2D01.

# Test Storage Replica
Test-SRTopology -SourceComputerName "S2D01" `
                -SourceVolumeName "C:\ClusterStorage\Volume1" `
                -SourceLogVolumeName "L:\" `
                -DestinationComputerName "S2D11" `
                -DestinationVolumeName "C:\ClusterStorage\Volume1" `
                -DestinationLogVolumeName "L:\" `
                -DurationInMinutes 1 `
                -ResultPath "C:\ClusterStorage\Volume1" `
                -IgnorePerfTests;

You can test Storage Replica live on production servers and get performance data by removing the -IgnorePerfTests switch and providing a higher value for -DurationInMinutes. Such tests are recommended when you want to enable Storage Replica on production data where the source and destination endpoints are located in different sites or racks. DISKSPD can also be used to simulate workload instead of using production workloads.
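For illustration, such a performance run (shown here with an arbitrary 30-minute duration) is the same command without the switch and with a longer duration:

# Test Storage Replica with performance data (example duration only)
Test-SRTopology -SourceComputerName "S2D01" `
                -SourceVolumeName "C:\ClusterStorage\Volume1" `
                -SourceLogVolumeName "L:\" `
                -DestinationComputerName "S2D11" `
                -DestinationVolumeName "C:\ClusterStorage\Volume1" `
                -DestinationLogVolumeName "L:\" `
                -DurationInMinutes 30 `
                -ResultPath "C:\ClusterStorage\Volume1";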

When the tests are complete, open the report from \\S2D01\C$\ClusterStorage\Volume1.

If all tests are successful, remove the temporary cluster groups by executing the commands below on the Management node.

# Remove Cluster Group ReplicaLogs
Remove-ClusterGroup -Cluster S2D-CLU `
                    -Name ReplicaLogs `
                    -RemoveResources `
                    -Force `
                    -Confirm:$false;

# Remove Cluster Group ReplicaLogs
Remove-ClusterGroup -Cluster S2D-CLU2 `
                    -Name ReplicaLogs `
                    -RemoveResources `
                    -Force `
                    -Confirm:$false;

Create Storage Replication between Storage Spaces Direct Clusters

First, we need to grant permissions between the two clusters. Grant permissions by executing the commands below on the Management node.

# Grant S2D-CLU cluster full access to S2D-CLU2 cluster
Grant-SRAccess -Cluster S2D-CLU2 `
               -ComputerName S2D01;

# Grant S2D-CLU2 cluster full access to S2D-CLU cluster
Grant-SRAccess -Cluster S2D-CLU `
               -ComputerName S2D11;

Now we can set up the replication.

# Configure the cluster-to-cluster replication
New-SRPartnership -SourceComputerName S2D-CLU `
                  -SourceRGName ReplicaGroupSource `
                  -SourceVolumeName "c:\ClusterStorage\Volume1" `
                  -SourceLogVolumeName L: `
                  -DestinationComputerName S2D-CLU2 `
                  -DestinationRGName ReplicaGroupDest `
                  -DestinationVolumeName "c:\ClusterStorage\Volume1" `
                  -DestinationLogVolumeName L:;

After creation, we can get replication information for the Source and Destination groups.

# Get Source And Destination Group state
Get-SRGroup -SourceComputerName S2D-CLU `
            -SourceRGName ReplicaGroupSource `
            -DestinationComputerName S2D-CLU2 `
            -DestinationRGName ReplicaGroupDest;

We can also get information about the replicas.

# Get Source And Destination replicas state
(Get-SRGroup -SourceComputerName S2D-CLU `
             -SourceRGName ReplicaGroupSource `
             -DestinationComputerName S2D-CLU2 `
             -DestinationRGName ReplicaGroupDest).Replicas;

We can also query just the remaining replication bytes to see when the replication will complete.

# Get remaining replicating bytes
(Get-SRGroup -SourceComputerName S2D-CLU `
             -SourceRGName ReplicaGroupSource `
             -DestinationComputerName S2D-CLU2 `
             -DestinationRGName ReplicaGroupDest).Replicas | Select-Object NumOfBytesRemaining;

When replication is done, there should be no bytes remaining in replication.
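If you prefer to wait for the initial synchronization from a script, a simple polling loop over the same query (a sketch only) could look like this:

# Wait until no bytes remain to replicate
While (((Get-SRGroup -SourceComputerName S2D-CLU `
                     -SourceRGName ReplicaGroupSource `
                     -DestinationComputerName S2D-CLU2 `
                     -DestinationRGName ReplicaGroupDest).Replicas | `
            Measure-Object -Property NumOfBytesRemaining -Sum).Sum -gt 0)
{
    Sleep -Seconds 10;
};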

Failover with Storage Replica

In a cluster-to-cluster replication scenario, there is no out-of-the-box automatic failover of the virtual machines, but PowerShell can be used to automate the failover.

Azure Site Recovery will also provide orchestration and automation of Storage Replica in a cluster-to-cluster scenario.

Failover

In this scenario, we will shut down the VMs on the first cluster, reverse the replication, import their configurations on the second cluster, and start them there. Execute the script below on the Management node to perform failover from S2D-CLU to S2D-CLU2.

# Move Replication
# Get VM configuration and shutdown VMs
$ClusterNodes = Get-ClusterNode -Cluster S2D-CLU | select Name;
$VMs = @();
foreach ($ClusterNode in $ClusterNodes)
{
    $VMs += Get-VM -CimSession $ClusterNode.Name;
    Get-VM -CimSession $ClusterNode.Name | Where-Object {$_.State -eq "Running" } | `
        Foreach-Object { Stop-VM $_.Name -CimSession $ClusterNode.Name; };
};

# Move replication
Set-SRPartnership -NewSourceComputerName S2D-CLU2 `
                  -SourceRGName ReplicaGroupDest `
                  -DestinationComputerName S2D-CLU `
                  -DestinationRGName ReplicaGroupSource `
                  -Confirm:$false;

# Wait for disk to come online
While ((Get-ClusterSharedVolume -Cluster S2D-CLU2 | where {$_.Name -like "*Volume1*"}).State -ne "Online")
{
    Sleep -Seconds 1;
};

# Import VMs
foreach ($VM in $VMs)
{
    Try
    {
        $VMConf = Invoke-Command -ComputerName S2D11 `
                                 -ScriptBlock { Get-Childitem $args[0] -Recurse *.vmcx } `
                                 -ArgumentList $VM.Path;
        Import-VM -Path $VMConf.FullName `
                  -Register `
                  -CimSession S2D11;
        Start-Sleep 5;
    }
    Catch
    {
        Write-Warning -Message "VM is already imported.";
    };
};

# Start VMs
$ClusterNodes2 = Get-ClusterNode -Cluster S2D-CLU2 | select Name;
Foreach ($ClusterNode2 in $ClusterNodes2)
{
    # Add VM to Cluster
    Get-VM -CimSession $ClusterNode2.Name | `
        Foreach-Object { Add-ClusterVirtualMachineRole -VMName $_.Name `
                                                       -Cluster S2D-CLU2 `
                                                       -WarningAction SilentlyContinue};

    Get-VM -CimSession $ClusterNode2.Name | `
        Foreach-Object { Start-VM $_.Name -CimSession $ClusterNode2.Name; };
};

If the failover is successful, the VMs should now be running on the S2D-CLU2 cluster.
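To confirm this from the Management node, a quick check such as the sketch below lists the running VMs on the S2D-CLU2 nodes:

# List running VMs on the S2D-CLU2 nodes (illustrative check)
Get-ClusterNode -Cluster S2D-CLU2 | `
    Foreach-Object { Get-VM -CimSession $_.Name | `
                     Where-Object { $_.State -eq "Running" } | `
                     Select-Object ComputerName, Name, State };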

Failback

Failback is done by shutting down the VMs on S2D-CLU2, reversing the replication, and starting them back on S2D-CLU. Execute the code below on the Management node to perform the failback.

# Shutdown VMs
$ClusterNodes = Get-ClusterNode -Cluster S2D-CLU2 | select Name;
foreach ($ClusterNode in $ClusterNodes)
{
    Get-VM -CimSession $ClusterNode.Name | Where-Object {$_.State -eq "Running" } | `
        Foreach-Object { Stop-VM $_.Name -CimSession $ClusterNode.Name; };
};

# Move Replication
Set-SRPartnership -NewSourceComputerName S2D-CLU `
                  -SourceRGName ReplicaGroupSource `
                  -DestinationComputerName S2D-CLU2 `
                  -DestinationRGName ReplicaGroupDest `
                  -Confirm:$false;

# Wait for disk to come online
While ((Get-ClusterSharedVolume -Cluster S2D-CLU | where {$_.Name -like "*Volume1*"}).State -ne "Online")
{
    Sleep -Seconds 1;
};

# Start VMs
$ClusterNodes = Get-ClusterNode -Cluster S2D-CLU | select Name;
Foreach ($ClusterNode in $ClusterNodes)
{
    Get-VM -CimSession $ClusterNode.Name | `
        Foreach-Object { Start-VM $_.Name -CimSession $ClusterNode.Name; };
};

After that, the VMs should be running again on S2D-CLU.

References

Storage Spaces Direct Documentation

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview

Hyper-converged solution using Storage Spaces Direct in Windows Server 2016

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/hyper-converged-solution-using-storage-spaces-direct

Storage Spaces Direct Configurations

https://blogs.technet.microsoft.com/filecab/2016/04/28/s2dtp5nvmessdhdd/

Configuring Storage Spaces with a NVDIMM-N write-back cache

https://msdn.microsoft.com/en-us/library/mt650885.aspx

Testing Storage Spaces Direct using Windows Server 2016 virtual machines

https://blogs.msdn.microsoft.com/clustering/2015/05/27/testing-storage-spaces-direct-using-windows-server-2016-virtual-machines/

Run Hyper-V in a Virtual Machine with Nested Virtualization

https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/user_guide/nesting?f=255&MSPPError=-2147217396

Windows ADK

https://developer.microsoft.com/en-us/windows/hardware/windows-assessment-deployment-kit#winADK

Health Service in Windows Server 2016

https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/health-service-overview

Storage Replica overview

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/storage-replica-overview?f=255&MSPPError=-2147217396

Cluster to cluster Storage Replication

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-replica/cluster-to-cluster-storage-replication

Storage Quality of Service

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-qos/storage-qos-overview

Storage IOPS update with Storage Spaces Direct

https://blogs.technet.microsoft.com/filecab/2016/07/26/storage-iops-update-with-storage-spaces-direct/

Fault tolerance and storage efficiency in Storage Spaces Direct

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-fault-tolerance

Adding nodes or drives to Storage Spaces Direct

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/add-nodes

Storage Spaces Optimize Pool

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-optimize-pool

Fault domain awareness in Windows Server 2016

https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/fault-domains

Deploy a cloud witness for a Failover Cluster

https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/deploy-cloud-witness?f=255&MSPPError=-2147217396

Discover Storage Spaces Direct, the ultimate software-defined storage for Hyper-V

https://myignite.microsoft.com/videos/2771

Optimize your software-defined storage investment with Windows Server 2016

https://myignite.microsoft.com/videos/2757

Drill into Storage Replica in Windows Server 2016

https://myignite.microsoft.com/videos/2689


Cost-Effective Reliable Storage Scripts

2.2.1 - Deploy and Configure Storage Spaces Nodes.ps1

#region Create VMs and deploy Windows Server 2016 Servers
# Executed on the Hyper-V host
# VM names
$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');

# Prompt for vSwitch Name
$vSwitchName = Read-Host `
                        -Prompt 'Enter vSwitchName.';

# Prompt for storage path where the VMs will be stored
$StoragePath = Read-Host `
                        -Prompt "Enter storage Path. Example 'C:\ClusterStorage\Volume1'";

Foreach($VMName in $VMNames )
{
    
    # Create VM
    New-VM -Name $VMName `
           -Path "$StoragePath\$VMName" `
           -Generation 2 `
           -SwitchName $vSwitchName.ToString() `
           -MemoryStartupBytes 16GB ;
    
    # Set Proc number
    Set-VM -Name $VMName `
           -ProcessorCount 4;
    
    # Create OS VHDx
    New-VHD -Dynamic `
            -SizeBytes 127GB `
            -Path "$StoragePath\$VMName\OSDisk.vhdx";
    
    # Add VHDx to VM
    Add-VMHardDiskDrive -VMName $VMName `
                        -Path "$StoragePath\$VMName\OSDisk.vhdx";
    
    # Add DVD drive without configuration
    Add-VMDvdDrive -VMName $VMName;
};

# Set Vlan on a VM
Set-VMNetworkAdapterVlan -VMName  `
                         -VlanId  `
                         -Access;
#endregion

#region Configure IP Settings
# Execute on Storage Spaces Direct servers
# Get Available interfaces
Get-NetIPAddress `
        -ErrorAction Stop | select InterfaceAlias, InterfaceIndex, IPAddress;

# Prompt for new IP Address. Example: 192.168.1.20
$Interface = Read-Host `
                        -Prompt "Enter Interface Alias to configure:";

# Remove Interface configuration
Remove-NetIPAddress -InterfaceAlias $Interface.ToString() `
                    -Confirm:$false ;

# Prompt for new IP Address. Example: 192.168.1.20
$IPAddress = Read-Host `
                        -Prompt "Enter new IP address for $($Interface.ToString())";

# Prompt for IP Address Prefix (mask). Example: 24
$IPAddressPrefix = Read-Host `
                        -Prompt "Enter IP address prefix for $($Interface.ToString())";

# Prompt for Gateway IP Address. Example: 192.168.1.1
$Gateway = Read-Host `
                        -Prompt "Enter Gateway for $($Interface.ToString())";

# Prompt for DNS IP Address. Example: 192.168.1.4
$DNSServer = Read-Host `
                        -Prompt "Enter DNS Server for $($Interface.ToString())";

# Configure IP settings on interface
New-NetIPAddress –InterfaceAlias $Interface.ToString() `
                 –IPAddress $IPAddress `
                 –PrefixLength $IPAddressPrefix `
                 -DefaultGateway $Gateway;

# Set DNS
Set-DnsClientServerAddress -InterfaceAlias $Interface.ToString() `
                           -ServerAddresses $DNSServer;
#endregion

#region Set Time Zone
# List Time Zones
tzutil /l | more

# Set Time Zone
tzutil /s "" 

#endregion

#region Join servers to domain
# Execute on Storage Spaces Direct servers
# Prompt for new computer name
$NewName = Read-Host `
                        -Prompt 'Enter new Computer Name';

# Prompt for domain name. Example: contoso.local
$DomainName = Read-Host `
                        -Prompt 'Enter domain Name';

# Join computer to domain and restart
Add-Computer -DomainName $DomainName `
             -Force `
             -Restart `
             -NewName $NewName;
#endregion

#region Add Domain Group to Local Administrators
# Execute on Storage Spaces Direct servers
# Prompt for domain group
$DomainGroup = Read-Host `
                        -Prompt 'Enter domain group for local administrators on the server in the following format ';

# Add domain group to local administrators group
Net localgroup Administrators $DomainGroup.ToString() /add
#endregion

#region Enable Nested Virtualization
# Execute on Hyper-V host
# VM names
$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');
Foreach($VMName in $VMNames )
{
    # Shut Down the VM
    Stop-VM –Name $VMName;
    
    # Enable Nested Virtualization
    Set-VMProcessor -VMName $VMName `
                    -ExposeVirtualizationExtensions $true;
    
    # Enable MAC Address Spoofing
    Get-VMNetworkAdapter -VMName $VMName | Set-VMNetworkAdapter -MacAddressSpoofing On;
    
    # Start VM
    Start-VM -Name $VMName;
};
#endregion

#region Add Disks
# Execute on Hyper-V host
$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');
Foreach ($VMName in $VMNames){
   
   # SSD Disk Names
   $SSDDiskNames =  ("SSD01.vhdx", "SSD02.vhdx");

   # Create and attach SSD disks
   foreach ($SSDDiskName in $SSDDiskNames )
   {
        $diskName = $SSDDiskName;
        
        # Get the VM
        $VM = Get-VM -Name $VMName;
        
        # Get VM Location
        $VMLocation = $VM.Path;
        
        # Set Disk Size in GB
        $Disksize = 256;
        $DisksizeinBytes = $Disksize*1024*1024*1024;

        # Create Disk
        $VHD = New-VHD -Path  "$VMLocation\$diskName" `
                       -Dynamic `
                       -SizeBytes $DisksizeinBytes;
        
        # Attach the disk
        $AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `
                                               -Path             "$VMLocation\$diskName" `
                                               -ControllerType   SCSI `
                                               -ControllerNumber 0;
       
   };
   # HDD Disk Names
   $HDDDiskNames =  ("HDD01.vhdx", "HDD02.vhdx", "HDD03.vhdx");
   
   # Create and attach HDD disks
   foreach ($HDDDiskName in $HDDDiskNames )
   {
        $diskName = $HDDDiskName;

        # Get the VM
        $VM = Get-VM -Name $VMName;
        
        # Get VM Location
        $VMLocation = $VM.Path;
        
        # Set Disk Size in GB
        $Disksize = 512;
        $DisksizeinBytes = $Disksize*1024*1024*1024;
        
        # Create Disk
        $VHD = New-VHD -Path  "$VMLocation\$diskName" `
                       -Dynamic `
                       -SizeBytes $DisksizeinBytes;
        
        # Attach the disk
        $AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `
                                               -Path             "$VMLocation\$diskName" `
                                               -ControllerType   SCSI `
                                               -ControllerNumber 0;
       
   };
   
};

#endregion

2.2.2 - Configuring Prerequisites.ps1

#region Update Windows Server 2016
# Execute on Storage Spaces Direct Servers
# Check for Available Updates
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                      -ClassName MSFT_WUOperationsSession;  
# Scan for Updates
$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `
                                 -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true};

# Show Updates found for install
$result.Updates;

# Initiate Update and restart
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                      -ClassName MSFT_WUOperationsSession;
# Apply Updates
Invoke-CimMethod -InputObject $ci `
                 -MethodName ApplyApplicableUpdates;

# Restart Server
Restart-Computer; exit;

# Show Installed Updates
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                      -ClassName MSFT_WUOperationsSession;
$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `
                                 -Arguments @{SearchCriteria="IsInstalled=1";OnlineScan=$true};
$result.Updates;
#endregion

#region Install Roles
Install-WindowsFeature -Name File-Services;
Install-WindowsFeature -Name Failover-clustering -IncludeManagementTools;
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart;
#endregion

#region Configure Network QoS
# Execute on Storage Spaces Direct Servers

# Turn on DCB in case you are using RDMA RoCE type
Install-WindowsFeature Data-Center-Bridging;

# QoS policy for SMB-Direct
New-NetQosPolicy -Name "SMB" `
                 –NetDirectPortMatchCondition 445 `
                 –PriorityValue8021Action 3;

# If you are using RoCEv2 turn on Flow Control for SMB as follows (not required for iWarp)
Enable-NetQosFlowControl –Priority 3;

# Disable flow control for other traffic as follows (optional for iWarp)
Disable-NetQosFlowControl –Priority 0,1,2,4,5,6,7;

# Get a list of the network adapters to identify the target adapters (RDMA adapters) as follows
Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed;

# Apply network QoS policy to the target adapters
Enable-NetAdapterQos –InterfaceAlias "","";

# Create a Traffic class and give SMB Direct 30% of the bandwidth minimum. 
# The name of the class will be "SMB"
New-NetQosTrafficClass -Name "SMB" `
                       –Priority 3 `
                       –BandwidthPercentage 30 `
                       –Algorithm ETS;
#endregion

#region Create Hyper-V virtual switch

# Identify the network adapters
Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed;

# Create the virtual switch connected to the adapters of your choice,
# and enable the Switch Embedded Teaming (SET). You may notice a message that 
# your PSSession lost connection. This is expected and your session will reconnect
New-VMSwitch –Name SETswitch `
             –NetAdapterName "" `
             –EnableEmbeddedTeaming $true;

# Add host vNICs to the virtual switch. This configures a virtual NIC (vNIC)
# from the virtual switch that you just configured for the management OS to use
Add-VMNetworkAdapter –SwitchName SETswitch `
                     –Name SMB_1 `
                     –managementOS;

Add-VMNetworkAdapter –SwitchName SETswitch `
                     –Name SMB_2 `
                     –managementOS;

# Configure the host vNIC to use a Vlan. They can be on the same or different Vlans.
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_1" `
                         -VlanId  `
                         -Access `
                         -ManagementOS;
Set-VMNetworkAdapterVlan -VMNetworkAdapterName "SMB_2" `
                         -VlanId  `
                         -Access `
                         -ManagementOS;

# Verify that the VLANID is set
Get-VMNetworkAdapterVlan –ManagementOS;

# Restart each host vNIC adapter so that the Vlan is active.
Restart-NetAdapter -Name "vEthernet (SMB_1)";
Restart-NetAdapter -Name "vEthernet (SMB_2)";

# Enable RDMA on the host vNIC adapters
Enable-NetAdapterRDMA -Name "vEthernet (SMB_1)","vEthernet (SMB_2)";

# Associate each of the vNICs configured for RDMA to a physical adapter that is connected to the virtual switch
Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName 'SMB_1' `
                                –ManagementOS `
                                –PhysicalNetAdapterName '';
Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName 'SMB_2' `
                                –ManagementOS `
                                –PhysicalNetAdapterName '';

# Verify RDMA capabilities
Get-SmbClientNetworkInterface;

#endregion

2.2.3 - Create Cluster.ps1

#region Validate Cluster Configuration
$nodes = ("S2D01", "S2D02", "S2D03", "S2D04");

# Validate Cluster Configuration
Test-Cluster –Node $nodes `
             –Include "Storage Spaces Direct","Inventory","Network","System Configuration";
#endregion

#region Create Cluster
$nodes = ("S2D01", "S2D02", "S2D03", "S2D04");

# Create Cluster with no storage
New-Cluster –Name S2D-CLU `
            –Node $nodes  `
            –NoStorage `
            –StaticAddress ;
#endregion

#region Setup Quorum
# Set Cloud Witness
Set-ClusterQuorum -CloudWitness `
                  -AccountName  `
                  -AccessKey  `
                  -Cluster S2D-CLU;

# Set File Share Witness
Set-ClusterQuorum –FileShareWitness 
#endregion

2.2.4 - Setup Storage Spaces Direct.ps1

#region Enable Storage Spaces Direct
# Enable Storage Spaces Direct
Enable-ClusterStorageSpacesDirect -CimSession S2D-CLU `
                                  -Confirm:$false;
# Remove all Storage Tiers
Get-StorageTier -CimSession S2D-CLU | Remove-StorageTier -Confirm:$false `
                                                         -CimSession S2D-CLU;
#endregion

#region Change Media Type
# List Physical Disks
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU | `
                Get-PhysicalDisk -CimSession S2D-CLU

# Change Media Type
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU | `
                Get-PhysicalDisk -CimSession S2D-CLU | `
                ? Size           -lt 300GB | `
                Set-PhysicalDisk -CimSession S2D-CLU `
                                 –MediaType SSD;

Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU  | `
                Get-PhysicalDisk -CimSession S2D-CLU | `
                ? Size           -gt 300GB | `
                Set-PhysicalDisk -CimSession S2D-CLU `
                                 –MediaType HDD;

# List Physical Disks
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU  | `
                Get-PhysicalDisk -CimSession S2D-CLU | `
                FT FriendlyName,CanPool,OperationalStatus,HealthStatus,Usage,Size,MediaType;

#endregion

#region Create Storage Tiers
# Create Tiers
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU  | `
                New-StorageTier –FriendlyName Performance `
                                –MediaType SSD `
                                -ResiliencySettingName Mirror `
                                -CimSession S2D-CLU;

Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU | `
                New-StorageTier –FriendlyName Capacity `
                                –MediaType HDD `
                                -ResiliencySettingName Parity `
                                -CimSession S2D-CLU;

#endregion

#region Create Volume
# Create Volume
New-Volume -StoragePoolFriendlyName "S2D*" `
           -FriendlyName Volume1 `
           -FileSystem CSVFS_ReFS `
           -StorageTierFriendlyNames Capacity,Performance `
           -StorageTierSizes 1500GB,500GB `
           -CimSession S2D-CLU;
#endregion

2.3 - Test Storage Spaces Direct Fault Tolerance.ps1

#region Test Storage Spaces Direct Fault Tolerance
# Inspect Storage Pool FaultDomainAwarenessDefault
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU | FL FriendlyName, Size, FaultDomainAwarenessDefault
#endregion

2.3.1 - Test node failure.ps1

#region Stop First Node
# Execute on Hyper-V host
# Stop VM
Stop-VM -Name S2D04
#endregion

#region Stop Second Node
# Execute on Hyper-V host
# Stop VM
Stop-VM -Name S2D02
#endregion

#region Start Nodes
# Execute on Hyper-V host
# Start VMs
Start-VM -Name S2D04
Start-VM -Name S2D02
#endregion

2.3.2 - Test Disk failure.ps1

#region Remove disk from Capacity tier
# Get, Save and List HDD drive information
$RemovedHDD = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"};
$RemovedHDD

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive;
#endregion

#region Remove disk from Performance tier
# Get, Save and List SSD drive information
$RemovedSSD = Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"};
$RemovedSSD;

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive;
#endregion

#region Add Back Drives
# Attach Hard drives
Add-VMHardDiskDrive -VMName $RemovedHDD.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedHDD.ControllerNumber `
                    -ControllerLocation $RemovedHDD.ControllerLocation `
                    -Path $RemovedHDD.Path;

Add-VMHardDiskDrive -VMName $RemovedSSD.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedSSD.ControllerNumber `
                    -ControllerLocation $RemovedSSD.ControllerLocation `
                    -Path $RemovedSSD.Path;
#endregion

#region Additional Tests
# Get, Save and List HDD drive information
$RemovedHDD1 = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"}
$RemovedHDD1

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive


# Get, Save and List HDD drive information
$RemovedHDD2 = Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*HDD01*"}
$RemovedHDD2

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive


# Get, Save and List HDD drive information
$RemovedHDD3 = Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*HDD01*"}
$RemovedHDD3

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive


# Attach Hard drives
Add-VMHardDiskDrive -VMName $RemovedHDD1.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedHDD1.ControllerNumber `
                    -ControllerLocation $RemovedHDD1.ControllerLocation `
                    -Path $RemovedHDD1.Path

# Attach Hard drives
Add-VMHardDiskDrive -VMName $RemovedHDD2.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedHDD2.ControllerNumber `
                    -ControllerLocation $RemovedHDD2.ControllerLocation `
                    -Path $RemovedHDD2.Path

# Attach Hard drives
Add-VMHardDiskDrive -VMName $RemovedHDD3.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedHDD3.ControllerNumber `
                    -ControllerLocation $RemovedHDD3.ControllerLocation `
                    -Path $RemovedHDD3.Path



# Get, Save and List SSD drive information
$RemovedSSD1 = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*SSD01*"}
$RemovedSSD1

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive


# Get, Save and List SSD drive information
$RemovedSSD2 = Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*SSD01*"}
$RemovedSSD2

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D02 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive


# Get, Save and List SSD drive information
$RemovedSSD3 = Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"}
$RemovedSSD3

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D03 | where {$_.Path -like "*SSD01*"} | Remove-VMHardDiskDrive



Add-VMHardDiskDrive -VMName $RemovedSSD1.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedSSD1.ControllerNumber `
                    -ControllerLocation $RemovedSSD1.ControllerLocation `
                    -Path $RemovedSSD1.Path

Add-VMHardDiskDrive -VMName $RemovedSSD2.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedSSD2.ControllerNumber `
                    -ControllerLocation $RemovedSSD2.ControllerLocation `
                    -Path $RemovedSSD2.Path

Add-VMHardDiskDrive -VMName $RemovedSSD3.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedSSD3.ControllerNumber `
                    -ControllerLocation $RemovedSSD3.ControllerLocation `
                    -Path $RemovedSSD3.Path

#endregion

2.4.1 - Adding Nodes to Storage Spaces Direct Cluster.ps1

#region Get Storage Spaces Direct Tier Sizes
# Get Capacity Tier Max and Min Size
Get-StorageTierSupportedSize -FriendlyName Capacity `
                             -ResiliencySettingName Parity `
                             -CimSession S2D-CLU | `
                             Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

# Get Performance Tier Max and Min Size
Get-StorageTierSupportedSize -FriendlyName Performance `
                             -ResiliencySettingName Mirror `
                             -CimSession S2D-CLU | `
                             Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};
#endregion

#region Create Fifth Storage Spaces Direct Node
# Executed on the Hyper-V host
# Create fifth node VM
# VM names
$VMNames = ('S2D05');

# Prompt for vSwitch Name
$vSwitchName = Read-Host `
                        -Prompt 'Enter vSwitchName.';

# Prompt for storage path where the VMs will be stored
$StoragePath = Read-Host `
                        -Prompt "Enter storage Path. Example 'C:\ClusterStorage\Volume1'";

Foreach($VMName in $VMNames )
{
    
    # Create VM
    New-VM -Name $VMName `
           -Path "$StoragePath\$VMName" `
           -Generation 2 `
           -SwitchName $vSwitchName.ToString() `
           -MemoryStartupBytes 16GB ;
    
    # Set Proc number
    Set-VM -Name $VMName `
           -ProcessorCount 4;
    
    # Create OS VHDx
    New-VHD -Dynamic `
            -SizeBytes 127GB `
            -Path "$StoragePath\$VMName\OSDisk.vhdx";
    
    # Add VHDx to VM
    Add-VMHardDiskDrive -VMName $VMName `
                        -Path "$StoragePath\$VMName\OSDisk.vhdx";
    
    # Add DVD drive without configuration
    Add-VMDvdDrive -VMName $VMName;
};




# Execute on Storage Spaces Direct servers
# Configure IP Settings
# Get Available interfaces
Get-NetIPAddress `
        -ErrorAction Stop | select InterfaceAlias, InterfaceIndex, IPAddress;

# Prompt for new IP Address. Example: 192.168.1.20
$Interface = Read-Host `
                        -Prompt "Enter Interface Alias to configure:";

# Remove Interface configuration
Remove-NetIPAddress -InterfaceAlias $Interface.ToString() `
                    -Confirm:$false ;

# Prompt for new IP Address. Example: 192.168.1.20
$IPAddress = Read-Host `
                        -Prompt "Enter new IP address for $($Interface.ToString())";

# Prompt for IP Address Prefix (mask). Example: 24
$IPAddressPrefix = Read-Host `
                        -Prompt "Enter IP address prefix for $($Interface.ToString())";

# Prompt for Gateway IP Address. Example: 192.168.1.1
$Gateway = Read-Host `
                        -Prompt "Enter Gateway for $($Interface.ToString())";

# Prompt for DNS IP Address. Example: 192.168.1.4
$DNSServer = Read-Host `
                        -Prompt "Enter DNS Server for $($Interface.ToString())";

# Configure IP settings on interface
New-NetIPAddress –InterfaceAlias $Interface.ToString() `
                 –IPAddress $IPAddress `
                 –PrefixLength $IPAddressPrefix `
                 -DefaultGateway $Gateway;

# Set DNS
Set-DnsClientServerAddress -InterfaceAlias $Interface.ToString() `
                           -ServerAddresses $DNSServer;


# Configure Time zone
# List Time Zones
tzutil /l | more

# Set Time Zone
tzutil /s ""
#endregion

2.4.2 - Adding Drives to Storage Spaces Direct Cluster.ps1

#region Get Storage Spaces Direct Tier Sizes
# Get Capacity Tier Max and Min Size
Get-StorageTierSupportedSize -FriendlyName Capacity `
                             -ResiliencySettingName Parity `
                             -CimSession S2D-CLU | `
                             Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

# Get Performance Tier Max and Min Size
Get-StorageTierSupportedSize -FriendlyName Performance `
                             -ResiliencySettingName Mirror `
                             -CimSession S2D-CLU | `
                             Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};
#endregion

#region Add Disk on each Storage Spaces Direct Node
# Execute on Hyper-V host
$VMNames = ('S2D01', 'S2D02', 'S2D03', 'S2D04');
Foreach ($VMName in $VMNames){
   
   # HDD Disk Names
   $HDDDiskNames =  ("HDD04.vhdx");
   
   # Create and attach HDD disks
   foreach ($HDDDiskName in $HDDDiskNames )
   {
        $diskName = $HDDDiskName;

        # Get the VM
        $VM = Get-VM -Name $VMName;
        
        # Get VM Location
        $VMLocation = $VM.Path;
        
        # Set Disk Size in GB
        $Disksize = 512;
        $DisksizeinBytes = $Disksize*1024*1024*1024;
        
        # Create Disk
        $VHD = New-VHD -Path  "$VMLocation\$diskName" `
                       -Dynamic `
                       -SizeBytes $DisksizeinBytes;
        
        # Attach the disk
        $AddedSharedVHDX = ADD-VMHardDiskDrive -VMName $VM.Name `
                                               -Path             "$VMLocation\$diskName" `
                                               -ControllerType   SCSI `
                                               -ControllerNumber 0;
       
   };
   
};


# List Physical Disks
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU  | `
                Get-PhysicalDisk -CimSession S2D-CLU | `
                FT FriendlyName,CanPool,OperationalStatus,HealthStatus,Usage,Size,MediaType;

# SSD and HDD Count
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU  | `
                Get-PhysicalDisk -CimSession S2D-CLU | Group-Object MediaType;


# Get Capacity Tier Max and Min Size
Get-StorageTierSupportedSize -FriendlyName Capacity `
                             -ResiliencySettingName Parity `
                             -CimSession S2D-CLU | `
                             Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};

# Get Performance Tier Max and Min Size
Get-StorageTierSupportedSize -FriendlyName Performance `
                             -ResiliencySettingName Mirror `
                             -CimSession S2D-CLU | `
                             Select-Object @{Name="TierSizeMinInGBs";Expression={$_.TierSizeMin / 1GB}}, @{Name="TierSizeMaxInGBs";Expression={$_.TierSizeMax / 1024/1024/1024}};
#endregion



2.4.3 - Optimize Storage Spaces Pool.ps1

#region Optimize Storage Spaces Pool
# Start Optimization Job
Optimize-StoragePool -FriendlyName "S2D on S2D-CLU" `
                     -CimSession S2D-CLU;

# Get Storage Optimization Job result
Get-StorageJob -CimSession S2D-CLU `
               -Name Optimize `
               -StoragePool (Get-StoragePool -FriendlyName "S2D*" `
                                             -CimSession S2D-CLU );
#endregion

2.5.1 - Setup VMs for Storage QoS.ps1

#region Sysprep VHD
C:\Windows\system32\sysprep\sysprep.exe /oobe /generalize /shutdown
#endregion

#region Configure Answer File
# Prompt for Syspreped VHD path
$VHDPath = Read-Host `
             -Prompt 'Enter VHD path (example \\s2d01\c$\ClusterStorage\Volume1\OSDisk.vhdx) ';

# Prompt for Answer File path
$AFilePath = Read-Host `
               -Prompt 'Enter Answer file path (example \\s2d01\c$\ClusterStorage\Volume1\unattend.xml) ';

# Create dir if not exists
$destPath = "C:\img"
If(!(Test-Path $destPath)) 
{
    New-Item -Path $destPath `
             -ItemType Directory
}

# Mount VHD
Mount-WindowsImage -ImagePath $VHDPath.ToString() `
                   -Path $destPath `
                   -Index 1
# Copy answer file
$AnswerFileDest = $destPath + "\Windows\Panther"
Copy-Item -Path $AFilePath.ToString() `
          -Destination $AnswerFileDest

# Save VHD
Dismount-WindowsImage -Path $destPath `
                      -Save
#endregion

#region Create and Setup VMs
Enter-PSSession -ComputerName S2D01;

# VM names
$VMNames = ('VM01', 'VM02', 'VM03', 'VM04');

# Storage Path
$StoragePath = "C:\ClusterStorage\Volume1";

# vSwitch Name
$vSwitchName = "SETswitch";

# OS VHD location
$osVHDPath = "C:\ClusterStorage\Volume1\OSDisk.vhdx";

Foreach($VMName in $VMNames)
{
    # Create VM
    New-VM -Name $VMName `
           -Path "$StoragePath\" `
           -Generation 2 `
           -SwitchName $vSwitchName.ToString() `
           -MemoryStartupBytes 2GB;
    
    # Set Proc number
    Set-VM -Name $VMName `
           -ProcessorCount 2;

    # Copy OS VHD
    Copy-Item -Path $osVHDPath `
              -Destination "$StoragePath\$VMName\" `
              -Force;

    # Add data VHDx to VM
    Add-VMHardDiskDrive -VMName $VMName `
                        -Path "$StoragePath\$VMName\OSDisk.vhdx";

    # Get OS VHD
    $vhd = Get-VMHardDiskDrive -VMName $VMName;

    # Set boot order
    Set-VMFirmware -VMName $VMName `
                   -FirstBootDevice $vhd;

    # Set Disk Size in GB
    $diskName = "HDD01.vhdx";
    
    # Create Disk
    New-VHD -Path "$StoragePath\$VMName\$diskName" `
                   -Dynamic `
                   -SizeBytes 512GB | Out-Null;
    
    # Attach the disk
    Add-VMHardDiskDrive -VMName $VMName `
                                     -Path "$StoragePath\$VMName\$diskName" `
                                     -ControllerType SCSI `
                                     -ControllerNumber 0 | Out-Null;
    # Add VM to Cluster
    Get-VM –Name $VMName | Add-ClusterVirtualMachineRole -WarningAction SilentlyContinue;

    # Start VM
    Start-VM -Name $VMName;

};

# VM names
$VMNames = ('VM01', 'VM02', 'VM03', 'VM04');
# Create credentials for local account
$LocalPassword = "P@ssw0rd123!";
$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;
$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

# Path to diskspd
$DiskspdPath = "C:\ClusterStorage\Volume1\diskspd.exe";

Foreach($VMName in $VMNames)
{
    
    # Waiting for VM to be running
    while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")
    {
        Sleep -Seconds 1
    }
    
    # Create PS session to the VM
    $VMSession = New-PSSession -VMName $VMName `
                               -Credential $LocalCreds;

    # Format Disk
    Invoke-Command -Session $VMSession `
                   -ScriptBlock {
                   $RawDisks = Get-Disk | where partitionstyle -eq 'raw'| Sort-Object Number;
                   $InitializedDisk = Initialize-Disk -PartitionStyle GPT `
                                                      -Number         $RawDisks[0].Number `
                                                      -ErrorAction Stop;
                   
                   $FormatedVol = New-Partition -DiskNumber     $RawDisks[0].Number `
                                                -UseMaximumSize `
                                                -DriveLetter T `
                                                -ErrorAction Stop | `
                                  Format-Volume -FileSystem         NTFS `
                                                -AllocationUnitSize 65536 `
                                                -NewFileSystemLabel "Test" `
                                                -Force `
                                                -Confirm:$false `
                                                -ErrorAction Stop;
                   
                   # Create Directory for diskspd
                   If(!(Test-Path "C:\test")) 
                   {
                       New-Item -Path "C:\test" `
                                -ItemType Directory;
                   };
                   } `
                   -ArgumentList $VMName `
                   -ErrorAction Stop;
    
    # Copy Diskspd to VM
    Copy-Item -ToSession $VMSession `
              -Path $DiskspdPath `
              -Destination "C:\test\";
    
    # Remove PSSession
    Remove-PSSession -Session $VMSession;
};
#endregion

#region Verify Storage QoS installation
Get-ClusterResource -Name "Storage Qos Resource" `
                    -Cluster S2D-CLU;
#endregion

2.5.2 - Create Storage QoS Policies.ps1

#region Create and Apply Dedicated Policy
# Get Storage QoS Policies
Get-StorageQosPolicy -CimSession S2D-CLU;

# Create a dedicated Storage QoS policy
$DedicatedPolicy1 = New-StorageQosPolicy -Name DedicatedPolicy1 `
                                         -PolicyType Dedicated `
                                         -MinimumIops 300 `
                                         -MaximumIops 400 `
                                         -CimSession S2D-CLU;
# Show policy object
$DedicatedPolicy1 | select * ;

# Get Storage QoS Policies
Get-StorageQosPolicy -CimSession S2D-CLU;

# Assign the Policy to all vhd(x) files on VM01
Get-VMHardDiskDrive -CimSession S2D-CLU `
                    -VMName VM01 | `
             Set-VMHardDiskDrive `
                    -QoSPolicyID $DedicatedPolicy1.PolicyId  `
                    -CimSession S2D-CLU;

# Assign the Policy to all vhd(x) files on VM02
Get-VMHardDiskDrive -CimSession S2D-CLU `
                    -VMName VM02 | `
             Set-VMHardDiskDrive `
                    -QoSPolicyID $DedicatedPolicy1.PolicyId  `
                    -CimSession S2D-CLU;

# Get VMName, VHD and QoS Policy ID for VM01 and VM02
Get-VMHardDiskDrive -CimSession S2D-CLU `
                    -VMName VM01,VM02 | select VMName,Path,QoSPolicyID

# Get all current flows - Virtual machine name, status, IOPs, and VHD file name sorted by VM
Get-StorageQoSflow -CimSession S2d-CLU | `
                   Sort-Object InitiatorName |  `
                   ft InitiatorName, Status, MinimumIOPs, MaximumIOPs, StorageNodeIOPs, Status, `
                   @{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `
                   -AutoSize 

# Get current flows for DedicatedPolicy1
Get-StorageQosPolicy -Name DedicatedPolicy1 `
                     -CimSession S2D-CLU | Get-StorageQosFlow `
                                              -CimSession S2D-CLU | `
                                                ft InitiatorName, *IOPS, Status, FilePath -AutoSize




# Start Diskspd Tests for Dedicated Policy

Enter-PSSession -ComputerName S2D01;

# VM names
$VMNames = ('VM01', 'VM02');
# Create credentials for local account
$LocalPassword = "P@ssw0rd123!";
$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;
$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)
{
    
    # Waiting for VM to be running
    while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")
    {
        Sleep -Seconds 1
    }
    
    # Create PS session to the VM
    $VMSession = New-PSSession -VMName $VMName `
                               -Credential $LocalCreds;

    # Run the Diskspd test in the VM
    Invoke-Command -Session $VMSession `
                   -ScriptBlock {
                   
                   # Kill diskspd process if exists
                   Stop-Process -Name diskspd `
                                -ErrorAction SilentlyContinue `
                                -WarningAction SilentlyContinue;
                   
                   # Start Diskspd tests
                   $diskspdarguments = "-r -w30 -d1200 -W120 -b8k -t2 -o6 -h -L -Z1M -c64G T:\testfile.dat"
                   Start-Process `
                                -FilePath "C:\test\diskspd.exe" `
                                -ArgumentList $diskspdarguments  `
                                -NoNewWindow `
                                -PassThru `
                                -ErrorAction Stop | Out-Null
                   } `
                   -ArgumentList $VMName `
                   -ErrorAction Stop;
    
    # Remove PSSession
    Remove-PSSession -Session $VMSession;
};


# Get all current flows - Virtual machine name, Hyper-V host name, IOPs, and VHD file name, sorted by IOPS.
Get-StorageQosFlow -CimSession S2D-CLU | `
                   Sort-Object StorageNodeIOP -Descending |`
                    ft InitiatorName, `
                       @{Expression={$_.InitiatorNodeName.Substring(0,$_.InitiatorNodeName.IndexOf('.'))};Label="InitiatorNodeName"}, `
                       StorageNodeIOPs, `
                       Status, `
                       @{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `
                       -AutoSize  

# Get current flows for VM01
Get-StorageQosFlow -InitiatorName VM01 `
                   -CimSession S2d-CLU | Format-List;

# View volume performance data
Get-StorageQosVolume -CimSession S2D-CLU | Format-List  



# Stop Diskspd Tests for Dedicated Policy

Enter-PSSession -ComputerName S2D01;

# VM names
$VMNames = ('VM01', 'VM02');
# Create credentials for local account
$LocalPassword = "P@ssw0rd123!";
$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;
$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)
{
    
    # Waiting for VM to be running
    while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")
    {
        Sleep -Seconds 1
    }
    
    # Create PS session to the VM
    $VMSession = New-PSSession -VMName $VMName `
                               -Credential $LocalCreds;

    # Stop the Diskspd test in the VM
    Invoke-Command -Session $VMSession `
                   -ScriptBlock {
                   
                   # Kill diskspd process if exists
                   Stop-Process -Name diskspd `
                                -ErrorAction SilentlyContinue `
                                -WarningAction SilentlyContinue;
                   } `
                   -ArgumentList $VMName `
                   -ErrorAction Stop;
    
    # Remove PSSession
    Remove-PSSession -Session $VMSession;
};
#endregion

#region Modify Storage QoS Policy
# Modify Policy DedicatedPolicy1
Get-StorageQosPolicy -Name DedicatedPolicy1  `
                     -CimSession S2D-CLU | `
                         Set-StorageQosPolicy -MaximumIops 500 `
                                              -CimSession S2D-CLU;

Get-StorageQosPolicy -Name DedicatedPolicy1 `
                     -CimSession S2D-CLU;
#endregion

#region Delete Storage QoS Policy
# Delete Policy DedicatedPolicy1
Get-StorageQosPolicy -Name DedicatedPolicy1 `
                     -CimSession S2D-CLU  | `
                         Remove-StorageQosPolicy  -CimSession S2D-CLU `
                                                  -Confirm:$false;

Get-StorageQoSflow -CimSession S2D-CLU  | `
                  Sort-Object InitiatorName | `
                       ft InitiatorName, `
                          Status, `
                          MinimumIOPs, `
                          MaximumIOPs, `
                          StorageNodeIOPs, `
                          Status, `
                          @{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `
                             -AutoSize;

# Check health
Get-StorageSubSystem -FriendlyName Clustered* `
                     -CimSession S2D-CLU | Debug-StorageSubSystem -CimSession S2D-CLU;

# Get missing policies
Get-StorageQosFlow -Status UnknownPolicyId `
                   -CimSession S2D-CLU | `
                       ft InitiatorName, PolicyId -AutoSize;

# Recreate Policy
New-StorageQosPolicy -PolicyId  `
                      -PolicyType Dedicated `
                      -Name DedicatedPolicy1 `
                      -MinimumIops 300 `
                      -MaximumIops 500 `
                      -CimSession S2D-CLU;


Get-StorageQoSflow -CimSession S2D-CLU  | `
                  Sort-Object InitiatorName | `
                       ft InitiatorName, `
                          Status, `
                          MinimumIOPs, `
                          MaximumIOPs, `
                          StorageNodeIOPs, `
                          Status, `
                          @{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `
                             -AutoSize;

#endregion

#region Create and Apply Aggregated Policy
# Create an aggregated Storage QoS policy
$AggregatedPolicy1 = New-StorageQosPolicy -Name AggregatedPolicy1 `
                                         -PolicyType Aggregated `
                                         -MinimumIops 300 `
                                         -MaximumIops 400 `
                                         -CimSession S2D-CLU;
# Show policy object
$AggregatedPolicy1 | select * ;

# Assign the Policy to all vhd(x) files on VM03 and VM04
Get-VMHardDiskDrive -CimSession S2D-CLU `
                    -VMName VM03,VM04 | `
             Set-VMHardDiskDrive `
                    -QoSPolicyID $AggregatedPolicy1.PolicyId  `
                    -CimSession S2D-CLU;

# Get VMName, VHD and QoS Policy ID for VM03 and VM04
Get-VMHardDiskDrive -CimSession S2D-CLU `
                    -VMName VM03,VM04 | select VMName,Path,QoSPolicyID


# Start Diskspd Tests for Aggregated Policy

Enter-PSSession -ComputerName S2D01;

# VM names
$VMNames = ('VM03', 'VM04');
# Create credentials for local account
$LocalPassword = "P@ssw0rd123!";
$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;
$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)
{
    
    # Waiting for VM to be running
    while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")
    {
        Sleep -Seconds 1
    }
    
    # Create PS session to the VM
    $VMSession = New-PSSession -VMName $VMName `
                               -Credential $LocalCreds;

    # Run the Diskspd test in the VM
    Invoke-Command -Session $VMSession `
                   -ScriptBlock {
                   
                   # Kill diskspd process if exists
                   Stop-Process -Name diskspd `
                                -ErrorAction SilentlyContinue `
                                -WarningAction SilentlyContinue;
                   
                   # Start Diskspd tests
                   $diskspdarguments = "-r -w30 -d1200 -W120 -b8k -t2 -o6 -h -L -Z1M -c64G T:\testfile.dat"
                   Start-Process `
                                -FilePath "C:\test\diskspd.exe" `
                                -ArgumentList $diskspdarguments  `
                                -NoNewWindow `
                                -PassThru `
                                -ErrorAction Stop | Out-Null
                   } `
                   -ArgumentList $VMName `
                   -ErrorAction Stop;
    
    # Remove PSSession
    Remove-PSSession -Session $VMSession;
};



# Get all current flows - Virtual machine name, Hyper-V host name, IOPs, and VHD file name, sorted by IOPS.
Get-StorageQosFlow -CimSession S2D-CLU | `
                   Sort-Object StorageNodeIOP -Descending |`
                    ft InitiatorName, `
                       @{Expression={$_.InitiatorNodeName.Substring(0,$_.InitiatorNodeName.IndexOf('.'))};Label="InitiatorNodeName"}, `
                       StorageNodeIOPs, `
                       Status, `
                       @{Expression={$_.FilePath.Substring($_.FilePath.LastIndexOf('\')+1)};Label="File"} `
                       -AutoSize  

# Get current flows for VM03
Get-StorageQosFlow -InitiatorName VM03 `
                   -CimSession S2d-CLU | Format-List;

# View volume performance data
Get-StorageQosVolume -CimSession S2D-CLU | Format-List  




# Stop Diskspd Tests for Aggregated Policy

Enter-PSSession -ComputerName S2D01;

# VM names
$VMNames = ('VM03', 'VM04');
# Create credentials for local account
$LocalPassword = "P@ssw0rd123!";
$secLocalPassword = ConvertTo-SecureString $LocalPassword -AsPlainText -Force;
$LocalCreds = New-Object System.Management.Automation.PSCredential ("Administrator", $secLocalPassword);

Foreach($VMName in $VMNames)
{
    
    # Waiting for VM to be running
    while ((Invoke-Command -VMName $VMName -Credential $LocalCreds {"Test"} -ea SilentlyContinue) -ne "Test")
    {
        Sleep -Seconds 1
    }
    
    # Create PS session to the VM
    $VMSession = New-PSSession -VMName $VMName `
                               -Credential $LocalCreds;

    # Stop the Diskspd test in the VM
    Invoke-Command -Session $VMSession `
                   -ScriptBlock {
                   
                   # Kill diskspd process if exists
                   Stop-Process -Name diskspd `
                                -ErrorAction SilentlyContinue `
                                -WarningAction SilentlyContinue;
                   } `
                   -ArgumentList $VMName `
                   -ErrorAction Stop;
    
    # Remove PSSession
    Remove-PSSession -Session $VMSession;
};
#endregion

2.6 - Storage Spaces Direct Health.ps1

#region Get Storage Spaces Direct Metrics
# Get Storage Spaces Direct cluster metrics
Get-StorageSubSystem -FriendlyName clus* `
                     -CimSession S2D-CLU | `
                         Get-StorageHealthReport -Count 1 `
                                                 -CimSession S2D-CLU;
# Get Storage Spaces Direct metrics for Volume
Get-Volume -FileSystemLabel Volume1 `
           -CimSession S2D-CLU | `
                 Get-StorageHealthReport -Count 1 `
                                         -CimSession S2D-CLU;

# Get Storage Spaces Direct metrics for node S2D01
Get-StorageNode -CimSession S2D-CLU | `
                 Where-Object {$_.Name -like "S2D01*"}  | `
                       Get-StorageHealthReport -Count 1 `
                                               -CimSession S2D-CLU;
#endregion

#region Get Storage Spaces Direct Faults
# Get, Save and List HDD drive information
$RemovedHDD = Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"}
$RemovedHDD

# Detach Hard drive from VM
Get-VMHardDiskDrive -VMName S2D01 | where {$_.Path -like "*HDD01*"} | Remove-VMHardDiskDrive


# Return S2D Cluster Faults
Get-StorageSubSystem -FriendlyName clus* `
                     -CimSession S2D-CLU  | `
                          Debug-StorageSubSystem -CimSession S2D-CLU;

# Return S2D Volume Faults
Get-Volume -FileSystemLabel Volume1 `
           -CimSession S2D-CLU | `
                Debug-Volume -CimSession S2D-CLU;

# Return S2D Share Faults
Get-FileShare -Name  `
              -CimSession S2D-CLU | `
                   Debug-FileShare -CimSession S2D-CLU

# Attach Hard drives
Add-VMHardDiskDrive -VMName $RemovedHDD.VMName `
                    -ControllerType SCSI `
                    -ControllerNumber $RemovedHDD.ControllerNumber `
                    -ControllerLocation $RemovedHDD.ControllerLocation `
                    -Path $RemovedHDD.Path
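
# A minimal sketch (not part of the original script): after re-attaching the drive,
# the pool regenerates data automatically; this watches the repair jobs until they finish
while (Get-StorageJob -CimSession S2D-CLU | Where-Object {$_.JobState -ne "Completed"})
{
    Sleep -Seconds 5;
};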
#endregion

#region Get Storage Spaces Direct Health Actions
# Get S2D Health Actions
Get-StorageHealthAction -CimSession S2D-CLU
#endregion

2.7.1 - Setup Prerequisites.ps1

#region Create VMs and deploy Windows Server 2016 Servers 
# Executed on the Hyper-V host
# VM names
$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');

# Prompt for vSwitch Name
$vSwitchName = Read-Host `
                        -Prompt 'Enter vSwitchName.';

# Prompt for storage path where the VMs will be stored
$StoragePath = Read-Host `
                        -Prompt "Enter storage Path. Example 'C:\ClusterStorage\Volume1'";

Foreach($VMName in $VMNames )
{
    
    # Create VM
    New-VM -Name $VMName `
           -Path "$StoragePath\$VMName" `
           -Generation 2 `
           -SwitchName $vSwitchName.ToString() `
           -MemoryStartupBytes 16GB ;
    
    # Set Proc number
    Set-VM -Name $VMName `
           -ProcessorCount 4;
    
    # Create OS VHDx
    New-VHD -Dynamic `
            -SizeBytes 127GB `
            -Path "$StoragePath\$VMName\OSDisk.vhdx";
    
    # Add VHDx to VM
    Add-VMHardDiskDrive -VMName $VMName `
                        -Path "$StoragePath\$VMName\OSDisk.vhdx";
    
    # Add DVD drive without configuration
    Add-VMDvdDrive -VMName $VMName;
};
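
# Optional verification (a sketch): confirm the VMs were created with the expected resources
Get-VM -Name $VMNames | `
        Select-Object Name, State, ProcessorCount, MemoryStartup;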

# Set Vlan on a VM
Set-VMNetworkAdapterVlan -VMName  `
                         -VlanId  `
                         -Access;
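
# Illustrative only (values are hypothetical, not from the original script):
# assuming VM 'S2D11' should be tagged with VLAN ID 100
Set-VMNetworkAdapterVlan -VMName S2D11 `
                         -VlanId 100 `
                         -Access;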

#endregion

#region Configure IP Settings
# Execute on Storage Spaces Direct servers
# Get Available interfaces
Get-NetIPAddress `
        -ErrorAction Stop | select InterfaceAlias, InterfaceIndex, IPAddress;

# Prompt for the interface alias to configure
$Interface = Read-Host `
                        -Prompt "Enter Interface Alias to configure:";

# Remove Interface configuration
Remove-NetIPAddress -InterfaceAlias $Interface.ToString() `
                    -Confirm:$false ;

# Prompt for new IP Address. Example: 192.168.1.20
$IPAddress = Read-Host `
                        -Prompt "Enter new IP address for $($Interface.ToString())";

# Prompt for IP Address Prefix (mask). Example: 24
$IPAddressPrefix = Read-Host `
                        -Prompt "Enter IP address prefix for $($Interface.ToString())";

# Prompt for Gateway IP Address. Example: 192.168.1.1
$Gateway = Read-Host `
                        -Prompt "Enter Gateway for $($Interface.ToString())";

# Prompt for DNS IP Address. Example: 192.168.1.4
$DNSServer = Read-Host `
                        -Prompt "Enter DNS Server for $($Interface.ToString())";

# Configure IP settings on interface
New-NetIPAddress -InterfaceAlias $Interface.ToString() `
                 -IPAddress $IPAddress `
                 -PrefixLength $IPAddressPrefix `
                 -DefaultGateway $Gateway;

# Set DNS
Set-DnsClientServerAddress -InterfaceAlias $Interface.ToString() `
                           -ServerAddresses $DNSServer;
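
# Optional verification (a sketch): review the resulting IP configuration
Get-NetIPConfiguration -InterfaceAlias $Interface.ToString();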
#endregion

#region Set Time Zone
# List Time Zones
tzutil /l | more

# Set Time Zone
tzutil /s "" 
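
# Illustrative only (the time zone is a hypothetical example, not from the original script)
tzutil /s "Pacific Standard Time"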
#endregion

#region Join servers to domain
# Execute on Storage Spaces Direct servers
# Prompt for new computer name
$NewName = Read-Host `
                        -Prompt 'Enter new Computer Name';

# Prompt for domain name. Example: contoso.local
$DomainName = Read-Host `
                        -Prompt 'Enter domain Name';

# Join computer to domain and restart
Add-Computer -DomainName $DomainName `
             -Force `
             -Restart `
             -NewName $NewName;
#endregion

#region Add Domain Group to Local Administrators
# Execute on Storage Spaces Direct servers
# Prompt for domain group
$DomainGroup = Read-Host `
                        -Prompt 'Enter domain group to add to local Administrators on the server, in the format DOMAIN\GroupName';

# Add domain group to local administrators group
Net localgroup Administrators $DomainGroup.ToString() /add
#endregion

#region Enable Nested Virtualization
# Execute on Hyper-V host
# VM names
$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');
Foreach($VMName in $VMNames )
{
    # Shut Down the VM
    Stop-VM -Name $VMName;
    
    # Enable Nested Virtualization
    Set-VMProcessor -VMName $VMName `
                    -ExposeVirtualizationExtensions $true;
    
    # Enable MAC Address Spoofing
    Get-VMNetworkAdapter -VMName $VMName | Set-VMNetworkAdapter -MacAddressSpoofing On;
    
    # Start VM
    Start-VM -Name $VMName;
};
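
# Optional verification (a sketch): confirm virtualization extensions are exposed to each VM
Get-VMProcessor -VMName $VMNames | `
        Select-Object VMName, ExposeVirtualizationExtensions;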
#endregion

#region Add Disks
# Execute on Hyper-V host
$VMNames = ('S2D11', 'S2D12', 'S2D13', 'S2D14');
Foreach ($VMName in $VMNames){
   
   # SSD Disk Names
   $SSDDiskNames =  ("SSD01.vhdx", "SSD02.vhdx");

   # Create and attach SSD disks
   foreach ($SSDDiskName in $SSDDiskNames )
   {
        $diskName = $SSDDiskName;
        
        # Get the VM
        $VM = Get-VM -Name $VMName;
        
        # Get VM Location
        $VMLocation = $VM.Path;
        
        # Set Disk Size in GB
        $Disksize = 256;
        $DisksizeinBytes = $Disksize*1024*1024*1024;

        # Create Disk
        $VHD = New-VHD -Path  "$VMLocation\$diskName" `
                       -Dynamic `
                       -SizeBytes $DisksizeinBytes;
        
        # Attach the disk
        $AddedSharedVHDX = Add-VMHardDiskDrive -VMName $VM.Name `
                                               -Path             "$VMLocation\$diskName" `
                                               -ControllerType   SCSI `
                                               -ControllerNumber 0;
       
   };
   # HDD Disk Names
   $HDDDiskNames =  ("HDD01.vhdx", "HDD02.vhdx", "HDD03.vhdx");
   
   # Create and attach HDD disks
   foreach ($HDDDiskName in $HDDDiskNames )
   {
        $diskName = $HDDDiskName;

        # Get the VM
        $VM = Get-VM -Name $VMName;
        
        # Get VM Location
        $VMLocation = $VM.Path;
        
        # Set Disk Size in GB
        $Disksize = 512;
        $DisksizeinBytes = $Disksize*1024*1024*1024;
        
        # Create Disk
        $VHD = New-VHD -Path  "$VMLocation\$diskName" `
                       -Dynamic `
                       -SizeBytes $DisksizeinBytes;
        
        # Attach the disk
        $AddedSharedVHDX = Add-VMHardDiskDrive -VMName $VM.Name `
                                               -Path             "$VMLocation\$diskName" `
                                               -ControllerType   SCSI `
                                               -ControllerNumber 0;
       
   };
   
};
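
# Optional verification (a sketch): list the data disks now attached to each VM
Foreach ($VMName in $VMNames)
{
    Get-VMHardDiskDrive -VMName $VMName | `
            Select-Object VMName, ControllerType, ControllerLocation, Path;
};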
#endregion

#region Update Windows Server 2016
# Execute on Storage Spaces Direct Servers
# Check for Available Updates
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                      -ClassName MSFT_WUOperationsSession;  
# Scan for Updates
$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `
                                 -Arguments @{SearchCriteria="IsInstalled=0";OnlineScan=$true};

# Show Updates found for install
$result.Updates;

# Initiate Update and restart
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                      -ClassName MSFT_WUOperationsSession;
# Apply Updates
Invoke-CimMethod -InputObject $ci `
                 -MethodName ApplyApplicableUpdates;

# Restart Server
Restart-Computer; exit;

# Show Installed Updates
$ci = New-CimInstance -Namespace root/Microsoft/Windows/WindowsUpdate `
                      -ClassName MSFT_WUOperationsSession;
$result = $ci | Invoke-CimMethod -MethodName ScanForUpdates `
                                 -Arguments @{SearchCriteria="IsInstalled=1";OnlineScan=$true};
$result.Updates;
#endregion

#region Install Roles
Install-WindowsFeature -Name File-Services;
Install-WindowsFeature -Name Failover-clustering -IncludeManagementTools;
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart;
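
# Optional verification after the restart (a sketch): confirm the roles installed successfully
Get-WindowsFeature -Name File-Services, Failover-Clustering, Hyper-V | `
        Select-Object Name, InstallState;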
#endregion

#region Create Hyper-V virtual switch
Enter-PSSession -ComputerName S2D11;
# Create the virtual switch connected to the adapters of your choice,
# and enable the Switch Embedded Teaming (SET). You may notice a message that 
# your PSSession lost connection. This is expected and your session will reconnect
New-VMSwitch -Name SETswitch `
             -NetAdapterName "Ethernet" `
             -EnableEmbeddedTeaming $true;
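
# Optional verification (a sketch): confirm the SET switch and its team members
Get-VMSwitch -Name SETswitch;
Get-VMSwitchTeam -Name SETswitch;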
#endregion

#region Create Cluster
$nodes = ("S2D11", "S2D12", "S2D13", "S2D14");
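
# Optional (a sketch): validate the nodes for Storage Spaces Direct before creating the cluster
Test-Cluster -Node $nodes `
             -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration";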

# Create Cluster with no storage
New-Cluster -Name S2D-CLU2 `
            -Node $nodes  `
            -NoStorage `
            -StaticAddress ;
#endregion

#region Setup Quorum
# Set Cloud Witness
Set-ClusterQuorum -CloudWitness `
                  -AccountName  `
                  -AccessKey  `
                  -Cluster S2D-CLU2;

# Set File Share Witness
Set-ClusterQuorum -FileShareWitness 
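
# Optional verification (a sketch): confirm the configured quorum witness
Get-ClusterQuorum -Cluster S2D-CLU2;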
#endregion

#region Enable Storage Spaces Direct
# Enable Storage Spaces Direct
Enable-ClusterStorageSpacesDirect -CimSession S2D-CLU2 `
                                  -Confirm:$false;
# Remove all Storage Tiers
Get-StorageTier -CimSession S2D-CLU2 | Remove-StorageTier -Confirm:$false `
                                                         -CimSession S2D-CLU2;
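
# Optional verification (a sketch): confirm the Storage Spaces Direct pool was created
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU2 | `
                Select-Object FriendlyName, HealthStatus, Size, AllocatedSize;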
#endregion

#region Change Media Type
# Change Media Type
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU2 | `
                Get-PhysicalDisk -CimSession S2D-CLU2 | `
                ? Size           -lt 300GB | `
                Set-PhysicalDisk -CimSession S2D-CLU2 `
                                 -MediaType SSD;

Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU2  | `
                Get-PhysicalDisk -CimSession S2D-CLU2 | `
                ? Size           -gt 300GB | `
                Set-PhysicalDisk -CimSession S2D-CLU2 `
                                 -MediaType HDD;
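
# Optional verification (a sketch): confirm the drives now carry the expected media type
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU2 | `
                Get-PhysicalDisk -CimSession S2D-CLU2 | `
                Group-Object MediaType;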
#endregion

#region Create Storage Tiers
# Create Tiers
Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU2  | `
                New-StorageTier -FriendlyName Performance `
                                -MediaType SSD `
                                -ResiliencySettingName Mirror `
                                -CimSession S2D-CLU2;

Get-StoragePool -FriendlyName "S2D*" `
                -CimSession S2D-CLU2 | `
                New-StorageTier -FriendlyName Capacity `
                                -MediaType HDD `
                                -ResiliencySettingName Parity `
                                -CimSession S2D-CLU2;
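
# Optional verification (a sketch): list the tiers that will be referenced when creating volumes
Get-StorageTier -CimSession S2D-CLU2 | `
        Select-Object FriendlyName, MediaType, ResiliencySettingName;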
#endregion

#region Create Volume
# Create Volume
New-Volume -StoragePoolFriendlyName "S2D*" `
           -FriendlyName Volume1 `
           -FileSystem CSVFS_ReFS `
           -StorageTierFriendlyNames Capacity,Performance `
           -StorageTierSizes 1500GB,500GB `
           -CimSession S2D-CLU2;
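
# Optional verification (a sketch): confirm the new CSV volume and its remaining capacity
Get-Volume -FileSystemLabel Volume1 `
           -CimSession S2D-CLU2 | `
           Select-Object FileSystemLabel, FileSystemType, Size, SizeRemaining;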
#endregion

2.7.3 - Failover with Storage Replica.ps1

#region Failover
# Move Replication
# Get VM configuration and shutdown VMs
$ClusterNodes = Get-ClusterNode -Cluster S2D-CLU | select Name;
$VMs = @();
foreach ($ClusterNode in $ClusterNodes)
{
    $VMs += Get-VM -CimSession $ClusterNode.Name;
    Get-VM -CimSession $ClusterNode.Name | Where-Object {$_.State -eq "Running" } | `
            Foreach-Object { Stop-VM $_.Name -CimSession $ClusterNode.Name; };
};

# Move replication
Set-SRPartnership -NewSourceComputerName S2D-CLU2  `
                  -SourceRGName ReplicaGroupDest  `
                  -DestinationComputerName S2D-CLU `
                  -DestinationRGName ReplicaGroupSource `
                  -Confirm:$false;

# Wait for disk to come online
While ((Get-ClusterSharedVolume -Cluster S2D-CLU2 | where {$_.Name -like "*Volume1*"}).State -ne "Online")
{
    Sleep -Seconds 1;
};

# Import VMs
foreach ($VM in $VMS)
{
    Try
    {
        $VMConf = Invoke-Command -ComputerName S2D11 `
                                 -ScriptBlock { Get-Childitem $args[0] -Recurse *.vmcx } `
                                 -ArgumentList $VM.Path
        
        Import-VM -Path $VMConf.FullName `
                  -Register `
                  -CimSession S2D11;
        Start-sleep 5
    }
    Catch
    {
        Write-Warning -Message "VM is already imported.";
    };
};

# Start VMs
$ClusterNodes2 = Get-ClusterNode -Cluster S2D-CLU2 | select Name;
Foreach ($ClusterNode2 in $ClusterNodes2)
{
    
    # Add VM to Cluster
    Get-VM -CimSession $ClusterNode2.Name | `
            Foreach-Object { Add-ClusterVirtualMachineRole -VMName $_.Name `
                                                           -Cluster S2D-CLU2 `
                                                           -WarningAction SilentlyContinue};
    Get-VM -CimSession $ClusterNode2.Name | `
           Foreach-Object { Start-VM $_.Name -CimSession $ClusterNode2.Name; };
};
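
# Optional verification (a sketch, assuming the Storage Replica module is available):
# check replication status on the new source cluster
(Get-SRGroup -ComputerName S2D-CLU2).Replicas | `
        Select-Object DataVolume, ReplicationStatus;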

#endregion

#region Failback

# Shutdown VMs
$ClusterNodes = Get-ClusterNode -Cluster S2D-CLU2 | select Name;
foreach ($ClusterNode in $ClusterNodes)
{
    Get-VM -CimSession $ClusterNode.Name | Where-Object {$_.State -eq "Running" } | `
            Foreach-Object { Stop-VM $_.Name -CimSession $ClusterNode.Name; };
};


# Move Replication
Set-SRPartnership -NewSourceComputerName S2D-CLU  `
                  -SourceRGName  ReplicaGroupSource  `
                  -DestinationComputerName S2D-CLU2 `
                  -DestinationRGName ReplicaGroupDest `
                  -Confirm:$false;


# Wait for disk to come online
While ((Get-ClusterSharedVolume -Cluster S2D-CLU | where {$_.Name -like "*Volume1*"}).State -ne "Online")
{
    Sleep -Seconds 1;
};


# Start VMs
$ClusterNodes = Get-ClusterNode -Cluster S2D-CLU | select Name;
Foreach ($ClusterNode in $ClusterNodes)
{
    Get-VM -CimSession $ClusterNode.Name | `
           Foreach-Object { Start-VM $_.Name -CimSession $ClusterNode.Name; };
};
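
# Optional verification (a sketch, assuming the Storage Replica module is available):
# check replication status back on the original source cluster
(Get-SRGroup -ComputerName S2D-CLU).Replicas | `
        Select-Object DataVolume, ReplicationStatus;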
#endregion