Microsoft Multipath I/O (MPIO) User’s Guide for Windows Server 2012

What's new in MPIO in Windows Server 2012

The following changes to MPIO are available in Windows Server 2012:

PowerShell management and configuration

The MPIO module for Windows PowerShell lets you configure MPIO with PowerShell as an alternative to MPCLAIM.exe. See the Installing and Managing MPIO using PowerShell section of this document.
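For example, the following queries (a sketch; they assume the MPIO feature is already installed so that the MPIO module is available) show a few common starting points:

```powershell
# List the cmdlets provided by the MPIO module
Get-Command -Module MPIO

# View the current MPIO timer settings (path verification, retry counts, and so on)
Get-MPIOSetting

# View the hardware IDs (vendor/product pairs) currently claimed for MPIO
Get-MSDSMSupportedHW
```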

Heterogeneous HBA usage with MPIO

Heterogeneous (that is, different) HBA types can now be used together, but only with non-boot virtual disks. In prior releases of Windows Server, it was mandatory to use HBAs of the same model.

Support for MPIO with multiport-SAS enclosures

The use of MPIO with Data volumes on a multiport-SAS enclosure is now supported.

Understanding MPIO Features and Components

About MPIO

Microsoft Multipath I/O (MPIO) is a Microsoft-provided framework that allows storage providers to develop multipath solutions that contain the hardware-specific information needed to optimize connectivity with their storage arrays. These modules are called device-specific modules (DSMs). The concepts around DSMs are discussed later in this document.

MPIO is protocol-independent and can be used with Fibre Channel, Internet SCSI (iSCSI), and Serial Attached SCSI (SAS) interfaces in Windows Server® 2008 and Windows Server 2012.

Multipath solutions in Windows Server 2012

When running on Windows Server 2012, an MPIO solution can be deployed in the following ways:

  • By using a DSM provided by a storage array manufacturer for Windows Server 2012 in a Fibre Channel, iSCSI, or SAS shared storage configuration.
  • By using the Microsoft DSM, which is a generic DSM provided for Windows Server 2012 in a Fibre Channel, iSCSI, or SAS shared storage configuration.

Note

To work with the Microsoft DSM, storage must be compliant with SCSI Primary Commands-3 (SPC-3).

High availability solutions

Keeping mission-critical data continuously available has become a requirement over a wide range of customer segments from small business to datacenter environments. Enterprise environments that use Windows Server require no downtime for key workloads, including file server, database, messaging, and other line of business applications. This level of availability can be difficult and very costly to achieve, and it requires that redundancy be built in at multiple levels: storage redundancy, backups to separate recovery servers, server clustering, and redundancy of the physical path components.

Application availability through Failover Clustering

Clustering is the use of multiple servers, host bus adapters (HBAs), and storage devices that work together to provide users with high application availability. If a server experiences a hardware failure or is temporarily unavailable, end users are still able to transparently access data or applications on a redundant cluster node. In addition to providing redundancy at the server level, clustering can also be used as a tool to minimize the downtime required for patch management and hardware maintenance. Clustering solutions require software that enables transparent failover between systems. Failover Clustering [formerly known as Microsoft Cluster Server (MSCS)] is one such solution that is included with the Windows Server 2012 Standard and Windows Server 2012 Datacenter operating systems.

High availability through MPIO

MPIO allows Windows® to manage and efficiently use up to 32 paths between storage devices and the Windows host operating system. Although both MPIO and Failover Clustering result in high availability and improved performance, they are not equivalent concepts. While Failover Clustering provides high application availability and tolerance of server failure, MPIO provides fault tolerant connectivity to storage. By employing MPIO and Failover Clustering together as complementary technologies, users are able to mitigate the risk of a system outage at both the hardware and application levels.

Note

When using the Microsoft Internet SCSI (iSCSI) Software Initiator Boot, a maximum of 32 paths to the boot volume is supported.

MPIO provides the logical facility for routing I/O over redundant hardware paths connecting server to storage. These redundant hardware paths are made up of components such as cabling, host bus adapters (HBAs), switches, storage controllers, and possibly even power. MPIO solutions logically manage these redundant connections so that I/O requests can be rerouted if a component along one path fails.

As more and more data is consolidated on storage area networks (SANs), the potential loss of access to storage resources is unacceptable. To mitigate this risk, high availability solutions, such as MPIO, have now become a requirement.

Considerations for using MPIO in Windows Server 2012

Consider the following when using MPIO in Windows Server 2012:

Upgrade and Deployment Requirements

When upgrading Windows on a system that uses MPIO, it is necessary to disconnect all but one data path during the upgrade, because MPIO is not available during Windows Setup.

The recommended steps for an upgrade are as follows:

  1. Disconnect all but one data path to the SAN that you are booting from.
  2. Perform the Windows upgrade.
  3. Install the Multipath I/O feature.
  4. Reconnect the additional paths.
  5. Verify MPIO settings, or reconfigure MPIO failover policies as desired.
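Step 3 above can also be performed from an elevated PowerShell prompt; a sketch:

```powershell
# Install the Multipath I/O feature (a restart may be required)
Install-WindowsFeature -Name Multipath-IO

# Confirm that the feature is now installed
Get-WindowsFeature -Name Multipath-IO
```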

When using the Microsoft DSM, storage that implements an Active/Active storage scheme but does not support ALUA will default to use the Round Robin load-balancing policy setting, although a different policy setting may be chosen later. Additionally, you can pre-configure MPIO so that when it detects a certain hardware ID, it defaults to a specific load-balancing policy setting.
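For example, the global default load-balancing policy setting for the Microsoft DSM can be viewed and changed with the MPIO PowerShell module (a sketch; the default applies to devices claimed after the setting is made):

```powershell
# Show the current global default load-balancing policy for the Microsoft DSM
Get-MSDSMGlobalDefaultLoadBalancePolicy

# Set the global default to Least Queue Depth; other accepted values include
# None, FOO (Fail Over Only), RR (Round Robin), and LB (Least Blocks)
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
```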

For more information about load-balancing policy settings, see Referencing MPCLAIM Examples.

Windows multipathing solutions must use the MPIO framework to be eligible for logo qualification for Windows Server. For additional information about Windows logo requirements, see Windows Quality Online Services (Winqual) (http://go.microsoft.com/fwlink/?LinkId=71551).

This joint solution allows storage partners to design hardware solutions that are integrated with the Windows operating system. Compatibility with both the operating system and other partner provided storage devices is ensured through the Windows Logo program tests to help ensure proper storage device functionality. This ensures a highly available multipath solution by using MPIO, which offers supportability across Windows operating system implementations.

To determine which DSM to use with your storage, refer to information from your hardware storage array manufacturer. Multipath solutions are supported as long as a DSM is implemented in line with logo requirements for MPIO. Most multipath solutions for Windows today use the MPIO architecture and a DSM provided by the storage array manufacturer. You can use the Microsoft DSM provided by Microsoft in Windows Server 2012 if it is also supported by the storage array manufacturer. Refer to your storage array manufacturer for information about which DSM to use with a given storage array, as well as the optimal configuration of it.

Multipath software suites available from storage array manufacturers may provide an additional value-add beyond the implementation of the Microsoft DSM because the software typically provides auto-configuration, heuristics for specific storage arrays, statistical analysis, and integrated management. We recommend using the DSM provided by the hardware storage array manufacturer to achieve optimal performance because the storage array manufacturer can make more advanced path decisions in their DSM that are specific to their array, which may result in quicker path failover times.

Making MPIO-based solutions work

The Windows operating system relies on the Plug and Play (PnP) Manager to dynamically detect and configure hardware (such as adapters or disks); including hardware used for high availability/high performance multipath solutions.

Note

You might be prompted to restart the computer after the MPIO feature is first installed.

Device discovery and enumeration

An MPIO/multipath driver cannot work effectively or efficiently until it discovers, enumerates, and configures into a logical group the different devices that the operating system sees through redundant adapters. This section briefly outlines how MPIO works with a DSM to discover and configure the devices.

Without any multipath driver, the same device seen through different physical paths would appear as totally different devices, thereby leaving room for data corruption. Figure 1 depicts this scenario.

Figure 1 Multipathing software and storage unit distinction

Following is the sequence of steps that the device driver stack walks through in discovering, enumerating, and grouping the physical devices and device paths into a logical set. (This assumes a scenario where a new device is presented to the server.)

  1. A new device arrives.
  2. The PnP manager detects the device's arrival.
  3. The MPIO driver stack is notified of the device's arrival (it takes further action if it is a supported MPIO device).
  4. The MPIO driver stack creates a pseudo device for the physical device.
  5. The MPIO driver walks through all the available DSMs to determine which vendor-specific DSM can claim the device. After a DSM claims a device, it is associated only with the DSM that claimed it.
  6. The MPIO driver, along with the DSM, verifies that the path to the device is connected, active, and ready for I/O.

If a new path for this same device arrives, MPIO then works with the DSM to determine whether this device is the same as any other claimed device. It then groups this physical path for the same device into a logical set for the multipath group that is called a pseudo-Logical Unit Number (pseudo-LUN).
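After MPIO has grouped the physical paths into a pseudo-LUN, you can inspect the result from an elevated command prompt (a sketch; disk numbers vary by system):

```powershell
# List all MPIO-managed disks (pseudo-LUNs) and the DSM that claimed each
mpclaim -s -d

# Show the individual paths behind a specific MPIO disk, for example MPIO disk 0
mpclaim -s -d 0
```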

Unique storage device identifier

For dynamic discovery to work correctly, an identifier must be obtainable for each device regardless of the path from the host to the storage device, and each logical unit must have a unique hardware identifier. The MPIO driver package does not use disk signatures placed in the data area of a disk for identification purposes. Instead, the Microsoft-provided generic DSM generates a unique identifier from data supplied by the storage hardware. MPIO also provides for optionally using a unique hardware identifier assigned by the device manufacturer.

Dynamic load balancing

Load balancing, the redistribution of read/write requests for the purpose of maximizing throughput between server and storage device, is especially important in high workload settings or other settings where consistent service levels are critical. Without MPIO software, a server sending I/O requests down several paths may operate with very heavy workloads on some paths while others are underutilized.

The MPIO software supports the ability to balance I/O workload without administrator intervention. MPIO determines which paths to a device are in an active state and can be used for load balancing. Each vendor's load-balancing policy setting (which may use any of several algorithms, such as Round Robin, the path with the fewest outstanding commands, or a vendor unique algorithm) is set in the DSM. This policy setting determines how the I/O requests are actually routed.

Note

In addition to the support for load balancing provided by MPIO, the hardware used must support the ability to use multiple paths at the same time, rather than just fault tolerance.

Error handling, failover, and recovery

The MPIO driver, in combination with the DSM, supports end-to-end path failover. The process of detecting failed paths and recovering from the failure is automatic, usually fast, and completely transparent to the IT organization. The data ideally remains available at all times.

Not all errors result in failover to a new path. Some errors are temporary and can be recovered by using a recovery routine in the DSM; if recovery is successful, MPIO is notified and path validity is checked to verify that it can be used again to transmit I/O requests.

When a fatal error occurs, the path is invalidated and a new path is selected. The I/O is resubmitted on this new path without requiring the application layer to resubmit the data.

Differences in load-balancing technologies

There are two primary types of load-balancing technologies referred to within Windows. This document discusses only MPIO Load Balancing.

  • MPIO Load Balancing is a type of load balancing supported by MPIO that uses multiple data paths between server and storage to provide greater throughput of data than could be achieved with only one connection.
  • Network Load Balancing (NLB) is a cluster technology (formerly known as WLBS) that provides load balancing of network interfaces to provide greater throughput across a network to the server, and is most typically used with Internet Information Services (IIS).

Differences in failover technologies

When addressing data path failover, such as the failover of host bus adapter (HBA) or iSCSI connections to storage, the following main types of failover are available:

  • MPIO-based fault tolerant failover In this scenario, multiple data paths to the storage are configured, and in the event that one path fails, the HBA or network adapter fails over to the other path and resends any outstanding I/O. For a server that has one or more HBAs or network adapters, MPIO provides the following:
    • Support for redundant switch fabrics or connections from the switch to the storage array
    • Protection against the failure of one of the adapters within the server directly
  • MPIO-based load balancing In this scenario, multiple paths to storage are also defined; however, the DSM is able to balance the data load to maximize throughput. This configuration can also employ fault tolerant behavior so that if one path fails, all data flows along an alternate path.
  • In some hardware configurations you may have the ability to perform dynamic firmware updates on the storage controller, such that a complete outage is not required for firmware updates. This capability is hardware dependent and requires (at a minimum) that more than one storage controller be present on the storage so that data paths can be moved off of a storage controller for upgrades.
  • Failover Clustering This type of configuration offers resource failover at the application level from one cluster server node to another. This type of failover is more invasive than storage path failover because it requires client applications to reconnect after failover, and then resend data from the application layer. This method can be combined with MPIO-based fault tolerant failover and MPIO-based load balancing to further mitigate the risk of exposure to different types of hardware failures.

Different behaviors are available depending on the type of failover technology used, and whether it is combined with a different type of failover or redundancy. Consider the following scenarios:

Scenario 1: Using MPIO without Failover Clustering

This scenario provides for either a fault tolerant connection to data, or a load-balanced connection to storage. Since this layer of fault tolerant operation protects only the connectivity between the server and storage, it does not provide protection against server failure.

Scenario 2: Combining the use of MPIO in fault tolerant mode with Failover Clustering

This configuration provides the following advantages:

  • If a path to the storage fails, MPIO can use an alternate path without requiring client application reconnection.
  • If an individual server experiences a critical event such as hardware failure, the application managed by Failover Clustering is failed over to another cluster node. While this scenario requires client reconnection, the time to restore the service may be much shorter than that required for replacing the failed hardware.

Scenario 3: Combining the use of MPIO in load-balancing mode with Failover Clustering

This scenario provides the same benefits as listed in Scenario 2, plus the following benefit:

During normal operation, multiple data paths may be employed to provide greater aggregate throughput than one path can provide.

About the Windows storage stack and drivers

For the operating system to correctly perform operations that relate to hardware, such as addition or removal of devices or transferring I/O requests from an application to a storage device, the correct device drivers must be associated with the device. All device-related functionality is initiated by the operating system, but under direct control of subroutines contained within each driver. These processes are considerably complicated when there are multiple paths to a device. The MPIO software prevents data corruption by ensuring correct handling of the driver associated with a single device that is visible to the operating system through multiple paths. Data corruption is likely to occur because when an operating system believes two separate paths lead to two separate storage volumes, it does not enforce any serialization or prevent any cache conflicts. Consider what would happen if a new NTFS file system tries to initialize its journal log twice on a single volume.

Storage stack and device drivers

Storage architecture in Windows consists of a series of layered drivers, as shown in Figure 2. (Note that the application and the disk subsystem are not part of the storage layers.) When a device such as a storage disk is first added in, each layer of the hierarchy is responsible for making the disk functional (such as by adding partitions, volumes, and the file system). The stack layers below the broken line are collectively known as the device stack and deal directly with managing storage devices.

Figure 2 Layered drivers in Windows storage architecture

Device drivers

Device drivers manage specific hardware devices, such as disks or tapes, on behalf of the operating system.

Port drivers

Port drivers manage different types of transport, depending on the type of adapter (for example, USB, iSCSI, or Fibre Channel) in use. Historically, one of the most common port drivers in the Windows system was the SCSIport driver. In conjunction with the class driver, the port driver handles Plug and Play (PnP) and power functionality. Port drivers manage the connection between the device and the bus. Windows Server 2003 introduced a new port driver, StorPort, which is better suited to high-performance, high-reliability environments, and is typically more commonly used today than SCSIport.

Miniport drivers

Each storage adapter has an associated device driver, known as a miniport. This driver implements only those routines necessary to interface with the storage adapter's hardware. A miniport partners with a port driver to implement a complete layer in the storage stack, as shown in Figure 2.

Class drivers

Class drivers manage a specific device type. They are responsible for presenting a unified disk interface to the layers above (for example, to control read/write behavior for a disk). The class driver manages the functionality of the device. Class drivers (like port and miniport drivers) are not a part of the MPIO driver package per se; however, the PnP disk class driver, disk.sys, is used as part of the multipathing solution because the class driver controls the disk add/removal process, and I/O requests pass through this driver to the MPIO bus driver. For more information, see the MPIO drivers sections that follow.

The MPIO driver is implemented in the kernel mode of the operating system. It works in combination with the PnP Manager, the disk class driver, the port driver, the miniport driver, and a device-specific module (DSM) to provide full multipath functionality.

Multipath bus drivers (mpio.sys)

Bus drivers are responsible for managing the connection between the device and the host computer. The multipath bus driver provides a "software bus" (also technically termed a "root bus"), the conceptual analog to an actual bus slot into which a device plugs. It acts as the parent bus for the multipath children (disk PDOs). As a root bus, mpio.sys can create new device objects that are not created by new hardware being added into the configuration. The MPIO bus driver also communicates with the rest of the operating system; it manages the PnP connection and power control between the hardware devices and the host computer, and uses WMI classes to allow storage array manufacturers to monitor and manage their storage and associated DSMs. For more information about WMI, see MPIO WMI Classes (http://msdn.microsoft.com/en-us/library/ff562468.aspx).

DSM management

Management and monitoring of the DSM can be done through the Windows Management Instrumentation (WMI) interface. In Windows Server 2012, MPIO can be configured by using the mpclaim.exe tool, and additionally includes the needed WMI code within the MPIO drivers.

Note: The PowerShell module and MPIO UI can only be used for configuring the Microsoft DSM. When using a vendor-provided DSM, please refer to the vendor instructions for configuration steps with MPIO.

MPIO DSM

As explained previously in this document, a storage array manufacturer's device-specific module (DSM) incorporates knowledge of the manufacturer's hardware. A DSM interacts with the MPIO driver. The DSM plays a crucial role in device initialization and I/O request handling, including I/O request error handling. These DSM actions are described further in the following sections.

Device initialization

MPIO allows for devices from different storage vendors to coexist, and be connected to the same Windows Server 2012-based system. This means that a single server running Windows Server can have multiple DSMs installed on it. When a new eligible device is detected via PnP, MPIO attempts to determine which DSM is appropriate to handle the device. MPIO contacts each DSM one at a time. The first DSM to claim ownership of the device is associated with that device, and the remaining DSMs are not given a chance to claim the already claimed device. There is no particular order in which the DSMs are contacted, although the Microsoft DSM is always contacted last. If the DSM does support the device, it then indicates whether the device is a new installation, or is the same device previously installed but is now visible through a new path.

MPIO device discovery

Figure 3 illustrates how devices and path discovery work with MPIO.

Figure 3 Devices and path discovery with MPIO

Request handling

When an application makes an I/O request to a specific device, the DSM that claimed the device makes a determination, based on its internal load-balancing algorithms, as to which path the request should be sent.

Error handling

If the I/O request fails, the DSM is responsible for analyzing the failure to determine whether to retry the I/O, to cause a failover to a new path, or to return the error to the requesting application. In case of a failover, the DSM determines what new path should be used. The actual rebuild of the I/O and resubmission of the I/O is done by MPIO, and is not the responsibility of the DSM. The details of the DSM/MPIO interaction to make all of this happen are beyond the scope of this document, and are provided in the MPIO Driver Development Kit (DDK) available from Microsoft.

Details about the Microsoft DSM in Windows Server 2012

The Microsoft device-specific module (DSM) provided as part of the complete solution in Windows Server 2012 includes support for the following policy settings:

  • Failover Only Policy setting that does not perform load balancing. This policy setting uses a single active path, and the rest of the paths are standby paths. The active path is used for sending all I/O. If the active path fails, one of the standby paths is used. When the path that failed is reactivated or reconnected, the standby path can optionally return to standby if failback is turned on. For more information about how to configure MPIO path automatic failback, see the section later in this document titled, "Configure the MPIO Failback Policy."
  • Round Robin Load-balancing policy setting that allows the DSM to use all available paths for MPIO in a balanced way. This is the default policy that is chosen when the storage controller follows the active-active model and the management application does not specifically choose a load-balancing policy setting.
  • Round Robin with Subset Load-balancing policy setting that allows the application to specify a set of paths to be used in a round robin fashion, and with a set of standby paths. The DSM uses paths from active paths for processing requests as long as at least one of the paths is available. The DSM uses a standby path only when all of the active paths fail. For example, given 4 paths: A, B, C, and D, paths A, B, and C are listed as active paths and D is the standby path. The DSM chooses a path from A, B, and C in round robin fashion as long as at least one of them is available. If all three paths fail, the DSM uses D, the standby path. If paths A, B, or C become available, the DSM stops using path D and switches to the available paths among A, B, and C.
  • Least Queue Depth Load-balancing policy setting that sends I/O down the path with the fewest currently outstanding I/O requests. For example, consider that there is one I/O that is sent to LUN 1 on Path 1, and the other I/O is sent to LUN 2 on Path 1. The cumulative outstanding I/O on Path 1 is 2, and on Path 2, it is 0. Therefore, the next I/O for either LUN will process on Path 2.
  • Weighted Paths Load-balancing policy setting that assigns a weight to each path. The weight indicates the relative priority of a given path. The larger the number, the lower ranked the priority. The DSM chooses the least-weighted path from among the available paths.
  • Least Blocks Load-balancing policy setting that sends I/O down the path with the least number of data blocks currently being processed. For example, consider that there are two I/O(s): one is 10 bytes and the other is 20 bytes. Both are in process on Path 1, and there are no outstanding I/Os on Path 2. The cumulative outstanding amount of I/O on Path 1 is 30 bytes. On Path 2, it is 0. Therefore, the next I/O will process on Path 2.
  • Determining whether to use the Microsoft DSM vs. a Vendor's DSM To determine which DSM to use with your storage, refer to information from your hardware storage array manufacturer. Multipath solutions are supported as long as a DSM is implemented in line with logo requirements for MPIO. Most multipath solutions for Windows today use the MPIO architecture and a DSM provided by the storage array manufacturer. You can use the Microsoft DSM provided by Microsoft in Windows Server 2012 if it is also supported by the storage array manufacturer. Refer to your storage array manufacturer for information about which DSM to use with a given storage array, as well as the optimal configuration of it.

Multipath software suites available from storage array manufacturers may provide an additional value-add beyond the implementation of the Microsoft DSM because the software typically provides auto-configuration, heuristics for specific storage arrays, statistical analysis, and integrated management. We recommend using the DSM provided by the hardware storage array manufacturer to achieve optimal performance because the storage array manufacturer can make more advanced path decisions in their DSM that are specific to their array, which may result in quicker path failover times.
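As an illustration, the Microsoft DSM policy settings above can be applied to an individual MPIO disk with the mpclaim tool (a sketch; confirm the policy numbering with mpclaim /? on your system):

```powershell
# Apply a load-balancing policy to MPIO disk 0. Typical policy numbers:
# 1 = Fail Over Only, 2 = Round Robin, 3 = Round Robin with Subset,
# 4 = Least Queue Depth, 5 = Weighted Paths, 6 = Least Blocks
mpclaim -l -d 0 4

# Verify the policy now in effect for that disk
mpclaim -s -d 0
```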

Installing and Configuring MPIO using the GUI

This section explains how to install and configure Microsoft Multipath I/O (MPIO) on Windows Server 2012.

Install MPIO on Windows Server 2012

MPIO is an optional feature in Windows Server 2012, and is not installed by default. To install MPIO on your server running Windows Server 2012, perform the following steps.

To add MPIO on a server running Windows Server 2012

  1. Open Server Manager. To open Server Manager, click Start, point to Administrative Tools, and then click Server Manager.
  2. In the Server Manager tree, click Features.
  3. In the Features area, click Add Features.
  4. In the Add Features Wizard, on the Select Features page, select the Multipath I/O check box, and then click Next.
  5. On the Confirm Installation Selections page, click Install.
  6. When the installation has completed, on the Installation Results page, click Close. When prompted to restart the computer, click Yes.
  7. After the computer restarts, it finalizes the MPIO installation.
  8. Click Close.

MPIO configuration and DSM installation

When MPIO is installed, the Microsoft device-specific module (DSM) is also installed, as well as an MPIO control panel. The control panel can be used to do the following:

  • Configure MPIO functionality
  • Install additional storage DSMs
  • Create MPIO configuration reports

Open the MPIO control panel

Open the MPIO control panel either by using the Windows Server 2012 Control Panel or by using Administrative Tools.

To open the MPIO control panel by using the Windows Server 2012 Control Panel

  1. On the Windows Server 2012 desktop, click Start, click Control Panel, and then in the Views list, click Large Icons.
  2. Click MPIO.
  3. On the User Account Control page, click Continue.

To open the MPIO control panel by using Administrative Tools

  1. On the Windows Server 2012 desktop, click Start, point to Administrative Tools, and then click MPIO.
  2. On the User Account Control page, click Continue.

The MPIO control panel opens to the Properties dialog box.

Note

To access the MPIO control panel on Server Core installations, open a command prompt and type MPIOCPL.EXE.

MPIO Properties dialog box

The MPIO Properties dialog box has four tabs:

MPIO Devices By default, this tab is selected. This tab displays the hardware IDs of the devices that are managed by MPIO whenever they are present. It is based on a hardware ID (for example, a vendor plus product string) that matches an ID that is maintained by MPIO in the MPIOSupportedDeviceList, which every DSM specifies in its Information File (INF) at the time of installation.

To specify another MPIO device, on the MPIO Devices tab, click Add.

Note

In the Add MPIO Support dialog box, the vendor ID (VID) and product ID (PID) that are needed are provided by the storage provider, and are specific to each type of hardware. You can list the VID and PID for storage that is already connected to the server by using the mpclaim tool at the command prompt. The hardware ID is an 8-character VID plus a 16-character PID. This combination is sometimes referred to as a VID/PID. For more information about the mpclaim tool, see Referencing MPCLAIM Examples.
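For example, the following commands (a sketch; the VID/PID shown is hypothetical) list the hardware IDs of connected storage and add MPIO support for one of them:

```powershell
# List the VID/PID (hardware IDs) of storage currently visible to this server
mpclaim -e

# Claim devices with a specific hardware ID for MPIO and request a restart;
# the VID is padded to 8 characters and the PID to 16
mpclaim -r -i -d "VENDORX PRODUCTY"
```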

Discover Multi-Paths Use this tab to run an algorithm against every device instance that is present on the system to determine whether multiple instances actually represent the same logical unit number (LUN) reached through different paths. For such devices, their hardware IDs are presented to the administrator for use with MPIO (which includes Microsoft DSM support). You can also use this tab to add device IDs for Fibre Channel devices that use the Microsoft DSM.

Note

Devices that are connected by using Microsoft Internet SCSI (iSCSI) are not displayed on the Discover Multi-Paths tab.

DSM Install This tab can be used for installing DSMs that are provided by the storage independent hardware vendor (IHV). Many storage arrays that are SPC-3 compliant will work by using the MPIO Microsoft DSM. Some storage array partners also provide their own DSMs to use with the MPIO architecture.

Note

We recommend using vendor installation software to install the vendor's DSM. If the vendor does not have a DSM setup tool, you can alternatively install the vendor's DSM by using the DSM Install tab on the MPIO control panel.

Configuration Snapshot This tab allows you to save the current Microsoft Multipath I/O (MPIO) configuration to a text file that you can review for troubleshooting or comparison purposes at a later time.

The report includes information on the device-specific module (DSM) that is being used, the number of paths, and the path state.

You can also save this configuration at a command prompt by using the mpclaim command. For information about how to use mpclaim, in an elevated command prompt, type mpclaim /?. For more information, see Appendix A: MPCLAIM.EXE Usage examples.

Claim iSCSI-attached devices for use with MPIO

Note

This process causes the Microsoft DSM to claim all iSCSI-attached devices regardless of their vendor ID and product ID settings. For information about how to control this behavior on an individual VID/PID basis, see Installing and Managing MPIO using PowerShell.

To claim an iSCSI-attached device for use with MPIO

  1. Open the MPIO control panel, and then click the Discover Multi-Paths tab.
  2. Select the Add support for iSCSI devices check box, and then click Add. When prompted to restart the computer, click Yes.
  3. When the computer restarts, the MPIO Devices tab lists the additional hardware ID "MSFT2005iSCSIBusType_0x9." When this hardware ID is listed, all iSCSI bus attached devices will be claimed by the Microsoft DSM.

Configure the load-balancing policy setting for a virtual disk (also known as a LUN)

MPIO LUN load balancing is integrated with Disk Management. To configure MPIO LUN load balancing, open the Disk Management graphical user interface.

Note

Before you can configure the load-balancing policy setting by using Disk Management, the device must first be claimed by MPIO. If you need to preselect a policy setting for disks that are not yet present, see Installing and Managing MPIO using PowerShell.

To configure the load-balancing policy setting for a LUN

  1. Open Disk Management. To open Disk Management, on the Windows desktop, click Start; in the Start Search field, type diskmgmt.msc; and then, in the Programs list, click diskmgmt.
  2. Right-click the disk for which you want to change the policy setting, and then click Properties.
  3. On the MPIO tab, in the Select the MPIO policy list, click the load-balancing policy setting that you want.
  4. If desired, click Details to view additional information about the currently configured DSM.
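The same per-disk policy can also be set from the command line with mpclaim. The following is a sketch; the MPIO disk number and policy value shown are illustrative (list MPIO disk numbers with mpclaim -s -d, and see Appendix A for the list of policy values):

```
REM Set the load-balancing policy of MPIO disk 1 to Least Queue Depth (policy 4)
mpclaim -l -d 1 4
```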

Note

When using a DSM other than a Microsoft DSM, the DSM vendor may use a separate interface to manage these policies.

Note

For information about DSM timer counters, see Configuring MPIO Timers.

Configure the MPIO Failback policy setting

If you use the Failover Only load-balancing policy setting, MPIO failback allows you to configure a preferred I/O path to the storage and, if desired, to fail back to that preferred path automatically.

Consider the following scenario:

  • The computer that is running Windows Server 2012 is configured by using MPIO and has two connections to storage, Path A and Path B.
  • Path A is configured as the active/optimized path, and is set as the preferred path.
  • Path B is configured as a standby path.

If Path A fails, Path B is used. If Path A recovers thereafter, Path A becomes the active path again, and Path B is set to standby again.

To configure the preferred path setting

  1. Open Disk Management. To open Disk Management, on the Windows desktop, click Start; in the Start Search field, type diskmgmt.msc; and then, in the Programs list, click diskmgmt.
  2. Right-click the disk for which you want to change the policy setting, and then click Properties.
  3. On the MPIO tab, double-click the path that you want to designate as a preferred path.

Note

This setting works only with the Failover Only MPIO policy setting.

4. Select the Preferred check box, and then click OK.

Installing and Managing MPIO using PowerShell

Note: The PowerShell module for management of MPIO is available on Windows Server 2012 only.

Use of the MPIO module in PowerShell requires an "elevated" PowerShell window, opened with Administrator privileges.

Query the installation state of MPIO:

To determine if MPIO is currently installed, use the following command:
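For example, the installation state can be checked with the DISM cmdlet that manages optional features (the feature name MultipathIO is the same one used in the script example later in this section):

```powershell
# Query the state of the MPIO feature; the State property reads Enabled or Disabled
Get-WindowsOptionalFeature -Online -FeatureName MultipathIO
```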

Enable or Disable the MPIO Feature:
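For example, the feature can be enabled or disabled with the matching cmdlets (a restart may be required when devices are already connected):

```powershell
# Enable the MPIO feature
Enable-WindowsOptionalFeature -Online -FeatureName MultipathIO

# Disable the MPIO feature
Disable-WindowsOptionalFeature -Online -FeatureName MultipathIO
```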

Listing commands available in the MPIO module:

The commands available in the MPIO module can be listed by using Get-Command, as shown below.
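For example:

```powershell
# List all cmdlets exported by the MPIO module
Get-Command -Module MPIO
```

Typical output includes cmdlets such as Get-MPIOAvailableHW, New-MSDSMSupportedHW, Enable-MSDSMAutomaticClaim, Get-MPIOSetting, and Set-MPIOSetting.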

Full help and example content for the MPIO module is available via the following two methods:

  1. In PowerShell, after importing the MPIO module, updated help can be downloaded from the internet by running the following command: Update-Help
  2. This same content is available online on TechNet at the following location: http://technet.microsoft.com/library/hh826113.aspx

Obtaining additional PowerShell help and examples

For a complete list of available PowerShell cmdlets in the MPIO module, including usage and examples, see MPIO Cmdlets in Windows PowerShell on TechNet: http://technet.microsoft.com/library/hh826113.aspx

You can also obtain help for individual commands by specifying the cmdlet with Get-Help, such as shown below:

Get-Help Get-MPIOAvailableHW

Obtaining and updating help for PowerShell cmdlets

In Windows 8, PowerShell modules that ship with Windows do not include help content "inbox"; instead, this content is provided on the Internet and can be updated via PowerShell from a computer that has Internet access.

Once a PowerShell module has been imported into the current PowerShell session, the help content may be downloaded and updated via the following command:

Update-Help

Note: The -Force parameter must be used if attempting to update more than once per 24-hour period.

The Get-Help cmdlet offers multiple levels of verbosity:

  • No switch specified: only basic help is returned.
  • -Detailed: provides detailed help.
  • -Examples: provides examples of the cmdlet in use.
  • -Full: provides all available help content for the specified cmdlet.

For example, to obtain script examples for the Get-MPIOAvailableHW cmdlet, use the following command:

Get-Help Get-MPIOAvailableHW -Examples

Script Example: Configuring MPIO using PowerShell

This example performs the following tasks:

Tip: If these steps are performed prior to connecting devices of the desired BusType, you can typically avoid the need for a restart.

  • Install the MPIO feature on a new Windows Server 2012 installation.
  • Configure MPIO to automatically claim all iSCSI devices.
  • Set the default load-balancing policy to Round Robin.
  • Set the Windows Disk timeout to 60 seconds.

Here is what this script would look like:

# Enable the MPIO feature
Enable-WindowsOptionalFeature -Online -FeatureName MultipathIO

# Enable automatic claiming of iSCSI devices for MPIO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Set the default load-balance policy of all newly claimed devices to Round Robin
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Set the Windows disk timeout to 60 seconds
Set-MPIOSetting -NewDiskTimeout 60

Scripting the Configuration of MPIO Timer Values

Timers for MPIO can be managed using the Get-MPIOSetting and Set-MPIOSetting PowerShell cmdlets:

Example: Viewing the current MPIO settings.
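For example, the current timer values can be listed with a single cmdlet:

```powershell
# Display the current MPIO timer settings
Get-MPIOSetting
```

Individual values can then be changed with Set-MPIOSetting; for example, Set-MPIOSetting -NewRetryCount 3.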

Configuring MPIO Timers

The following MPIO timer values can be configured to tune the behavior of MPIO to meet operational requirements. In most cases, the default values may be adequate; however, it may be necessary to adjust these settings to obtain optimal performance for your environment.

Consider the following scenarios:

Scenario 1 A two-node Windows Server failover cluster is configured with multiple connections to storage for each node by using MPIO, and employs multiple active paths to maximize throughput. Due to application Service Level Agreement (SLA) requirements, in the event of path failures, short timeout values are required by the customer so that the resources fail over to the other cluster node more quickly. In this case, timer values such as the PDORemovePeriod are set to a low value and then tested to ensure compliance with customer requirements.

Scenario 2 A single server that is not configured as a failover cluster is configured with MPIO and multiple connections to storage to provide both increased throughput and fault tolerance of path failures. In this case, timers such as PDORemovePeriod are increased to allow additional time for path recovery to occur. Additionally, testing is performed to ensure that the chosen timer values strike a balance: allowing ample time for path recovery under realistic production load, while minimizing the time before the disk objects are removed and I/O failures are exposed to upper-level applications to alert support personnel of the issue.

For any customer scenario, determining the best timer values to use depends on a number of different variables, such as any of the following, which could all potentially impact whether the current settings would meet SLA or Operating Level Agreement (OLA) requirements:

  • The number of paths that exist at the time a problem is encountered
  • The number of paths that are failed
  • The number of paths that are already attempting to recover
  • The amount of in-flight I/O
  • The CPU load on the system at the time an issue occurs, and so forth

Note

Settings 1 through 5 in the following table can be set through the user interface. The information provided on these settings is specific to the use of the Microsoft DSM. When using a vendor-provided DSM, refer to vendor documentation for information about the recommended timer values.

Important

Although it is possible to set the following values to a very large number, we recommend that you use caution when doing so, and that you test the values for applicability prior to using them in a production environment.

For example, the value MAXULONG is 0xFFFFFFFF. If this value were applied to a setting such as PDORemovePeriod (where it represents seconds), the value would equate to approximately 49,000 days of delay before an error would be reported.

The settings and their definitions are as follows:

PathVerifyEnabled

Flag that enables path verification by MPIO on all paths every N seconds (where N depends on the value set in PathVerificationPeriod).

Type is Boolean and must be set to either 0 (disable) or 1 (enable). By default, it is disabled.

PathVerificationPeriod

This setting is used to indicate the time period (in seconds) with which MPIO has been requested to perform path verification. This field is only honored if PathVerifyEnabled is TRUE.

This timer is specified in seconds. The default is 30 seconds. The maximum allowed is MAXULONG.

PDORemovePeriod

This setting controls the amount of time (in seconds) that the multipath pseudo-LUN will continue to remain in system memory, even after losing all paths to the device.

When this timer value is exceeded, pending I/O operations will be failed, and the failure is exposed to the application rather than attempting to continue to recover active paths.

This timer is specified in seconds. The default is 20 seconds. The maximum allowed is MAXULONG.

RetryCount

This setting specifies the number of times a failed I/O is retried if the DSM determines that the failing request must be retried. This is invoked when DsmInterpretError() returns Retry = TRUE. The default setting is 3.

RetryInterval

This setting specifies the interval of time (in seconds) after which a failed request is retried (after the DSM has decided so, and assuming that the I/O has been retried fewer times than RetryCount).

This value is specified in seconds. The default is 1 second.

UseCustomPathRecoveryInterval

If this key exists and is set to 1, it allows the use of PathRecoveryInterval.

PathRecoveryInterval

Represents the period after which PathRecovery is attempted. This setting is only used if it is not set to 0 and UseCustomPathRecoveryInterval is set to 1.

These two settings (UseCustomPathRecoveryInterval and PathRecoveryInterval) were introduced to account for the following scenario:

  • A transient error somewhere causes a path to briefly fail and recover.
  • MPIO detects that the path has failed and thus performs a failover.
  • The failed path was the last path for a particular pseudo-LUN, so its PDO Remove Timer started ticking down.
  • The error was brief enough and PnP was busy enough that PnP missed the fact that the path went away and came back. Thus, there are no PnP events generated to indicate that the path is back online.
  • The pseudo-LUN never sees the path come back online and it gets removed after its PDO Remove Timer runs out.

The end result is that the system now has at least one path and one device online, but no pseudo-LUN to represent that device.

MPIO has a path recovery mechanism that can be used to avoid this issue. However, by default, the period at which path recovery is attempted is set to twice the PDORemovePeriod. In the majority of cases, the default is acceptable, but it does not solve the problem in this particular scenario. This is where the settings listed in the previous table come into play. They allow you to configure the timer that determines the period at which path recovery attempts are made. Thus, by setting the PathRecoveryInterval to less than the PDORemovePeriod, the path recovery attempt happens before the pseudo-LUN gets removed, the path is detected as back online, and the pseudo-LUN is saved from removal.

We recommend that you test the use of this value before widespread deployment in production to ensure that path recovery attempts are not happening so frequently that it has a significant impact on regular I/O.

For example, if the PDORemovePeriod is set to 60 seconds, a good starting point for the PathRecoveryInterval may be 30 seconds. This interval causes path recovery to be attempted every 30 seconds.
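As a PowerShell sketch of this configuration (the values are illustrative and should be tested before production use):

```powershell
# Keep the pseudo-LUN in memory for 60 seconds after all paths are lost
Set-MPIOSetting -NewPDORemovePeriod 60

# Enable the custom path recovery interval and attempt recovery every 30 seconds
Set-MPIOSetting -CustomPathRecovery Enabled -NewPathRecoveryInterval 30
```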

Important

Caution is advised when setting the PathRecoveryInterval to small values. Decreasing this value generates larger amounts of path verification traffic; this traffic increases with the number of LUNs available on the host and as the interval value decreases.

Appendix A: MPCLAIM.EXE Usage examples

Note

It is recommended that you use the MPIO module in PowerShell to configure MPIO where applicable.

List the disks that are currently claimed by MPIO

The following command, mpclaim.exe -s -d, returns the following additional information:

  • The MPIO disk number that is used with mpclaim commands
  • The system disk number as it corresponds to the number that is used in Disk Management
  • The current load-balancing policy setting
  • The device-specific module (DSM) that is managing the device

Change the load-balancing policy settings

To change the load-balancing policy setting to Least Blocks for all disks that are claimed by the Microsoft DSM, use the following command: mpclaim.exe -L -M 6

To clear the load-balancing policy settings for all disks claimed by Microsoft DSM and reset to the default, use the following command:

mpclaim.exe -L -M 0

Valid load-balancing policy settings for the -L switch are as follows:

Parameter  Definition
---------  -----------------------
0          Clear the Policy
1          Failover Only
2          Round Robin
3          Round Robin with Subset
4          Least Queue Depth
5          Weighted Paths
6          Least Blocks
7          Vendor Specific

Examples of common MPCLAIM.exe commands used to configure MPIO


Add MPIO support for Fibre Channel devices

mpclaim.exe -r -i -d <VendorID> <ProductID>

For example, to add support for a device with Vendor="Vendor8" and Product="Product16", use the following command:

mpclaim -r -i -d "Vendor8 Product16 "

Note that the vendor string length is 8 characters, the product string length is 16 characters, and both fields are padded with spaces as needed.

Add MPIO support for all iSCSI devices

mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

To claim only specific iSCSI devices, see the previous section, "Add MPIO support for Fibre Channel devices."

Add MPIO support for all devices that are enterprise storage devices

mpclaim.exe -r -i -a ""

The tool analyzes all devices that are seen by the system, determines if there are multiple paths to the device, and if there are, adds MPIO support for them. If any iSCSI disk devices are found, MPIO support is added for them.

Remove MPIO support for Fibre Channel devices

mpclaim.exe -r -u -d <VendorID> <ProductID>

For example, to remove support for a device with Vendor="Vendor8" and Product="Product16", use the following command:

mpclaim -r -u -d "Vendor8 Product16 "

mpclaim -r -u -d "ABCWXYZ " "DEJKLMN "

Note that the vendor string length is 8 characters, and the product string length is 16 characters.

Remove MPIO support for all iSCSI devices

mpclaim.exe -r -u -d "MSFT2005iSCSIBusType_0x9"

To remove only specific iSCSI devices, see the previous section, "Remove MPIO support for Fibre Channel devices."

Remove MPIO support for all devices on the system

mpclaim.exe -r -u -a ""

The tool removes MPIO support for all devices on the system, even if multiple paths do not exist to the array.

To view all detected enterprise storage

mpclaim -e

This command also displays the associated Vendor ID and Product ID required to add or remove support for the given device.

To view all storage that is currently claimed by the Microsoft DSM

mpclaim -r

Capture the MPIO configuration with MPCLAIM.exe

Note

To use mpclaim, you must run cmd.exe with administrator privileges.

mpclaim.exe -v C:\Config.txt

Using this command results in a report saved to the specified file (in this example, C:\Config.txt), such as the following:

MPIO Storage Snapshot on Tuesday, 05 May 2009, at 14:51:45.023
Registered DSMs: 1
================
+--------------------------------|-------------------|----|----|----|---|-----+
|DSM Name                        |      Version      |PRP | RC | RI |PVP| PVE |
|--------------------------------|-------------------|----|----|----|---|-----|
|Microsoft DSM                   |006.0001.07100.0000|0020|0003|0001|030|False|
+--------------------------------|-------------------|----|----|----|---|-----+


Microsoft DSM
=============
MPIO Disk1: 02 Paths, Round Robin, ALUA Not Supported
        SN: 600D310010B00000000011
Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    Path ID          State              SCSI Address      Weight
    --------------------------------------------------------------------------
    0000000077030002 Active/Optimized   003|000|002|000   0
        Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
        Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
    0000000077030001 Active/Optimized   003|000|001|000   0
        Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
        Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
MPIO Disk0: 01 Paths, Round Robin, ALUA Not Supported
        SN: 600EB37614EBCE8000000044
Supported Load-Balancing Policy Settings: FOO RR RRWS LQD WP LB
    Path ID          State              SCSI Address      Weight
    --------------------------------------------------------------------------
    0000000077030000 Active/Optimized   003|000|000|000   0
        Adapter: Microsoft iSCSI Initiator...              (B|D|F: 000|000|000)
        Controller: 46616B65436F6E74726F6C6C6572 (State: Active)
Microsoft DSM-wide default load-balancing policy settings: Round Robin
No target-level default load-balancing policy settings have been set.

Adding a Hardware ID for Use with MPIO and Viewing Hidden Devices

To add a hardware ID to be used with MPIO

  1. Open the MPIO control panel.
  2. On the MPIO Devices tab, click Add.
  3. In the Add MPIO Support dialog box, type the hardware ID for the device that you want to be managed by MPIO, and then click OK.

Note

For a list of all hardware IDs for all currently attached devices, open a command prompt, and then type mpclaim -E. You can copy and paste the ID that you want to use with MPIO in step 3 of this procedure, but do not include the beginning or ending quotation marks around the hardware ID.

After the device is claimed by MPIO, the original device is hidden in Device Manager and is replaced by a pseudo-LUN device.

For example, the single path disk device listed in Device Manager that is named "emBoot sanFly SCSI Disk device" becomes a hidden device, and is replaced with a new device named "emBoot sanFly Multi-Path Disk Device." The exact name depends on several factors, including the type of device and the DSM being used.

To view a hidden device

  • To view the original disk device, in Device Manager, on the Action menu, click Show Hidden Devices.

Determining the Hardware ID to Be Managed by MPIO

To determine the hardware IDs that are already managed by MPIO, open a command prompt, and then type MPCLAIM.EXE -H.

Note

The hardware ID "Vendor 8Product 16" is a placeholder value used only to show the correct vendor ID/product ID format. This ID is not used to control the management of any disk devices.

To list the currently attached Vendor ID/Product ID for all disks on the system that can be claimed, open a command prompt, and then type MPCLAIM -E.

Note

If the value in the MPIO-ed field is YES, the device is already claimed.

Appendix B: Path configuration requirements for MPIO

MPIO with MSDSM (Microsoft Device Specific Module) requires symmetric presentation of disks across logical paths.

If a disk is presented on a given logical path, all disks presented on that path need to also be presented on all logical paths that contain any subset of those disks.

Logical paths correspond to LUN-path pairs which are managed by the DSM. Physical paths correspond to the actual physical hardware over which the logical paths are hosted. There may be more logical paths than physical paths, for example, with multiple logins in iSCSI scenarios, or in switched fibre channel infrastructure scenarios.

Invalid configurations may result in performance issues and spurious disk or path failures during failover scenarios.

For example:

Consider the following configuration:

  • Disks A, B, and C
  • Paths 1, 2, 3, and 4

If disks A and B are both presented on paths 1, 2, and 3, and disk C is presented to only path 4, this is a valid configuration as it meets the above requirement of symmetrical presentation.

Valid

Path 1: A, B

Path 2: A, B

Path 3: A, B

Path 4: C

By contrast, if disk A and B are both presented on path 1, disk B is presented on path 2, disk A is presented on path 3, and disk C is presented on path 4, this is an invalid configuration because paths 2 and 3 are not symmetric with path 1.

Invalid

Path 1: A, B

Path 2: B

Path 3: A

Path 4: C

To confirm symmetric disk presentation, use Device Manager to compare the Path ID assigned to each disk (LUN or Logical Unit Number presented by the SAN) as described in the following steps:

  1. Open Control Panel and launch Device Manager.
  2. On the View menu, click Devices by Connection.
  3. Right-click each disk device under the Microsoft Multi-Path Bus Driver, and then click Properties.
  4. In the Properties window, on the General tab, record the LUN number indicated by Location. Then go to the MPIO tab and record each Path ID for that disk device.

For example, you may find the following:

LUN1 is presented on Path ID 77030000 and 77030001.

LUN2 is presented on Path ID 77030000 and 77030001.

LUN3 is presented on Path ID 77030000 and 77030001.

LUN4 is presented on Path ID 77030002.

Because each LUN that is presented to Path ID 77030000 is also presented to Path ID 77030001, this is a symmetric presentation. Although LUN4 is presented only to Path ID 77030002, it is still symmetric because no other LUNs are presented to that Path ID.

However, if you were to find the following:

LUN1 is presented on Path ID 77030000 and 77030001.

LUN2 is presented on Path ID 77030000.

LUN3 is presented on Path ID 77030001.

LUN4 is presented on Path ID 77030002.

This would be considered an asymmetric presentation, because LUN1 is presented to both Path IDs, but LUN2 is presented only to Path ID 77030000 and LUN3 only to Path ID 77030001.

To correct this invalid configuration, modify LUN2 to add Path ID 77030001 and modify LUN3 to add Path ID 77030000. No change is required to the LUN4 presentation, because no other LUNs are presented to that Path ID.

Appendix C: Enabling Software Tracing for MPIO

This section discusses how to enable software trace logging for troubleshooting MPIO issues.

Note

Software trace logs are binary files that are typically only used by Microsoft Technical Support for troubleshooting an issue, and are not directly viewable.

Create a GUID file to enable tracing

You must first create a GUID file by using Notepad. The file must contain the GUID of the provider that corresponds to the driver that you want to trace. For this example, the file name is MPIOGUID.CTL. The first line is required for tracing MPIO, and the second line is required for tracing the Microsoft DSM, as follows:

{8E9AC05F-13FD-4507-85CD-B47ADC105FF6} 0x0000FFFF 0xF

{DEDADFF5-F99F-4600-B8C9-2D4D9B806B5B} 0x0000FFFF 0xF

The format of the file is as follows:

<GUID of provider to trace> <Trace Flag> <Trace Level>

Note

The Trace Flag and Trace Level values are typically specified by Microsoft Technical Support, depending on the type of troubleshooting being performed.

Start tracing

To start tracing, type the following at the command prompt:

logman.exe create trace <Name> -ets -nb 16 256 -bs 64 -o <LogFile> -pf <GUIDFile>

Logman.exe is located in the %windir%\system32 directory. The name <Name> is assigned to the trace session. The trace level is controlled by the flag value in <GUIDFile>, which contains the trace GUID and the trace flag. The trace messages are written to <LogFile>.

For example, to start a trace session named MPIOTrace and create a log file named MPIOTrace.Log, type the following command:

logman.exe create trace MPIOTrace -ets -nb 16 256 -bs 64 -o MPIOTRACE.log -pf MPIOGUID.CTL

Stop tracing

To stop tracing, type the following at the command prompt:

logman.exe stop <Name> -ets

For example, to stop the trace session with the name MPIOTrace, type the following command:

logman.exe stop MPIOTrace -ets

The log file is a binary file. If you are troubleshooting an MPIO issue with Microsoft Technical Support, send the log file to your support representative, who can then use it to analyze the failure.

Query the trace status

To verify that tracing is running properly, type the following at a command prompt:

logman.exe query -ets

To return extended information about the tracing status, type the following at a command prompt:

logman query MPIOTrace -ets

Appendix D: Glossary of Terms

Acronym  Definition
-------  --------------------------------------
ALUA     Asymmetric Logical Unit Access
DDK      Driver Development Kit
DSM      device-specific module
GUI      graphical user interface
HBA      host bus adapter
I/O      input/output
IHV      independent hardware vendor
IIS      Internet Information Services
iSCSI    Internet SCSI
ISV      independent software vendor
IT       information technology
LUN      Logical Unit Number
MPIO     Multipath I/O
MSCS     Microsoft Cluster Server
NLB      Network Load Balancing
PDO      physical device object
RAID     redundant array of independent disks
SAN      storage area network
SAS      Serial Attached SCSI
SPC-3    SCSI Primary Commands - 3
USB      universal serial bus
WMI      Windows Management Instrumentation