Using Windows Server 2016 VMs to Test Storage Spaces Direct
Windows Server 2016 introduces Storage Spaces Direct (S2D), which builds highly available storage systems that present virtual shared storage across servers using their local disks. This is a significant step forward in the history of Microsoft's Windows Server software-defined storage (SDS).
It simplifies the deployment and management of SDS systems, and it opens up the use of new classes of disk devices, such as SATA disks, which were previously unavailable to clustered Storage Spaces with shared disks.
This document describes the technology, its functionality, and how it can be deployed on physical hardware.
Note: For reliable performance in production, you need specific hardware. To explore the feature with basic testing and learn its capabilities, you can configure it on virtual machines. The following is assumed:
- You have working experience configuring and managing virtual machines (VMs).
- You have a fundamental knowledge of Windows Server Failover Clustering.
- Windows Server 2012 R2 or Windows Server 2016 with the Hyper-V role is installed and configured on the hosts.
- The Hyper-V servers can be part of a host failover cluster or stand-alone.
- You have enough capacity to host four VMs with the configuration requirements described below.
- The VMs can reside on the same server or be distributed across servers, as long as the network allows traffic to be routed among all of the VMs with good throughput and low latency.
An Overview of Storage Spaces Direct
S2D uses disks that are exclusively connected to one node of a Windows Server 2016 failover cluster and allows Storage Spaces to create pools from those disks.
Virtual disks (spaces) configured on a pool have their redundant data (mirrors or parity) spread across the disks, and those disks reside in different nodes of the cluster.
Because replicas of the data are distributed, the data remains accessible when a node fails or is shut down for maintenance.
You can implement S2D in VMs, with each VM configured with multiple virtual disks connected to the VM's SCSI controller.
Each node of the cluster running inside a VM can connect only to its own disks, but S2D allows all of the disks to be used in storage pools that span the cluster nodes.
S2D uses SMB3 as the transport protocol for sending the redundant data of the mirror or parity spaces that are distributed across the nodes. This is described in the following diagram:
Configuration #1: Single Hyper-V Server (or Client)
One of the simplest configurations is a single machine hosting all of the VMs used for the S2D system. In this example, a Windows Server 2016 Technical Preview 2 (TP2) system runs on a desktop-class machine with 16GB of RAM and a modern 4-core processor.
The VMs are all configured similarly. One virtual switch is connected to the host's network, which clients connect through, and a second virtual switch is created as an Internal network to provide another network path for S2D to use among the VMs.
The configuration is illustrated in the following diagram:
Configuration of Hyper-V Host
- Configure one virtual switch connected to the machine's physical NIC, and another virtual switch configured for internal use only.
For instance: assume two virtual switches. One is configured to permit network traffic out to the world; label it "Public". The other is configured to permit network traffic ONLY among VMs on the same host; label it "InternalOnly".
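As a sketch, the two switches could be created with the Hyper-V PowerShell module along these lines (the physical adapter name "Ethernet" is a placeholder; use Get-NetAdapter to find the name on your host):

```powershell
# External switch bound to the host's physical NIC ("Ethernet" is a placeholder)
New-VMSwitch -Name "Public" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Internal-only switch for traffic among VMs on this host
New-VMSwitch -Name "InternalOnly" -SwitchType Internal
```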
Configuration of VM
Create four or more virtual machines:
- Memory: If you are using Dynamic Memory, the default of 1024MB Startup RAM is sufficient. If you are using fixed memory, configure 4GB or more.
- Network: Configure two network adapters. The first is connected to the virtual switch with the external connection; the second is connected to the virtual switch configured for internal use only.
It is strongly recommended to have more than one network, each connected to a separate virtual switch, so that if one stops carrying network traffic, the other(s) can keep the cluster and the Storage Spaces Direct system running across the nodes.
- Virtual Disks: Each VM requires one virtual disk used as the boot/system disk, and two or more virtual disks used by Storage Spaces Direct.
- Disks used for Storage Spaces Direct must be connected to the VM's virtual SCSI controller.
- As with any other systems, the boot/system disks must have unique SIDs; that is, they must be installed from ISO or another installation method, and if a duplicated VHDX is used, it must have been generalized before the copy was made.
- VHDX type and size: You need at least eight VHDX files (four VMs with two data VHDXs each). The data disks can be either dynamically expanding or fixed size. If fixed size is used, make them 8GB or larger, and check that the combined size of the VHDX files does not exceed the storage available on your system.
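The VM requirements above could be scripted roughly as follows. The VM names, paths, and sizes are illustrative assumptions, not values from this guide:

```powershell
# Sketch: create four VMs, each with a boot VHDX plus two data VHDXs on the SCSI controller.
# Names ("S2D-NodeN"), paths, and sizes are examples only; adjust to your environment.
1..4 | ForEach-Object {
    $name = "S2D-Node$_"
    New-VM -Name $name -MemoryStartupBytes 4GB -Generation 2 `
        -NewVHDPath "D:\VMs\$name\Boot.vhdx" -NewVHDSizeBytes 60GB `
        -SwitchName "Public"
    # Second adapter on the internal-only switch
    Add-VMNetworkAdapter -VMName $name -SwitchName "InternalOnly"
    # Two dynamically expanding data disks for S2D
    foreach ($i in 1, 2) {
        $vhd = "D:\VMs\$name\Data$i.vhdx"
        New-VHD -Path $vhd -SizeBytes 8GB -Dynamic
        Add-VMHardDiskDrive -VMName $name -ControllerType SCSI -Path $vhd
    }
}
```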
For instance: the following Settings dialog is for a VM configured as part of an S2D system on one of the Hyper-V hosts.
It boots from the Windows Server TP2 VHD downloaded from Microsoft's external download site, connected to IDE Controller 0 (this has to be a Gen1 VM, because the downloaded TP2 file is a VHD, not a VHDX).
Two VHDX files were created for S2D to use, and they are connected to the SCSI Controller. Note that the VM is connected to the Public and InternalOnly virtual switches.
Note: Do not enable the virtual machine's Processor Compatibility setting. This setting disables certain processor capabilities that S2D requires inside the VM.
The option is unchecked by default and should remain unchecked, as the following diagram depicts:
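If you prefer to confirm the setting from PowerShell rather than the Settings dialog, something like the following should work (the "S2D-Node" VM names are an assumption for illustration):

```powershell
# Check whether Processor Compatibility is enabled on the S2D VMs
Get-VM S2D-Node* | Get-VMProcessor |
    Select-Object VMName, CompatibilityForMigrationEnabled

# Disable it if needed (the VM must be stopped first)
Set-VMProcessor -VMName "S2D-Node1" -CompatibilityForMigrationEnabled $false
```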
Configuration of Guest Cluster
Once the VMs are configured, creating, launching, and managing the S2D system inside the VMs follows almost the same procedure as on physical hardware:
- Start the VMs.
- Configure the Storage Spaces Direct system by following the "Installation and Configuration" section of this reference: Storage Spaces Direct Experience and Installation Guide
Because this deployment runs in VMs and uses only VHDX files as its storage, there are no SSDs or other faster media to support tiers. Therefore, skip the steps that enable or configure tiers.
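A minimal sketch of forming the guest cluster from inside the VMs, assuming nodes named S2D-Node1 through S2D-Node4 (see the referenced guide for the full procedure):

```powershell
# Install the Failover Clustering feature on every guest node
Invoke-Command -ComputerName S2D-Node1,S2D-Node2,S2D-Node3,S2D-Node4 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}

# Validate the nodes, then create the cluster without assigning storage
Test-Cluster -Node S2D-Node1,S2D-Node2,S2D-Node3,S2D-Node4
New-Cluster -Name S2D-Guest -Node S2D-Node1,S2D-Node2,S2D-Node3,S2D-Node4 -NoStorage

# Enable Storage Spaces Direct (skip the tier-related steps in VMs)
Enable-ClusterStorageSpacesDirect
```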
Configuration #2: Two or more Hyper-V Servers
You may not have a single machine with enough resources to host all four VMs, you may already have infrastructure with a Hyper-V host cluster to deploy on, or you may simply have more than one Hyper-V server that you want to spread the VMs across. The diagram below shows the configuration spread across two nodes:
This configuration is similar to the single-host configuration. The only differences are listed below:
Configuration of Hyper-V Host
- Virtual Switches: Each host should have a minimum of two virtual switches for the VMs to use.
They need to be connected externally through different NICs on each system. One can be on a network that is routed to the outside world for client access, and the other can be on a network that is not externally routed.
Alternatively, both can be on externally routed networks. You can choose to use a single network, but then all client traffic and S2D traffic will share the same bandwidth, and there is no redundancy: if that single network goes down, the S2D VMs lose connectivity to each other.
However, since this is for testing and verifying S2D, you don't need the resiliency to network loss that we strongly recommend for production deployments.
Example: on this system, assume an internal 10/100 Intel NIC and a dual-port Pro/1000 1Gb card. All three NICs get virtual switches. Label one "Public" and connect it to the 10/100 NIC, since the connection to the rest of the world is through a 100Mb infrastructure.
The 1Gb NICs are connected to 1Gb desktop switches (two different switches), which gives the hosts two network paths between each other for S2D to use.
As noted, three networks are not a requirement, but since they are available on our hosts, we use them all.
Configuration of VM
Example: below is a snapshot of a VM configuration in the two-host setup. Note the following:
- Memory: This VM is configured with a fixed 4GB of RAM instead of Dynamic Memory. That is a choice you can make if you have enough memory on the hosts to dedicate to the VMs.
- Boot Disk: The boot disk is a VHDX, so a Gen2 VM can be used.
- Data Disks: Four data disks are configured per VM. The minimum is two; here we use four. All VHDXs are attached to the SCSI Controller.
- Network Adapters: There are three adapters, each connected to one of the three virtual switches on the host, to make use of all the network bandwidth the hosts can provide.
Setting up Storage Spaces Direct on Virtual Machines:
Technical Preview 5 includes improvements to "Enable-ClusterStorageSpacesDirect" that automatically configure the storage pool and storage tiers. It uses a combination of bus type and media type to determine which devices are used for caching and for the automatic configuration of the storage pool and storage tiers.
Virtual machines do not report a media type, which the automatic configuration relies on, as physical machines do. In TP5, if you use "Enable-ClusterStorageSpacesDirect", you will see the following error message; there is a workaround:
Enable-ClusterStorageSpacesDirect : Disk eligibility failed
At line:1 char:1
+ Enable-ClusterStorageSpacesDirect
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (MSCluster_StorageSpacesDirect:root/MSCLUSTER/…ageSpacesDirect) [Enable-ClusterStorageSpacesDirect], CimException
    + FullyQualifiedErrorId : HRESULT 0x80070032,Enable-ClusterStorageSpacesDirect
You can resolve this by turning off automatic configuration and skipping eligibility checks when enabling S2D, then creating the storage pool and storage tiers manually afterward.
Below is an example sequence of steps:
#Create cluster and enable S2D
New-Cluster -Name CJ-CLU -Node node1,node2,node3 -NoStorage
Enable-ClusterS2D -CacheMode Disabled -AutoConfig:0 -SkipEligibilityChecks
#Create storage pool and set media type to HDD
New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName S2D -ProvisioningTypeDefault Fixed -PhysicalDisk (Get-PhysicalDisk | ? CanPool -eq $true)
Get-StorageSubsystem *cluster* | Get-PhysicalDisk | Where MediaType -eq "UnSpecified" | Set-PhysicalDisk -MediaType HDD
#Create storage tiers
$pool = Get-StoragePool S2D
New-StorageTier -StoragePoolUniqueID ($pool).UniqueID -FriendlyName Performance -MediaType HDD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolUniqueID ($pool).UniqueID -FriendlyName Capacity -MediaType HDD -ResiliencySettingName Parity
#Create a volume
New-Volume -StoragePool $pool -FriendlyName Mirror -FileSystem CSVFS_REFS -StorageTiersFriendlyNames Performance, Capacity -StorageTierSizes 2GB, 10GB
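To verify that the manual configuration took effect, you can inspect the pool, the tiers, and the new virtual disk; the names below match the example above:

```powershell
# Disks should now all report MediaType HDD
Get-StoragePool S2D | Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType, HealthStatus

# The two tiers and the tiered volume created above
Get-StorageTier | Select-Object FriendlyName, MediaType, ResiliencySettingName
Get-VirtualDisk -FriendlyName Mirror |
    Select-Object FriendlyName, ResiliencySettingName, Size
```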
How is this different from what I do today with Shared VHDX?
Shared VHDX is a valid and strongly recommended solution for providing shared storage to a guest cluster (a cluster running inside VMs). It permits a single VHDX to be accessed by multiple VMs simultaneously in order to provide clustered shared storage.
If a node (VM) fails, the other nodes still have access to the VHDX, and the clustered roles using that storage in the VMs continue accessing their data without interruption.
S2D gives clustered roles access to clustered storage spaces inside the VMs without provisioning a shared VHDX on the host. With S2D, you provide each VM with a boot/system disk plus two or more extra VHDX files, create a cluster inside the VMs, configure S2D, and use resilient clustered Storage Spaces for the clustered roles running inside the VMs.
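For comparison, a Shared VHDX is attached on the host with persistent reservations enabled, along these lines (the VM name and path here are hypothetical):

```powershell
# Attach an existing VHDX to a guest node as a shared disk (the Shared VHDX approach)
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -SupportPersistentReservations
```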