Becoming a vSAN Specialist: Section 3 – vSAN Configuration


In this section, I will go over the following objective found within the VMware vSAN Specialist Blueprint: Objective 3.1 – Identify physical network requirements

Objective 3.1 – Identify physical network requirements

Let’s start with the network basics.

  1. Dedicated network port for vSAN traffic
  2. 10GbE (dedicated or shared) is highly recommended, and is required for all-flash deployments; <1 ms round-trip latency (a quick latency-check sketch follows this list)
  3. 1GbE dedicated is the minimum for hybrid setups, though real-world environments (ROBO aside) would suffer at 1GbE; <1 ms round-trip latency
  4. A vSAN VMkernel port is required on each ESXi host, even if the host isn't contributing storage
  5. ESXi hosts within a vSAN cluster must all be connected over Layer 2 or Layer 3 upstream networks
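
Before enabling vSAN, it's worth verifying that round-trip latency between hosts actually meets the <1 ms mark. Below is a minimal sketch that pings a set of vSAN VMkernel IPs and flags anything too slow; the IP addresses are placeholders, and it assumes a Linux-style `ping` (on ESXi itself you would use `vmkping` against the vSAN VMkernel interfaces):

```python
import re
import subprocess

# Placeholder vSAN VMkernel IPs -- substitute your own.
VSAN_VMK_IPS = ["192.168.10.11", "192.168.10.12", "192.168.10.13"]
MAX_RTT_MS = 1.0  # vSAN wants <1 ms round trip on the vSAN network

def avg_rtt_ms(ip: str, count: int = 5) -> float:
    """Ping an IP and parse the average round-trip time in ms."""
    out = subprocess.run(
        ["ping", "-c", str(count), ip],
        capture_output=True, text=True, check=True,
    ).stdout
    # Summary line looks like: rtt min/avg/max/mdev = 0.21/0.34/0.51/0.09 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    return float(match.group(1))

for ip in VSAN_VMK_IPS:
    rtt = avg_rtt_ms(ip)
    status = "OK" if rtt < MAX_RTT_MS else "TOO SLOW"
    print(f"{ip}: avg RTT {rtt:.3f} ms -> {status}")
```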

Below is an image taken from Chapter 2 of vSAN Essentials 6.2

VMkernel Interface vSAN

As you can see, the vSAN network uses the VMkernel interface for all intra-vSAN traffic. The majority of the traffic on this interface is storage related, such as reads and writes. Prior to the vSAN 6.6 release, the traffic also included multicast (6.6 moved cluster communication to unicast). The vSAN VMkernel interface also serves another important job: heartbeats. These heartbeats occur between the ESXi hosts within the vSAN cluster to determine host status.
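
If you want to confirm which VMkernel interface on each host is actually tagged for vSAN traffic, here is a sketch using pyVmomi's `VirtualNicManager.QueryNetConfig()`; the vCenter hostname and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    # Ask each host which VMkernel NICs are selected for the "vsan" traffic type
    net_cfg = host.configManager.virtualNicManager.QueryNetConfig("vsan")
    selected = set(net_cfg.selectedVnic or [])
    for vnic in net_cfg.candidateVnic or []:
        if vnic.key in selected:
            print(f"{host.name}: {vnic.device} "
                  f"({vnic.spec.ip.ipAddress}) is tagged for vSAN")

view.Destroy()
Disconnect(si)
```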

Other observations include:

  1. IPv6 is supported
  2. Network I/O Control (NIOC) is recommended to guarantee vSAN a percentage of bandwidth. It is especially useful on shared 10GbE links, where vSAN contends with other traffic such as vMotion (NIOC requires a VDS; the VSS is not compatible); see the shares sketch after this list
  3. vSAN includes a license to utilize the VDS
  4. NIC teaming/failover, including LACP, is supported
  5. A VDS is recommended to ensure uniformity
  6. Design redundancy into your networking components. Since storage is the backbone of any organization, you must plan and test to ensure availability. You don't want one failed component causing a data unavailability event!
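
Since NIOC shares only kick in under contention, it helps to do the arithmetic on what vSAN is actually guaranteed on a shared 10GbE uplink. Here is a quick sketch of the proportional math; the share values below are illustrative, not a recommendation:

```python
# Illustrative NIOC share values on a shared 10GbE uplink -- not a recommendation.
LINK_GBPS = 10
shares = {
    "vSAN": 100,
    "vMotion": 50,
    "Management": 20,
    "Virtual Machine": 30,
}

total = sum(shares.values())
for traffic, s in shares.items():
    # Under full contention, each traffic type gets its proportional slice
    guaranteed = LINK_GBPS * s / total
    print(f"{traffic:16s} {s:>4} shares -> {guaranteed:.1f} Gbps minimum under contention")
```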

Two Node (ROBO) Deployment

If you have a remote office location which needs a vSAN footprint, the two node (ROBO) deployment model is the perfect solution. The setup includes two ESXi hosts and a witness appliance. The failures to tolerate would be 1 (FTT=1), since RAID-1 is required in this scenario. Each VM's components are split, with each ESXi host owning 50%. The witness host, which has access to the vSAN network, makes availability decisions (it acts as the tiebreaker) based on its knowledge of the ESXi hosts and the components within them.
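
To make the tiebreaker role concrete, here is a deliberately simplified model of the vote-based availability check. Real vSAN voting is more nuanced, but the core idea is that an object stays accessible only while more than 50% of its votes are reachable:

```python
# Simplified model of vSAN object quorum in a two node ROBO cluster.
# Real vSAN assigns votes per component and is more nuanced; this just
# illustrates why the witness breaks ties.
votes = {"host-a-replica": 1, "host-b-replica": 1, "witness-component": 1}

def object_accessible(alive: set[str]) -> bool:
    available = sum(v for name, v in votes.items() if name in alive)
    return available > sum(votes.values()) / 2

print(object_accessible({"host-a-replica", "witness-component"}))  # True: host B down, quorum holds
print(object_accessible({"host-a-replica"}))                       # False: host B and witness down
print(object_accessible({"host-a-replica", "host-b-replica"}))     # True: witness down, data intact
```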

*Important*
While the witness host is supported as a nested ESXi solution, it must not be nested within the two node ROBO deployment itself! For obvious reasons, if you lose the host that “hosts” the witness appliance, you will have a data unavailability event 😦

  1. No more than 500 ms round-trip latency between each ROBO location and the witness appliance (which sits on the vSAN network, but not onsite), over at least a 1 Mbps connection
  2. Must have at least 10 Gbps for all-flash (node to node) and 1 Gbps for hybrid (node to node)
  3. 10 Gbps is recommended for best performance (a quick requirements check follows this list)
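
Here is a small sketch that checks measured link numbers against these ROBO requirements; the measured values are placeholders for your own:

```python
# Placeholder measurements -- plug in your own link numbers.
measured = {
    "witness_rtt_ms": 180.0,   # ROBO site <-> witness round trip
    "witness_bw_mbps": 5.0,    # ROBO site <-> witness bandwidth
    "node_bw_gbps": 10.0,      # node-to-node bandwidth
    "all_flash": True,
}

checks = [
    ("Witness RTT <= 500 ms", measured["witness_rtt_ms"] <= 500),
    ("Witness link >= 1 Mbps", measured["witness_bw_mbps"] >= 1),
    ("Node-to-node bandwidth",
     measured["node_bw_gbps"] >= (10 if measured["all_flash"] else 1)),
]

for name, ok in checks:
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
```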

Stretched Cluster

One of the great features of vSAN is the stretched cluster. A stretched cluster allows a VM's components to be deployed across different sites. This is great for disaster recovery, since VMs can be restarted at the standby site in case of disaster. It truly is an amazing technology. Let's look at some of the requirements for this to work.

  1. Must have 10 Gbps between the data sites, with no more than 5 ms round-trip latency
  2. Must have 100 Mbps between each data site and the witness site, with no more than 200 ms round-trip latency (see the witness bandwidth sketch after this list)
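
For sizing the witness link, VMware's stretched cluster guidance uses a rule of thumb of roughly 2 Kbps per vSAN component. The sketch below just illustrates that rule; treat it as an approximation, not a sizing tool:

```python
# Rule-of-thumb witness bandwidth sizing (~2 Kbps per vSAN component),
# per VMware's stretched cluster guidance. Component counts are illustrative.
KBPS_PER_COMPONENT = 2

def witness_bandwidth_mbps(num_components: int) -> float:
    return num_components * KBPS_PER_COMPONENT / 1000

# e.g., 1000 components -> about 2 Mbps to the witness site
for components in (500, 1000, 25000):
    print(f"{components:>6} components -> {witness_bandwidth_mbps(components):.1f} Mbps")
```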

If you were to have a site failure, VMs can be restarted at the alternate site. Think of the possibilities. Additionally, failover can be automated so that in case of a site failure, things hardly miss a beat!
