Last week I wrote a detailed post on vSAN. Now that vSAN has been explained and its benefits understood, let’s get started with how to enable it within a lab setting.
Note: This is a quick blog on how to enable vSAN, part of a larger mini-series that will take us through each step. Its primary purpose is to show how easy it is to enable vSAN. With that said, vSAN shouldn’t be enabled within a production environment without proper planning, consideration, and change control. While vSAN is something you can enable with the “flip of a switch”, that doesn’t mean you should: careful planning is required, and performance is something else to keep in mind. If at all possible, engage the help of a vSAN resource who can guide you through the process. This isn’t something to be taken lightly, and caution should be taken when designing your vSAN deployment. Finally, always check your hardware against the VMware Hardware Compatibility Guide; skipping it is a quick way to set yourself up for failure, so reference it religiously!
Ok, enough with the doom and gloom. vSAN can be enabled with a click of the mouse, and to show you how easy it is (within a lab setting, of course), let’s begin.
First things first, let’s go over the recommended requirements for utilizing vSAN.
- Minimum of three ESXi hosts for a standard deployment, or two ESXi hosts plus a witness host for the smallest (ROBO) deployment
- Minimum of 6 GB of RAM per ESXi host
- VMware vCenter Server
- At least one device for the capacity tier (flash or magnetic disk) and one device for the cache tier (must be flash)
- One disk controller per host; pass-through (JBOD) mode highly recommended
- A dedicated network port for vSAN traffic; 10 GbE highly recommended, and while 1 GbE will work, it will suffer in real-world environments (ROBO aside)
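The checklist above can be sketched as a small validation function. This is a minimal, hypothetical Python sketch: the host counts, memory, and tier rules come from the list above, but the `Host` data structure is my own assumption, not a VMware API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Host:
    """Simplified model of an ESXi host for the checklist above."""
    ram_gb: int
    cache_devices: int      # flash devices intended for the cache tier
    capacity_devices: int   # flash or magnetic devices for the capacity tier

def meets_vsan_minimums(hosts: List[Host], robo: bool = False, witness: bool = False) -> bool:
    """Check the recommended minimums: three hosts (or two plus a witness
    for ROBO), 6 GB of RAM per host, and at least one cache device and one
    capacity device on each host."""
    min_hosts = 2 if robo and witness else 3
    if len(hosts) < min_hosts:
        return False
    return all(
        h.ram_gb >= 6 and h.cache_devices >= 1 and h.capacity_devices >= 1
        for h in hosts
    )
```

For example, three hosts with 6 GB of RAM and one device per tier pass the check, while the same two hosts only pass in the ROBO case with a witness.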
Let’s begin. I already have my lab environment set up to meet the above requirements.
First, you will need to ensure your VMkernel adapters have Virtual SAN selected as a service. This allows vSAN communication between the nodes in the cluster, and is needed on either your vDS or on each port group per host.
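Before moving on, it helps to confirm that every host really does have a vSAN-tagged VMkernel adapter; the check amounts to “each host must have at least one vmk carrying the vSAN service”. Here is a minimal sketch of that logic; the dictionary shape and host names are assumptions for illustration, and in practice you would pull this information from vCenter or from each host:

```python
from typing import Dict, List

def hosts_missing_vsan_vmk(vmk_services: Dict[str, Dict[str, List[str]]]) -> List[str]:
    """Given {host: {vmk_name: [enabled services]}}, return the hosts that
    have no VMkernel adapter tagged for vSAN traffic."""
    return [
        host
        for host, vmks in vmk_services.items()
        if not any("vsan" in services for services in vmks.values())
    ]
```

Any host this function returns would fail the network validation step later in the wizard, so it’s worth fixing now.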
Now that we have confirmed each adapter has vSAN configured, right-click your datacenter within vCenter and select “New Cluster”. Name your cluster, then click “OK”. Ensure the option for vSAN isn’t selected.
Second, add at least three ESXi hosts that will take part in the vSAN cluster: right-click on the newly created cluster, choose “Add Host”, and provide the IP, username, and password. Repeat until all three hosts have been added to the cluster.
All three hosts have been successfully added.
Third, click on the newly created cluster and go to the “Configure” tab. Locate “General” under vSAN and click the “Configure” button.
Finally, you are presented with the configuration options for vSAN. Ensure “Disk Claiming” is set to manual and click “Next”.
Since we previously enabled vSAN on each adapter, you should see green check marks. In a production environment you would never carry your management traffic and vSAN traffic on the same adapter; only a lab should have this configuration.
Ensure “Do not claim” is shown for each of the local disks associated with each ESXi host.
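For context on what we will claim later: each vSAN disk group pairs exactly one flash cache device with one or more capacity devices per host. A minimal sketch of that pairing rule follows; the device names and the `(name, is_flash)` tuple format are hypothetical, purely for illustration:

```python
from typing import List, Tuple

def build_disk_group(disks: List[Tuple[str, bool]]) -> Tuple[str, List[str]]:
    """Given one host's local disks as (name, is_flash) tuples, pick one
    flash device for the cache tier and use the remaining disks as the
    capacity tier. Raises ValueError if no valid disk group can be formed."""
    flash = [name for name, is_flash in disks if is_flash]
    if not flash:
        raise ValueError("a disk group requires one flash device for cache")
    cache = flash[0]
    capacity = [name for name, _ in disks if name != cache]
    if not capacity:
        raise ValueError("a disk group requires at least one capacity device")
    return cache, capacity
```

This is exactly why the requirements list called for at least one flash device and one capacity device per host: a host missing either cannot form a disk group.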
Click “Finish” on the “Ready to complete” screen. You have just successfully configured vSAN!
You should see green completed checks next to each host (as they are added).
You should now see vSAN turned on within the configuration section of the cluster.
That’s it! It really is that simple. Next time we will discuss how to add local disks and create a vSAN datastore.