This week marked the launch of the much-anticipated VMware VSAN into General Availability. The product has been in beta for some time now, getting a lot of attention from customers, partners, and competitors alike.
Today marks the official launch, coinciding with the availability of vSphere 5.5 Update 1:
If you want to run Horizon View, you will also need to update to 5.3.1:
Horizon Workspace 1.8 is also available here:
VMware has long been a leader in storage features for its virtualization platform, with great tools like SDRS (Storage Distributed Resource Scheduler), SIOC (Storage I/O Control), and the ability to create datastore clusters that use profile-based storage driven by the performance characteristics of the underlying hardware.
Where VMware really shook up the industry last year was with the beta release of VSAN, or Virtual SAN, which suddenly pushed them further into the storage market. Let’s quickly recap what VSAN is for those who don’t already know.
What is VSAN?
VSAN is a Virtual SAN comprised of local storage attached to the hosts, made up of a combination of traditional SATA/FATA/SAS magnetic disks plus SSD or PCI flash for local cache. The nodes participate in a cluster, much like the shared-nothing clustering concept.
It requires between 3 and 32 nodes so that the cluster can withstand a failure. Hosts that meet the hardware requirements are added, and once a VSAN cluster is created it is presented to the hosts as a single shared datastore. You can also have additional vSphere hosts that simply attach to the datastore as shared storage once your VSAN datastore is provisioned.
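To make the aggregation concrete, here is a minimal sketch (my own hypothetical model, not a VMware API) of how local disk groups across hosts roll up into one shared datastore, with SSDs acting as cache only:

```python
# Hypothetical model of VSAN capacity aggregation. Sizes are in GB.
# The SSD in each disk group is a cache tier and contributes nothing
# to datastore capacity; only the magnetic disks count.

class DiskGroup:
    def __init__(self, ssd_gb, magnetic_gb):
        self.ssd_gb = ssd_gb            # the single SSD (cache tier)
        self.magnetic_gb = magnetic_gb  # list of non-SSD disk sizes

    def capacity_gb(self):
        # Only magnetic disks count toward the datastore.
        return sum(self.magnetic_gb)

class VsanHost:
    def __init__(self, disk_groups):
        self.disk_groups = disk_groups

def datastore_capacity_gb(hosts):
    """Raw capacity of the single shared VSAN datastore."""
    return sum(dg.capacity_gb() for h in hosts for dg in h.disk_groups)

# Example: 3 hosts, each with one disk group of 1x200GB SSD + 4x1TB magnetic.
hosts = [VsanHost([DiskGroup(200, [1000] * 4)]) for _ in range(3)]
print(datastore_capacity_gb(hosts))  # 12000 GB raw; the SSDs are excluded
```

The example host layout (200 GB SSD, 4x1 TB magnetic) is purely illustrative.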
Image courtesy of VMware.com: http://www.vmware.com/products/virtual-san/
Sizing your VSAN will depend on your current and future requirements, but luckily it scales nicely. For efficiency, it is recommended that your SSD capacity be at least 10% of the non-SSD capacity in the disk group. When you are sizing your VSAN cluster, remember that the SSD is a read/write cache and will not be counted toward the capacity of your VSAN datastore.
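The 10% cache guideline is easy to sanity-check with a quick helper (my own sketch, not a VMware tool):

```python
def recommended_ssd_gb(magnetic_gb, ratio=0.10):
    """Recommended minimum SSD cache for a disk group:
    at least 10% of the non-SSD capacity in that group."""
    return sum(magnetic_gb) * ratio

# Four 1 TB magnetic disks -> at least ~400 GB of SSD cache in the group.
print(recommended_ssd_gb([1000, 1000, 1000, 1000]))  # 400.0
```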
Cool technology is only cool if you have a use case for it, so it is important to understand where VSAN fits:
- ROBO (Remote Office, Branch Office)
- Tier 2/3 workloads
- VDI (Virtual Desktop Infrastructure)
- BCP/DR targets using vSphere Replication
The scalability of VSAN is great, but some fear its limitations, which is what makes it a better target for Tier 2/3 workloads and potentially BCP/DR, given its native support for vSphere Replication. VDI should also work very well because of the flash optimization and low storage latency.
There are a lot of great reference cases on the way, and with those we will see working examples of how VSAN can be leveraged for different workloads.
At the recent VMware PEX, it was published that VSAN reached nearly 1 million IOPS using a 100% read workload on 4K blocks, and at the March 6th announcement they showcased up to 2 million IOPS on a 32-node cluster! It is great to see performance like this, and although it isn’t the whole picture (your real workload does a lot more than read-only I/O), it is a sign that Tier 1 workloads aren’t necessarily out of scope for a VSAN cluster.
How much does VSAN cost?
This has been one of the most contentious details about VSAN since the product announcement. Regardless of the intangible value any product has, the reality is that pricing is a key factor in adoption.
VSAN pricing is still being firmed up, and this doesn’t include any specials or bundled SKUs that may appear in the near future. There are also different options for licensing the product, which adds flexibility to match your particular use case. This is what I have found so far through various sources online:
- Per-User pricing: VDI deployments can leverage a per-user model which is planned at $50 USD. The named versus concurrent choice will inevitably line up with your existing Horizon licensing.
- Per-CPU pricing: Pricing that I’ve seen shows a per-CPU deployment at $2,495 USD, which includes the vSphere Distributed Switch (vDS), a feature that normally requires Enterprise Plus licensing for your host.
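Using the prices above, a quick back-of-the-envelope comparison (my own sketch; the host counts and user counts are made-up examples) shows how the two models diverge:

```python
def per_cpu_cost(hosts, cpus_per_host, price_per_cpu=2495):
    """Total VSAN cost under the per-CPU model ($2495/CPU)."""
    return hosts * cpus_per_host * price_per_cpu

def per_user_cost(users, price_per_user=50):
    """Total VSAN cost under the per-user VDI model ($50/user)."""
    return users * price_per_user

# Example: 4 dual-socket hosts vs a 150-desktop VDI pod.
print(per_cpu_cost(4, 2))   # 19960
print(per_user_cost(150))   # 7500 -> per-user wins for this small VDI case
```

The crossover point will depend entirely on your desktop density per host, so run the numbers for your own environment.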
The pricing advantage with VSAN also comes from the ability to use industry-standard SSDs and PCI flash cards (which must be on the VMware VSAN HCL) rather than pricier SAN hardware. The safety comes from the ability to withstand disk failures, and you can also raise the number of failures to tolerate for that extra level of comfort if you so desire.
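Tolerating failures isn’t free, though. As a back-of-the-envelope estimate (my own helper, not a VMware tool): with mirroring, tolerating n failures means keeping n+1 copies of each object, so usable capacity is roughly raw capacity divided by n+1:

```python
def usable_capacity_gb(raw_gb, failures_to_tolerate=1):
    """Rough usable capacity after mirroring overhead.
    Tolerating n failures keeps n+1 copies of each object,
    so usable ~= raw / (n + 1). Witness components and
    metadata overhead are ignored in this sketch."""
    copies = failures_to_tolerate + 1
    return raw_gb / copies

# 12 TB raw with the default of 1 failure to tolerate -> ~6 TB usable.
print(usable_capacity_gb(12000, failures_to_tolerate=1))  # 6000.0
```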
Now that we have the general VSAN pricing, we need to understand some basics on what limitations we have when running VMware VSAN.
At the time of GA release (March 12, 2014), these are the stated limitations. Note that they will change as the product evolves so please consult VMware documentation directly for the current details if you are reading this long after the launch.
- Minimum hosts in cluster: 3
- Maximum hosts in cluster: 32
- Maximum number of VM guests per VSAN volume: 100
- Maximum number of VM guests per VSAN cluster: 3200
- Maximum number of HA protected guests: 2048
- Minimum disks per disk group: 1 SSD, 1 non-SSD
- Maximum disks per disk group: 1 SSD, 7 non-SSD
- Maximum SSD disks per disk group: 1
- Maximum non-SSD disks per disk group: 7
- Maximum disk groups per host: 5
- Minimum vSphere version: 5.5 Update 1
- Maximum failures tolerated: 1 (3-node cluster) up to 3 (32-node cluster)
- Maximum VSAN VMDK size: 2TB minus 512 bytes (vSphere 5.5 is capable of up to 64TB VMDK but is not supported yet)
- Number of FT guests supported: 0 (feature not available on VSAN datastores yet)
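The limits above lend themselves to a quick sanity check before you commit to a design. Here is a small validator (my own sketch, not a VMware tool) covering a few of the listed maximums:

```python
# GA limits from the list above, encoded for a quick design check.
VSAN_GA_LIMITS = {
    "min_hosts": 3,
    "max_hosts": 32,
    "max_vms_per_cluster": 3200,
    "max_magnetic_per_disk_group": 7,
    "max_disk_groups_per_host": 5,
}

def validate_cluster(hosts, vms, disk_groups_per_host, magnetic_per_group):
    """Return a list of limit violations (empty list means the design fits)."""
    problems = []
    if hosts < VSAN_GA_LIMITS["min_hosts"]:
        problems.append("need at least 3 hosts")
    if hosts > VSAN_GA_LIMITS["max_hosts"]:
        problems.append("more than 32 hosts")
    if vms > VSAN_GA_LIMITS["max_vms_per_cluster"]:
        problems.append("more than 3200 VMs per cluster")
    if disk_groups_per_host > VSAN_GA_LIMITS["max_disk_groups_per_host"]:
        problems.append("more than 5 disk groups per host")
    if magnetic_per_group > VSAN_GA_LIMITS["max_magnetic_per_disk_group"]:
        problems.append("more than 7 magnetic disks per disk group")
    return problems

# A 2-host design fails the minimum-host check:
print(validate_cluster(hosts=2, vms=100, disk_groups_per_host=1,
                       magnetic_per_group=7))
# ['need at least 3 hosts']
```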
Your VSAN will require a VMkernel port for its traffic, and although 1GbE is supported, a minimum of 10GbE is recommended for your VSAN network. Data must traverse the network between hosts, so account for that in your network topology.
Hardware info is important also, so be sure to consult the Hardware Compatibility List (HCL) for supported hardware: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
Another note: the VSAN configuration is only available in the vSphere Web Client. VSAN is supported on both the Windows vCenter implementation and the vCenter Server Appliance. As mentioned above, you are required to be running vSphere 5.5 or higher, so if that’s not ready for you, it’s time to plan your upgrade 🙂
You will also see that there is no limitation on vSphere edition provided you are running Essentials Plus or higher. While the vSphere Distributed Switch is included with the VSAN license, there is no support at the time of this writing for third-party distributed switches (e.g. Cisco Nexus 1000V, IBM 5000V), although that may change soon.
If you are running VMware SRM (Site Recovery Manager) you are out of luck for now. The GA release is designed to work with vSphere Replication, but SRM is not yet supported (that’s a whole other conversation).
There is much more to dive into as far as design and implementation, which we will cover in a later post, but this should give you the high-level details so you can get started on your proof-of-concept testing.
Get your VSAN while it’s hot!
The cool thing is that beta users will get a 20% discount on VSAN (10 licenses or more); however, you must remove the beta, update to ESXi 5.5 Update 1, and reinstall VSAN, because there is no upgrade path from the beta edition. Please refer to KB2074147 for information on managing the update for existing machines: http://kb.vmware.com/kb/2074147
Get on over to the product page to find out more information: http://www.vmware.com/products/virtual-san/
Before you get started deploying, make sure that you check out the great resource guides which will help lead you to a successful architecture and implementation of this great new product: http://www.vmware.com/products/virtual-san/resources.html
To test drive the product with the VMware Hands On Labs, you can go to HOL-SDC-1308 (Documentation here: http://docs.hol.vmware.com/HOL-2013/HOL-SDC-1308_html_en/)
Click the image above to go to the VMware Hands-on-Labs online and try out the course.