You’ve all heard it: the Year of VDI. It has been the mantra at the start of every calendar year since Citrix and VMware gained significant adoption. But why is it both true and false at the same time?
Desktop versus Server Virtualization
Server virtualization has taken hold in an incredible fashion. Hypervisors have become a part of everyday datacenter deployments. Whatever the flavor, it is no longer necessary to justify the purchase of products like VMware vSphere or Microsoft Hyper-V. And for those who already embraced open source alternatives, KVM, Xen and the now burgeoning OpenStack ecosystem are joining the ranks as standard step-one products when building and scaling a datacenter.
Server virtualization just made sense. Servers offer round-the-clock workload potential thanks to a 24/7/365 usage scenario, plus backups, failover technologies and business continuity (BCP) needs.
Desktop Virtualization is a good thing
The most commonly cited reason for desktop virtualization is the cost of managing the environment. In other words, the push towards VDI is about policy-based management of the environment. Removing or limiting the variables in desktop and application management makes the overall management and usage experience better. No arguments there.
So why hasn’t it hit? One powerful reason is the commoditization of desktop hardware. In the 70s it cost thousands of dollars to purchase basic desktop hardware. Throughout the 80s, 90s and 2000s the price plummeted to the point where corporate desktops are now available for $300-$500 and are amortized over two- or three-year cycles.
And now the CFO has their say
The impetus to use VDI to save money on desktop hardware went away. Thin clients now cost nearly as much as full physical desktops, and there is no doubt that this has significantly slowed the uptake of VDI. When it comes to putting together the annual budget, the driver has to be strong to make the shift.
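To put rough numbers to that argument, here is a minimal back-of-the-envelope sketch comparing per-seat costs. Every figure in it is an illustrative assumption, not vendor pricing, and the per-seat VDI overhead in particular varies wildly by deployment.

```python
# Back-of-the-envelope per-seat cost comparison: traditional desktops vs. VDI.
# Every figure below is an illustrative assumption, not vendor pricing.

YEARS = 3  # amortization cycle

def per_seat_per_year(capex, yearly_opex, years=YEARS):
    """Spread the one-time hardware cost over the cycle and add recurring costs."""
    return capex / years + yearly_opex

# Traditional physical desktop: ~$400 box, modest support/imaging overhead.
physical = per_seat_per_year(capex=400, yearly_opex=150)

# VDI seat: ~$300 thin client, plus a per-seat slice of datacenter compute,
# storage, licensing and the VDI broker itself.
vdi = per_seat_per_year(capex=300, yearly_opex=150 + 250)

print(f"Physical desktop: ~${physical:.0f}/seat/year")
print(f"VDI seat:         ~${vdi:.0f}/seat/year")
# With assumptions like these, the hardware savings alone don't justify the
# shift, which is exactly the CFO's point.
```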
Next up is the classic “Microsoft Tax”. While we may reduce the cost somewhat at the hardware layer, we are still bound to provide the Microsoft OS and software that the desktop’s consumers need. There is a reason we don’t even talk about Linux on the desktop anymore: if people are ready for Linux, they will just use it. There are, however, millions of software consumers who require Microsoft tools. That’s just a fact.
So as we enter 2014 and the analysts and pundits tout the new DaaS (Desktop-as-a-Service) revolution, we still have to be realistic about the impact it will have on the overall marketplace. I don’t doubt that it will continue to gain a footing, but nowhere near the level of adoption that server virtualization was able to produce.
A Patchwork Quilt
In my opinion, we have already gone down a parallel timeline on policy-based desktop management. With Microsoft SCCM, LanDesk and a number of other imaging and application packaging tools already in many organizations, there is less of a need to make the shift towards VDI. There are great use cases for it, for sure, but it will be a difficult battle to siphon away the physical desktop processes that have served us well up to now.
Patch management and application delivery can go a long way towards providing the policy-based management that we are told is the prime objective of many VDI products. I’m a big proponent of VDI myself, but I am also realistic about how much of the overall market it has cut into already and will cut into going forward.
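To make “policy-based management” concrete without tying it to any one product, here is a toy sketch of the desired-state pattern those tools implement. The package names, patch level and inventory stub are invented for the example; real tools such as SCCM layer inventory, scheduling and reporting on top of the same idea.

```python
# Toy illustration of policy-based desktop management: declare a desired state,
# then converge every desktop (physical or virtual) toward it. All names and
# the inventory stub are invented for this example.
DESIRED_STATE = {
    "packages": ["office-suite-2013", "endpoint-av-9.2"],
    "patch_level": "2014-01",
}

def installed_packages(host: str) -> set[str]:
    """Stub: a real tool would query the host's software inventory here."""
    return {"office-suite-2013"}

def converge(host: str) -> None:
    """Report what a management agent would need to do to meet the policy."""
    missing = [p for p in DESIRED_STATE["packages"]
               if p not in installed_packages(host)]
    for pkg in missing:
        print(f"{host}: install {pkg}")  # a real agent would deploy the package
    print(f"{host}: ensure patch level {DESIRED_STATE['patch_level']}")

converge("desktop-042")
```

The point is simply that this loop runs just as well against a fleet of physical machines as it does against virtual desktops.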
So, is this the fate of network virtualization?
Network Virtualization is costly, but that’s OK
So now we have an interesting shift in the market again. Network virtualization has gone from a project in the labs of Stanford to a real, market-ready product category, with many vendors putting their chips on the table.
Not only are ASIC producers like Cisco and Juniper Networks coming forward with solutions, but VMware, with its purchase and integration of Nicira to produce VMware NSX, has created significant buzz in the industry. Sprinkle in the massive commitment from the open source community behind OpenFlow and Open vSwitch, and there is undoubtedly a real shift coming.
2015 will be the year of Network Virtualization
In 2014 we will see a significant increase in the understanding and adoption of network virtualization tools and technologies. With the upcoming GA release of Cisco ACI and more adoption of open source solutions in the public and private cloud, we will definitely see growth in NV adoption.
Remember, NV isn’t about reducing physical network hardware. It is about reducing logical constraints and increasing policy and security integration at the network layers. Server virtualization has laid the groundwork, which really makes the two a perfect pairing.
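As a concrete illustration of what “reducing logical constraints” looks like in practice, here is a minimal sketch that stitches a logical Layer 2 segment between two hypervisors with Open vSwitch, riding on whatever physical network already connects them. The bridge name, peer address and VNI are assumptions for the example, not a recommendation for any particular product.

```python
# Minimal sketch: defining a logical L2 segment in software with Open vSwitch.
# Assumes ovs-vsctl is installed and the script runs with root privileges; the
# bridge name, peer address and VNI below are made up for illustration.
import subprocess

def ovs_vsctl(*args):
    """Invoke ovs-vsctl and fail loudly if the command errors."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# Integration bridge that local VM vNICs attach to on this hypervisor.
ovs_vsctl("add-br", "br-int")

# VXLAN tunnel to a peer hypervisor. The logical segment (VNI 5001) rides on
# top of the existing physical network: no switch reconfiguration, and the
# same hardware keeps forwarding packets underneath.
ovs_vsctl("add-port", "br-int", "vxlan0",
          "--", "set", "interface", "vxlan0", "type=vxlan",
          "options:remote_ip=192.0.2.10", "options:key=5001")
```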
When does NV become the standard in networking deployment?
This is the real question we need to ask. As all of the analysts pore over the statistics and lay out what the landscape looks like, we as architects and systems administrators have an important task to deal with: Making NV work for us.
In my mind, network virtualization is a powerful, enabling technology. We have already come a long way in a short time in the evolution of networking; going from vampire taps to the upcoming 100GbE hardware in a couple of decades is pretty impressive. Now we can fully realize the value of the hardware sitting on the datacenter floor by extending it with the same kinds of virtualization tools and techniques that gave us exponential gains in productivity and efficiency at the server level.
It’s coming to us one way or another, so I say that we dive in and do something wondrous together.
Who’s in for the ride? Count me in!
While I can see network virtualization becoming A popular standard (perhaps even The standard) in big environments, I really don’t see it scaling down as far as environments with just a basic vSphere cluster or two. Everything has its limits, whether natural or imposed; the trick is figuring out where they are before wasting time trying to go beyond them.
Another limit to VDI is the issue of putting ALL the eggs in one basket. No matter how good you make the basket, it is still a single point of failure. Having some basic functions that keep working when the mother ship is having issues is a huge draw, especially when you can manage those systems just as well as you can in VDI, at the same or possibly even lower cost.
Great points, Andy! I definitely agree that there is a sweet spot for NV that doesn’t include many SMB customers. It’s definitely an 80/20 situation, though honestly the share that isn’t a good target for it is probably much higher than 20%.
Knowing the tipping point where it becomes worthwhile to make the jump will be a great thing to have. I’m hoping to see many more real deployments at all levels to find out where it’s most appropriate.
Great thought on VDI too. Today many orgs can weather a server outage because staff can keep working standalone on their desktops. It’s a whole other issue if they lose access to their desktops 😉