All about:virtualization

Thanks to the great community that has been built around virtualization, cloud, and everything else I've been writing about for many years, I have been given some great opportunities to create content to share with those communities.

In my role as Technology Evangelist for Turbonomic, I get to contribute to a very cool community blog called about:virtualization, which features content related to virtualization, cloud, development, networking, and much more.

Just in case you haven't already found that content, here is the link where you can read articles from me and from many other great contributors across the industry.


http://discopos.se/about-virt

Want to contribute to about:virtualization?

We are always looking for community content to host on the about:virtualization blog. If you are interested in creating content and getting your voice out to the community, please email me at eric.wright {at} vmturbo.com with your contact information and I'll get you on the road to being a published blogger!

 




Why it is always, and never, the year of VDI, but network virtualization is here to stay

You've all heard it: The Year of VDI. It has been the mantra at the start of each calendar year as Citrix and VMware have gained significant adoption in recent years. But why is it both true and false at the same time?

Desktop versus Server Virtualization

Server virtualization has taken hold in an incredible fashion. Hypervisors have become a part of everyday datacenter deployments. Whatever the flavor, it is no longer necessary to justify the purchase of a product like VMware vSphere or Microsoft Hyper-V. And for those who have already embraced open source alternatives, KVM, Xen and the now burgeoning OpenStack ecosystem are joining the ranks as standard step-one products when building and scaling a datacenter.

Server virtualization just made sense. We have 24-hour workload potential because of a 24/7/365 usage scenario, plus backups, failover technologies and BCP needs.

Desktop Virtualization is a good thing

The most commonly quoted reason for desktop virtualization is the cost of managing the environment. In other words, the push to move towards VDI is about policy based management of the environment. Removing or limiting the variables in desktop and application management makes the overall management and usage experience better. No arguments there.

So why hasn't it hit? One powerful reason is the commoditization of desktop hardware. In the 70s it cost thousands of dollars to purchase basic desktop hardware. Throughout the 80s, 90s and 2000s the price plummeted to the point where corporate desktops are now available for $300-$500 and are amortized over 2- or 3-year cycles.

And now the CFO has their say

The impetus to use VDI to save money on desktop hardware went away. We now have thin clients that cost nearly the same as full physical desktops. There is no doubt that this has slowed the uptake of VDI in a strong way. When it comes to putting together our annual expenses, the driver has to be strong to make the shift.

Next up is the classic "Microsoft Tax". While we may reduce the cost somewhat at the hardware layer, we are still bound to the needs of the desktop's consumer, who expects a Microsoft OS and software. There is a reason why we don't even talk about Linux on the desktop anymore: if people are ready for Linux, they will just use it. There are, however, millions of software consumers who require Microsoft tools. That's just a fact.

So as we enter 2014 and all of the analysts and pundits tout the new DaaS (Desktop-as-a-Service) revolution, we still have to be realistic about how much impact it will have on the overall marketplace. I don't doubt that DaaS will continue to gain footing, but nowhere near the level of adoption that server virtualization produced.

A Patchwork Quilt

In my opinion, we have already gone down a parallel timeline on policy-based desktop management. With Microsoft SCCM, LanDesk and a number of other imaging and application-packaging tools already in many organizations, there is less of a need to make the shift towards VDI. There are great use cases for it, to be sure, but it will be a difficult battle to siphon away the physical desktop processes that have served us well up to now.

Patch management and application delivery can go a long way towards providing the policy-based management that we are told is the prime objective of many VDI products. I'm a big proponent of VDI myself, but I am also realistic about how much of the overall market it has cut into, and will cut into.

So, is this the fate of network virtualization?

Network Virtualization is costly, but that’s OK

So now we have an interesting shift in the market again. Network virtualization has gone from a project in the labs of Stanford to a real, market-ready product with many vendors putting their chips on the table.

Not only are ASIC producers like Cisco and Juniper Networks coming forward with solutions, but VMware, with its purchase and integration of Nicira to produce VMware NSX, has created a significant buzz in the industry. Sprinkle in the massive commitment from open source contributors to OpenFlow and Open vSwitch, and there is undoubtedly a real shift coming.

2015 will be the year of Network Virtualization

In 2014 we will see a significant increase in the understanding and adoption of network virtualization tools and technologies. With the upcoming GA release of Cisco ACI and more adoption of open source solutions in the public and private cloud, we will definitely see a growth in the NV adoption.

Image source: http://blogs.vmware.com/networkvirtualization/2013/08/vmware-nsx-network-operations.html


Remember, NV isn’t about reducing physical network hardware. It is about reducing the logical constraints and increasing the policy and security integration at the network layers. Server virtualization has laid the groundwork to create a perfect pairing really.

When does NV become the standard in networking deployment?

This is the real question we need to ask. As all of the analysts pore over the statistics and lay out what the landscape looks like, we as architects and systems administrators have an important task to deal with: Making NV work for us.


In my mind, network virtualization is a powerful, enabling technology. We have already come a long way in a short time in the evolution of networking: from vampire taps to the upcoming 100GbE hardware in a couple of decades is pretty impressive. Now we can fully realize the value of the hardware sitting on the datacenter floor by extending it with the virtualization tools and techniques that gave us exponential gains in productivity and efficiency at the server level.

It’s coming to us one way or another, so I say that we dive in and do something wondrous together.

Who’s in for the ride? Count me in!




Loose coupling – Winning strategy for hardware, software and processes

With all of the SDDC (Software Defined Data Center) and SDN (Software Defined Networking) talk coming to the fore these days, it is good to take a look at exactly why these approaches are getting serious focus, and what particular qualities make them a winning strategy.

I've mentioned the term loosely coupled systems among my peers for quite a while, and it has finally begun to sink in. I still get asked regularly, by people at all levels and in all areas of IT, exactly what it means.

For a lot of people these are well known concepts, but for some this falls into the buzzword category and doesn’t get the focus that it deserves.

What is Loose Coupling?

With computer systems (I use this general term because it covers hardware and software), we have interconnections which create the overall system. Huh? Don't worry, that is a strange sentence to work out the meaning of. What it essentially means is that the parts of the system (aka sub-systems) make up the "system" that we use.

An example is a web site. The web application can have a web server front end, a data layer where the content is stored, and a networking layer which connects the outside world in and connects the web server to the database server. Each of these layers connects to the others with loose coupling: requests are created and completed without creating a persistent tunnel, and different software can be injected at each layer as long as a connector (which many know as a driver) exists for the next layer.
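
To make that concrete, here is a minimal sketch in Python (the class names are all hypothetical) of a loosely coupled web stack: the front end codes against a connector contract, and any data layer that honors the contract can be swapped in without touching the web layer.

```python
from abc import ABC, abstractmethod

class DataStore(ABC):
    """The connector ("driver") contract the web layer codes against."""

    @abstractmethod
    def get_page(self, slug: str) -> str:
        ...

class MySQLStore(DataStore):
    """One possible backend; a real driver would issue SQL queries."""
    def get_page(self, slug: str) -> str:
        return f"<html>MySQL content for {slug}</html>"

class FileStore(DataStore):
    """A stand-in for a flat-file or object-store backend."""
    def get_page(self, slug: str) -> str:
        return f"<html>File content for {slug}</html>"

class WebServer:
    """The front end knows only the contract, not the backend behind it."""
    def __init__(self, store: DataStore) -> None:
        self.store = store

    def handle_request(self, slug: str) -> str:
        return self.store.get_page(slug)

# Either backend can be injected without changing the web layer.
print(WebServer(MySQLStore()).handle_request("home"))
print(WebServer(FileStore()).handle_request("home"))
```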

What is Tight Coupling?

One example of a tightly coupled system that many of us are facing right now is VMware View. If you are running VMware View 4.7 on VMware vSphere 4.x or 5.0 through vCenter 5.0, you are all good. The challenge comes with wanting to move your vCenter version: if you migrate to vCenter 5.1 to update your vSphere to 5.1, you have one major issue. VMware View 4.7 is not supported on the 5.1 platform.

So in the very connected ecosystem of the VMware products, they have created a tightly coupled system. Because of the tight coupling, there is a limitation on the way that the systems can be updated. This is where Microsoft has presented real challenges for many: as OS and applications become more interdependent, the tight coupling increases, which makes for a nightmare when you want to upgrade in parts and not necessarily the whole end-to-end environment.

At the same time, my VMware example could be a situation where the systems are coupled, so to speak, but the lack of API compatibility creates an interoperability issue. The dependencies have increased, and version management becomes a nightmare unless all systems can be brought up to new revisions in tandem.

Another example could be a .NET environment which uses specific .NET 3.5 methods that are not compatible with SQL Server 2012, or rather, SQL Server 2012 cannot accept connections using the 3.5 application code. Extra work will have to be done to connect these environments, and in cases where the gaps between software builds are large enough, it may not be possible.
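
For contrast with the earlier sketch, here is what tight coupling looks like in the same miniature style (again, the class names are hypothetical): the caller hard-wires a specific backend and a version-specific method, so upgrading either side breaks the pair.

```python
class MySQL55Client:
    """A hypothetical driver that exposes a version-specific method."""
    def query_v55(self, sql: str) -> list:
        return []  # pretend this talks to a 5.5-era database

class ReportJob:
    def __init__(self) -> None:
        self.db = MySQL55Client()  # hard-wired: no contract, no injection

    def run(self) -> list:
        # If the driver renames this to query_v56, every caller breaks.
        return self.db.query_v55("SELECT * FROM reports")

print(ReportJob().run())
```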

What is the deal with APIs?

Every vendor that approaches me to discuss their product gets the same question from me: "Do you expose your API for me to interact with?" Why is this an important question? In the past (and still today), many systems are treated as black-box apps which only provide access to the customer through a proprietary interface like a web front end, a GUI or a command line interface (CLI).

An API (Application Programming Interface) provides a programmatic way to interact with the system. This allows us to read, write and manage the content of the system, and with a published API you can extend that source system into any number of your own internal systems. APIs also allow other vendors to jump on board and leverage the methods offered by the source vendor.

An example would be VMware's VAAI (vStorage APIs for Array Integration), which provides other vendors with a way to speak directly to the underpinnings of vSphere storage and get the most out of its baked-in features.

Read more on VAAI here: http://www.vmware.com/products/datacenter-virtualization/vsphere/storage-api.html

REST is best!

REST (an acronym for REpresentational State Transfer) is an architecture with specific constraints that ensure its standardization; by using the HTTP verbs (GET, PUT, POST, DELETE) it provides a common method to address any resource. When we refer to RESTful methods, it is a guarantee that the system behaves using a well-known set of rules. Writing connectors and scripts for RESTful APIs is a saving grace for developers. It doesn't guarantee a future-proof method to address that system, but it is much more likely that the vendor will maintain that standard addressability and that any changes will happen underneath the covers.
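
As a quick illustration of what that buys you, here is a minimal sketch using Python's requests library against a hypothetical RESTful endpoint (api.example.com and its /vms resource are made up for the example): the same four verbs address any resource the vendor exposes.

```python
import requests

BASE = "https://api.example.com/v1"  # hypothetical RESTful endpoint

# GET: read the current state of a resource
resp = requests.get(f"{BASE}/vms/42")
print(resp.status_code, resp.json())

# POST: create a new resource under a collection
requests.post(f"{BASE}/vms", json={"name": "web01", "cpus": 2})

# PUT: update (replace) an existing resource
requests.put(f"{BASE}/vms/42", json={"name": "web01", "cpus": 4})

# DELETE: remove the resource
requests.delete(f"{BASE}/vms/42")
```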

You need to understand these concepts

Quite simply, this is where things are going. I realize that my major work lately has moved away from code and admin processes towards bringing concepts to life with people, process and technology, and preparing environments for the next step. It is quite a nice evolution, but it is also a challenge, because breaking through to that next step is where the real work lies.

I recommend that you conceptually understand cloud topology and, more importantly, get a handle on the methodologies that create working cloud environments. Even if you aren't in a development shop, get your hands on some DevOps reading and put those practices into place wherever you can. Every step you take now is a great step forward for your career and for your technology organization.

Don’t be in the laggards category on the technology adoption curve because it is much more difficult to accelerate once these systems are at a point of maturity.

Hmmm…this doesn’t seem like a loosely coupled system

 




2013 Predictions in Virtualization and Server Technology

Well folks, it's that time of year. Usually people are putting together their best-of-2012 lists right now, but I'm taking a different tack and putting myself out there to predict some changes that will come in 2013.

The usual disclaimer: as with any predictions, these are my own and have no basis in NDA content or inside knowledge, nor do they represent those of my sponsors, employer or vendors.

Mostly, I just wanted to put out my thoughts on what will be hot in 2013, sprinkled a little with hope for what I would love to see come out this year. I've avoided everyone else's prediction posts to ensure that these are formed at my own desk, and as a result I have to apologize if you see some duplication with other bloggers and vendors who may have said the same thing. With all that said, here we go!

2012 in a Paragraph

There were a lot of really exciting changes in 2012 around virtualization technology in server and cloud environments. Some of the significant events came with strong acquisitions by the big players, including Dell, VMware, Cisco, HP and EMC. Clearly the innovation game has been stepped up, not only in creating technologies but in integrating them together. Acquisitions dominated the news, and the push towards cloud technologies was the top focus for software and hardware vendors alike.

Hot Trends in 2013

The forecast is cloudy. Just like every year seems to be “The Year of VDI”, there is going to be an aggressive drive towards the “cloud”. What will become more apparent in 2013 is the real definition of the cloud, and products that can simplify the path to get there.

Converged Platforms

This is quickly becoming a more populated market. Where Cisco UCS was the pioneer in an untouched region of the datacenter, it now has new players in the game, with IBM, HP and Dell recently announcing their own converged infrastructure platforms.

With converged hardware platforms we will see much more of the true concept of the compute node, rather than servers, networking and storage being treated as independent parts of the infrastructure.

The growth in this area will come with better pricing in the mid-range customer space. This is clearly one of the fastest growing sectors of the marketplace as we see more and more small to medium customers latching on to virtualization.

And in the large-scale customer space we may see some new competitors where Vblock has so far reigned alone among large-scale converged infrastructure platforms.

SDDC and SDN

The Software Defined Datacenter (SDDC) and Software Defined Networking (SDN) are clearly getting lots of focus heading into 2013. One word: Nicira. We were all impressed with the great integration of Cisco networking into the vSphere platform with the Cisco Nexus 1000V, so the announcement of VMware's acquisition of Nicira was clearly a shot across the bow of Cisco.

Beware the buzzword marketers of course because there will be a glut of “software defined datacenter” products that hit the shelves riding the wave of popularity. But while the words drip from the tongues of marketing departments, we have to be aggressive in our research to truly measure the place of these products in the class of the SDDC.

1. OpenStack

If you haven't already taken a look at the OpenStack platform, you should. Even if it isn't on your radar for deployment in 2013, it is a really important infrastructure movement that will aggressively take hold in 2013. Multiple vendors are jumping in, and for those who want cloud technology without vendor lock-in, this is definitely one of the hottest pieces of technology around.

2. Stretched Clustering

Although the cloud platforms are clearly the hottest trends, for most they may be just that: trends, which aren't necessarily the direction many businesses want to lean with their technology. VXLAN and stretched clusters will become a much more common infrastructure design for companies that want to leverage metropolitan sites. This is like the dance lesson before we take on the full cloud design. The key is that it moves the networking beyond the physical boundary and makes SDN part of the day-to-day infrastructure.

3. PaaS will catch up to IaaS

With Infrastructure as a Service (IaaS) becoming part of the vocabulary in 2012, the next step is for customers to look towards Platform as a Service (PaaS) options as we head into 2013. The ability to develop applications to be deployed without the encumbrances of being platform aware will be something that becomes part of the normal course of doing business. The PaaS movement is fairly new, and I encourage you to take a look at what is out there and prepare for the almost inevitable integration into your environment. Very cool stuff for sure!

VMworld 2013 Predictions

This will be a really exciting year for VMworld with some great announcements and new product enhancements across the whole spectrum. These are as much hopes as predictions, but this is what I can imagine we will see this year at the VMworld events.

1. vSphere 6.0

There will be a new version of vSphere in the pipeline, which will either be 6.0 or a 5.x point release, but with my prediction of the full Nicira integration it seems that 6.0 will be fitting, as that is a major release. Features that were introduced with 5.1 will also see their next generation in 2013, so you will see some impressive gains with technology like vSphere Data Protection, vSphere Replication and efficiencies in the VDI implementation from the View camp.

2. Horizon Data finally will be released

We've been watching for this release since the announcement of Project Octopus, and with the Horizon platform being steadily pushed out, the much sought-after Horizon Data product will make its debut. For those in financial services and healthcare, this is going to be exciting: private cloud data storage with mobility features has been a long-awaited capability.

3. vCenter Server Appliance will add Microsoft SQL support

The vCenter Server Appliance is a great option for the small to medium business, but with its more limited database support and its limit of 5 managed vSphere hosts, it isn't quite ready for prime time in larger environments. This may be the year we see the addition of Microsoft SQL Server support as a back-end database engine, along with support for larger datacenter implementations. Perhaps we may even see Linked Mode support added 🙂

4. Licensing

This will be a very interesting year for licensing of the VMware suite of products. I am predicting that there will be a drop in pricing for many of the products and more promotions to migrate customers to the vCloud suites.

I can also imagine that there will either be a merger of some of the traditional vSphere editions, or we will see features which were only available in the Enterprise+ edition becoming available in more editions.

The fight for adoption will be won in the licensing arena. Having the greatest feature sets in your product is one thing, but getting more customers to integrate those features is the key to real marketplace dominance.

5. Focus on vFabric

Much like the increased focus on the SDN area, I imagine that VMware will deliver more features, more education and more application reference architectures in their vFabric product space. The vFabric offering is very exciting.

6. Cloud Foundry hits prime time and will get Private Cloud support

The PaaS market space is the next area where vendors will get aggressive. If VMware wants to really throw down the gauntlet to bring application environments on board, growing Cloud Foundry support will be a great way to lead the charge. Because the Cloud Foundry community is a fast-growing group, bringing this product to the fore will be a great addition to the Private/Hybrid Cloud offering.

Certification, Education and Community

Education is (as it should be) getting real focus from corporations, vendors, consultants and independent staffers everywhere, and with good reason. The attention to education and certification is helping to drive people to be on top of their game, and when that happens, everybody wins. Companies are going to see their IT workforce become more involved, and with more competition comes better knowledge, better processes and better service.

Community groups have become increasingly popular, and this will only increase in 2013. I am personally involved with the VMUG organization, and I do as much as I can to contribute to the PowerShell community by sharing information. More of today's blogger community will become prominent (hopefully including me!) and there will be more, and better, information sharing through the technology communities in 2013.

Automation and Orchestration

As Cody Bunch says: "Orchestrate all the things!" Don't fear the script: it will not replace you, but it will actually enable you to do things better and more easily, advance your skills, and add to your organization's standard technology practices. Although scripting and automation have been a staple of mainframe and Unix environments since the 70s, the move from process scripting to full datacenter automation, with the addition of the Service Catalog, will be the real win for you and your organization going forward.
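
As a small taste of the difference between a one-off script and orchestration, here is a minimal sketch in Python (the task names are hypothetical): each step declares what it depends on, and a runner works out the execution order, which is the heart of moving from process scripting to datacenter automation.

```python
# Requires Python 3.9+ for graphlib in the standard library.
from graphlib import TopologicalSorter

# Each hypothetical task lists the tasks that must complete before it.
TASKS = {
    "provision_vm": [],
    "configure_os": ["provision_vm"],
    "register_dns": ["provision_vm"],
    "deploy_app":   ["configure_os"],
    "smoke_test":   ["deploy_app", "register_dns"],
}

def run(task: str) -> None:
    # A real orchestrator would call out to APIs here (vCenter, DNS, etc.).
    print(f"running {task} ...")

# static_order() yields tasks with all of their dependencies satisfied first.
for task in TopologicalSorter(TASKS).static_order():
    run(task)
```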

Thank You for a great 2012

This has been a great year for me and DiscoPosse.com thanks to all of my readers, my great sponsors and the amazing communities that I’ve been involved with. I’ve been very lucky to be able to surround myself with the greats of technology and I hope I’ve done well to add value with the things that I’ve done here.

2013 will bring some great new posts and lots of exciting new things for me and for the site, and I can't wait to watch the ball drop at midnight on New Year's Eve and kick into the next year!

Merry Christmas and Happy New Year!
