Big Updates with Turbonomic 6.4 – vSAN/HCI and Horizon VDI

The Turbonomic team has been working hard on some excellent features in the most recent Turbonomic 6.4 release. As a long-time VMware vExpert and advocate for the virtualization community, I'm especially excited about two very slick additions for VMware vSAN and VMware Horizon VDI, which I've been testing out, and I'm loving the results!

We have a lot to share at VMworld, which I'll be attending. If you aren't able to join me in San Francisco, I have a great team presenting in Barcelona, and you can always check out the Turbonomic Resources page for videos, blogs, and other updates.

VMware vSAN and VMware HCI

There is little doubt that hyperconverged infrastructure is becoming much more widely used. vSAN introduces some great new deployment patterns, and it changes how you need to plan, build, and operate in order to get the best out of HCI and the vSAN distributed storage platform. I'm writing this as I wait to board a flight to VMworld 2019, so there will be lots to talk about there as we share the new Turbonomic 6.4 release and hear the news and updates from VMware.

Real-time optimization for applications is now enhanced by fully feature-aware analytics from VMware vSAN, including compression, deduplication, and the redundancy/resiliency settings. This means that when Turbonomic makes decisions about where to place and how to scale VMs, containers, and applications, it has an additional understanding of the actual capabilities and capacity of vSAN helping to drive the decision.

This also means that planning for growth includes performance and capacity that is feature-aware. When you run plans to model a host replacement, or scenarios for how you may scale or migrate workloads, the infrastructure and application scaling decisions are based on the available performance and capacity, plus a full understanding of the raw storage and host configurations needed to deliver it.

Here is a quick highlight of the vSAN/HCI features in Turbonomic 6.4 (yes…that’s my crazy voice in there)

VMware Horizon VDI

Is it the year of VDI finally?! Based on how I’ve been seeing things in some super exciting customer deployments, yes it certainly is. One of the really cool examples is a company with their entire workforce on VDI…and it’s not a small company. The old school issue of managing “boot storms” is at a whole different level when you have worldwide employees across every time zone and all running on centralized Horizon infrastructure distributed across a few data centers for resiliency and latency reduction.

The use-case Turbonomic is solving: as users log in and work, their actual resource consumption is continuously analyzed for real-time, maximum, average, and historical patterns in order to decide how each VDI instance should be scaled and where it should be placed on the underlying infrastructure hosting the Horizon desktop pools.
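To make the idea concrete, here's a toy sketch of demand-driven right-sizing. To be clear, this is nothing like Turbonomic's actual analytics engine; the sizing tiers, headroom factor, and sample data are all invented for illustration.

```python
from statistics import mean

# Illustrative only: a toy sketch of sizing a VDI desktop from observed
# demand. The tiers and headroom factor below are invented for the example.
SIZES_GB = [2, 4, 8, 16]  # hypothetical desktop memory tiers

def recommend_size(mem_samples_gb, headroom=1.2):
    """Pick the smallest tier that covers peak observed demand plus headroom."""
    target = max(mem_samples_gb) * headroom
    for size in SIZES_GB:
        if size >= target:
            return size
    return SIZES_GB[-1]

# A day of hourly memory readings for one desktop (invented numbers).
samples = [1.1, 1.4, 2.9, 3.2, 2.7, 1.8, 1.2]
print(recommend_size(samples))   # smallest tier covering 3.2 GB * 1.2 -> 4
print(round(mean(samples), 2))   # average demand, for reporting -> 2.04
```

A real engine would weigh placement against host and storage capacity as well, but even this toy version shows why peak, average, and historical patterns all matter: sizing to the average alone would leave this desktop starved at its peak.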

Turbonomic will tell you where to place the user and how to size their desktop, can automate the changes in a service window, and even tracks the user logout trigger so that Horizon admins can let Turbonomic drive their Horizon user entitlements for better performance and efficiency. That also extends into scaling and growth planning, which is done using the same analytics engine. Super cool!

Here’s a quick highlight reel of the feature:

Much more to come!

Here is a quick end-to-end list of the significant updates in the release, which also includes a ton of really great updates for Microsoft Azure, the introduction of what are now rebranded as right-time actions, a big set of updates for Kubernetes, and much more! Congratulations to the Turbonomic engineering team on a great release, and let me know if you want any more details, as I'm happy to jump in and help folks get to know more about the platform!




VMworld 2019 – vCoffee Community Bean Exchange

Got beans?  Want beans?  Coming to VMworld 2019 in San Francisco?  Welcome to the #vCoffee Community Bean Exchange group!

How does vCoffee Community Bean Exchange Work?

Find a local roaster or some of your very favorite beans. Package up one or two one-pound bags of whole-bean coffee and bring them with you to VMworld in San Francisco. Everyone who fills out the form below (that's the important part) will be a part of the exchange. Our crew (aka DiscoPosse and anyone willing to help) will mix and match the caffeinated goodness and prepare pickup packages for folks to take back to their home coffee brewing stations.

Can’t see the form? Just click here: https://discopos.se/VMworldCommunityCoffeeExchange2019


Big thanks to Cody Bunch and a bunch of folks who formalized this in 2017.  During the 2018 event we had a total of 44 pounds of coffee exchanged!

Support the Tech Community – Get a great mug!

This year also features the first time we are providing a chance to pick up some devilishly good swag thanks to the team at Diabolical Coffee! Drop on over to the Custom Ink page and order your limited-run Diabolical Coffee tumbler, which is being offered until July 23rd in order to make sure you get yours in time for VMworld. Mugs ship directly to your home, which is even easier!

UPDATE JULY 29:  We made the print run and hopefully all of our supporters will receive their mugs before VMworld.  I will have some with me at the event for anyone who missed getting them in the initial outreach.  Thank you to this amazing community for all of your support!

Where to Bring your Beans

Bean drop off and pick up will be at the Turbonomic booth in the expo hall.  Beans must be dropped off by midday Tuesday and pickup can begin on Wednesday.  Join in the fun and I can’t wait to see everyone in person at the event!  Logistics email will be sent out prior to and during VMworld which is why it’s super important that you fill out the form to be included in the communications.

Can’t see the form? Just click here: https://discopos.se/VMworldCommunityCoffeeExchange2019




Multi-Cloud: You keep using that word…

It isn't surprising in 2019 how many times I bump into an environment or organization where the word multi-cloud comes up. Technology presents us with lots of architectural choices that often get very buzzword-centric. Multi-cloud leads the pack in popularity and buzzwordiness. Multi-cloud also continues to be strangely ill-defined, even at this point in the industry's evolution of cloud adoption.

Multi-Cloud 2012

In 2012 it was all about whether IT teams would look at the newly available public cloud alternatives to solve the woes of bursting for resource needs. That was the most often pitched use-case that we used to define what a "hybrid cloud" was. Vendors and architects showed exciting diagrams of on-premises applications that would suddenly burst out to the cloud to scale when demand increased. Sounded good at the time.

The issues we would face became very obvious, very quickly.

  • Where is your data for that bursting application?
  • How do you define and deploy networks that securely span the hybrid platform?
  • What applications do you have today that can leverage this architectural pattern?

Multi-cloud was right on the heels of the hybrid cloud story because the “big 3” players (AWS, Azure, Google Cloud) were all pushing to become the target for your workloads.  This meant that you could now run your bursting application across more than one cloud.

This new multi-cloud idea was fast becoming the classic "solution looking for a problem". It was technically interesting (read: challenging), and we began hearing stories of Netflix and others who were taking advantage of the burstable options in and across clouds.

The questions we needed to ask began to solidify:

  • Is cloud lock-in a concern?  In other words, does being beholden to a single cloud provider put you at risk both technically and in your negotiation position?
  • Are your engineering teams capable and eager to own the tooling to build for more than one cloud platform?
  • Does the risk versus reward and ROI work out for a multi-cloud application scenario?

If you run single-purposed applications that are scale-out capable, the questions become easier to answer.  Multi-cloud looked like a neat idea for most organizations, but a pipe dream at the same time.  Most IT orgs weren’t even widely deploying applications that could cross their own private data centers in an active-active scenario, let alone across multiple clouds.

Do a Google image search for "multi-cloud 2012" and you'll see a treasure trove of presentations on the concepts and not a lot of live use-cases in production.

Multi-Cloud 2019

We've come a long way. More technical solutions arrived in the ensuing years to open the door to the real potential of a legitimate multi-cloud deployment. Containerization and container schedulers (read: Kubernetes) now make the underlayers virtually invisible. So, did that application that runs on one cloud and bursts to a second cloud turn out to be the right use-case? Nope! Time has proven that use-case to be as long-lasting as the Palm Pilot. A good idea that didn't execute. But that's fine.

Orchestration and infrastructure-as-code with platforms like Puppet, Chef, RackN, Terraform, and others truly open up the possibilities of architecting and deploying for any underlying platform, and your developers can build their applications on higher abstractions that can finally span these clouds.

What Multi-Cloud Is and Is Not

It still does not really make sense to span your applications or burst your applications across cloud platforms in most cases. Despite the hype and the technical capabilities, these use-cases haven't proven out:

  • Single application that spans across more than one cloud provider
  • Scale-out apps with data

Why didn’t these play out like the marketing messages of yesteryear would have indicated they could?  Pretty simply:

  • Data gravity for the apps – front-end servers kept so far from the back end create latency…so what if you distribute the data?
  • Data consistency in a globally-distributed, scale out database requires low-latency between nodes or apps designed for eventual consistency
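The latency argument behind those two bullets can be sketched back-of-envelope style. All the RTT figures below are invented for illustration: a write that requires a quorum of replica acknowledgments can commit no faster than the quorum-th fastest replica responds.

```python
# Back-of-envelope sketch of why cross-cloud data consistency hurts: a
# write needing a quorum of acks commits only when the quorum-th fastest
# replica has answered. RTT numbers are invented for illustration.
def quorum_commit_ms(replica_rtts_ms, quorum):
    """Commit latency = RTT of the quorum-th fastest acknowledging replica."""
    return sorted(replica_rtts_ms)[quorum - 1]

same_cloud  = [1, 2, 2]    # three replicas within one cloud region
cross_cloud = [1, 2, 70]   # third replica in another provider's region

print(quorum_commit_ms(same_cloud, quorum=2))   # 2 ms
print(quorum_commit_ms(cross_cloud, quorum=2))  # 2 ms: majority stays local
print(quorum_commit_ms(cross_cloud, quorum=3))  # 70 ms for full consistency
```

Notice the trade-off: a majority quorum can stay fast by acknowledging locally, but then the remote replica lags behind (eventual consistency); demanding every replica's ack drags each write down to the cross-cloud round trip.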

Data seems to be one of the biggest markers for why we localize applications. We can shard out the data all we want, but a true enterprise application with high throughput that needs to be resilient and distributed also comes with the baggage of needing high throughput at low latency, with consistency across the entire application set.

Basically, the neediest applications, the very ones we wanted to make resilient by spanning clouds, are also the ones that fall to the lowest common denominator of data gravity. Web applications are one thing. They can work in a distributed fashion without as much pain and re-architecture. The reality is that distributed apps backed by large data sets are more likely candidates to build resiliency within a cloud rather than across multiple clouds.

Before you begin throwing out names like LinkedIn and Netflix as examples, remember that those are apps purpose-built for multi-cloud, and those companies even had to build their own tooling to do it because the native cloud platforms didn't provide a way. CI/CD across clouds is a whole other beast, which opened up a new and exciting conversation of "can we versus should we?" as we looked at borrowing the toolsets that Netflix, Walmart, and others open sourced.

What does make sense as a multi-cloud use case is using the best-of-breed solution for each application and leveraging software-defined networking to access and share resources across clouds. The magic of VPNs and persistent networks across cloud providers now easily enables valuable scenarios:

  • Active Directory on Azure as the authoritative IAM for AWS and on-premises Windows workloads – federate your IAM across all clouds and on-premises using a common platform and then use it to manage Windows servers across all of your cloud providers
  • AWS-specific apps and Azure-specific apps – Why force developers to abandon services that could be purpose-built to solve a problem? Need Rekognition for one application set and also want to use the Azure ML services for another? Don't port to a single cloud to achieve an alleged reduction in complexity when it means sacrificing better services to meet the needs of your apps and business
  • Acquisitions of teams, apps, and companies that already have a large footprint on one or more clouds – A huge use-case for maintaining a multi-cloud architecture is that you acquired an already established platform through an acquisition. Why would you spend needless effort porting and migrating to another cloud? The ROI most likely won't be in your favor

The multi-cloud reality is treating each cloud like its own data center and choosing to interconnect those clouds for resiliency and to leverage the best features and capabilities of each. There will be some added complexity as we adopt tooling to deploy and manage these environments, but the benefit far outweighs the cost of effort in using multiple tools. Forget the single pane of glass and forget the multi-cloud-spanning application. Let's embrace multi-cloud for what it really is: a beautiful opportunity to stop and ask what we really want to accomplish with the underlying tech.

 




Free eBook – Managing Kubernetes Performance at Scale

I'm super proud to share that my team has worked with O'Reilly to create a great little ebook that you can download for free (yay!!). Eva Tuczai works on our Advanced Engineering team, partnering on large-scale production container deployments in some of the most complex and interesting environments. Asena Hertz works with me on the Marketing team, leading our container and cloud-native work in Product Marketing and working directly with customers and engineers to advance the intelligent, performant, and successful adoption of Kubernetes and containerized platforms.

You're just a click away from downloading the free ebook, and I highly encourage you to read up on how to think and architect for at-scale deployments BEFORE they scale! This is a must-read for any virtualization or containerization engineer. If you have any questions or want to dig further into this and other Kubernetes topics, please do leave a comment on the post and I'm happy to jump in to help out in any way I can.

Big thanks to Eva, Asena, and my entire team for putting this together. If you're going to KubeCon in spring 2019, you'll be able to get a print copy at the Turbonomic booth, so make sure you keep watching the website for event updates.

Download the free book today here: https://turbonomic.com/kubernetes-at-scale or by clicking on the image above.  Thanks for supporting the open source movement!