Tech Field Day VFD3 – Coho Data – Coho-ly Moly this is a cool product!

Firstly, we have to congratulate Coho Data on the production launch announced today, which is very exciting for the team.

If you didn’t already know Coho Data, they began as Convergent.io with a specific goal of shifting the way that storage is built and managed. They are proud to say that they are a software company that deploys on top of a storage platform. This is where the good stuff happens.

I like my storage like I like my weather – Predictable

One of the strengths of the Coho Data DataStream product is that it scales at a predictable, linear rate when storage hardware is added to the deployment. As you add more disk units to the controller, performance grows exactly as advertised: the 180K IOPS from your first unit are simply multiplied as you grow your storage pool, at 180K IOPS per unit.
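
To put that claim in perspective, here is a quick back-of-the-envelope sketch of what a purely linear scaling model looks like. The 180K per-unit number is simply the figure quoted in the session, and real-world results will naturally vary with workload mix, block size and read/write ratio.

```python
# Back-of-the-envelope sketch of the linear scaling claim. The 180K
# per-unit figure comes from the session; real results will vary with
# workload mix, block size, and read/write ratio.
IOPS_PER_UNIT = 180000

def aggregate_iops(units):
    """Aggregate IOPS under a purely linear scaling model."""
    return units * IOPS_PER_UNIT

for units in range(1, 5):
    print("{} unit(s): {:,} IOPS".format(units, aggregate_iops(units)))
# 1 unit(s): 180,000 IOPS ... 4 unit(s): 720,000 IOPS
```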

Image courtesy of Coho Data: http://www.cohodata.com/the-coho-difference

There is a significant number of measurements happening inside a storage system, as well as around how it is accessed. The discussion in our session covered the realistic gains we can make with physical components (10GbE connectivity, PCIe SSD cards, Xeon processors), and how those gains can be undone by performance challenges at the hypervisor and application layers above. There is good, old-fashioned physics that we can’t dispute, but the real issue is how well the software makes use of that hardware.

SDN and SDS – That’s a lot of SDx

There is some not-so-secret sauce behind this, and it lives in the software layers that create the efficiency and predictable IOPS delivery while keeping the underlying data reliable.

We throw around the term software-defined a lot, but that really is at the heart of what Coho Data has put together. The controller software handles the distribution of data, read/write caching on flash, and the protection of data for overall integrity.

Add onto that the SDN (Software Defined Networking) components, with the built-in Arista switch running OpenFlow. I’m a big fan of Arista and of what they are doing with OpenFlow in their products, so it was particularly cool to see the marriage of an interesting storage platform with a forward-thinking networking environment.
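
To make that marriage a little more concrete, here is a minimal sketch of the general OpenFlow pattern at play: a controller installing match/action flows that steer each storage client out of a particular switch port. This is not Coho Data’s actual implementation; the install_flow helper, the port numbers and the addresses are hypothetical placeholders.

```python
# Illustrative sketch of OpenFlow-style flow steering for storage traffic.
# This is NOT Coho Data's actual logic -- just the general pattern: a
# controller matches each client connection and forwards it out the switch
# port of a chosen storage node. install_flow() and the port numbers are
# hypothetical placeholders for a real controller API (e.g. a FlowMod).

STORAGE_NODE_PORTS = [1, 2, 3, 4]   # hypothetical switch ports, one per node

def install_flow(switch, match, actions):
    """Stand-in for pushing a flow rule down to the switch."""
    print("{}: match={} actions={}".format(switch, match, actions))

def steer_client(switch, client_ip, index):
    # Simple round-robin placement; a real controller would also consider
    # node load and where the client's data actually lives.
    out_port = STORAGE_NODE_PORTS[index % len(STORAGE_NODE_PORTS)]
    match = {"eth_type": 0x0800,      # IPv4
             "ip_proto": 6,           # TCP
             "ipv4_src": client_ip,
             "tcp_dst": 2049}         # NFS
    install_flow(switch, match, [("output", out_port)])

for i, ip in enumerate(["10.0.0.11", "10.0.0.12", "10.0.0.13"]):
    steer_client("tor-switch", ip, i)
```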

Image courtesy of ThinkAhead: http://www.thinkahead.com/coho-data-unveils-hybrid-flash-storage-combined-software-defined-networking/

There is a 4x10GbE uplink on the hardware side, so there is no doubt that access to the storage is not constrained by bandwidth or latency. I would actually encourage you to head over to the Ahead blog, where Chris Wahl (@ChrisWahl) introduced Coho Data, which they have been testing in beta: http://www.thinkahead.com/coho-data-unveils-hybrid-flash-storage-combined-software-defined-networking/

Where is the right place to cache?

This is the ultimate question facing us as architects of a data center environment. There are numerous places in the data stream to put flash resources to accelerate the data movement, with advantages and penalties at every point depending on a number of factors.

For the storage-specific focus of what Coho Data is doing, the flash layer is designed to handle the bulk of the data storage and provide the most accelerated delivery of hot data to the hypervisor. Write acknowledgements can also happen in the flash layer, which makes some folks cringe, but a near-immediate write to the persistent storage layer follows to ensure data consistency.
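
For illustration, here is a minimal sketch of that write-acknowledgement pattern, assuming a generic flash tier and persistent tier rather than anything specific to Coho Data: the write is acknowledged as soon as it lands on flash, and a background flush pushes it to persistent storage right behind the acknowledgement.

```python
# Minimal sketch of the write path described above: acknowledge the write
# as soon as it is durable on flash, then flush it to the persistent tier
# right behind the acknowledgement. Generic illustration, not Coho's code.
import queue
import threading

flash_log = []            # stands in for the flash write log
persistent_store = {}     # stands in for the persistent (disk) tier
flush_queue = queue.Queue()

def write(key, data):
    flash_log.append((key, data))   # durable on flash
    flush_queue.put((key, data))    # schedule the near-immediate flush
    return "ACK"                    # acknowledge once flash has the data

def flusher():
    while True:
        key, data = flush_queue.get()
        persistent_store[key] = data    # write through to persistent storage
        flush_queue.task_done()

threading.Thread(target=flusher, daemon=True).start()

print(write("block-42", b"payload"))    # ACK returned before the disk write
flush_queue.join()                      # wait for persistence to catch up
print(persistent_store)
```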

Host-based caching is really slick for a number of reasons; however, depending on your host environment topology (number of nodes, distribution of workload), there are situations where host-side caching isn’t necessarily the ideal choice. I agree that I would prefer to have my storage environment handle the entire lifecycle of the data below the hypervisor and leverage the flash and high-speed bus to the persistent tiers. It gets even nicer when the SDN layer provides access to it 🙂

Want to learn more?

Head on over to Coho Data (http://www.cohodata.com) for more information, and you can download the DataStream whitepaper here:

whitepaper

Make sure to follow Coho Data on Twitter (@CohoData) and tell them that DiscoPosse sent you 🙂

DISCLOSURE: Travel and expenses for Tech Field Day – Virtualization Field Day 3 were provided by the Tech Field Day organization. No compensation was received for attending the event. All content provided in my posts is of my own opinion based on independent research and information gathered during the sessions.



Why it is always, and never, the year of VDI, but network virtualization is here to stay

You’ve all heard it: The Year of VDI. It has consistently been the mantra at the start of each calendar year since Citrix and VMware gained significant adoption. But why is it both true and false at the same time?

Desktop versus Server Virtualization

Server virtualization has taken hold in an incredible fashion. Hypervisors have become a part of everyday datacenter deployments. Whatever the flavor, it is no longer necessary to justify the purchase of products like VMware vSphere or Microsoft Hyper-V. And for those who have already embraced open source alternatives, KVM, Xen and the now-burgeoning OpenStack ecosystem are joining the ranks as standard step-1 products when building and scaling a datacenter.

Server virtualization just made sense. We have 24-hour workload potential because of 24/7/365 usage scenarios, plus backups, failover technologies and BCP needs.

Desktop Virtualization is a good thing

The most commonly quoted reason for desktop virtualization is the cost of managing the environment. In other words, the push towards VDI is about policy-based management. Removing or limiting the variables in desktop and application management makes the overall management and usage experience better. No arguments there.

So why hasn’t it hit? One powerful reason is the commoditization of desktop hardware. In the 70s it cost thousands of dollars to purchase basic desktop hardware. Throughout the 80s, 90s and 2000s the price of desktop hardware plummeted to the point where corporate desktops are now available for $300-$500 and are amortized over two- or three-year cycles.

And now the CFO has their say

The impetus to use VDI to save money on desktop hardware went away. We now have thin clients that are nearly the same price as full physical desktops. There is no doubt that this has slowed the uptake of VDI in a strong way. When it comes to putting together our annual expenses, the driver has to be strong to make the shift.

Next up is the classic “Microsoft Tax”. While we may reduce the cost somewhat at the hardware layer, we are still bound to provide the Microsoft OS and software that desktop consumers need. There is a reason why we don’t even talk about Linux on the desktop anymore. If people are ready for Linux, they will just use it. There are, however, millions of software consumers that require Microsoft tools. That’s just a fact.

So as we enter 2014 and the analysts and pundits tout the new DaaS (Desktop-as-a-Service) revolution, we still have to be realistic about the amount of impact it will have on the overall marketplace. I don’t doubt that it will continue to gain footing, but nowhere near the level of adoption that server virtualization was able to produce.

A Patchwork Quilt

In my opinion, we have already gone down a parallel timeline on policy-based desktop management. With Microsoft SCCM, LanDesk and a number of other imaging and application packaging tools already in many organizations, there is less of a need to make the shift towards VDI. There are great use cases for it, for sure, but it will be a difficult battle to siphon away the physical desktop processes that have served us well up to now.

Patch management and application delivery can do a lot towards providing the policy-based management that we are told is the prime objective of many VDI products. I’m a big proponent of VDI myself, but I am also realistic about how much of the overall market it has cut into, and will cut into.

So, is this the fate of network virtualization?

Network Virtualization is costly, but that’s OK

So now we have an interesting shift in the market again. Network virtualization has gone from a project in the labs at Stanford to a real, market-ready product with many vendors putting their chips on the table.

Not only are ASIC producers like Cisco and Juniper Networks coming forward with solutions, but VMware, with its purchase and integration of Nicira to produce VMware NSX, has created a significant buzz in the industry. Sprinkle in the massive commitment from the open source community to OpenFlow and Open vSwitch, and there is undoubtedly a real shift coming.

2015 will be the year of Network Virtualization

In 2014 we will see a significant increase in the understanding and adoption of network virtualization tools and technologies. With the upcoming GA release of Cisco ACI and greater uptake of open source solutions in the public and private cloud, we will definitely see growth in NV adoption.

Image source: http://blogs.vmware.com/networkvirtualization/2013/08/vmware-nsx-network-operations.html

Remember, NV isn’t about reducing physical network hardware. It is about reducing the logical constraints and increasing the policy and security integration at the network layers. Server virtualization has laid the groundwork for what is really a perfect pairing.

When does NV become the standard in networking deployment?

This is the real question we need to ask. As all of the analysts pore over the statistics and lay out what the landscape looks like, we as architects and systems administrators have an important task to deal with: Making NV work for us.

In my mind, network virtualization is a powerful, enabling technology. We have already come a long way in a short time in the evolution of networking. Going from vampire taps to the upcoming 100GbE hardware in a couple of decades is pretty impressive. Now we can fully realize the value of the hardware we have sitting on the datacenter floor by extending it with the virtualization tools and techniques that gave us exponential gains in productivity and efficiency at the server level.

It’s coming to us one way or another, so I say that we dive in and do something wondrous together.

Who’s in for the ride? Count me in!




My Media List: December 2013

One of the interesting things I saw recently was a post about people sharing their media list. This could be books, music, movies or any type of content that they are using at the moment to advance themselves, or just for pure enjoyment.

As a big fan of lifelong learning, I thought this was a great concept, so I wanted to start by sharing the content that I’ve been looking at recently and that I’m using for learning and enjoyment as I head into the end of 2013.

These are the titles that I’ve been reading in the last couple of weeks.

My Books

The Phoenix Project

This is a re-read for me, but I actually make a point of revisiting this great book every few months. It is a particularly quick read because the story is laid out beautifully, and it is a real page-turner for folks who are in the IT industry, or in fact in just about any business. I reviewed the book previously, so I’ll refer you to that post, which is here: DiscoPosse Review: The Phoenix Project.

Click here to see the book on Amazon – The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

Designing for Behavior Change

As a member of the O’Reilly Blogger Review program, I was given this book a couple of weeks ago and I am really pleased with it so far. It covers design from end to end and the way that design affects, and is affected by, behavior. I really like anything that brings in the people side of design. There are real roots of behavioral psychology in technology that are often overlooked or misunderstood. UX/UI work is gaining focus for a reason.

Click here to see the book on Amazon – Designing for Behavior Change: Applying Psychology and Behavioral Economics

Lean Startup

Eric Ries does a phenomenal job of bringing startup successes and challenges to the reader with Lean Startup, and whether you are in an “Enterprise” organization, an SMB or a startup business, this book is absolutely applicable.

What I find to be great about this book is that it brings great concepts that you can apply to all aspects of your work and home life, and I have seen the benefit first-hand. This is also one of the books that I re-read regularly, and each time it excites me to become better at the things that I do.

Click here to see the book on Amazon – The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses

vCAT

The VMware Press vCloud Architecture Toolkit is a great guide for those who work, or will soon work, with VMware vCloud. Even if you just want to evaluate whether this is a possible solution for your organization, this is one of the best guides I’ve found for laying out the technical and people/process components needed to successfully deploy a VMware vCloud environment.

Click here to see the book on Amazon – VMware vCloud Architecture Toolkit (vCAT): Technical and Operational Guidance for Cloud Success (VMware Press Technology)

How to Create a Mind

If you are a fan of Ray Kurzweil, this is one of his great books. It is an interesting insight into how the mind behaves, how learning occurs, and how pattern recognition shapes the way that we work. Again, it touches on the behavioral topics that I really enjoy reading about and recommend to others.

Click here to see the book on Amazon – How to Create a Mind: The Secret of Human Thought Revealed

Software Defined Networking with OpenFlow

This is a new book that was just released in November by Packt Publishing. It gets deep quickly, so it is a real opportunity to dive right into the technical details of using OpenFlow and Software Defined Networking (SDN). If you want to see real topology examples paired with code that you can put to use, this is a great resource.

Click here to see the book on Amazon – Software Defined Networking with OpenFlow

Videos

Pluralsight.com – Cisco CCNA Data Center: Intro to Data Center Networking

While I’m already CCNA certified, I had a couple of key reasons to view this course. First, I wanted to refresh the concepts to prepare for the new CCNA Data Center, which uses Nexus technologies that were not covered in the CCNA Routing and Switching certification I already hold. Second, the course is delivered by Chris Wahl (http://www.wahlnetwork.com), who does a great job of presenting the content with both technical depth and a conversational style that makes it very easy to retain the information and stay interested throughout the course.

Click here to see more: CCNA Data Center with Chris Wahl