In a Software Defined world, we face a new set of challenges in bringing people up to speed on what actually makes any of our core components “Software Defined”.
With EMC bringing out their new ViPR (Virtualization Platform Re-imagined, aka Project Bourne), we now have another entry on the Software Defined Storage side of the environment. For a really detailed look at that, I’d suggest taking a peek at Chad Sakac’s post here (http://virtualgeek.typepad.com/virtual_geek/2013/05/storage-virtualization-platform-re-imagined.html), which tells the story quite well.
There will be lots of analysis coming out on this platform and its effect on the SDS marketplace, but that is not what we want to talk about right here. Let’s first get back to basics and talk about the key concepts that are the driving forces behind the Software Defined “things”.
What’s My Vector, Victor?
How did we get here, and where are we going? There are layers and components within the environment that allow it to be abstracted away from the hardware.
In fact, the beauty of decoupling the data plane and the control plane is that we can even abstract away from the software, provided we all speak the same language through APIs and open standards. In other words, the control plane can be from one vendor and the data plane can be from another vendor altogether, allowing for flexibility with our choices at each layer.
All that we require is that the API is utilized, and we then have a common instruction set available between the layers without any care for, or awareness of, the underpinnings of the other components. A fully open API and open standards under the GPL or Apache license would be ideal. That is another conversation all to itself, though, so let’s get back to what we want to focus on: the control plane versus the data plane.
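To make the decoupling idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the `DataPlane` interface stands in for the "common API", one class plays a vendor's data plane, and another plays a different vendor's control plane that only ever talks to the interface, never the implementation.

```python
from abc import ABC, abstractmethod

# Hypothetical "common API": any data plane that implements this
# interface can sit beneath any control plane that calls it.
class DataPlane(ABC):
    @abstractmethod
    def write(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...

# One vendor's data plane implementation (a dict stands in for storage).
class VendorADataPlane(DataPlane):
    def __init__(self):
        self._store = {}

    def write(self, key, value):
        self._store[key] = value

    def read(self, key):
        return self._store[key]

# A control plane from a different vendor. It knows only the interface,
# so the data plane underneath could be swapped without changing it.
class ControlPlane:
    def __init__(self, data_plane: DataPlane):
        self.data_plane = data_plane

    def store(self, key, value):
        self.data_plane.write(key, value)

    def fetch(self, key):
        return self.data_plane.read(key)

cp = ControlPlane(VendorADataPlane())
cp.store("doc1", b"hello")
print(cp.fetch("doc1"))  # b'hello'
```

Swapping `VendorADataPlane` for any other class that honors the same interface would leave `ControlPlane` untouched, which is the whole point of the abstraction.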
From the networking world, we know that the control plane is the layer at which the map of the environment exists. It is aware of the routes to the data and the methods (protocols) used to reach it. For networking, this is the routing table. The data itself is never aware of the route it will travel; the packet header carries that information as it goes. At each step the header is read, the packet is routed, and the path information is effectively stripped off and recreated for the next step, until the data is happily dumped off at its destination as expected.
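That hop-by-hop behavior can be sketched with a toy forwarding loop. The router names and routing tables below are entirely made up; the point is only that each hop consults its own local table (the control plane's map) and rewrites the next hop, while the payload itself never carries the full path.

```python
# Toy per-hop forwarding. Each router holds its own routing table and
# decides only the next hop; the payload never knows the full route.
routing_tables = {
    "r1": {"10.0.2.0/24": "r2"},
    "r2": {"10.0.2.0/24": "r3"},
    "r3": {"10.0.2.0/24": "deliver"},  # final hop delivers locally
}

def forward(packet, start):
    """Walk the packet hop by hop, returning the path taken and payload."""
    hop = start
    path = []
    while True:
        path.append(hop)
        next_hop = routing_tables[hop][packet["dst_net"]]
        if next_hop == "deliver":
            return path, packet["payload"]
        hop = next_hop  # header is effectively rewritten for the next leg

path, payload = forward({"dst_net": "10.0.2.0/24", "payload": "data"}, "r1")
print(path)     # ['r1', 'r2', 'r3']
print(payload)  # 'data'
```

Notice that no single router knows the whole path; the end-to-end route only emerges from the chain of local lookups.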
The data plane is where the actual data storage and instructions live. This layer is handled by the physical (or virtual) storage provider through the API. So while the control plane is provided by one vendor (e.g. EMC ViPR), it can offload the data storage into the storage environment where it is most optimally handled (e.g. Amazon S3, OpenStack Swift). The data plane abstraction allows us to use hybrid solutions for storage.
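A short sketch of what "hybrid" could look like in practice, with caveats: the backends, the placement policy, and the `put`/`get` contract here are all invented stand-ins, not ViPR's, S3's, or Swift's actual APIs. The control plane decides *where* each object lands; the backends only move and hold the bits.

```python
# Sketch: a control plane placing objects onto different storage
# backends through one common put/get contract. All names are
# hypothetical; in-memory dicts stand in for real storage targets.
class InMemoryBackend:
    def __init__(self, name):
        self.name = name
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]

class StorageControlPlane:
    def __init__(self, backends, policy):
        self.backends = backends  # name -> backend (the data planes)
        self.policy = policy      # key -> backend name (placement decision)
        self.placement = {}       # the control plane's map of where data lives

    def put(self, key, data):
        target = self.policy(key)
        self.placement[key] = target
        self.backends[target].put(key, data)

    def get(self, key):
        # Look up where the object was placed, then fetch it from there.
        return self.backends[self.placement[key]].get(key)

# Example policy: archive data goes "cloud", everything else stays local.
policy = lambda key: "cloud" if key.startswith("archive/") else "local"
cp = StorageControlPlane(
    {"local": InMemoryBackend("local"), "cloud": InMemoryBackend("cloud")},
    policy,
)
cp.put("archive/2013.tar", b"old")
cp.put("live.db", b"hot")
print(cp.placement["archive/2013.tar"])  # 'cloud'
print(cp.placement["live.db"])           # 'local'
```

The consumer only ever calls `put` and `get`; whether the bytes landed on local disk or in an object store is the control plane's concern alone.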
How about we simplify a little bit more?
I like to put things together for people with an analogy wherever possible. Let’s look at a restaurant. The consumer (you) has access to the control plane (the menu) without any awareness of how the data (the food) will be created. Thanks to decoupling, the staff could take your order through the kitchen door, right out the back to the neighboring restaurant, and bring your food back to you through that same kitchen door, and you would enjoy it just the same. All you know is that the request (the API call) went through the door and the result (the data) came back through that same door as expected.
That is a reasonably close model of the abstraction layer between control and data. In the traditional method, where everything is hardware dependent, the restaurant scenario isn’t quite as valid. What is important about the abstracted layers is that the hard dependencies are removed.
Clouds in the forecast
As you can guess, this is an important step in the EMC vision of a cloud environment: they, combined with VMware, can now manage the control plane of your cloud environments while you offload the data plane work to heterogeneous environments and commodity hardware.
While you may not be ready for a cloud deployment just yet, this is another key player putting their solutions out to market with what I imagine is the hope that other vendors will latch on and develop to a standard, ideally a fully open one. The EMC announcement is effectively a VMware, Microsoft, OpenStack, and AWS announcement in that case, because it opens the door to a much bigger plan.
So hopefully this gives you a quick view of what control plane and data plane separation means and how it comes into play with products in the SDN and SDS arena.