The Cloudistics Turbine Acceleration Approach

It’s not often that storage gets exciting. Over the last couple of years, though, we have seen plenty of interesting disruption in the storage field, with new hardware and new software designed to take the ever-present storage environment to the next level.

As flash inside the primary array became a common way to add some tiering capability, the game changed a little. Along came all-flash arrays, which upended the concept of tiered storage by proposing that the primary storage device can be all flash. This is interesting, but also fraught with challenges when it comes to pricing.

No matter how you slice the dollars-per-gigabyte numbers, all-flash storage has quite often proven to be more expensive in practice for many organizations. Some software has appeared on the landscape to attack this problem with server RAM or with host-level flash cache. These have had a positive effect, but many organizations still dig in their heels rather than move towards this approach.

Quite often, the server-side flash options prove to be too expensive depending on the physical topology. They also create a future dependency on both software and server-side hardware. Hyperconvergence and hybrid approaches are interesting, but the barrier to entry still seems high depending on the implementation.

This led me to Cloudistics, who have taken a new and interesting approach to the very real challenge of accelerating storage and reducing latency in your SAN environment.

I recently spoke with Cloudistics CEO Najaf Husain. We talked about the very interesting opportunities in the storage acceleration market today, and about the need to provide a disruptive approach without requiring significant change in the data centers of today's organizations.

Wait, what? Disruption, without disruption? Exactly! Organizations are often wary of adopting technology that requires upheaval of their physical environments, and with the all-flash and software-acceleration party just getting started, how will Cloudistics take a different tack?

The Cloudistics Turbine Approach

Cloudistics Turbine is built to accelerate your SAN storage environment on VMware vSphere ESXi, Microsoft Hyper-V, KVM, or Citrix hypervisor platforms.

Rather than replacing your existing storage hardware, the Cloudistics team lets you quite simply drop in a Cloudistics Turbine appliance, map the current storage using the Turbine environment, and repoint the hypervisor to the new Turbine-accelerated LUNs:

[Diagram: the Turbine appliance dropped into the existing hypervisor-to-SAN data path]

The resulting environment also gives you the flexibility of running as a write-back cache versus the traditional write-through cache, which requires a confirmed write to the source storage platform before acknowledging the commit:

[Diagram: write-back versus write-through caching]
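To make the distinction concrete, here is a minimal sketch of the two policies, with a generic cache sitting in front of a slow backing store. This is an illustration of the general technique, not the actual Turbine engine:

```python
# Minimal sketch of write-through versus write-back caching in front of a
# slow backing store. Illustrative only, not the actual Turbine engine.
class SAN:
    """Stand-in for the slow back-end storage array."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, block):
        self.blocks[lba] = block  # imagine milliseconds of latency here

class Cache:
    def __init__(self, backend, write_back=False):
        self.backend = backend
        self.write_back = write_back
        self.data = {}      # cached blocks: lba -> bytes
        self.dirty = set()  # blocks not yet committed to the backend

    def write(self, lba, block):
        self.data[lba] = block
        if self.write_back:
            self.dirty.add(lba)             # acknowledge now, destage later
        else:
            self.backend.write(lba, block)  # write-through: wait for the SAN
        return "ack"

    def flush(self):
        for lba in sorted(self.dirty):      # background destage to the SAN
            self.backend.write(lba, self.data[lba])
        self.dirty.clear()

cache = Cache(SAN(), write_back=True)
cache.write(0, b"hello")  # acknowledged before the SAN ever sees the block
cache.flush()             # the back-end commit happens asynchronously
```

The trade-off falls out of the write path: write-back acknowledges before the SAN commit, which is exactly why the redundancy story covered below matters.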

By embracing a sequential bypass capability, Cloudistics also lets large sequential writes go straight to the SAN. This is handled by the Turbine engine, which adjusts cache utilization based on disk load:

[Chart: Turbine cache utilization and sequential bypass under varying disk load]
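The article describes the behavior but not the heuristic, so here is a rough sketch of what a sequential-bypass decision could look like. The thresholds and run-detection logic are my assumptions for illustration, not Turbine's actual policy:

```python
# Rough sketch of a sequential-bypass heuristic: large or sequential write
# streams go straight to the SAN, small random I/O is absorbed by flash.
# The thresholds below are invented for illustration, not Turbine's policy.
BYPASS_IO_SIZE = 1 << 20  # treat single I/Os of 1 MiB or more as "large"
SEQUENTIAL_RUN = 4        # in-order writes in a row before we call it a stream

last_end = None
run_length = 0

def route_write(offset: int, length: int) -> str:
    global last_end, run_length
    run_length = run_length + 1 if offset == last_end else 1
    last_end = offset + length
    if length >= BYPASS_IO_SIZE or run_length >= SEQUENTIAL_RUN:
        return "SAN"    # sequential stream: bypass the cache entirely
    return "cache"      # random or small I/O: absorb it in flash

# A sequential stream of 64 KiB writes flips to bypass on the fourth write.
print([route_write(i * 65536, 65536) for i in range(6)])
# ['cache', 'cache', 'cache', 'SAN', 'SAN', 'SAN']
```

The idea is that spinning disk handles large sequential streams well on its own, so spending flash on them buys little and only evicts hotter random data.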

Cloudistics has also done well on presentation: a nicely designed UI with a clean UX, delivering a clear view of the environment through a simple HTML5 interface:

[Screenshot: the Cloudistics Turbine HTML5 management interface]

Cloudistics Redundancy and Resiliency Approach to Safety

While the approach of injecting itself into the data path can seem difficult for some to embrace, Cloudistics have provided a fully redundant physical and software platform.

Inside the physical Turbine appliance is a fully redundant pair of 1U units. Not only that, but each unit contains redundant power supplies, NICs, and flash arrays, with a RAID10 deployment to ensure both performance and resiliency of the file system across the platform.

The NICs are bonded to provide high-speed access to the environment over four 10 GbE connections, with writes using the full 40 Gbps aggregate and reads spread across the four 10 GbE links.

For fans of InfiniBand (me included!), Cloudistics has used InfiniBand to link the two units within the chassis, providing a high-speed channel between the devices.

I did ask about the failure risk introduced by being added to the data path. Cloudistics presents a storage LUN mapping identical to the existing mapping, with a .TURBINE suffix added. If the Turbine environment ever became physically unavailable, the hypervisor could attach directly to the original storage LUN presentations. Only in-flight data whose writes had not yet been committed to the back-end storage might be lost.

Given the speed of the writes over the high-speed network connectivity and the redundancy of the Turbine appliance, the risk is near zero, no different from the risk of the originating storage LUNs failing. In other words, if something goes wrong with your Turbine environment, it is probably a data-center-wide interruption, which means that everything will need to be attended to.

Fighting the Cost Challenge with Storage Acceleration

Boasting a $0.35/GB price, the Cloudistics team has come to the storage acceleration market at an unparalleled price point. There will be some questions about whether that full per-GB value can be achieved across the entirety of the environments that use SAN storage, but compared to the all-flash offerings on the market, this is clearly a big boost in value for the purchase cost.
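As a back-of-the-envelope illustration of what that price point means (the all-flash dollar figure below is my placeholder assumption, not a quoted competitor price):

```python
# Rough comparison at the quoted $0.35/GB. The $4.00/GB all-flash figure is
# a placeholder assumption for illustration, not a quoted competitor price.
capacity_gb = 10 * 1024              # accelerating a 10 TB SAN environment
turbine_cost = 0.35 * capacity_gb    # $3,584 to accelerate the existing SAN
all_flash_cost = 4.00 * capacity_gb  # $40,960 to replace it outright
print(f"Turbine: ${turbine_cost:,.0f}  vs  all-flash: ${all_flash_cost:,.0f}")
```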

Operationally, openBench has provided a number of benchmarks showing good results on SQL platforms, one of which illustrates a massive boost in density while greatly reducing the latency and queue depth on the platform:

[Chart: openBench SQL Server benchmark results]

Find Out More About Cloudistics

To find out more, you can go to the Cloudistics website to get the openBench report and to contact the team to request a demo.

Najaf is a very dynamic individual, and his team has a storied past in every area of technology. It’s definitely going to be interesting to watch the Cloudistics approach to storage acceleration, and to see how companies can embrace their Turbine product suite.

Seeing is believing, so I highly recommend that you take a demo to see for yourself!




Don’t Throw out that Spinning Disk Purchase Order Yet

Wait, what? Isn’t Flash the only future? Isn’t cloud-native the only way to develop applications? Isn’t [future of IT product] the only real solution?

It’s time for a quick little health check on the IT ecosystem. Before we start, I have to admit that I do lean forward with regard to technology. The reason is that I've witnessed countless technologists and organizations alike get caught out as technology passed them by, leaving them scrambling to catch up.

As you’ll see when we wrap this quick little article, there is a reason I brought this up.

[insert IT product] of the future!

Whenever we look for the next big thing (and trust me, we are all doing it in one way or another), we tend to look a little too far down the road. Whether it's the pundits (me included) or the analysts, there is a perceived need for a five-year crystal ball so that we can make the appropriate decision now.

An important lesson I was reminded of when discussing upcoming road-map features is that talking about what's coming before it is available tends to slow down the buying cycle. People may be willing to hang on a little longer for that feature you are touting.

We know this as the Microsoft/Oracle/VMware/[many vendors] vaporware approach that has disappointed us so many times in the past.

The storage industry, we are told, is at an inflection point. Let's roll back the calendar 10 years. The storage industry, 10 years ago, was at an inflection point. Here's a hint: in 5-10 years it will be at another inflection point. The same could be said for the network industry, the software industry, and the hypervisor market.

We are always at an inflection point. What is often forgotten is that the long tail of legacy preserves its place in the industry for much longer than is usually described.

I titled this article in reference to the many folks who are looking to abandon spinning disk for flash arrays and all-flash architectures across the board. We have been told that this is the inevitable future. Don't get me wrong, there is a massive shift happening in data centers around the world, and flash storage is a phenomenal tool in the IT toolbox, bringing us to a new generation of storage. It does not, however, stop the massive traditional magnetic storage market, which has a long life left in it.

Will our future predictions of today look as crazy as the future views in Popular Science used to? Back when they were published, it seemed like that was where things were going. Watch one of those old visions of the future and tell me if we got there.

Beta lost the war in the late 1980s, so why did it just die in 2016?

If you’ve been around long enough, you may remember the Beta versus VHS standards war. More recently we saw a similar battle over DVD standards, where Blu-ray won out over HD DVD. The reason this is important is that it was only just announced that Betamax tape production will end in 2016.

The long tail of legacy has been proven out in many aspects of IT. While we like to blame the Luddite mentality for hanging on to legacy technology and methodologies, the reality is that each of those legacy technologies serves a distinct purpose.

The world of technology is moving into the cloud, onto flash storage, and up the stack to containers and PaaS, while the open source alternatives to the traditional incumbent vendors take hold and grow. It is certainly a shift, but we have a long way to go before the data centers are emptied of their existing hardware.




Taking on your storage situation with CloudPhysics – new Storage Analytics release!

One of the most common things we see as virtualization admins is the classic performance issue that leads many to say, “It must be a problem with the SAN.” The black-box feel of the storage layer often makes it the destination for blame when there are undetermined performance problems.

The CloudPhysics way: Products powered by you!

The team at CloudPhysics is doing something that makes their offering really exciting: they are letting us, the customers, lead the direction of the product. There is continuous feedback from customers, and the team works very hard to stay ahead of the curve on what customers are seeking to make their day-to-day virtual data center operations smoother.

Bring the cloud methods inside

Have you ever looked around your environment for unused VM guests? Maybe not, but imagine if you were hosting your environment in a public cloud that charged for every powered-on guest. This is where the new features get really cool: the Unused VMs card shows you which machines aren't being used, and you can then choose to remove them, or simply power them down and bring them online only as needed for work or patching.

[Screenshot: CloudPhysics Unused VMs card]
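CloudPhysics does this analysis for you, but if you want a rough first pass at the same idea, a short pyVmomi script can flag candidates. The vCenter address, credentials, and idle thresholds below are all placeholder assumptions, and the heuristic is far cruder than what the card does:

```python
# Crude first pass at spotting unused VM candidates with pyVmomi.
# Host, credentials, and thresholds are placeholders; review before acting.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certs in prod
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    stats = vm.summary.quickStats
    powered_on = vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn
    # Powered off entirely, or powered on but nearly idle: worth a look.
    if not powered_on or (stats.overallCpuUsage < 20         # MHz
                          and stats.guestMemoryUsage < 64):  # MB
        print(f"candidate: {vm.name} ({vm.runtime.powerState})")

view.Destroy()
Disconnect(si)
```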

The win here is that you can free up valuable storage space, or make an informed choice about how to manage that VM resource to get the best performance and utilization out of your data center storage environment.

Don’t let contention get you down

Up to now I have had a few favorite cards that I tout to readers and colleagues. Among those are the classic “Snapshots Gone Wild”, the “Cost Calculator for AWS”, and the “Cost Calculator for vCHS”. Each is an instant hit for its own reasons, and now I think I've found my new favorite in the upcoming Datastore Contention v2 card. With its new in-depth view, it will be a must-have IMHO:

[Screenshot: CloudPhysics Datastore Contention v2 card]

Just imagine being able to narrow down performance issues to a targeted VM, and even a specific disk. It's a fully transparent view of real utilization, with a depth of information that should make your VMware admins smile 🙂

There is a 30-day trial program for the new Storage Analytics, which you can get to by clicking the image below:

[Image link: try the new CloudPhysics Storage Analytics]

Head on over to the CloudPhysics blog to read about the upcoming features here: Who’s Minding Your Storage Zoo? Try CloudPhysics New Storage Analytics.





Tech Field Day VFD3 – Pure Storage and the all-flash revolution

As we close out our first day of presentations here at Virtualization Field Day 3, we are at the office of Pure Storage in Mountain View. Pure Storage is a really neat company for a number of reasons. Their all-flash array is not an evolution of an existing product that was simply augmented with a flash tier to accelerate data storage and retrieval. And they launched within our community, using great events like Tech Field Day to reach an avid audience of storage enthusiasts.

What is the strategy behind all-flash?

How about a simple one: deliver an all-flash storage array for a lower price than traditional spinning disk. Wow! That's quite an aggressive tagline, but what Pure Storage does is deliver a performance and consolidation platform that lowers the per-VM cost, bringing its customers 0.3-0.7 ms data access with 5-10x consolidation through inline deduplication and compression.
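To make the consolidation idea concrete, here is a minimal sketch of content-addressed inline deduplication: hash every incoming block and physically store only the unique ones. This is an illustration of the general technique, not Pure Storage's actual data-reduction pipeline:

```python
# Minimal sketch of inline block deduplication: hash each incoming block and
# store only unseen content. Not Pure Storage's actual implementation.
import hashlib

block_store = {}  # fingerprint -> block data (the only physical copies)
volume_map = []   # logical block address -> fingerprint (just pointers)

def write_block(data: bytes) -> None:
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in block_store:
        block_store[fingerprint] = data  # new content: store it once
    volume_map.append(fingerprint)       # duplicates only add a pointer

# Ten identical 512-byte blocks consume the physical space of one.
for _ in range(10):
    write_block(b"\x00" * 512)
print(f"logical: {len(volume_map)} blocks, physical: {len(block_store)} block(s)")
```

Run against something like a fleet of nearly identical VM images, this is where the 5-10x consolidation numbers come from.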

So how do they do this? Very good question, and the answer comprises a lot of features at the hardware and software layers. I couldn't do it justice in a quick post, so please forgive me for not diving into the deep technical goodies here; instead, I wanted to look at some of the other aspects that make Pure Storage interesting.

The Forever Flash promise

This is really cool! When you bring a Pure Storage product into your data center, you size it as needed, and the typical experience is to acquire storage on a long lease cycle because of the high cost of acquiring enterprise-scale storage.

The challenge is that this same large-scale storage really needs care and feeding, and with the aggressive moves happening in storage engineering, it seems counterproductive to sign on for a long lease on large storage.

With the Forever Flash program you can upgrade your controllers every three years to align with the updates engineered by Pure Storage, and to top it off, the rest of the storage in the chassis then has its support cycle realigned with the upgraded hardware. Effectively, it is as if you had just put the product on the floor and started your support contract again.

Incremental upgrades also give the same re-up for your support, so you can continue to grow your Pure Storage environment and stay up to date on features, hardware, software and support, all at the same time.

For info on the Forever Flash program head on over here: Forever Flash with Pure Storage

Is it really lower cost?

I have to be honest: it sometimes seems like a little sleight of hand when I see all-flash options that can be deployed for a lower per-VM or per-GB cost than traditional storage arrays.

[Diagram: all-flash ROI model. Image courtesy of Pure Storage: http://www.purestorage.com/resources/roi.html]

Looking at the diagram, we can see how the overall cost of storage is accounted for, and how Pure Storage is able to put storage on a customer's floor at a per-GB cost that comes in much lower than expected because of compression, power and cooling reduction, and minimal management overhead for administration.

That being said, there is a real cost to bringing all-flash solutions into the data center. In my opinion, this may not be appropriate for many SMB organizations with moderate workloads. There is a definite target market, and a lot of factors come into play to define where the sweet spot is for moving to an all-flash solution.

The basic deployment is a two-controller implementation with a half-populated shelf, so there is an entry point that is attainable for many organizations.

The bits matter

Literally. The way one storage product stands apart from another is in the way the bits are read, written, replicated, de-duplicated, and generally managed. Hardware becomes a genuine question when dealing with flash storage because of the alternatives available (MLC, eMLC, SLC), and the Pure Storage goal is to balance performance and reliability while managing the price point to keep their solution a cost-effective offering.

Since their first array hit production, they have replaced five drives altogether, and one of those was for a firmware issue. That's a pretty good track record. We discussed a lot of deep-dive details on hardware, software, and workloads, which was eye-opening and encouraging.

Pure Storage also built their solution with a 512-byte default block rather than the traditional 4K block size. This has some really slick advantages in how performance can be increased at many points. In fact, it means that block alignment is no longer an issue, because the smaller block size eliminates the performance problems that come into play with 4K blocks and certain application/VM features.
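A quick toy calculation (my numbers, not Pure Storage's) shows why the geometry matters. With a 4K backend block, a guest write starting at a misaligned offset straddles a block boundary and touches an extra physical block on every I/O, while at 512-byte granularity every 512-aligned offset behaves identically:

```python
# Toy illustration of block misalignment penalties (invented example).
# An I/O that does not start on a backend block boundary straddles into an
# extra physical block, roughly doubling backend work for small I/Os.
def blocks_touched(offset: int, size: int, block: int) -> int:
    first = offset // block
    last = (offset + size - 1) // block
    return last - first + 1

io = 4096  # a 4K guest write
for start in (0, 512, 3584):  # offsets seen with legacy partition layouts
    print(f"offset {start:5d}: 4K blocks -> {blocks_touched(start, io, 4096)}, "
          f"512B blocks -> {blocks_touched(start, io, 512)}")
# offset     0: 4K blocks -> 1, 512B blocks -> 8
# offset   512: 4K blocks -> 2, 512B blocks -> 8
# offset  3584: 4K blocks -> 2, 512B blocks -> 8
```

With a 4K backend, the misaligned writes touch two blocks instead of one; with 512-byte blocks, the count never changes, so alignment stops being a tuning exercise.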

I’ll be sure to post the videos so that you can see some of the targeted talks, on content such as thick versus thin provisioning and eager-zero versus lazy-zero on SSD. Very interesting info on tactically handling performance in conjunction with your hypervisor features.

My thoughts

I really like the idea of what Pure Storage is doing as a company, and as product creators on both the software and hardware side. It would be great to be able to have an all-flash solution in my data center, and that time may come as my workloads are more able to take advantage of the predictable performance and speed of all-flash arrays.

I see OpenStack in there 🙂

There is currently a Cinder driver for the Folsom release, with an upcoming update to support the Havana build. With a growing customer base in the OpenStack space, the team added that they will be increasing their focus on the platform to align with requirements from those consumers.

Is Pure Storage right for you?

That is something every organization has to evaluate, but I can say that the people and the product here at Pure Storage are great, and it is absolutely worth putting them on the evaluation plan to see how they may fit into your data center.

You can be the judge with your particular situation, but make sure to reach out to the team at Pure Storage on Twitter (@PureStorage) and at their website http://www.purestorage.com for more details.

DISCLOSURE: Travel and expenses for Tech Field Day – Virtualization Field Day 3 were provided by the Tech Field Day organization. No compensation was received for attending the event. All content provided in my posts is of my own opinion based on independent research and information gathered during the sessions.