Cisco Workload Optimization Manager 2.2 Released!

The Turbonomic and Cisco teams have released the next update to the Cisco Workload Optimization Manager platform, version 2.2, packed with much more cloudy goodness: new targets and more cloud features for both planning and real-time optimization.

One of my favorite parts of building out what I love to call the Cisco Stack is the integration from the application (Cisco AppDynamics), down to the containers (Cisco Container Platform), into the virtualization layer (VMware, Hyper-V, OpenStack), and down to the metal (Cisco UCS, Cisco HyperFlex), including the network (Cisco Nexus, Cisco ACI, Cisco Tetration).

What’s Inside CWOM 2.2?

Big updates in this one include cloud pricing enhancements, custom pricing (for any cloud), Azure CSP rate cards, and the ability to plan future reserved capacity purchases on AWS, something not available in any other platform to date.

The release aligns with the Turbonomic 6.3 feature set, so you can get a quick view of what’s inside the latest release in my sizzle reel here:

You can visit the main solution page here: https://www.cisco.com/c/en/us/products/servers-unified-computing/workload-optimization-manager/index.html

Full User Guide for CWOM 2.2 here: https://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/ucs-workload-optimization-mgr/user-guide/2-2/cisco-ucs-wom-user-guide-2-2.pdf

Download the latest edition here: https://software.cisco.com/download/home/286321574/type/286317011/release/2.2.0

Full list of Turbonomic and Cisco partner resources here: https://resources.turbonomic.com/cisco




Adding SSH Access for DigitalOcean when Using Terraform

We’ve been looking at how to add a little Terraform into your IT infrastructure provisioning toolkit lately. DigitalOcean is also super easy and inexpensive for testing out processes and doing repeatable builds with Terraform.

The first post where we saw how to do a simple Terraform environment build on DigitalOcean appeared at my ON:Technology blog hosted at Turbonomic. That gave us the initial steps for a quick droplet deployment.

We also talked about how to access your DigitalOcean droplets from the command line using SSH keys here, which is very important: without SSH keys, you are relying on the root account with a password. DigitalOcean will create a complex password for you when deploying your droplet, and you cannot find it out without resetting the root password and restarting your droplet. That is both insecure (reverting to password access instead of an SSH key pair) and disruptive, because you have to reboot the instance to do the password reset.

Now it’s time to merge these two things together!

Adding SSH key details to the Terraform DigitalOcean provider

We are going to add a few things to what we have already done in those two other posts. You will need the following:

  • Your DigitalOcean API token, already exported as an environment variable as in the first post
  • An SSH key pair on your local machine, with the public key added to your DigitalOcean account (covered in the SSH key post)
  • The SSH fingerprint for that key from your DigitalOcean account settings (shown next)

Getting your SSH fingerprint is a simple process. Start by going to the top right of your DigitalOcean console to the icon which has a dropdown for your account settings:

In the profile page, choose the Settings option from the menu on the left-hand panel:

The SSH fingerprint that you’ll need is in the security settings page. Keep this somewhere as safe as you would your SSH keys themselves because this is an important piece of security information.

Using the SSH Details in Environment Variables

Our settings are going to be stored in local environment variables, just as our DigitalOcean key was in the first example blog. Because we have a few other things to keep track of now, we will see the changes in the provider.tf file:
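Here is a minimal sketch of what that provider.tf could look like; the digitalocean_token variable name is assumed from the first post and may differ in your setup:

    # On current Terraform versions the DigitalOcean provider must be declared explicitly.
    terraform {
      required_providers {
        digitalocean = {
          source = "digitalocean/digitalocean"
        }
      }
    }

    # Values are read from the environment at plan/apply time:
    #   TF_VAR_digitalocean_token, TF_VAR_digitalocean_ssh_fingerprint,
    #   TF_VAR_digitalocean_pub_key, TF_VAR_digitalocean_private_key
    variable "digitalocean_token" {}
    variable "digitalocean_ssh_fingerprint" {}
    variable "digitalocean_pub_key" {}
    variable "digitalocean_private_key" {}

    provider "digitalocean" {
      token = var.digitalocean_token
    }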

Our environment variables are going to have the same format: TF_VAR_digitalocean_ssh_fingerprint is the fingerprint you got from the security settings page. The other two things we need are the TF_VAR_digitalocean_pub_key and TF_VAR_digitalocean_private_key parameters, which are the paths to your local SSH key files.

NOTE: The file locations are not actually needed for basic key configuration with Terraform. I just thought we should set them up now because they will come in handy later in other blogs about using Terraform with DigitalOcean.

Use the export command to set up your variables. Our Terraform file now contains an extra config parameter, which you’ll see here:
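That extra config parameter attaches the key by its fingerprint; in the DigitalOcean provider this is the ssh_keys argument on the droplet resource. A minimal sketch follows; the resource name and the image, region, and size values here are placeholders rather than the ones from the earlier post:

    resource "digitalocean_droplet" "web" {
      image  = "ubuntu-22-04-x64"
      name   = "web-1"
      region = "nyc3"
      size   = "s-1vcpu-1gb"

      # Attach the key already uploaded to DigitalOcean by its fingerprint,
      # so the droplet is reachable with your key pair instead of a root password.
      ssh_keys = [var.digitalocean_ssh_fingerprint]
    }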

These new parameters read in everything we need to launch a new droplet, attach the appropriate SSH key by its fingerprint in DigitalOcean, and then allow us to manage the infrastructure with Terraform.

Time for our routine, which should always be terraform validate to confirm our syntax is good, followed by terraform plan to test the environment:

Now we run our terraform apply to launch the droplet:

Now we have launched a droplet on DigitalOcean with Terraform. Use the SSH command line to connect to the droplet as the root account. Make sure you’ve done all the steps in the previous blog to set up your ssh-agent and then you should be all set:
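If you need the droplet’s address for that SSH connection, a Terraform output is a handy optional addition; this sketch assumes the droplet resource is named web as in the example above:

    output "droplet_ip" {
      # Public IPv4 address of the droplet, shown after apply
      # and available any time with: terraform output droplet_ip
      value = digitalocean_droplet.web.ipv4_address
    }

From there, connecting as root at that address should just work once your ssh-agent is loaded.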

This is the next step in making more secure, repeatable, and composable infrastructure using Terraform on DigitalOcean. These same methods will show up again as we walk through more complex examples on DigitalOcean and other providers in the future.

Let’s clean up after ourselves and take advantage of the disposable, elastic nature of our public cloud infrastructure by simply running the terraform destroy command to remove the droplet:

Hopefully this is helpful!




Attaching Turbonomic to your AWS Environment

While it’s a seemingly simple task, I wanted to document it quickly and explain one of the very cool things about attaching your Turbonomic instance to AWS. For the latest release of the TurboStack Cloud Lab giveaway, we wanted to move further up the stack to include the AWS cloud as a target.

Even without the TurboStack goodies, you can already attach to AWS and get back great value in a few ways.  Let’s see the very simple steps to attach to the cloud with your control instance.

Attaching to an AWS Target

First, log in to your server UI with administrative credentials that will allow you to add a target.  Go to the Admin view and select Target Configuration from the workflows panel:

[Image: Target Configuration in the Admin view]

Click on the Add button and enter the following details:

  • Address: aws.amazon.com
  • Username: Your Access Key from AWS
  • Password: Your Access Key Secret from AWS

[Image: AWS target type and credential fields]

Next click the Add button in the Pending Targets area below the form, then press the Apply button at the very bottom.  That will take you to the next step which validates your configuration:

[Image: Target added and validated]

Now that you are validated, you will begin to see data populating as it is discovered from AWS.  The discovery cycle runs by default every 10 minutes, and as each entity is discovered, it is polled asynchronously from there for continuous gathering of instrumentation.

In your Inventory view, you will see the addition of AWS entities under the Datacenters, Physical Machines, Virtual Machines, and Applications sections:

[Image: AWS entities in the Inventory view]

If you expand one of the Datacenters, you will see that it is defined by Regions (example: ap-northeast-1) and then underneath that, you can expand to see the Availability Zones represented as Hosts:

[Image: Regions and Availability Zones in the Datacenter view]

Let’s expand our Applications and Virtual Machines, where you can see the stitching of the entities across all of the different entity types:

[Image: Consumption path from application to EC2 instance to Availability Zone]

You can see that we have a Virtual Machine (an EC2 instance) named bastion, which also has an Application entity and consumes resources from the us-east-1a AZ, with an EBS volume in the same AZ.

You can also see the cumulative totals under the Virtual Machines list to get a sense of how many instances you have running across the entire AWS environment. The running instances are counted in brackets at the end of each region listing. How cool is that?! As someone who constantly forgets about test instances running across numerous regions, this has been a saving grace for me.

You can also use the Physical Machines section to view each region. When you drill down into the PMs section, you will see the AZ listings underneath.

[Image: Physical Machine counters by region]

That’s all it takes to get your AWS environment attached. We will dive into other use cases covering application-level discovery and much more on the AWS public cloud.




Don’t Throw out that Spinning Disk Purchase Order Yet

Wait, what? Isn’t Flash the only future? Isn’t cloud-native the only way to develop applications? Isn’t [future of IT product] the only real solution?

It’s time for a quick little health check on the IT ecosystem. Before we start, I have to admit that I do lean forward with regards to technology. The reason is that I’ve witnessed countless technologists and organizations alike get caught out as technology passed them by and they were left scrambling to catch up.

As you’ll see when we wrap this quick little article, there is a reason I brought this up.

[insert IT product] of the future!

Whenever we look for the next big thing, and trust me, we are all doing it in one way or another, we tend to look a little too far down the road. Whether it’s the pundits (me included) or the analysts, there is a need for a five-year crystal ball so that we make the appropriate decision now.

A very important lesson I was reminded of when discussing upcoming road map features is that talking about what’s coming before it is available tends to slow down the buying cycle. People may be willing to hang on a little longer for that feature you are touting.

We know this as the Microsoft/Oracle/VMware/[many vendors] vaporware approach that has disappointed us so many times in the past.

The storage industry, we are told, is at an inflection point. Let’s roll back the calendar 10 years. The storage industry, 10 years ago, was at an inflection point. Here’s a hint… in 5-10 years it will be at another inflection point. The same could be said for the network industry, the software industry, and the hypervisor market.

We are always at an inflection point. What is often forgotten is that the long tail of legacy preserves its place in the industry for much longer than is often described.

I titled this article in relation to the many folks who are looking to abandon spinning disks for flash arrays and all-flash architectures across the board. We have been told that this is the inevitable future. Don’t get me wrong, there is a massive shift happening in data centers around the world. Flash storage is a phenomenal tool in the IT toolbox to bring us to a new generation of storage. It does not, however, stop the massive traditional magnetic storage market, which has a long life left in it.

Will our future predictions of today look as crazy as the future views in Popular Science used to? Back when they were published, it seemed like it was where things were going. Watch this and tell me if we got there:

Beta lost the war in the late 1980s, so why did it just die in 2016?

If you’ve been around long enough, you may remember the Beta versus VHS standards war. More recently we saw a similar battle over DVD standards, where Blu-ray won out over HD DVD. The reason this is important is that it was only just announced that Betamax tape production will end next year, in 2016, according to sources.

The long tail of legacy has been proven out in many aspects of IT. While we like to blame the Luddite mentality for hanging on to a lot of legacy technology and methodologies, the reality is that each of those legacy technologies serves a distinct purpose.

The world of technology is moving into the cloud, onto flash storage, and up the stack to containers and PaaS, and the open source alternatives to the traditional incumbent vendors are taking hold and growing. It is certainly a shift, but it will be a long time yet before we empty the data centers of hardware.