Setting up Turbonomic Action Notifications to Slack Channels

An interesting use-case that I’ve bumped into lately is where folks want to enable automation, but they also need to know when automated things happen. Email was, and still is, the common platform for notifications, but many more organizations are adopting Slack for day-to-day activity monitoring and building out interactive ways to enable the ChatOps approach to IT operations management.

You may have followed along with my first article, which showed you how to set up a custom WebHook integration for your Slack team channel. Now we will take that one step further and configure Turbonomic to send notifications of actions to your Slack channel.

Setting up Action Scripts in Turbonomic

One of the cool features within Turbonomic is something called Action Scripts. These are scripts that run when a particular action happens on a particular entity within the environment. Action Scripts run at different points in the process, including before (PRE) and after (POST) the action, so you can either get a notification or trigger some interaction with the action.

Action Scripts run for every action type available, including moves, scale/resize, and more. The naming of each Action Script reflects the timing (PRE/POST) and the action type. You only need to create one Action Script, which is hosted on your Turbonomic control instance and launched by the Turbonomic engine as actions are triggered.

The official documentation on using Action Scripts is here, but for our purposes I will give you a crash course in creating a PRE move script so that we can send Slack notifications when an application workload is about to move.

Variables Accessible during Action Script Execution

There are a number of environment variables which are generated when a Turbonomic action is instantiated. Some of these include:

$VMT_TARGET_NAME – the entity which is subject to the move action
$VMT_CURRENT_NAME – the source location where the entity is located
$VMT_NEW_NAME – the destination where the entity will be moved
$VMT_ACTION_NAME – the unique ID for the action

These are the ones that I’ve chosen to include in my Slack notifications because I want to know the workload that is subject to the move, the source location, and the target location. Having the ID of the action is helpful for auditing and also for deeper integration with a true ChatOps approach that we will dive into in another post.

For now, the Slack notifications will simply log to our Slack channel whenever moves are occurring. You can select any of the available action types for your Action Scripts, so this is a good place to start.
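If you want to confirm what a real action passes in before wiring up Slack, a harmless first test is to dump the variables to a local file from the script. This is a minimal sketch, and the log path is just an example:

#!/bin/bash
# Append the action details to a local log file for a quick sanity check
echo "$(date) ACTION ${VMT_ACTION_NAME}: ${VMT_TARGET_NAME} moving from ${VMT_CURRENT_NAME} to ${VMT_NEW_NAME}" >> /tmp/turbonomic-actions.log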

The PRE_MOVE_VirtualMachine.sh Script

The simplest view of the script is as follows. Create a file named PRE_MOVE_VirtualMachine.sh, which is the one called by a move action. This could be anything from a VM migration across hosts or clusters to container pod changes and more.

We need to take the action variables that we have been given and pass them into our Slack API call. The simplest method is to put a cURL command into the Action Script, using the native cURL binary available on your Turbonomic instance.

The command to post to the Slack API requires your WebHook URL, which you can get by following this guide to setting up the WebHook.

This is the full GitHub Gist of the code. If you have existing Action Scripts in the folder, you can simply append these lines to your existing script.

Take note of the use of quotes within the command. We need the shell to expand the variables inside the cURL payload, which requires double quotes around the entire data argument and escaped double quotes within the JSON itself.
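To give you an idea of the shape of the script, here is a minimal sketch of what PRE_MOVE_VirtualMachine.sh can look like. The WebHook URL below is a placeholder; substitute the one generated for your own Slack team:

#!/bin/bash
# Placeholder - replace with the Incoming WebHook URL for your Slack team
SLACK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/TOKEN"

# Build the message from the Turbonomic action variables and post it to Slack.
# Double quotes around the payload let the shell expand the variables;
# the quotes inside the JSON itself are escaped.
curl -X POST -H 'Content-type: application/json' \
  --data "{\"text\": \"Move starting: ${VMT_TARGET_NAME} from ${VMT_CURRENT_NAME} to ${VMT_NEW_NAME} (action ${VMT_ACTION_NAME})\"}" \
  "${SLACK_URL}"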

Last step – Enable Action Script for Moves in Turbonomic

At the time of this writing, the Action Scripts features are still in the traditional Flash UI. Go to the Policy view in your Turbonomic instance and expand the Action | VM section, where we will enable Action Scripts for Virtual Machines in this case.

Simply check off the Action Script setting for the PreMove action and you are all set. In the image above you can see that I also have Move actions automated, which may be set to Manual in your environment.

NOTE: Enabling policy changes within Turbonomic will trigger a refresh of the actions. This is because the state of your policies has changed, and the entities in the environment must shop for the appropriate resources to satisfy their demand based on the newly formed policy configuration. This is the nature of a real-time system: no actions are held when they could be stale or unnecessary due to other environmental changes that have occurred.

The Slack View

Under your Slack channel, you will now begin seeing notifications whenever an action occurs. This is what your channel will start to look like as the moves take place:

In my case, I have enabled full automation; this means that these actions are triggered and the notification happens as the action is about to occur. We can also use a POST_MOVE script, which is handy if we are building out other hooks.

The goal with Action Scripts is to be able to integrate with any application lifecycle management process or product. Look for much more in the coming weeks as we walk through more integrations that can be done with this method.




Got Logs? Get a PaperTrail: First thoughts

I stumbled upon Papertrail through a Twitter ad (hey, those things work sometimes!) and figured that I should take a quick look. Given the amount of work I’ve been doing around compliance management and deployment of distributed systems, it seemed like an interesting fit. Luckily, there is a free tier as well, which means it’s easy to kick the tires before diving in with a paid commitment.

The concept seems fairly easy:

The signup process was pretty seamless. I went to the pricing page to see the plan levels, which also has the Free Plan – Sign Up button nicely planted in the center of the screen:

What I really like about this product is the potential to license by data ingestion rather than by endpoints. Scalability of pricing is a concern for me, so knowing that the amount of aggregate data drives the price was rather comforting.

The free tier gets a first month with a generous data allowance, followed by a 100 MB per month limit after that. That’s not too difficult to hit, so you can easily see that people will be drawn to the $7 first paid tier, which ups the data to 1 GB of storage and one year of retention. Clearly, at 7 days of retention for the free tier, this is meant to give you a taste and leave you looking for more if the usability works for you.

First Steps and the User Experience

On completion of the first form, there is a confirmation email. You are also logged in immediately and ready to roll with the simple welcome screen:

Clicking the button to get started brings you to the instruction screen, complete with my favorite (read: most despised) method of deployment: piping a script into sudo bash.

There is an option to run each script component individually, which is much preferred so you can see the details of what is happening.
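If you take the manual route, the setup boils down to pointing syslog at Papertrail. A rough sketch for an rsyslog-based system follows; the host and port are placeholders for the ones assigned on your Papertrail account page:

# Forward all syslog traffic to your Papertrail endpoint
echo '*.* @logsN.papertrailapp.com:XXXXX' | sudo tee -a /etc/rsyslog.conf

# Restart rsyslog to pick up the change
sudo service rsyslog restart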

Once you’ve done the initial setup process, you get a quick response showing you have active events being logged:

Basic system logging is one thing, so the next logical step is to up the game a bit and add some application-level logging, which is done using the remote_syslog2 collector. The docs and process to deploy are available inside the Papertrail site as well:
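As a rough sketch of what the collector setup looks like on the host, remote_syslog2 reads a small YAML file listing the log files to watch and the Papertrail destination. Again, the host and port are placeholders from your account:

# Tell remote_syslog2 which files to ship and where to send them
sudo tee /etc/log_files.yml <<'EOF'
files:
  - /var/log/httpd/error_log
destination:
  host: logsN.papertrailapp.com
  port: XXXXX
  protocol: tls
EOF

# Start the collector daemon
sudo remote_syslog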

Now that I’ve got both my system and an application (I’ve picked the Apache error log as a source location) logging, I’m redirected to see the live results in my Events screen (mildly censored to protect the innocent):

You can highlight specific events and drill down into the different context views by clicking anywhere in the events screen:

Searching the logs is pretty simple, with a search bar that uses simple structured search commands to look for content. Searches can be saved and stored for reporting and repeated use.
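A few examples of the kinds of queries the search bar accepts, based on the documented syntax of quoted phrases, OR, and a leading minus for exclusion (treat these as a sketch):

"connection refused"     # match an exact phrase
sshd OR sudo             # match either term
error -cron              # match errors, excluding cron noise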

On the first pass, this looks like a great product, and it is especially worth thinking about as you look at how to aggregate logs for search and for retention for security and auditing.

The key will be clearly defining the firewall and VPC rules to ensure you have access to the remote Papertrail server, and then making sure that you keep track of the data you need to retain. I literally spent 15 minutes in the app, and that was from first click to live viewing of system and application logs. All that, and it’s free too.

There is a referral link which you can use here if you want to try it out.

Give it a try if you’re keen and let me know your experiences, or about other freely available products that could do the same thing. It’s always good to share our learnings with the community!




Turbonomic Technical Poster Goodness

As a long-time fan of the technical posters that came out of the vCommunity and the PowerShell community, I was very happy to have a chance to work with Rene Van Den Bedem (aka @VCDX133) on something fun at Turbonomic. Rene and I teamed up to craft the first official Turbonomic technical poster, for Turbonomic version 5.9, which you can download in PDF format right from the Green Circle Community.

Big thanks to Rene for all that he has done to help my team with this, and of course for his continued support of so many community efforts across our shared IT communities.

Click the handy dandy link here to go to the site and get your own Turbonomic Technical poster!




One Vault to Secure Them All: HashiCorp Releases Vault Enterprise 0.7

There are a few key reasons that you need to look at Vault by HashiCorp. If you’re in the business of IT, on either the Operations or the Development side of the aisle, you should already be looking at the entire HashiCorp ecosystem of tools. Vault is probably the one that has my eye the most lately, other than Terraform. Here is why I think it’s important:

  • Secret management is difficult
  • People are not good at secret management
  • Did I mention that secret management was difficult?

There are deeper technical reasons around handling secrets with automated deployments and introducing full multi-environment CI/CD, but the reality for many of the folks who read my blog and whom I speak to in the community is that we are very early in the evolution from traditional application management to next-generation application management. What I mean is that we are doing some things to enable better flow of applications and better management of infrastructure, but with some lingering bad practices.

Let’s get to the good stuff about HashiCorp Vault that we are talking about today.

Announcing HashiCorp Vault Enterprise version 0.7!

This is a very big deal as far as releases go, for a few reasons:

  • Secure multi-datacenter replication
  • Expanded granularity with Access Control policies
  • Enhanced UI to manage existing and new Vault capabilities

Many development and operations teams are struggling to find the right platform for secret management. Each public cloud provider has its own self-contained secret management tool. Many of the other platform providers, such as Docker Datacenter, also have their own version. The challenge with a vendor- or platform-specific solution is that you’re locked into the ecosystem.

Vault Enterprise as your All Around Secret Management

The reason that I’ve been digging into lots of the HashiCorp tools over the last few years is that they provide a really important abstraction from the underlying vendor platforms, which are integrated through the open source providers. As I’ve moved up the stack from Vagrant for local builds and deployment to Terraform for IaaS and cloud provider builds, secret management has leapt to the fore as the important next step.

Vault has both the traditional open source version and the Vault Enterprise offering. Enterprise gives you support, plus a few nifty additions that the regular Vault product doesn’t have. This update includes the very easy-to-use UI:

Under the replication area in the UI we can see where our replicas are enabled and the status of each of them. Replication can be configured right in the UI by administrators, which eases the process quite a bit:

Replication across environments ensures that you have the resiliency of a distributed environment, and that you can keep the secret backends close to where they are being consumed by your applications and infrastructure. This is a big win over the standalone version, which required opening up VPNs or serving over HTTPS, which is how many have been doing it in the past. Or, worse, they were running multiple Vaults in order to host one in each cloud or on-prem environment.
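For those who prefer the CLI, the flow looks roughly like the sketch below: enable the primary, mint an activation token for the secondary, and hand that token to the secondary. Treat the exact paths as a sketch based on the replication docs for this release, and confirm against the current documentation:

# On the primary cluster: enable replication
vault write -f sys/replication/primary/enable

# Generate an activation token for a secondary named "dc2"
vault write sys/replication/primary/secondary-token id=dc2

# On the secondary cluster: activate it with the token from above
vault write sys/replication/secondary/enable token=<activation-token>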

We have response wrapping very easily accessible in the UI:
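The same capability is available from the CLI; for example, wrapping a read so the real secret is only exposed when the single-use wrapping token is redeemed. A quick sketch with a hypothetical secret path:

# Return a single-use wrapping token instead of the secret itself
vault read -wrap-ttl=60s secret/myapp/config

# The holder of the wrapping token redeems it exactly once
vault unwrap <wrapping-token>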

As mentioned above, we also have the more granular policy management in Vault Enterprise 0.7 as you can see here:
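As a sketch of what that granularity looks like in practice, a policy can scope specific capabilities to specific paths. The policy name and path here are hypothetical:

# Define a policy granting read-only access to one application's secrets
cat > myapp-readonly.hcl <<'EOF'
path "secret/myapp/*" {
  capabilities = ["read", "list"]
}
EOF

# Load the policy into Vault
vault policy-write myapp-readonly myapp-readonly.hcl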

If you want some more info on what HashiCorp is all about, I highly suggest that you have a listen to the recent podcasts I published over at the GC On-Demand site, including the first with founder Mitchell Hashimoto and the second with co-founder Armon Dadgar. Both episodes will open up a lot of detail on what’s happening at HashiCorp and in the industry in general, and hopefully get you excited to kick the tires on some of these cool tools!

Congratulations to the HashiCorp team and community on the release of Vault Enterprise 0.7 today!  You can read up on the full press release of the Vault Enterprise update here at the HashiCorp website.




Customizing the Turbonomic HTML5 Login Screen Background

DISCLAIMER: This is currently unsupported, as any changes made to your Turbonomic login page may be removed by subsequent Turbonomic application updates. This is meant to be a little bit of fun and can be easily repeated and reversed in the case of any updates or issues.

Sometimes you want to spice up the web view for your application platforms.

This inspiration came from William Lam as a fun little add-on when you have a chance to update your login screen imagery. With the new HTML5 UI in Turbonomic, it is as easy as one simple line of code to add a nice background to your login screen. Here is the before:

Since I’m a bit of a space fanatic, I want to use a little star-inspired look:

To add your own custom flavor, you simply need to remotely attach to your TAP instance over SSH, browse to the /srv/www/htdocs/com.vmturbo.UX/app directory, and then modify the BODY tag in the index.html file.
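For safety’s sake, the whole change amounts to something like this (the hostname is a placeholder, and the backup makes the edit trivial to reverse):

# Connect to the Turbonomic instance (placeholder hostname)
ssh root@your-turbonomic-instance

# Go to the UI directory and keep a backup before editing
cd /srv/www/htdocs/com.vmturbo.UX/app
cp index.html index.html.bak
vi index.html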

Scroll down to the very bottom of the file because it’s the last few lines you need to access. Here is the before view:

Here is the updated code to use in your BODY tag:

<body style="background-image: url(BACKGROUNDIMAGEFILENAME);background-size: contain;background-repeat: no-repeat;background-color: #000000">

This is the code that I’ve used for a web-hosted image:

<body style="background-image: url(https://static.pexels.com/photos/107958/pexels-photo-107958.jpeg);background-size: contain;background-repeat: no-repeat;background-color: #000000">

Note the background-color property as well. That is for the overflow on the screen when your image doesn’t fill the full screen height and width; I’ve set the background to black for the image I’ve chosen. You can also upload your own custom image to your Turbonomic instance in the same folder, but as warned above, you may find that this update has to be redone manually as you apply future application updates to the Turbonomic environment.

For custom local images, the code uses a local directory reference. For ease of use, upload the image file right to the same folder and you can simply use the filename in the CSS code, as shown below.
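For example, with a hypothetical file named my-background.jpg uploaded to the same app directory, the tag would look like this:

<body style="background-image: url(my-background.jpg);background-size: contain;background-repeat: no-repeat;background-color: #000000">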

The real fun is when you get to share your result. I’d love to see your own version of the custom login screen. Drop a comment below with your example and show how you liven up your Turbonomic instance with a little personalized view.




Git Remove Multiple Deleted Files

When working in the Git version control system, you may find yourself handling large numbers of files in a single commit. The commit part is the easy part. Adding files is very simple using the git add * command, which adds all of the files that are new since the most recent commit.

Running a git status shows a few files to be added. We add them all using a git add * command, and see that the files are added and ready for a commit:

[Screenshot: git status and git add output]

When you remove a large number of files, you might think that the same process would work for removing them from the previous state of the repository. Removing a single file is done with the git rm filename command. You can use wildcards, but that’s going to do a lot more than you would hope.

WARNING: Seriously, don’t try this on a repository that you care about. If you run a git rm * just like you did with the git add * process, you may find that nothing is removed from the local copy of your repo. In worse situations, you may find that far too much is removed. A new commit will then leave you in a rather unfortunate state.

How to Safely Remove Deleted Local Files From a Git Repo

There is a simple one-liner that will help you safely remove your local deletions from your repository: the git ls-files command with the --deleted -z parameters, piped into git rm through xargs -0. The -z and -0 flags pass NUL-delimited filenames between the two commands, so paths containing spaces survive the pipe intact.

The Magical One-Liner

This is the full one-liner:

git ls-files --deleted -z | xargs -0 git rm

This is the result:

[Screenshot: output of the git ls-files one-liner]

Using that command is much safer. It removes all of the files marked as deleted, ensuring your next commit is cleaned of your deleted files and contains nothing that you unexpectedly removed by a slip of a wildcard.
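As a quick worked example of the flow (the file names are just for illustration):

# Delete a couple of files locally
rm old-notes.txt old-config.yml

# Stage only the files Git already sees as deleted
git ls-files --deleted -z | xargs -0 git rm

# Commit the cleanup
git commit -m "Remove obsolete files"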