The Goal Graphic Novel is here!

As a long-time fan of all things related to the Theory of Constraints, I was extremely pleased and honored to join the early review program for The Goal: A Business Graphic Novel.  The original book has been the foundation of so much that has driven manufacturing to new levels, and its concepts have since spread into any number of other industries that have benefitted from the writings of Eli Goldratt.

The format is a natural fit because the book itself is a very character-driven story.  The narrative comes across very well in the graphic novel format, so if you’re a fan of this style of reading, The Goal in graphic format will definitely be one to add to your collection.

The next book I can definitely see going this way is The Phoenix Project.  Its story is a derivative of the style and teachings of The Goal, with a focus on DevOps methodologies rather than manufacturing.

I can say that this was a great read, and if you’re looking for a book that adds an interesting visual element to a profoundly important story of the Theory of Constraints in action, it is a must-read.  It’s a business book, a personal growth book, and, if you look around our IT communities, effectively the story of our everyday work.

You can head on over to the North River Press site to read up on the book and get your copy ordered: http://northriverpress.com/the-goal-a-business-graphic-novel/




Setting up Turbonomic Action Notifications to Slack Channels

An interesting use-case that I’ve bumped into lately is where folks want to enable automation, but they also need to know when automated things happen. Email was, and still is, the common platform for notifications, but many more organizations are adopting Slack for day-to-day activity monitoring and building out interesting, interactive ways to enable a ChatOps approach to IT operations management.

If you followed along with my first article, which showed you how to set up a custom WebHook integration for your Slack team channel, we will now take that one step further and show you how to configure Turbonomic to send notifications of actions to your Slack channel.

Setting up Action Scripts in Turbonomic

One of the cool features within Turbonomic is something called Action Scripts. These are scripts that run when a particular action happens on a particular entity within the environment. Action Scripts run at different times in the process, including before (PRE) and after (POST) the action, so that you can either get a notification or trigger some interaction with the action.

Action Scripts run for every action type available, including moves, scale/resize, and more. The naming of each Action Script reflects the timing (PRE/POST) and the action type. You only need to create one Action Script, which is hosted on your Turbonomic control instance and launched by the Turbonomic engine as actions are triggered.

The official documentation on using Action Scripts is here, but for our purposes I will give you a crash course in creating a PRE move script so that we can send Slack notifications when an application workload is about to move.

Variables Accessible during Action Script Execution

There are a number of environment variables which are generated when a Turbonomic action is instantiated. Some of these include:

$VMT_TARGET_NAME – the entity which is subject to the move action
$VMT_CURRENT_NAME – the source location where the entity is located
$VMT_NEW_NAME – the destination where the entity will be moved
$VMT_ACTION_NAME – the unique ID for the action

These are the ones I’ve chosen to include in my Slack notifications because I want to know the workload that is subject to the move, the source location, and the target location; having the ID of the action is also helpful for auditing and for deeper integration with a true ChatOps approach that we will dive into in another post.
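Before wiring up Slack, a quick way to confirm what these variables contain is to log them from a simple script. This is just a sanity-check sketch, and the log file path is an arbitrary example:

```bash
#!/bin/bash
# Sanity-check sketch: append the Turbonomic action variables to a local
# file to confirm what the engine passes in. The path is an example only.
echo "$(date) action=${VMT_ACTION_NAME} target=${VMT_TARGET_NAME} from=${VMT_CURRENT_NAME} to=${VMT_NEW_NAME}" >> /tmp/turbonomic-actions.log
```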

For now, the Slack notifications will simply log to our Slack channel whenever moves are occurring. You can select from any of the different actions in the Action Scripts, so this is a good place to start.

The PRE_MOVE_VirtualMachine.sh Script

The simplest view of the script is as follows. Create a file named PRE_MOVE_VirtualMachine.sh, which is the one called by a move action. That move could be anything from a VM migration across hosts or clusters to container pod changes and more.

We need to take the action variables we have been given and pass them into our Slack API call. The simplest method is to add a cURL command to the Action Script, using the native cURL binary available on your Turbonomic instance.

The command to post to the Slack API requires your WebHook URL, which you can get by following this guide to setting up the WebHook.

This is the full GitHub Gist of the code. If you have existing Action Scripts in the folder, you can simply append these lines to your existing script.

Take note of the use of quotes within the command line: we need to pass the variables into the cURL command, which requires double-quotes around the entire payload so the shell can expand the variables.
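As a reference, here is a minimal sketch of what the script body could look like. The WebHook URL is a placeholder, and the message format is just one way to lay it out:

```bash
#!/bin/bash
# PRE_MOVE_VirtualMachine.sh -- minimal sketch of a Slack notification for
# a Turbonomic PRE move action. The WebHook URL below is a placeholder;
# substitute the one generated for your Slack team.
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

# Build the message from the action variables Turbonomic provides.
MESSAGE="Move action ${VMT_ACTION_NAME}: ${VMT_TARGET_NAME} moving from ${VMT_CURRENT_NAME} to ${VMT_NEW_NAME}"

# Note the quoting: the payload is wrapped in double-quotes so the shell
# expands the variables, and the inner JSON quotes are escaped.
curl -s -X POST -H 'Content-type: application/json' \
  --data "{\"text\": \"${MESSAGE}\"}" \
  "${SLACK_WEBHOOK_URL}"
```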

Last step – Enable Action Script for Moves in Turbonomic

At the time of this writing, the Action Scripts feature is still in the traditional Flash UI. Go to the Policy view in your Turbonomic instance and expand the Action | VM section, where we will enable the Action Scripts for Virtual Machines in this case.

Simply check off the Action Script Settings option for the PreMove action and you are all set. In the image above, you can see that I also have Move actions automated, which may be set to Manual in your environment.

NOTE: Enabling policy changes within Turbonomic will trigger a refresh of the actions. This is because the state of your policies has changed, and the entities in the environment must shop for the appropriate resources to satisfy their demand under the newly formed policy configuration. This is the nature of a real-time system: no actions are held when they could be stale or unnecessary due to other environmental changes that have occurred.

The Slack View

Under your Slack channel, you will now begin seeing notifications whenever an action occurs. This is what your channel will start to look like as the moves take place:

In my case, I have enabled full automation; this means that these actions are triggered and the notification is sent as the action is about to occur. We can also use a POST_MOVE script, which is handy if we are building out other hooks.

The goal with Action Scripts is to be able to integrate with any application lifecycle management process or product. Look for much more in the coming weeks as we walk through more integrations that can be done with this method.




Why Google Needs Consistency for Enterprise Cloud Customers

Remember Google Buzz? Orkut? Wave? Reader? Google Talk? Then there was Google Picasa, which became Photos… so far. There are sites dedicated to what we call the Google Graveyard, and that doesn’t even get into Google Glass, Site Search, the Search Appliance, and others. I logged into my Google Analytics platform today and found a completely different UI and UX than I had ever seen before, without warning. I used to use Google Hangouts On Air for the Virtual Design Master event every year, until this year, when Hangouts On Air no longer worked and I had to move to Zoom, pushing to a YouTube Live Event.

The reason I bring these up is that Google has an optics problem which may affect how many potential enterprise cloud customers choose to adopt, or rather not to adopt, Google Cloud Platform. One of the big things that traditional enterprise customers enjoy is the warm embrace of platforms that have consistency. Google has tended to have challenges around product changes and the public face of those changes. Google most likely has lots of data backing each decision to shift or sunset a product, but that data does little to reassure an enterprise that has built its operations on something that can disappear.

Can GCP make Enterprises Greene with Envy?

Diane Greene came over to Google by way of the acquisition of her most recent startup, Bebop. It’s my opinion that the startup was the packaging in which Google could acquire the real value, which is Diane herself. Diane has a proven track record, having launched a little virtualization concept into the juggernaut that became VMware. The most recent Google Cloud Next event featured a strong new focus on the enterprise, with an aim to become the number one public cloud provider within five years.

A quote that stood out from the event was “I actually think we have a huge advantage in our data centers, in our infrastructure, availability, security and how we automate things. We just haven’t packaged it up perfectly yet.” It highlights the challenge Google will face: what many enterprises need is a packaged, neatly consumable product that they know they can adopt and maintain, with long support plans and clean deprecation.

There is little doubt about Google’s ability to develop incredible products that will give birth to next-generation application infrastructure few can rival. The only doubt is whether enterprise audiences are ready to adapt to the speed at which Google innovates its product set. If Kubernetes is any sign of how well we are leaning in, it is very easy to see that Google can take on the market and win a significant share.

Google Cloud Platform will be a juggernaut in the public cloud realm. That is being proven out by some major customers already moving onto the platform and many more dabbling. Multi-cloud is the new cloud, and GCP will inevitably become a key player in that strategy because of its underlying GKE product supporting Kubernetes workloads. In my opinion, the multi-cloud approach, enabled by containerized workloads with an enterprise-grade scheduler, is the goal we should strive for.

The only question is how long it will take before we can all put our trust in the one thing Google has lacked: consistency.




Got Logs? Get a PaperTrail: First thoughts

I stumbled upon Papertrail through a Twitter ad (hey, those things work sometimes!) and figured I should take a quick look. Given the amount of work I’ve been doing around compliance management and deployment of distributed systems, it seems like an interesting fit. Luckily, they have a free tier, which means it’s easy to kick the tires before diving in with a paid commitment.

The concept seems fairly easy:

The signup process was pretty seamless. I went to the pricing page to see the plan levels; it also has the Free Plan – Sign Up button nicely planted in the center of the screen:

What I really like about this product is that licensing goes by data ingestion rather than by endpoints. Pricing scalability is a concern for me, so knowing that the aggregate amount of data drives the price is rather comforting.

The free tier gets a generous first month of data, followed by a 100 MB per month limit. That’s probably not too difficult to hit, so you can easily see why people will be drawn to the $7 first paid tier, which ups the data to 1 GB of storage with one year of retention. Clearly, at 7 days of retention, the free tier is meant to give you a taste and leave you looking for more if the usability works for you.

First Steps and the User Experience

On completion of the first form, there is a confirmation email. You are also logged in immediately and ready to roll with the simple welcome screen:

Clicking the button to get started brings you to the instruction screen, complete with my favorite (read: most despised) method of deploying: piping a script into sudo bash.

There is an option to run each script component individually, which I much prefer because you can see the details of what is happening.
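For a sense of what the installer is doing, the system-log side boils down to forwarding syslog to your Papertrail endpoint. This is a hedged sketch; the host and port are placeholders for the log destination shown in your account:

```bash
# Forward all syslog messages to Papertrail (host/port are placeholders;
# use the log destination shown in your Papertrail account settings).
echo '*.* @logsN.papertrailapp.com:12345' | sudo tee -a /etc/rsyslog.conf

# Restart rsyslog so the new forwarding rule takes effect.
sudo service rsyslog restart
```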

Once you’ve done the initial setup process, you get a quick response showing you have active events being logged:

Basic system logging is one thing; the next logical step is to up the game a bit and add some application-level logging, which is done using the remote_syslog2 collector. The docs and process to deploy are available inside the Papertrail site as well:
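To give a flavor of that setup, here is a sketch of a remote_syslog2 configuration using the Apache error log as the source. Again, the host and port are placeholders from your Papertrail settings:

```bash
# Write a minimal remote_syslog2 config and start the daemon.
# Host/port are placeholders; use the values from your Papertrail account.
sudo tee /etc/log_files.yml > /dev/null <<'EOF'
files:
  - /var/log/apache2/error.log
destination:
  host: logsN.papertrailapp.com
  port: 12345
  protocol: tls
EOF

# Start the collector (it runs as a daemon by default).
sudo remote_syslog
```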

Now that I’ve got both my system logs and an application log (I picked the Apache error log as a source) working, I’m redirected to the live results in my Events screen (mildly censored to protect the innocent):

You can drill down into the different context views by highlighting and clicking anywhere in the events screen:

Searching the logs is pretty simple: the search bar uses simple structured search commands to look for content. Searches can be saved and stored for reporting and repeated use.
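A few illustrative searches, based on the common quoted-phrase and exclusion syntax (treat these as examples rather than a syntax reference):

```
"connection timed out"    # events containing the exact phrase
apache2 error             # events containing both terms
error -favicon            # matches "error", excludes lines with "favicon"
```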

On first pass, this looks like a great product, and it is especially worth thinking about as you look at how to aggregate logs for search, and for retention for security and auditing.

The keys will be clearly defining the firewall and VPC rules to ensure you have access to the remote Papertrail servers, and then keeping track of the data you need to retain. I spent literally 15 minutes in the app, from first click to live viewing of system and application logs. All that, and it’s free too.

There is a referral link which you can use here if you want to try it out.

Give it a try if you’re keen, and let me know your experience, or about other freely available products that could do the same thing. It’s always good to share our learnings with the community!