Getting Terraform Provisioning Parameters from the Packet.net API

Provisioning on Packet.net is super easy using Terraform. One thing you need to know up front is that Terraform, like many other provisioning tools, requires a minimum set of parameters before it can launch anything.

At a minimum, you need to provide the following parameters, as shown in the Terraform docs for the Packet provider:

  • hostname – gotta name ’em all
  • project_id – you need to know, or create, the project to launch into
  • facility – which location are you deploying into? (EWR1, SJC1, etc.)
  • plan – which node type?
  • billing_cycle – hourly or monthly
  • operating_system – which OS will the node run?

Some of these are simple because they are your own choices: we pick the hostname, and we pick the billing cycle as either hourly or monthly. How do we get the other details about our deployment? You can gather some of the data in a browser, such as opening your project and pulling the project ID from the URL, but that still leaves us in search of the plan, the operating_system, and the facility.

For completeness, let’s learn how to gather all four items (operating system, project ID, plan, and facility) from the Packet.net API.

You’ll need a terminal session, your API key to query the Packet.net API, and the jq tool for parsing the JSON results into something a little more friendly.

Querying the API is as easy as sending your token to the API using the cURL command and selecting which entities you want to query. This is the basic framework:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/OBJECT'
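If you would rather not paste the token into every command, you can stash it in a shell variable first and reference it in the header. PACKET_AUTH_TOKEN is just a local variable name used for this sketch, not something the API requires:

export PACKET_AUTH_TOKEN='YOURAPITOKEN'
curl -s -X GET -H "X-Auth-Token: ${PACKET_AUTH_TOKEN}" 'https://api.packet.net/OBJECT'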

Now we can dig into the four easy examples we have.

Finding the Packet.net Facility Name

This simple one-liner pulls the JSON result that lists the locations and their facility codes, then parses out just the codes you can use. If you remove the '.facilities[].code' portion of the command, it will show you the full pretty-printed JSON result, including the full facility descriptions.

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/facilities' | jq '.facilities[].code'

Finding the Packet.net Project ID

You’ll want the full JSON result so you can choose from your active projects if you have more than one. Just drill into the JSON results and you can locate the id field:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/projects' | jq
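If you want to trim that result down, a small jq filter can print just the project names next to their IDs. This is a minimal sketch and assumes the standard projects payload where each entry carries name and id fields:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/projects' | jq '.projects[] | {name, id}'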

Finding the Packet.net Plan Names

Plans, like facilities, don’t change very often. Here is the simple query to get all the plan slugs so you can match them to the node type you want to use:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/plans' | jq '.plans[].slug'
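If the slugs alone don’t tell you enough, you can print the human-readable plan name next to each slug to make the matching easier. This assumes each plan object carries a name field alongside the slug:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/plans' | jq -r '.plans[] | "\(.slug) - \(.name)"'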

Finding the Packet.net Operating System Types

By now, you can guess where we are going with the next one. Query the API, parse out the results, and grab the slugs for the Operating System names, which we will use for Terraform and other provisioning tools that consume the Packet API.

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/operating-systems' | jq '.operating_systems[].slug'

The result gives you all of the slug names that are usable as the operating_system parameter. In the case of vSphere 6.5, it happens to be vmware_esxi_6_5, which may not have been obvious if you were trying to guess it.
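If you are hunting for one particular family of operating systems, piping the raw slugs through grep narrows it down quickly; vmware here is just an example search string:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/operating-systems' | jq -r '.operating_systems[].slug' | grep -i vmware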

Now you can take those JSON results and feed them into a Terraform file, or use these raw queries as part of other configuration management and provisioning solutions. Hope you find this helpful!
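As one last illustration of that hand-off, here is a minimal sketch that captures the values into shell variables and passes them to a Terraform run on the command line. It assumes your Terraform configuration declares input variables named facility, plan, and operating_system; those variable names are hypothetical and only here for illustration, and the jq filters simply grab the first entry, so adjust them to the values you actually want:

FACILITY=$(curl -s -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/facilities' | jq -r '.facilities[0].code')
PLAN=$(curl -s -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/plans' | jq -r '.plans[0].slug')
OS=$(curl -s -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/operating-systems' | jq -r '.operating_systems[0].slug')

terraform apply -var "facility=${FACILITY}" -var "plan=${PLAN}" -var "operating_system=${OS}"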

Also, you can sign up for Packet.net to kick the tires on this goodness, and you can use VDM25 as a referral code to get a $25 credit. Make sure you tell them DiscoPosse and the Virtual Design Master crew sent you!




Setting up Turbonomic Action Notifications to Slack Channels

An interesting use-case that I’ve bumped into lately is where folks want to enable automation, but they also need to know when automated things happen. Email was, and still is, the common platform for notifications, but many more organizations are adopting Slack for day-to-day activity monitoring and building out interesting, interactive ways to enable the ChatOps approach to IT operations management.

You may have followed along with my first article, which showed you how to set up a custom WebHook integration for your Slack team channel. We will take that one step further and configure Turbonomic to send notifications of actions to your Slack channel.

Setting up Action Scripts in Turbonomic

One of the cool features within Turbonomic is something called Action Scripts. These are scripts that run when a particular action happens on a particular entity within the environment. Action Scripts run at different points in the process, including before (PRE) and after (POST) the action, so that you can either get a notification or trigger some interaction with the action.

Action Scripts are available for every action type, including moves, scale/resize, and more. The name of each Action Script reflects the timing (PRE/POST) and the action type. You only need to create one Action Script, which is hosted on your Turbonomic control instance and launched by the Turbonomic engine as actions are triggered.

The official documentation on using Action Scripts is here, but for our purposes I will give you a crash course in creating a PRE move script so that we can send Slack notifications when an application workload is about to move.

Variables Accessible during Action Script Execution

There are a number of environment variables which are generated when a Turbonomic action is instantiated. Some of these include:

$VMT_TARGET_NAME – the entity which is subject to the move action
$VMT_CURRENT_NAME – the source location where the entity is located
$VMT_NEW_NAME – the destination where the entity will be moved
$VMT_ACTION_NAME – the unique ID for the action

These are the ones I’ve chosen to include in my Slack notifications because I want to know the workload that is subject to the move, the source location, and the target location; the ID of the action is also helpful for auditing and for deeper integration with a true ChatOps approach that we will dive into in another post.

For now, the Slack notifications will simply log to our Slack channel whenever moves occur. You can hook into any of the different action types with Action Scripts, so this is a good place to start.

The PRE_MOVE_VirtualMachine.sh Script

The simplest view of the script is as follows. Create a file named PRE_MOVE_VirtualMachine.sh, which is the one called before a move action. The move could be anything from a VM migration across hosts or clusters to container pod changes and more.

We need to take the action variables that we have been given and pass them into our Slack API call. The simplest method is to add a cURL command to the Action Script, using the native cURL binary available on your Turbonomic instance.

The command to post to the Slack API requires your WebHook URL, which you can get by following this guide to setting up the WebHook.

This is the full GitHub Gist of the code. If you have existing Action Scripts in the folder, you can simply append these lines to your existing script.

Take note of the use of quotes within the command line, as we need to pass the variables into the cURL command, which requires additional double-quotes around the entire data string.
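If you are building the script from scratch, here is a minimal sketch of what the body of PRE_MOVE_VirtualMachine.sh can look like. The WebHook URL is a placeholder for your own, and the message text is only one way to format the notification:

#!/bin/bash
# Placeholder - swap in the Incoming WebHook URL from your own Slack team
SLACK_WEBHOOK_URL='https://hooks.slack.com/services/YOUR/WEBHOOK/URL'

# Build the notification text from the Turbonomic action environment variables
MESSAGE="Turbonomic action ${VMT_ACTION_NAME}: moving ${VMT_TARGET_NAME} from ${VMT_CURRENT_NAME} to ${VMT_NEW_NAME}"

# Post the JSON payload to Slack; note the escaped double-quotes inside the --data string
curl -s -X POST -H 'Content-type: application/json' --data "{\"text\": \"${MESSAGE}\"}" "${SLACK_WEBHOOK_URL}"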

Last step – Enable Action Script for Moves in Turbonomic

At the time of this writing, the Action Scripts feature is still in the traditional Flash UI. Go to the Policy view in your Turbonomic instance and expand the Action | VM section, where we will enable Action Scripts for Virtual Machines in this case.

Simply check off the Action Script Settings box for the PreMove action and you are all set. In the image above you can see that I also have Move actions automated, which may be set to Manual in your environment.

NOTE: Enabling policy changes within Turbonomic will trigger a refresh of the actions. This is because the state of your policies has changed, and the entities in the environment must shop for the appropriate resources to satisfy their demand under the newly formed policy configuration. This is the nature of a real-time system: no actions are held when they could be stale or unnecessary due to other environmental changes that have occurred.

The Slack View

In your Slack channel, you will now begin seeing notifications whenever an action occurs. This is what your channel will start to look like as the moves take place:

In my case, I have enabled full automation, which means these actions are triggered and the notification is sent as the action is about to occur. We can also use a POST_MOVE script, which is handy if we are building out other hooks.

The goal with Action Scripts is to be able to integrate with any application lifecycle management process or product. Look for much more in the coming weeks as we walk through some more integrations that can be done with this method.




Why Google Needs Consistency for Enterprise Cloud Customers

Remember Google Buzz? Orkut? Wave? Reader? Google Talk? Then there was Google Picasa…which became Photos…so far. There are sites dedicated to what we call the Google Graveyard. That doesn’t even get into Google Glass, Site Search, the Search Appliance, and others. I logged into my Google Analytics account today and found a completely different UI and UX than I had ever seen before…without warning. I used to use Google Hangouts On Air for the Virtual Design Master event every year, until this year when HOA stopped working and I had to move to using Zoom and pushing to a YouTube Live event.

The reason I bring these up is that Google has an optics problem, one which may affect whether potential enterprise cloud customers choose to adopt, or rather not to adopt, Google Cloud Platform. One of the big things that traditional enterprise customers enjoy is the warm embrace of platforms that have consistency. Google has tended to have challenges around product changes and the public face of those changes, even though it most likely has lots of data backing each decision to shift or sunset a product.

Can GCP make Enterprises Greene with Envy?

Diane Greene came over to Google by way of the acquisition of her most recent startup, Bebop. It’s my opinion that the startup was the packaging in which Google could acquire the real value, which is Diane herself. Diane has a proven track record, having launched a little virtualization concept into the juggernaut that became VMware. The most recent Google Cloud Next event featured a strong new focus on the enterprise, with an aim to become the number one public cloud provider within five years.

A quote that stood out from the event was “I actually think we have a huge advantage in our data centers, in our infrastructure, availability, security and how we automate things. We just haven’t packaged it up perfectly yet.”, which highlights the challenge Google will face. What many enterprises need is a packaged, neatly consumable product that they know they can adopt and maintain, with long support plans and clean deprecation.

There is little doubt about Google’s ability to develop incredible products that will give birth to next-generation application infrastructure few can rival. The only doubt is whether enterprise audiences are ready to adapt to the speed at which Google innovates its product set. If Kubernetes is any sign of how well we are leaning in, it is easy to see that Google can take on the market and win a significant share.

Google Cloud Platform will be a juggernaut in the public cloud realm. That is being proven out by some major customers already moving onto the platform and many more dabbling. Multi-cloud is the new cloud, so GCP will inevitably become a key player in that strategy because of its GKE product for running Kubernetes workloads. In my opinion, the multi-cloud approach enabled by containerized workloads and an enterprise-grade scheduler is the goal we should strive for.

The only question is how long it will take before we can all put our trust in the one thing Google has lacked: consistency.




MSPOG – Accepting the Reality of Multiple Single Panes of Glass

You probably dread the phrase as much as I do. We hear it all the time on a sales call or a product demo: “this is the single pane of glass for you and your team”. The problem is that I’ve been working in the industry a long time and have been using a lot of single panes of glass…at the same time. Many of my presentations have been centered around the idea that we must embrace the right tool for the right task, and not try to force everything through one proverbial funnel because the reality is that we cannot do everything with any single product.

For this reason, it’s time to embrace MSPOG: Multiple Single Panes of Glass

Many Tools, Many Tasks, One Approach

Using a unified approach to something is far more important than requiring a single product to do it. I’m not saying that you should just willy-nilly glue together dozens of products and accept it. What I am saying is that we have to dig into the core requirements of any task that we perform and think about things in a very Theory of Constraints (ToC) way. Before we even dive into some use-cases, think about what we are taught as architects: use the requirements to define the conceptual, logical, and then physical solution, all the while understanding and making our decisions based on risks and constraints.

If you have a process that contains two or three sub-processes, you may be able to use a single tool for all of them. But what if one of those sub-processes is best solved with a different tool? This becomes a question of requirements. Is it a risk to embrace a second tool? More importantly, is it a risk or a constraint to insist on a single tool? That is the big question we should be asking ourselves continuously.

Imagine a virtual machine lifecycle process. We need to spawn the VM from a template, give it a network address, deploy an application into it, and then make sure it is continuously managed by a patch management and configuration management system. I know that you’re already evaluating how we should do this at the physical level by saying “use Ansible!” or “use Puppet!” or “use vRealize Automation!”. Stop and think about what the process is from end-to-end.

Our constraints are that we are using a VMware vSphere 6.5 hypervisor, a Windows Server 2016 guest, and NGINX with a Ruby on Rails application within the guest.

  1. Deploy a VM from template – You can do this with any number of tools. Choose one and think about how we move forward from here (there is a quick sketch of one option right after this list)
  2. Define IP address – We can use vRO, vRA, Puppet, Chef, or any number of tools. You can even do some rudimentary PowerCLI or other automation once the machine is up and running
  3. Deploy your application – App deployment can be done with something like Chef, Puppet, or Ansible, as well as the native vRO and vRA with some care and feeding
  4. Patch management – Now the field narrows. Most likely, you are going to want to use SCCM for this one, so this definitely brings another pane of glass in
  5. Configuration management – Provided you use SCCM because of the Windows environment, you can use that as well for configuration management…but what about the nested applications and configurations, such as websites and other deeper node-specific stuff? Argh!!!
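As a tiny illustration of step 1 only, here is a sketch using VMware’s govc CLI, which is just one of many valid choices and not something prescribed above; the template name, VM name, and connection details are all hypothetical placeholders:

# Connection details are read from environment variables (placeholders here)
export GOVC_URL='https://vcenter.example.local/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='SuperSecret'
export GOVC_INSECURE=1

# Clone a new VM from an existing Windows Server 2016 template, leaving it powered off
# Depending on your environment you may also need placement flags such as -pool or -ds
govc vm.clone -vm win2016-template -on=false app-vm-01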

Even if you came out of the bottom of those five steps with just two tools, I would be thinking you may need to reevaluate, because you may have overshot the capabilities of those two tools. It is easy to see that if we start narrowing to a single-pane-of-glass approach, we are jamming square blocks into round holes just to satisfy our supposed need to use a single product.

What we do need to do is look for the platforms within that subset of options that have the widest and deepest set of capabilities, to ensure we aren’t stacking up too many products to achieve our overall goals.

The solution: a Heads-Up Display for your Single Pane of Glass

Automate in the background and display in the foreground. We need to think of the proverbial single pane of glass as a visible layer on top of the real-time activity that is happening underneath. Make your toolkit a fully featured solution as a whole, with a focus on doing as much as possible within each product. Also, reevaluate regularly. I can’t even count how many times I’ve been caught out using something a specific way, only to find that in a later version the functionality was extended and I was using a less desirable, or even deprecated, method.

There is a reason that a mainframe still sits at the centre of many large infrastructure shops. You wouldn’t tell them to shed their mainframe just to deploy all their data on NoSQL, right? That would be lunacy. Let’s embrace our Multiple Single Panes of Glass and learn to create better summary screens to annotate the activity. This way we also train ourselves to automate under the covers and trust the underlying layers.

I, for one, welcome our Multiple Single Panes of Glass.

 

Image source:  https://hudwayglass.com