Installing PowerCLI 6.5.x on Windows Server 2012 R2 after Find-Module Error

Now that PowerCLI is part of the PowerShell Gallery, you can install it using the native module installer…but there’s a catch. Windows Server 2012 R2 requires a couple of minor updates to get this process underway. You’ll know really quickly if you open up your PowerShell terminal or PowerShell ISE (as Administrator) and try the following command:

Find-Module -name VMware.PowerCLI

The issue is easily solved by deploying a more recent installer for the PackageManagement PowerShell Modules. Download the installer using this link and run the install:

https://www.microsoft.com/en-us/download/details.aspx?id=51451

Select the Download within the page once you’re there:

Choose the x64 version (assuming you’re running a 64-bit OS):

Run through the installation and accept the defaults. Nothing significant to worry about with this file as it’s a necessary update for what we need to do.

If you run the Find-Module command again, you’ll see a much better result. You’ll be prompted to update your NuGet components which are used to pull resources from the PowerShell Gallery. Accept the update and then we can keep going:
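
If you would rather update the NuGet provider ahead of time instead of waiting for the prompt, this one-liner (run as Administrator) should do the same thing:

Install-PackageProvider -Name NuGet -Force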

Time to get back to the task at hand. Just relaunch your PowerShell terminal or ISE as an Administrator. We are running as Administrator so that the module installs for all users of the server. If you only want to install it for your own user, run your PowerShell session as your regular user and add -Scope CurrentUser to the Install-Module command. Run the following to install for all users:

Install-Module -name VMware.PowerCLI
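
For reference, the per-user variation described above would look like this, run from a regular (non-elevated) session:

Install-Module -Name VMware.PowerCLI -Scope CurrentUser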

Now we have to import the module into our session using the Import-Module -name VMware.PowerCLI command:
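
Import-Module -Name VMware.PowerCLI

If you want to confirm what landed on the system, Get-Module -ListAvailable VMware.PowerCLI will list the module and its version.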

Just like that, you’re up to date and running the latest and greatest PowerCLI goodness. Happy scripting!




Getting Terraform Provisioning Parameters from the Packet.net API

Provisioning on Packet.net is super easy using Terraform. One of the tricks you will need to know up front is that for Terraform and for many other provisioning tools, you need to provide a minimum set of parameters to launch.

At a minimum, you need to provide the following parameters, as shown in the Terraform docs for the Packet provider:

  • hostname – gotta name ’em all
  • project_id – you need to know, or create the project to launch into
  • facility – which location are you deploying into? (EWR1, SJC1, etc.)
  • plan – which node type?
  • billing_cycle – hourly or monthly
  • operating_system – which OS will the node run?

Some of these are simple because they are your own choices. We choose the hostname, and we choose the billing cycle as either hourly or monthly. How can we get the other details about our deployment? You can gather some of the data in a browser, such as browsing to your project and pulling the project ID from the URL. That still leaves us in search of the plan type, operating_system, and facility.

For completeness, let’s learn how to simply gather all four items (operating system list, project ID, plan types, facility) from the Packet.net API.

You’ll need a terminal session, your API key to query the Packet.net API, and the jq tool for parsing the JSON results into something a little more friendly.

Querying the API is as easy as sending your token to the API using the cURL command and selecting which entities you want to query. This is the basic framework:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/OBJECT'
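
If you will be running several of these queries, it can be handy to drop the token into a shell variable first. PACKET_AUTH_TOKEN below is just a variable name used for illustration:

export PACKET_AUTH_TOKEN='YOURAPITOKEN'
curl -s -X GET -H "X-Auth-Token: ${PACKET_AUTH_TOKEN}" 'https://api.packet.net/facilities' | jq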

Now we can dig into the four easy examples we have.

Finding the Packet.net Facility Name

This simple one-liner pulls the JSON result listing the available locations and then parses out just the facility codes you can use. If you remove the '.facilities[].code' portion of the command, it will show you the full pretty-printed JSON result, including the full facility descriptions.

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/facilities' | jq '.facilities[].code'

Finding the Packet.net Project ID

You’ll want the full JSON result so you can choose from your active projects if you have more than one. Just drill into the JSON results and you can locate the id field:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/projects' | jq
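
If the full project payload is more than you want to scroll through, a small jq filter can trim it down to just the names and IDs. This assumes the response wraps the results in a projects array, following the same pattern as the other endpoints:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/projects' | jq '.projects[] | {name, id}'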

Finding the Packet.net Plan Names

Plans don’t shift around too much, just like facilities. Here is the simple query to get all of the plan names so you can match them to the node type you want to use:

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/plans' | jq '.plans[].slug'

Finding the Packet.net Operating System Types

By now, you can guess where we are going with the next one. Query the API, parse out the results, and get the slugs for the operating system names, which we will use for Terraform and other provisioning tools that consume the Packet API.

curl -s -X GET -H 'X-Auth-Token: YOURAPITOKEN' 'https://api.packet.net/operating-systems' | jq '.operating_systems[].slug'

The result will give you all of the slug names that are usable as the operating_system parameter. In the case of vSphere 6.5, it happens to be vmware_esxi_6_5, which may not have been obvious if you were left guessing.

Now you can take those easy JSON results and feed them into a Terraform file or you may also use these raw queries as part of other configuration management and provisioning solutions. Hope you find this helpful!
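
To show where those values ultimately land, here is a minimal sketch of a Terraform resource using example values from the queries above. The resource label and the plan slug are placeholders, so check the Terraform Packet provider docs for the full argument list:

resource "packet_device" "demo_node" {
  hostname         = "demo-node-01"
  project_id       = "YOUR-PROJECT-ID"
  facility         = "ewr1"
  plan             = "baremetal_0"
  billing_cycle    = "hourly"
  operating_system = "vmware_esxi_6_5"
}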

Also, you can sign up for Packet.net to kick the tires on this goodness, and you can use VDM25 as a referral code to get a $25 credit to use. Make sure you tell them DiscoPosse and the Virtual Design Master crew sent you!




Setting up a Slack WebHook to Post Notifications to a Team Channel

If ChatOps is something you’ve been hearing a lot about, there is a reason. Slack is fast becoming the de facto standard in what we are calling ChatOps. Before we go all out making chatbots and such, the first cool use-case I explored is enabling notifications for different systems.

In order to do any notifications to Slack, you need to enable a WebHook. This is super easy but it made sense for me to give you the quick example so that you can see the flow yourself.

Setting up the Slack Webhook

First, log in to your Slack team in the web interface. From there, we can open up the management view of the team to get to the apps and integrations. Choose Additional Options under the settings icon:

You can also get there by opening the drop-down in the left-hand pane and selecting Apps and Integrations:

Next, click the Manage button in the upper right portion of the screen near the team name:

Select Custom Integrations and then from there click the Incoming WebHooks option:

Choose the channel you want to post to and then click the Add Incoming WebHooks Integration button:

It’s really just that easy! You will see a results page with a bunch of documentation, including your WebHook URL:

Other parts of the documentation also show you how to configure some customizations and even an example cURL command to show how to do a post using the new WebHook integration:

If you go out to a command line where you have the cURL command available, you can run the example command and you should see the results right in your Slack UI:
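
An equivalent test post looks something like this, with the URL below standing in for the WebHook URL from your results page:

curl -s -X POST -H 'Content-type: application/json' --data '{"text":"Hello from the new WebHook!"}' https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX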

There are many other customization options, such as which avatar to use and the specifics of the message text. You can get at the WebHook any time under the Incoming WebHooks area within the Slack admin UI:

Now all you have to do is configure whatever script or function you have that you want to send notifications to Slack with and you are off to the races.
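
As a tiny sketch of that last step, a bash wrapper like the one below is often all it takes. SLACK_WEBHOOK_URL is just an assumed variable holding the URL from the integration page, and the quoting is deliberately naive, so keep the messages simple:

notify_slack() {
  # post the first argument as the message text to the WebHook
  curl -s -X POST -H 'Content-type: application/json' \
    --data "{\"text\": \"${1}\"}" \
    "${SLACK_WEBHOOK_URL}"
}

notify_slack "Nightly backup job completed"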




Why I Aeropress Coffee but Automate Everything Else

Many of my presentations start with me quoting the Rule of Three. Then I tell you three things about myself:

  1. I’m lazy
  2. I despise inconsistency
  3. Did I mention I’m lazy?

The reason that these are important things to know is that being lazy is a fundamental reason why I have leapt into automation from early on in my career.

Being the Right Kind of Lazy

The word lazy can sound like a bad thing. In the case of automation, it is a good thing. Clarence Bleicher of Chrysler was once quoted in the early days of the company as saying:

“When I have a tough job in the plant and can’t find an easy way to do it,” Mr. Bleicher said, “I have a lazy man put on it. He’ll find an easy way to do it in 10 days. Then we adopt that method.”

That pretty much sums it up. Laziness in the sense of not wanting to do repetitive, mundane tasks is the kind of laziness we are aspiring to here. Not the lie-down-and-do-nothing kind of lazy, as tempting as that is.

There was a key moment in my first year of work. When I saw a way to make something faster or more efficient by taking safe and appropriate shortcuts, I took it. When I made the leap into a technology career, it didn’t take long to find the shortcuts. That was the whole idea of technology, after all!

I drive a Stick Shift and Aeropress my Coffee

The reason that I personally do a lot of automation is so that I can choose to put the extra time I gain towards removing other technical debt, or even just enjoying myself. Part of the fun dichotomy of many very pro-automation technologists is that a lot of us also tend to be huge coffee enthusiasts. I’m talking about the hand-grind, slow steep, Aeropress and nearly scientific recipe kind of coffee people. Shouldn’t a lazy, automation-oriented person try to eliminate the time being spent on that effort? Ahhhh…there is the interesting part.

Automation needs to be in service of the goal. The goal is quality. I could increase my output by putting in a Keurig, a Nespresso, or some kind of automated espresso machine. That rolls up to additional costs, and then there is the quality of the taste. My choice is to take the lower cost to get a handcrafted taste that I know I enjoy. I have also done the math and realize that buying the machine may amortize over the long run, but I can also use my Aeropress on road trips and such. That is the consistency target I choose.

My choice to drive a manual transmission was primarily about cost, and secondarily about the enjoyment of it. That is really all there is to it. The time and effort an automatic would save comes at the cost of my personal enjoyment of the experience, which I don’t feel is worth it.

Knowing and Measuring your Goal

How we define quality is as important as how we achieve it. Without a tangible way to measure the results of automation and the net effect on quality, we can end up just acquiring more technical debt, or spending time on tasks that don’t remove constraints at the right level. Just like with my Aeropress and my manual transmission, I have chosen where I can achieve quality through automation so that I can attack other constraints.

Personal coffee taste is somewhat intangible. The time you spend deploying servers to the cloud and running patch management routines and other repetitive tasks is very tangible. Between time and quality, the effort to automate many operational tasks pays off rather quickly. Having had a background in desktop support at the onset of my enterprise IT career, I quickly created scripts and processes to avoid doing those repetitive tasks. Put all of that on a server and then all I need to do manually is connect to the server. Voila!

Measure your quality in time, in cost, or sometimes in those intangible ways such as personal enjoyment of the work. If you are spending hours each week doing repetitive work, you could spend a little time automating it and then put the time you gain back in the following weeks into new work and more exciting tasks.

Measure always, and not just for you, but for your team and your organization. Then you can sit back and make a nice Aeropress coffee while you watch your automation work happening for you.




Deploying a Turbonomic Instance on DigitalOcean using Terraform

This is one of those posts that has to start with a whole bunch of disclaimers, because this is a fun project that I worked on this week but is NOT an officially supported deployment for Turbonomic. This is as much an example of how to run a Terraform deployment using a cloud-init script as it is anything you would use in reality. I do run a DigitalOcean droplet as part of the public cloud resources that are controlled by Turbonomic.

I recently wrote at the ON:Technology blog about how to deploy a simple DigitalOcean droplet using Terraform which gave the initial setup steps for both your DigitalOcean API configuration and the Terraform product. You will need to run droplets which will incur a cost, so I’m assuming that there is an understanding of pricing and allocation within your DigitalOcean environment.

Before you Get Started

You’ll need a few things to get started, which include:

  • A DigitalOcean account and API token, configured as described in the post linked above
  • Terraform installed and working on your local machine
  • Git, to pull down the sample configuration files

That is all that you need to get rolling. Next up, we will show how to pull down the Terraform configuration files to do the deployment.

Creating a DigitalOcean Droplet and Deploying a Turbonomic Lab Instance

The content that we are going to be using is a Terraform configuration file and a script that is passed to DigitalOcean as userdata, which becomes part of the cloud-init process. This is a post-deploy script that runs when an instance is launched, before the console is available to log into.

Here are the specific files we are using: https://github.com/discoposse/terraform-samples/tree/master/Turbonomic/TurboDigitalOcean

To bring them down to your local machine to launch with Terraform, use the git clone https://github.com/discoposse/terraform-samples command:
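
git clone https://github.com/discoposse/terraform-samples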

Change directory into the Turbonomic/TurboDigitalOcean folder:
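
cd terraform-samples/Turbonomic/TurboDigitalOcean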

We can see the nyc2-turbo.tf file contains our Terraform build information:

Assuming you’ve got all of the bits working under the covers, you can simply launch with terraform apply and you’ll see this appear in your window:

There is a big section at the bottom where the script contents are pushed as a user_data field. You’ll see the updates within the console window as it launches:

Once completed, you can go to the IP address which appears at the end of the console output. This is provided by the Terraform output variable portion of the script:

output "address_turbonomic" {
value = "${digitalocean_droplet.turbonomic.ipv4_address}"
}

That will give you the front end of the Turbonomic UI to prove that we’ve launched our instance correctly:

Terraform also lets us take a look at what we’ve done using the terraform show command which gives a full output of our environment:

You see the IP address, image, disk size, region, status, and much more in there. All of these fields can be managed using Terraform as you’ll discover in future examples.

Cleaning up – aka Destroy the Droplet

Since we probably don’t want to leave this running for the long term, as it’s costing $80 a month if you do, let’s take the environment down using the terraform destroy command, which will look at our current Terraform state and remove any active resources:
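
terraform destroy

Terraform will prompt for confirmation before it actually removes the droplet (or you can pass -force to skip the prompt if you are scripting the teardown).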

If you did happen to take a look at your DigitalOcean web console, you would have seen the instance show up and then be removed as part of the process. Terraform simply uses the API, but everything we do is reflected in the web UI as well if you were to look there.

Why I used this as an example

You can do any similar type of script launch into cloud-init on DigitalOcean. The reason this was a little different than the article I pointed to on the ON:Technology blog is that we used a CentOS image and a cloud-init script as little add-ons. We can interchange other image types and other scripts using a similar format. That is going to be our next step as we dive further into some Terraform examples.

The Turbonomic build script will also get some focus in other posts, but you will need a production or NFR license to launch the full UI, so that will be handled separately.




Using jq to pretty print JSON output

If you haven’t already discovered jq, you definitely need to take a look.  This nifty little tool is handy for manipulating JSON content at the command line and within scripts.  The first quick thing I think will be helpful is showing how to pipe raw JSON output to jq to pretty print it (aka show it in the nice nested view).

Once you’ve installed jq, you can run the raw command to get the help output:

Here is some raw JSON output that we get from a basic cURL command:

It’s not super easy to read when it is all packed on one line, so let’s pipe the output to the jq command and see the same results:

You can see the nice nested layout of the JSON output there.  This is a small example, so let’s take something a little larger.

UPDATED PRO TIP: Add the -s flag to the cURL command to suppress the download progress output, as suggested by @shmick (https://twitter.com/shmick/status/777506873041756160).
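
So the general pattern ends up looking something like this, with the URL standing in for whatever JSON endpoint you are pulling from:

curl -s 'https://YOUR-JSON-SOURCE-URL' | jq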

I’ll use the William Lam Github example here for the VMworld fans. William has posted the VMworld US session content as JSON on his Github page:

Let’s click the Raw button on the page to render the real content URL which we will consume:

It’s not too readable in the browser, or at the command line, as you can see when we run the cURL command:

All we have to do to fix that up is to pipe the output from our cURL command to jq and we are able to see the pretty printed version of the JSON:

There is much, much more to what you can do with the jq tool, but this was something that I thought was a good start.  Make sure to download it at the jq site, and it is already included in some platforms like CoreOS out of the box.