Deploying a Turbonomic Instance on DigitalOcean using Terraform

This is one of those posts that has to start with a whole bunch of disclaimers, because this is a fun project that I worked on this week but is NOT an officially supported deployment for Turbonomic. It is as much an example of how to run a Terraform deployment using a cloud-init script as it is anything you would use in reality. I do use a DigitalOcean droplet to run Turbonomic for my public cloud resources.

I recently wrote on the ON:Technology blog about how to deploy a simple DigitalOcean droplet using Terraform, which covered the initial setup steps for both your DigitalOcean API configuration and Terraform itself. You will need to run droplets, which will incur a cost, so I’m assuming that you understand pricing and allocation within your DigitalOcean environment.

Before you Get Started

You’ll need a few things to get started, which include:

  • A DigitalOcean account with an API token generated for Terraform to use
  • Terraform installed on your local machine (see the ON:Technology post linked above for the setup steps)
  • Git installed locally so you can clone the sample configuration files

That is all that you need to get rolling. Next up, we will show how to pull down the Terraform configuration files to do the deployment.

Creating a DigitalOcean Droplet and Deploying a Turbonomic Lab Instance

The content that we are going to be using is a Terraform configuration file and a script which is passed to DigitalOcean as userdata, which becomes part of the cloud-init process. This is a post-deploy script that runs when the instance is first launched, before the console is available to log in to.
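As a rough illustration only, a user_data payload is typically just a shell script like the hypothetical one below; the actual Turbonomic build script lives in the repository linked in the next step.

#!/bin/bash
# Hypothetical cloud-init user_data example, NOT the actual Turbonomic build script.
# cloud-init executes this on the first boot of the droplet.
yum -y update
yum -y install wget
echo "user_data script finished" >> /var/log/userdata-example.log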

Here are the specific files we are using: https://github.com/discoposse/terraform-samples/tree/master/Turbonomic/TurboDigitalOcean

To bring them down to your local machine to launch with Terraform, use the git clone https://github.com/discoposse/terraform-samples command:

Change directory into the Turbonomic/TurboDigitalOcean folder:
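Together, those two steps look like this at the command line:

# Clone the samples repository and move into the Turbonomic example folder
git clone https://github.com/discoposse/terraform-samples
cd terraform-samples/Turbonomic/TurboDigitalOcean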

We can see the nyc2-turbo.tf file contains our Terraform build information:

Assuming you’ve got all of the bits working under the covers, you can simply launch with terraform apply and you’ll see this appear in your window:
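In command form, that works out to something like the following. Note that terraform init is only needed on newer Terraform releases, and your DigitalOcean API token has to be available to the provider however the sample configuration expects it (a variable or environment setting), which I’m leaving out here.

# Initialize the working directory (required on newer Terraform releases)
terraform init

# Optionally preview the plan, then apply to build the droplet
terraform plan
terraform apply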

There is a big section at the bottom where the script contents are pushed as a user_data field. You’ll see the updates within the console window as it launches:

Once completed, you can go to the IP address which appears at the end of the console output. This is provided by the Terraform output variable portion of the script:

output "address_turbonomic" {
  value = "${digitalocean_droplet.turbonomic.ipv4_address}"
}
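If you scroll past it in the console, you can always ask Terraform for that output value again:

# Print the droplet's public IP address from the Terraform outputs
terraform output address_turbonomic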

That will give you the front end of the Turbonomic UI to prove that we’ve launched our instance correctly:

Terraform also lets us take a look at what we’ve done using the terraform show command which gives a full output of our environment:

You see the IP address, image, disk size, region, status, and much more in there. All of these fields can be managed using Terraform as you’ll discover in future examples.

Cleaning up – aka Destroy the Droplet

Since we probably don’t want to leave this running for the long term (it costs about $80 a month if you do), let’s take the environment down using the terraform destroy command, which looks at our current Terraform state and removes any active resources:
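The teardown is just as simple:

# Review what will be removed, then confirm the destroy when prompted
terraform plan -destroy
terraform destroy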

If you happened to take a look at your DigitalOcean web console, you would have seen the instance show up and then be removed as part of the process. Terraform simply uses the API, but everything we do is reflected in the web UI as well if you were to look there.

Why I used this as an example

You can do any similar type of script launch through cloud-init on DigitalOcean. The reason this was a little different from the article I pointed to on the ON:Technology blog is that we used a CentOS image and a cloud-init script as little add-ons. We can swap in other image types and other scripts using the same format. That is going to be our next step as we dive further into some Terraform examples.

The Turbonomic build script will also get some focus in other posts, but because you will need a production or NFR license to launch the full UI, that will be handled separately.




Using jq to pretty print JSON output

If you haven’t already discovered jq, you definitely need to take a look.  This nifty little tool is handy for manipulating JSON content at the command line and within scripts.  The first quick thing I think will be helpful is showing how to pipe raw JSON output to jq to pretty print it (aka show it in the nice nested view).

Once you’ve installed jq, you can run the raw command to get the help output:

[screenshot: jq help output]

Here is some raw JSON output that we get from a basic cURL command:

[screenshot: raw JSON output from cURL]

It’s not super easy to read when it is all packed on one line, so let’s pipe the output to the jq command and see the same results:

[screenshot: cURL output piped to jq]

You can see the nice nested layout of the JSON output there.  This is a small example, so let’s take something a little larger.
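The pattern itself is just a pipe into jq’s identity filter; here is a sketch using a hypothetical endpoint in place of the one from the screenshot:

# Pipe raw JSON from cURL into jq's identity filter (.) to pretty print it
curl https://api.example.com/data.json | jq '.'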

UPDATED PRO TIP: Add the -s flag to the cURL command to get rid of the download progress output, as per @shmick (https://twitter.com/shmick/status/777506873041756160).

I’ll use the William Lam GitHub example here for the VMworld fans. William has posted JSON data for the VMworld US session content on his GitHub page:

[screenshot: VMworld sessions JSON on GitHub]

Let’s click the Raw button on the page to render the real content URL which we will consume:

[screenshot: raw view of the VMworld sessions JSON]

It’s not very readable in the browser or at the command line, as you can see when we run the cURL command:

[screenshot: raw cURL output of the sessions JSON]

All we have to do to fix that up is to pipe the output from our cURL command to jq and we are able to see the pretty printed version of the JSON:

[screenshot: cURL output pretty printed with jq]
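In command form it is the same shape as before; substitute the raw URL you copied from the Raw button (shown as a placeholder here):

# Pretty print a larger JSON document fetched from a raw GitHub URL
curl -s <raw-github-url-for-the-sessions-json> | jq '.'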

There is much, much more you can do with the jq tool, but this seemed like a good start. Make sure to download it from the jq site; it is already included out of the box on some platforms like CoreOS.




Using the --no-provision directive for Vagrant when resuming machines

There seems to be an active issue when resuming machines with Vagrant that triggers provisioning scripts on resume, not just during the original vagrant up.

EXAMPLE:

[screenshot: vagrant resume re-running the provisioner]

The issue here is that the build script was already executed during the original vagrant build of the machine.  The scripts may not be idempotent and could overwrite content or damage the active machine.

In our Vagrantfile we use the provision capability regularly, so we would not want to build all sorts of logic around that unless necessary, because Vagrant handled this correctly in the past.

[screenshot: provision script section of the Vagrantfile]

Workaround Using the --no-provision Parameter

Rather than running a vagrant resume as you saw above, which triggered the build script again, you can simply use vagrant up --no-provision. This brings the machine up and reconnects any SSH connections and NFS shares, but ignores any provision directives from the Vagrantfile:

[screenshot: vagrant up --no-provision output]
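For reference, the workaround looks like this, along with an explicit provisioning run for when you actually do want the scripts to fire:

# Bring the suspended machine back up without re-running provisioners
vagrant up --no-provision

# Run the provisioners explicitly later, only when you really want them
vagrant provision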

Hopefully this will be solved in a patch or future update to Vagrant.  This post deals specifically with version 1.8.1, which presented the problem; it may also be present in other versions.




Downloading Text and Binary Objects with cURL

Many orchestration and automation processes need to download content from external or internal sources over protocols like HTTP and FTP. The simple way to do this is to leverage lightweight, commonly supported and available tools. The most common and popular tool I’ve found for scripting downloads inside configuration management systems and other scripting tools is cURL.

What is cURL?

cURL stands for Client URL and is a simple, yet powerful, command line utility that can download content using a lightweight, cross-platform executable. cURL is community supported and is often already packaged as part of many *nix systems.

You can download revisions of cURL for a varying set of platforms from https://curl.haxx.se/download.html even including AmigaOS if you so desire 🙂

Why use cURL?

The most common tool to compare with cURL is wget. There are fully featured comparison matrices of options across a number of different tools, but for simplicity, cURL and wget tend to be the go-to standards for *nix and Windows systems because of their small footprint and flexibility.

cURL and wget have many similarities including:

  • download via HTTP, HTTPS, and FTP
  • both command line tools with multiple platforms supported
  • support for HTTP POST requests

cURL does provide additional feature support that isn’t available from wget including:

  • many protocols including DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, HTTP/2, cookies, user+password authentication (Basic, Plain, Digest, CRAM-MD5, NTLM, Negotiate and Kerberos), file transfer resume, proxy tunneling and more. (source: curl.haxx.se)
  • API support with using libcurl across platforms

Let’s take a look at our example code to see how to make use of cURL.

Downloading HTML or Text with cURL

It’s frighteningly simple to download text and non-binary objects with cURL. You simply use this format:

curl <source URL>

This will download the target URL and output to STDOUT, which will be to the console in most cases.

[screenshot: curl output printed to STDOUT]

If you wanted to output it to a file, you just add -o to the command line with a target file name (note: that is a lower case o):

[screenshot: curl output saved to a file with -o]
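Putting those two together, the commands behind the screenshots look roughly like this (the URL and file name are just examples):

# Print the page contents to STDOUT
curl https://www.google.com

# Save the same page to a file instead (note the lower case o)
curl https://www.google.com -o google.html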

Downloading Binary content with cURL

If we had a binary file, we obviously can’t write it to STDOUT on the console, or else we would get garbage output. Let’s use this image as an example:

[image: StillReal.jpg sample image]

The raw URL for this file is https://raw.githubusercontent.com/discoposse/memes/master/StillReal.jpg to use for our examples.

Using the same format, we would try to do a curl https://raw.githubusercontent.com/discoposse/memes/master/StillReal.jpg which gives us this rather ugly result:

[screenshot: garbage output from curl on a binary file]

To download a binary file, we can use the -O parameter (capital O), which pulls the content down exactly as the source specifies, including saving it under the same file name:

curl -O https://raw.githubusercontent.com/discoposse/memes/master/StillReal.jpg

[screenshot: successful download with -O]

It isn’t that -O was required for the download to succeed; rather, it treats the output exactly like the input and writes it to a file with the same name as the source on the target filesystem. We can achieve the same result by using the -o parameter and specifying a target filename:

curl https://raw.githubusercontent.com/discoposse/memes/master/StillReal.jpg -o StillReal.jpg

[screenshot: download with -o and the same filename]

This is handy if you want to change the name of the file on the target filesystem. Let’s imagine that we want to download something and force a different name like anyimagename.jpg for example:

curl https://raw.githubusercontent.com/discoposse/memes/master/StillReal.jpg -o anyimagename.jpg

[screenshot: download with -o and a custom filename]

You can pass all sorts of variable goodness into the output name when you want to do something programmatically, which is all sorts of awesome as you start using cURL for automated file management and other neat functions.
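As a small sketch of that idea, here is the same download with the output name built from a shell variable (the date-stamped name is just an illustration):

# Build the target file name programmatically, e.g. with a date stamp
FILENAME="StillReal-$(date +%Y%m%d).jpg"
curl -s https://raw.githubusercontent.com/discoposse/memes/master/StillReal.jpg -o "$FILENAME"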

We will tackle more cURL options again in the future, but hopefully this is a good start!




Updating Forked Git Repository from Upstream Source (aka Rebase all the things!)

As you can imagine, the world of IaC (Infrastructure-as-Code) means that we are going to have to dabble a lot more in the world of code repositories. Git is the most common tool I have found in use for code version control, and along with it, Github.com is the most common place that people (including myself https://github.com/DiscoPosse) store their project code.

All Forked Up

One of the first things that happens when you start using GitHub is that you may find yourself forking a repository to create a point-in-time snapshot under your own account. This is done for a variety of reasons, like contributing code upstream or keeping a “safe” stable release of a particular codebase that could change and affect other work you are doing with it.

[screenshot: forked Lattice repository on GitHub]

As you can see in the image above, I have a forked copy of the Lattice framework from Cloud Foundry. Nice and easy to do, but as you look at the bottom of the image, you will also see that I’ve been falling behind on updates.

[screenshot: fork shown as commits behind the upstream repository]

So, how does someone fix this situation? Let’s assume that we are testing locally and find a bug, but then realize that the upstream repository has already fixed the bug. Rather than wiping out the repository altogether and re-cloning, let’s fix it in place!

Updating a Forked Repository from Upstream Source

Let’s jump in the command line, and see what’s happening. In my lattice folder, I will do a git status to see the current state of things:

[screenshot: git status output before the sync]

We can see that we show as up to date, but I know that I am 649 commits behind the upstream source. Time to get ourselves up to date for real.

The way we do this is by syncing up our repositories here locally, and then pushing the changes back up to our forked repository. First, let’s check our remote source by typing git remote to see what’s happening locally:

[screenshot: git remote output]

We have one source called origin which is our forked repository. We are going to add one more source called upstream to point to the original repo using the command git remote add upstream https://github.com/cloudfoundry-incubator/lattice.git in my case and then run our git remote again to confirm the change:

[screenshot: git remote output after adding upstream]

Now we can see both of our sources. We are assuming that you are using the master branch of your repo, but just in case, we can also do a git checkout master first for safety. As you can see in my case, it will complain that I am already on ‘master’ and nothing will happen:

[screenshot: git checkout master output]

Now let’s do the next step, which is to fetch the upstream source and rebase our local repo. Yes, these are funny sounding terms to some, but you will get used to them. This is done by using the git fetch upstream command followed by git rebase upstream/master to sync them up:

[screenshot: git fetch upstream and git rebase output]

Lots of updates came down, and you can see that the rebase has applied all the changes locally. If we had any local commits, they would be kept in place, replayed on top of the underlying repo updates.

Let’s check our status using git status, and as you can see here, it now shows us 649 commits ahead of origin/master, which is my forked repo on GitHub:

[screenshot: git status showing 649 commits ahead of origin/master]

Now it’s time to push all the updates! This will send the changes up to the forked GitHub repo, and then we are up to date with the upstream source. We will use git push origin master, which pushes the local changes to the master branch of our origin remote (in our case discoposse/lattice), and then we can confirm the changes with a git status afterwards:

[screenshot: git push origin master output]

There you go! Now your forked repository is up to date with the upstream source and you can get back to the coding!
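For quick reference, here is the whole sequence from this post in one place:

# Add a remote called upstream that points at the original repository
git remote add upstream https://github.com/cloudfoundry-incubator/lattice.git

# Make sure we are working on the master branch
git checkout master

# Pull down the upstream history and replay our branch on top of it
git fetch upstream
git rebase upstream/master

# Push the updated branch back to our fork on GitHub
git push origin master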

If you check your GitHub page as well, you will see the change there:

[screenshot: fork shown as up to date on GitHub]

Hopefully this is helpful, because I have been asked a few times recently about dealing with this issue as people get started with Git. I hope to bring some more quick Git tips as I hear questions come from the community, so feel free to drop me a comment with any question you have!




Who has a conference? Everyone DOES! – DevOps Enterprise Summit 2014

It seems sometimes that we have a lot of conferences happening. This is a good sign for the strength of the technology sector and the size of the audience that is prepared to consume this content. As a massive fan of DevOps and the great community wrapped around it, I was very happy to watch some of the recent DevOps Enterprise Summit 2014 sessions.

The great thing about this conference was that it can be watched virtually, which is exactly how I did it. Luckily the team has been kind enough to post the sessions on YouTube (https://www.youtube.com/user/DOES2014) for our enjoyment.

If I could suggest something to begin with, it’s my usual thing which is BUY THE PHOENIX PROJECT 🙂

Back to the conference though! The reason that I’m bringing up the book is that one of the co-authors of The Phoenix Project, Gene Kim, led sessions and was part of the opening session as well. Gene is a phenomenal resource and a great speaker, so if you have to start anywhere, you should absolutely spend some time watching the opening of the Tuesday session here:

From there, I would say that any session is a great place to start. There are a lot of great user stories about the successes and challenges organizations had with adopting DevOps practices. This is all great stuff, and even if you aren’t already on the road toward implementing a DevOps methodology in your organization, this is a good opportunity to spend a little time finding out how it may benefit you.