Deploying a Turbonomic Instance on DigitalOcean using Terraform

This is one of those posts that has to start with a whole bunch of disclaimers: this was a fun project I worked on this week, but it is NOT an officially supported deployment for Turbonomic. It is as much an example of how to run a Terraform deployment using a cloud-init script as it is anything you would use in production. That said, I do use a DigitalOcean droplet for some of my public cloud resources that are controlled by Turbonomic.

I recently wrote at the ON:Technology blog about how to deploy a simple DigitalOcean droplet using Terraform, which covered the initial setup steps for both your DigitalOcean API configuration and the Terraform product. Running droplets will incur a cost, so I'm assuming an understanding of pricing and allocation within your DigitalOcean environment.

Before you Get Started

You'll need a few things to get started, which include:

- A DigitalOcean account with an API token generated for Terraform to use
- Terraform installed on your local machine
- Git installed so you can clone the configuration files

That is all that you need to get rolling. Next up, we will show how to pull down the Terraform configuration files to do the deployment.

Creating a DigitalOcean Droplet and Deploying a Turbonomic Lab Instance

The content that we are going to be using is a Terraform configuration file and a script which will be passed to DigitalOcean as user data, which becomes part of the cloud-init process. This is a post-deploy script that runs when an instance is launched, before the console is available to log into.

Here are the specific files we are using:

To bring them down to your local machine to launch with Terraform, use the git clone command:

Change directory into the Turbonomic/TurboDigitalOcean folder:
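As a sketch, the clone-and-change-directory steps look like this (the repository URL below is a placeholder, not the actual repo — substitute your own):

```shell
# Clone the repo containing the Terraform config and cloud-init script
# (the URL is a placeholder for the actual repository)
git clone https://github.com/<your-account>/Turbonomic.git

# Change into the DigitalOcean example folder
cd Turbonomic/TurboDigitalOcean
```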

We can see the file contains our Terraform build information:
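As a minimal sketch of what that configuration might look like — the variable name, image slug, region, droplet size, and script filename here are illustrative assumptions, not the exact values from the repo:

```hcl
# main.tf — minimal sketch only; variable names, image slug, region,
# size, and script filename are illustrative assumptions
variable "do_token" {}

provider "digitalocean" {
  token = "${var.do_token}"
}

resource "digitalocean_droplet" "turbonomic" {
  name      = "turbonomic"
  image     = "centos-7-2-x64"            # a CentOS image, per the post
  region    = "nyc1"
  size      = "8gb"                       # roughly the $80/month tier mentioned later
  user_data = "${file("turbonomic-setup.sh")}"  # the cloud-init script
}

output "address_turbonomic" {
  value = "${digitalocean_droplet.turbonomic.ipv4_address}"
}
```

The `user_data` attribute is what hands the script off to cloud-init at first boot.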

Assuming you’ve got all of the bits working under the covers, you can simply launch with terraform apply and you’ll see this appear in your window:
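The launch sequence itself is short (on newer Terraform releases you'll need an init step first to pull down the DigitalOcean provider plugin):

```shell
terraform init    # download the DigitalOcean provider (newer Terraform versions)
terraform plan    # optional: preview the droplet that will be created
terraform apply   # create the droplet and push the user_data script
```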

There is a big section at the bottom where the script contents are pushed as a user_data field. You’ll see the updates within the console window as it launches:

Once completed, you can go to the IP address which appears at the end of the console output. This is provided by the Terraform output variable portion of the script:

output "address_turbonomic" {
  value = "${digitalocean_droplet.turbonomic.ipv4_address}"
}
That will give you the front end of the Turbonomic UI to prove that we’ve launched our instance correctly:
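If you need that address again later, Terraform can re-print any declared output on demand:

```shell
# Print just the address defined in the output block above
terraform output address_turbonomic
```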

Terraform also lets us take a look at what we’ve done using the terraform show command which gives a full output of our environment:
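For reference, the command takes no arguments and reads from the local state:

```shell
# Dump the full state of the managed environment:
# ipv4_address, image, disk size, region, status, and more
terraform show
```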

You see the IP address, image, disk size, region, status, and much more in there. All of these fields can be managed using Terraform as you’ll discover in future examples.

Cleaning up – aka Destroy the Droplet

Since we probably don't want to leave this running for the long term (it costs $80 a month if you do), let's take the environment down using the terraform destroy command, which will look at our current Terraform state and remove any active resources:
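The teardown is a single command:

```shell
# Prompts for confirmation, then removes the droplet recorded in the state file
terraform destroy
```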

If you happened to take a look at your DigitalOcean web console, you would have seen the instance show up and then be removed as part of the process. Terraform simply uses the API, so everything we do is reflected in the web UI as well.

Why I used this as an example

You can launch any similar type of script through cloud-init on DigitalOcean. The reason this was a little different from the article I pointed to on the ON:Technology blog is that we used a CentOS image and a cloud-init script as little add-ons. We can swap in other image types and other scripts using a similar format. That is going to be our next step as we dive further into some Terraform examples.

The Turbonomic build script will also get some focus in other posts. You will need a production or NFR license to launch the full UI, so that will be handled separately.

Appliances and OVF: The packaged approach to deployment

Appliances are a great way to approach system deployments for your VMware environment. We are seeing more and more of this method of delivery of application environments, and with the increased number of vendors jumping on board, this is a sign that something is being done right.

What is an Appliance?

The appliance, in a VMware sense, is a packaged server with a pre-installed application environment, provided in OVF format, which we can deploy directly into our vSphere environment without having to build the base machine and run through the installation process ourselves.

Using appliances is much like the physical, black-box approach which can ease the pain of getting a new application environment up and running. Configuration is done minimally through the console, and a web application is almost always the way to complete additional customization and configuration.

Using the OVF Format

OVF, or Open Virtualization Format, is an industry packaging standard defined by the DMTF (Distributed Management Task Force) to provide vendors and software providers with a supportable, standards-based delivery format. Details on the OVF standard can be found on the DMTF website.

A virtual appliance comes with a pre-configured virtual disk file and a configuration file that provide vSphere with what is required to assemble the virtual machine, including any additional hardware items such as network cards, drive controllers, and anything else that can be delivered as virtual hardware via the vSphere platform.

What wizardry is this?

The OVF deployment wizard in vSphere walks you through the configuration of specific virtual hardware including the selection of disk type, location, network type and location, IP configuration and also the VM location including resource pool, cluster and folder.

Very simply, you open the vSphere Client and under the File menu, select Deploy OVF Template. This will bring up the deployment wizard as you can see below.

The end result of the wizard is a fully functioning virtual guest. It doesn’t get much simpler than that.
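If you prefer the command line over the wizard, VMware's ovftool utility can perform the same deployment. As a sketch — the hostname, datacenter path, datastore, and network names below are made-up examples:

```shell
# Deploy an OVF directly to a vCenter target with ovftool
# (all names below are illustrative placeholders)
ovftool --acceptAllEulas \
  --datastore=datastore1 \
  --network="VM Network" \
  appliance.ovf \
  vi://administrator@vcenter.example.com/DC1/host/Cluster1
```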

Why OVF?

A question that people may ask is "why use an OVF instead of just a VMDK and VMX file?". The reason for using an OVF is that it applies standardized deployment techniques, allowing the deployer to make the choices about hardware specifics within their own environment.

Another clear reason is that the OVF standard is defined for open use across hypervisors. Yup, some people use other ones 😉

Using OVF makes the build and deployment of virtual appliances a snap, and for any vendor, enabling their customers is key when getting products to market.

What is the difference between OVF and a virtual appliance?

OVF is a virtual machine packaging method; the virtual appliance is the product itself. It is typically a black-box type of build that would not be much different from a 1U server you would have put into your physical network rack.

Using VMware Studio, you can build your appliance using a supported operating system (supported OS list here) and package your application build.

You can also deploy a vApp with an OVF, so you are not limited to a single machine deployment. Using the first-boot configuration script, you have plenty of options for post-deployment configuration. There are also a lot of great resources for building your OVF, so getting help is just a few clicks away.

Should OVF replace my VM Templates?

Unless you have a lot of pre-defined applications and packaged builds, you probably won’t see the benefit of using OVF machines for your corporate environment. What you should concentrate on is the automation of your builds with other great tools like vCenter Orchestrator.

There may be some cases in your corporate environment where OVF is appropriate, so perhaps you could give it a try; the worst case scenario is that you become more familiar with build automation and deployment scripting, and nobody loses in that scenario 🙂

What about upgrades?

The upgrade process can be scary in any environment. Many virtual appliance vendors provide a simple configuration backup process (export to file, FTP, etc.) so you can simply deploy an upgrade in place. It is even possible to use Update Manager to upgrade your virtual appliances.

The long and the short of it is that virtual appliances are fast becoming a part of many production environments and it may be a great way for you to speed deployment and ease the installation process for products. Anything to reduce the workload is definitely a positive step in my opinion.

The VMware Appliance Marketplace

Now that you are ready to take a look, there is no better place to begin than the VMware Virtual Appliance Marketplace. Through the marketplace site you can search for various application solutions from VMware Certified vendors and from other supported partners who provide appliance-based solutions.

So have a look around, and happy deploying!