Building a HashiCorp Nomad Cluster Lab using Vagrant and VirtualBox

One of the greatest things about open source and free tools is that they are…well, open source and free!  Today I’m sharing a simple HashiCorp Nomad lab build that I use for a variety of things.  The reason I want to be able to quickly spin up local lab infrastructure is to get rid of the mundane, repetitive tasks.  Being able to provision, de-provision, start, and stop servers easily saves a ton of time and effort.

Now for the walkthrough to show you just how easy this can be!  Skip ahead to whichever step you need below.  I’ve written every step so that you can easily follow along even if you’ve got no experience.

Step 1:  Getting the Tools

You’re going to need Vagrant and VirtualBox.  Both are available for free on most platforms.  Click these links to reach the downloads if you don’t already have them.  There is no special configuration needed; my setup is completely default.

HashiCorp Vagrant: https://www.vagrantup.com/downloads.html

Oracle VM VirtualBox: https://www.oracle.com/virtualization/technologies/vm/downloads/virtualbox-downloads.html or https://www.virtualbox.org/wiki/Downloads

Step 2:  Getting the Code

The GitHub repository which contains all the necessary code is here: https://github.com/discoposse/nomad-vagrant-lab

Just go there and use the Clone or Download button to get the URL and then do a git clone command:

git clone https://github.com/discoposse/nomad-vagrant-lab.git

If you want to contribute, you can also fork the repository, work from your own fork, and then submit pull requests.
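
If you go the fork route, the flow looks roughly like this (the username and branch name are placeholders for your own):

git clone https://github.com/YOUR-USERNAME/nomad-vagrant-lab.git
cd nomad-vagrant-lab
git checkout -b my-lab-tweaks   # make your changes on a branch
git push origin my-lab-tweaks   # then open a pull request on GitHub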

Step 3:  Configuring the lab for 3-node or 6-node

Making the choice of your cluster and lab size is easy.  One configuration (Vagrantfile.3node) is a simple 3-node setup with one region and a single virtual datacenter.  The second configuration (Vagrantfile.6node) creates two 3-node clusters across two virtual datacenters (Toronto and Vancouver) and two regions (east and west).

Picking your deployment pattern is done by renaming the configuration file (Vagrantfile.3node or Vagrantfile.6node) to Vagrantfile.

Seriously, it’s that easy.  If you want to update the names of the regions or datacenters, you do that by editing the server-east.hcl and server-west.hcl files.  The 3-node configuration only uses server-east.hcl by default.
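
For example, to go with the 3-node lab (a copy keeps the original file around, though a straight rename works just as well):

cp Vagrantfile.3node Vagrantfile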

Step 4:  Launching your Nomad Lab Cluster

Launch a terminal shell session (or command prompt for Windows) and change directory to the folder where you’ve cloned the code locally on your machine.  Make sure that everything is in order by checking with the vagrant status command.


Next up is starting the deployment using the vagrant up command.  The whole build takes anywhere from 5-15 minutes depending on your network speed and local machine resources.  Once you’ve run it once, the Vagrant box image is cached locally, which saves time on future rebuilds.
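
Roughly, the whole sequence looks like this (the folder name assumes you cloned with the default repository name):

cd nomad-vagrant-lab
vagrant status   # confirm the node definitions are being read from the Vagrantfile
vagrant up       # build the VMs; the first run downloads the box image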

Once deployment is finished you’re back at the shell prompt and ready to start your cluster!

Step 5:  Starting the Nomad Cluster

The process for this is easy because we are using pre-made shell scripts that are included in the code.  Each VM comes up with the local GitHub repository mapped to the /vagrant folder inside the VM, and each node has a launch script named after the node.  It’s ideal to have three (or six for a 6-node configuration) terminal sessions active so you can work on each node at the same time.  A condensed version of the sequence appears after the list.

  1. Connect via ssh to the first node using the command vagrant ssh nomad-a-1
  2. Change to the code folder with the command cd /vagrant
  3. Launch the Nomad script with this command:  sudo sh launch-a-1.sh
  4. Connect via ssh to the second node using the command vagrant ssh nomad-a-2
  5. Change to the code folder with the command cd /vagrant
  6. Launch the Nomad script with this command:  sudo sh launch-a-2.sh
  7. Connect via ssh to the third node using the command vagrant ssh nomad-a-3
  8. Change to the code folder with the command cd /vagrant
  9. Launch the Nomad script with this command:  sudo sh launch-a-3.sh
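
Condensed, the per-node sequence looks like this (shown for nomad-a-1; repeat with -2 and -3 in their own terminals):

vagrant ssh nomad-a-1
cd /vagrant              # the cloned repository is mapped here inside the VM
sudo sh launch-a-1.sh    # runs the launch script for this node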

The reason I’m detailing the manual clustering process is that this is meant for folks getting started.  I will run a different process to start services automatically in another blog.

Step 6 (Optional 6-node configuration):  Starting the Second Nomad Cluster

The process is just as easy for the second cluster because we are using the same pre-made shell scripts included in the code.  As before, each node has a launch script named after the node, and it’s ideal to have a terminal session active for each node so you can work on them at the same time.

  1. Connect via ssh to the first node using the command vagrant ssh nomad-b-1
  2. Change to the code folder with the command cd /vagrant
  3. Launch the Nomad script with this command:  sudo sh launch-b-1.sh
  4. Connect via ssh to the second node using the command vagrant ssh nomad-b-2
  5. Change to the code folder with the command cd /vagrant
  6. Launch the Nomad script with this command:  sudo sh launch-b-2.sh
  7. Connect via ssh to the third node using the command vagrant ssh nomad-b-3
  8. Change to the code folder with the command cd /vagrant
  9. Launch the Nomad script with this command:  sudo sh launch-b-3.sh

Step 7:  Confirming your Cluster Status

Checking the state of the cluster is also super easy. Just use the following command from any node in the cluster:

nomad server members

The output lists the server members and shows their status as alive, confirming that your 3-node cluster is active.
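
If you also want to see the client side of the cluster, a quick check from inside any node looks like this:

vagrant ssh nomad-a-1
nomad node status   # lists the client nodes registered with the cluster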

Step 8:  More learning!

There are lots of resources to begin your journey with HashiCorp Nomad.  Not least of which is my freshly released Pluralsight course Getting Started with Nomad which you can view as a Pluralsight member.  If you’re not already a member, you can also sign up for a free trial and get a sample of the course.

Hopefully this is helpful.  Keep watching the blog for updates with more lab learning based on this handy local installation.




Deploying a Turbonomic Control Instance on a VirtualBox Lab

The lab is an ideal way to kick the tires on new ways to do things.  This means that you may want to use the same platforms that you use in your data center right in your local environment.  Recently, I wanted to test out a lot of the integration that Turbonomic has with CloudFoundry and AWS, but I didn’t want to spin up my whole vSphere environment which has a nested instance running on it.

My quick solution was to deploy the Turbonomic OVA onto VirtualBox and run it as a standalone Turbonomic Control Instance alongside my other VirtualBox VMs.  This is actually an upgrade from the way I had built the TurboStack environment to run the OpenStack Cookbook and Turbonomic all together in a VirtualBox nested lab.

It’s super easy, but it comes with a couple of caveats.  First, don’t run your production infrastructure this way; VirtualBox is meant to host lab content and isn’t really geared for high performance in the guest instances.  Second, your networks will differ depending on how you deploy VirtualBox.  This sample uses a bridged networking deployment.

You’ll need VirtualBox and a Turbonomic OVA image which you can get from the Downloads page.  If you don’t already have a license, you will get a 30 day trial.  If you are a blogger and want to join our Turbonomic Blogger and Community Engagement program (totally free!), then you can also get extended NFR licenses.  Please add a comment to this article if you need access and I will get you added to the program.

Deploying Turbonomic from the VMware OVA Image

Luckily, VirtualBox has all of the goodness needed to host a vSphere OVA image.  Simply find your OVA file, right-click (note: this is on Mac OS X, but it works the same on Windows) and choose to open with the VirtualBox application:

01-vmturbo-open-with-virtualbox

That will launch the OVF import wizard within VirtualBox.  Make sure to adjust the memory and CPU as needed for your machine.  In my case, I want to trim back from the 16 GB and 4 vCPU production recommendation.  For performance, it is best to run with a minimum of 8 GB and 2 vCPU.
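
If you prefer the command line over the wizard, VBoxManage can do the import and trim the resources in one shot; this is a sketch, and the OVA filename is a placeholder for whatever your download is called:

VBoxManage import turbonomic.ova --vsys 0 --cpus 2 --memory 8192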

Set your CPU count (minimum 2 vCPU recommended):

02-deploy-cpu-change

Set your Memory capacity (minimum 8192 MB recommended):

03-deploy-mem-change

Click the Import button and you will be prompted for the EULA which you need to click Agree to complete:

04-eula

The wizard will show a progress bar as it deploys the instance for you:

05-deploying

Once completed, open the networking properties for your Turbonomic instance and set them as needed to match the network your other machines are on.  If you need to connect to AWS, Azure, or SoftLayer, you will need internet access as well.  In my case, I want to run DHCP and leave the machine on a Bridged network to access the local Wi-Fi network directly:

06-networking
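
The same bridged setup can also be scripted with VBoxManage if you prefer (a sketch; the VM name and host adapter are placeholders, and the VM needs to be powered off):

VBoxManage modifyvm "Turbonomic" --nic1 bridged --bridgeadapter1 en0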

Next up, just launch using a Normal Start so that you can get to the console:

07-normal-start

Once the instance has booted up, log in as root with the password provided in your download instructions and you can get the IP information of the machine:

08-console

Now you are all set to launch the web UI by using the IP address of the VirtualBox instance over HTTP and follow the wizard to apply your existing license to the machine, or to request your 30 day trial.

You now have a working Turbonomic control instance in your lab, and we can use it for testing various features such as the AWS and Cloud Foundry integrations I’m going to explore here.

Let me know if you need any help, and you can also consult the Green Circle Community for any questions related to Turbonomic itself.




Resizing your AWS VPC NAT Instance to a Lower Cost Instance Type

Let’s say that you want to run a lab using AWS and you need to set up a VPC. That’s a very common design that takes advantage of creating a secured Virtual Private Cloud within your AWS environment to isolate resources. There are two options for setting up your VPC networking for those who are going to access it directly or with a software VPN.

There is VPC with a Single Public Subnet:

vpc-single-public

Then there is VPC with Public and Private Subnets:

vpc-public-with-private

There are also two hardware VPN options, but those are a different configuration that is less common for smaller labs or for many small production environments.

VPC is Free…sort of

Setting up your VPC resources is entirely free. The costs only come when you deploy your EC2 instances and if you attach Elastic IP addresses within the environment. Elastic IP addresses are also only charged when they are allocated but not associated, but that’s a blog post all unto itself that will come later.

One of the features you need to enable when running a VPC with public and private subnets is NAT (Network Address Translation), so that your EC2 instances can reach the outside world for updates and other internet resources. That is because you have to bridge the private network to the internet segment in order to gain access. The access is only for outbound retrieval of data, and is not what the internet uses to reach your privately hosted instances.

When you create your VPC, the NAT options are presented in the VPC wizard:

nat-gateway

A NAT Gateway is the option for those who want to use a software VPN and a consistent Elastic IP address. Details on the pricing of the software VPN configuration are here: http://aws.amazon.com/vpc/pricing/

The other link on the right side allows us to configure a NAT instance instead:

nat-instance

The only catch here is that when you select from the drop-down, we only have m1.small or larger (aka more expensive) sizes available:

nat-instance-sizing

Prices for m1.small vary by region, and in this case we also have the ability to use reserved instances (pre-purchased at a lower rate) or on-demand. Since many of us just want a lab environment, on-demand instances are the ideal way to go.

Once we spin up our environment, we will have an EC2 instance running for the NAT Instance. I’ve labeled mine NAT Instance – DiscoDemo so that I remember what it is:

nat-instance-discodemo

I’m looking to reduce the instance to the smallest size possible, which happens to be t1.micro, so this is where we do that.

Stopping and Resizing your EC2 NAT Instance

Note that you will only be shutting off your EC2 instance for a minute, and this does not affect inbound connectivity to the VPC at all. This only affects the access from your private subnet EC2 resources out to the internet.

To resize our instance, we have to stop it first. We do this by selecting the instance in the list, and using the Actions button to select the Instance State | Stop option.

Make sure to choose stop and DO NOT SELECT TERMINATE. Terminating will destroy the instance, whereas stopping it just powers it down temporarily.

stop-nat-instance

There will be a warning about losing ephemeral storage data, but because this is only a NAT Instance, we don’t need to worry at all.

stop-warning

It takes a minute or two for the instance to stop, and the progress will be indicated in the EC2 console:

stopping

Once the instance is stopped, go back to the Actions button and choose Instance Settings | Change Instance Type:

change-instance-type

In the drop-list, change the selection to t1.micro:

instance-sizes

Now you can start the instance, which will bring it up using the t1.micro flavor size instead of m1.small:

start-nat-instance

You’ll be asked to confirm starting the instance:

start-sure-0

The instance will start up as a t1.micro and will stay that size, saving you a few dollars on your instance costs. The only reason you would need a larger size is if you have serious throughput requirements, because each flavor has network and storage capabilities attached to it.
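
If you’d rather script the whole stop/resize/start cycle than click through the console, the AWS CLI can do it too; this is a sketch, and the instance ID is a placeholder for your own NAT instance:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t1.micro\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0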

What’s interesting is that the m1 series of instances is what we call “previous generation”, but the current m3 flavor sizes don’t include a small or micro option. This is the reason that we may want to opt for the t1.micro option.

Now you are all set with the smallest size and lowest cost NAT instance option for your VPC.




Setting up Docker Datacenter on VMware Fusion

With the release of Docker Datacenter, it seemed like a good idea to kick the tires on this new system to get a handle on what the experience is like installing, configuring, and managing the new packaged Docker offering. There are a few pre-requisites that you have to get sorted out which include:

  • Base system to run Docker Datacenter
  • About 30-45 minutes
  • Internet access
  • Use-cases to try within 30 days
  • No previous 30 day trials of Docker Datacenter (it’s a one-time trial)

IMPORTANT NOTE: I use code examples from the install process, but some may change over time on the Docker Datacenter site. Please use the original code from the site in place of these samples to ensure consistency with the available version.

Make sure you have a system ready to go. In my case, I’m going to use an Ubuntu 15.10 system built with a single NIC on VMware Fusion. The NIC is configured for NAT on Fusion, which is the Share with my Mac setting:

nic-setting

I’m using 4 GB of RAM and a 40 GB disk set for thin provisioning and stored as a single file. Because I often archive my VMs off of the laptop I use, having a single VMDK makes them easy to move around.

Let’s get started with the download and install!

Getting Docker Datacenter

First, you go to the main site for Docker Datacenter to start the process:  https://www.docker.com/products/docker-datacenter

You will need your Docker Hub account to log in and attach the license. As you can guess, there is a form to fill out for further contact because this is a commercial product that will have some nurturing to bring you towards purchasing.

docker-datacenter-steps

Now that we are in step 1, we have to install the Docker Engine. This is done on Ubuntu using the following shell commands, but there’s a catch!!

Notice that the original code says to add ubuntu-trusty main to the repo source list. Because I’m using 15.10, I updated the real code to say wily instead of trusty:

echo "deb https://packages.docker.com/1.10/apt/repo ubuntu-wily main" | sudo tee /etc/apt/sources.list.d/docker.list

I used the original code from the Docker site in the example here to stay with their instructions:


wget -qO- 'https://pgp.mit.edu/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e' | sudo apt-key add --import
sudo apt-get update && sudo apt-get install apt-transport-https
sudo apt-get install -y linux-image-extra-virtual
echo "deb https://packages.docker.com/1.10/apt/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list
sudo apt-get update && sudo apt-get install docker-engine

All the updates went according to plan, including a reboot along the way. Once that is done it’s on to step 2.
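
Before moving on, a quick sanity check that the engine is up never hurts:

sudo docker version   # the client and server versions should both report
sudo docker info      # confirms the daemon is reachable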

Installing Docker Trusted Registry

This is a simple step. As we hit the next step in our install, we are ready to deploy the Docker Trusted Registry (DTR) and Universal Control Plane (UCP).

dtr-ucp

This is done with a nifty one-liner:

sudo bash -c "$(sudo docker run docker/trusted-registry install)"

The console will show that the DTR images can’t be found locally in the container image cache, so it will start downloading them. This may take a little while depending on your network speed:

dtr-install

Once that’s done, just open up a browser and go to the HTTPS address of the virtual machine. There will be a warning about the site being insecure, which is true because the certificate is not trusted for the URL you are accessing. No worries though, this is just the early part of the configuration.

https-error

The launch screen will come up and as if by magic you are starting your first configuration steps:

first-launch

You can see that there are some errors highlighted on the right-hand side of the window. These are because of:

  • No domain set up – we accessed the site by IP address
  • Unlicensed instance – the trial license gets requested and applied in the next steps
  • Authentication – the default is set up with no authentication at all

Because security should be first on our mind, let’s set up authentication by clicking on the Settings menu option and then the Auth submenu. This brings us to the drop list where you can choose your authentication type:

sec-droplist

For my example, I’m going to go with a Managed authentication using a local database. I’m going to create a discoposse user with full access:

user-add

As soon as you save the admin user, you will be logged out of the environment and forced to log in again:

login-page

Now we have removed one of the configuration errors listed on the main page. The next step is to get our demo instance licensed. Because you signed up at the start of the process, the trial license has been added to your Docker Hub account.

Go to https://hub.docker.com, log in with the account you used to request the trial, click on your account icon in the upper right, and go to the Settings menu. From there, click on Licenses and then click the download icon in the trial section at the bottom left:

get-license

Head back to your Docker Datacenter instance and upload the docker_subscription.lic file under the Settings menu in the License submenu:

license-install

Once you’ve set the file to upload, click the Save and Restart button.

You will be brought back to the same page after the restart. The License ID section will be populated with a string of characters now that the trial license is applied.

Our last configuration step is to set up the domain name. Go to the Settings section and then click the General submenu. Set the Domain Name option to a fully qualified domain name that you have configured in your DNS.
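
If you don’t have lab DNS handy, a quick /etc/hosts entry on the machine you browse from will do the trick (the IP address is a placeholder for your Docker Datacenter VM):

echo "192.168.100.50 dtr.discoposse.com" | sudo tee -a /etc/hosts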

The security admin inside me wants to configure a whole bunch of other goodness here with the certificates. In this case, we are just going to scroll down and click the Save and Restart button.

Welcome to your Docker Datacenter!

Now that we have configured the minimum settings to use DTR in our Docker Datacenter, we are ready to log in using the new configuration. Go to the new URL you set on the General settings page. In my case, I chose dtr.discoposse.com and will be prompted to log in with the admin credentials set in the first steps.

This is just the start line of our journey.  Next up we will be installing the UCP (Universal Control Plane), and our upcoming blogs will cover a few different things we can do using our newly configured Docker Datacenter. We love learning the possibilities!