Building a HashiCorp Nomad Cluster Lab using Vagrant and VirtualBox

One of the greatest things about open source and free tools is that they are…well, open source, and free!  Today I’m sharing a simple HashiCorp Nomad lab build that I use for a variety of things.  I want to be able to quickly spin up local lab infrastructure so I can get rid of the mundane, repetitive tasks.  Being able to provision, de-provision, start, and stop servers easily saves a ton of time and lets you focus on the actual learning.

Now for the walkthrough to show you just how easy this can be!  Skip ahead to whichever step you need below.  I’ve written every step so that you can follow along even if you’ve got no experience.

Step 1:  Getting the Tools

You’re going to need Vagrant and VirtualBox.  Both are available for free on many platforms.  Click these links to reach the downloads if you don’t already have them.  No special configuration is needed; my setup is completely default.

HashiCorp Vagrant:

Oracle VM VirtualBox:

Step 2:  Getting the Code

The GitHub repository which contains all the necessary code is here:

Just go there and use the Clone or Download button to get the URL and then do a git clone command:

git clone

If you want to contribute to the repository, you can also fork it, work from your own fork, and submit pull requests.

Step 3:  Configuring the lab for 3-node or 6-node

Choosing your cluster and lab size is easy.  One configuration (Vagrantfile.3node) is a simple 3-node setup with one region and a single virtual datacenter.  The second configuration (Vagrantfile.6node) creates two 3-node clusters across two virtual datacenters (toronto and vancouver) and two regions (east and west).

Picking your deployment pattern is done by renaming the configuration file (Vagrantfile.3node or Vagrantfile.6node) to Vagrantfile.

Seriously, it’s that easy.  If you want to update the names of the regions or datacenters, you can do that by editing the server-east.hcl and server-west.hcl files.  The 3-node configuration only uses server-east.hcl by default.
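At the shell, activating a layout is a one-line copy (a minimal sketch; the touch lines below just stand in for the files already present in a real clone of the repo):

```shell
# Stand-ins for the two layout files shipped in the repository
touch Vagrantfile.3node Vagrantfile.6node

# Activate the 3-node layout by copying it to the name Vagrant looks for.
# Copying (rather than renaming) keeps the original so you can switch later.
cp Vagrantfile.3node Vagrantfile
ls Vagrantfile
```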

Step 4:  Launching your Nomad Lab Cluster

Launch a terminal shell session (or command prompt for Windows) and change directory to the folder where you’ve cloned the code locally on your machine.  Make sure that everything is all good by checking with the vagrant status command:

Next up is starting the deployment using the vagrant up command.  The whole build takes anywhere from 5 to 15 minutes depending on your network speed and local machine resources.  Once you’ve run it once, the Vagrant box image is cached locally, which saves time on future rebuilds.

Once deployment is finished you’re back at the shell prompt and ready to start your cluster!

Step 5:  Starting the Nomad Cluster

The process is easy because we are using pre-made shell scripts included in the code.  Each VM comes up with the local GitHub repository mapped to the /vagrant folder inside the VM, and each node has a start script named after the node name.  It’s ideal to have three (or six for a 6-node configuration) terminal sessions active so you can work with all of the nodes at once.

  1. Connect via ssh to the first node using the command vagrant ssh nomad-a-1
  2. Change to the code folder with the command cd /vagrant
  3. Launch the Nomad script with this command:  sudo sh
  4. Connect via ssh to the second node using the command vagrant ssh nomad-a-2
  5. Change to the code folder with the command cd /vagrant
  6. Launch the Nomad script with this command:  sudo sh
  7. Connect via ssh to the third node using the command vagrant ssh nomad-a-3
  8. Change to the code folder with the command cd /vagrant
  9. Launch the Nomad script with this command:  sudo sh
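If you would rather drive the three sessions from the host, the steps above can be sketched as a loop that prints each command so you can review it before running.  The per-node script name (node name plus .sh) is an assumption; check the repo for the actual filenames:

```shell
# Print (rather than run) the start sequence for each node in cluster A.
# Assumes each node's start script is named <node>.sh in /vagrant (an
# assumption -- verify against the repository contents before running).
for node in nomad-a-1 nomad-a-2 nomad-a-3; do
  echo "vagrant ssh $node -c 'cd /vagrant && sudo sh $node.sh'"
done
```

Pipe the output to sh once you have confirmed the script names match your clone.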

The reason I’m detailing the manual clustering process is that this guide is meant for folks getting started.  I will cover a different process that starts the services automatically in another blog post.

Step 6 (Optional 6-node configuration):  Starting the Second Nomad Cluster

The process is just as easy for the second cluster because the same pre-made shell scripts are included in the code.  Again, it’s ideal to have another three terminal sessions active so you can work with all of the nodes at once.

  1. Connect via ssh to the first node using the command vagrant ssh nomad-b-1
  2. Change to the code folder with the command cd /vagrant
  3. Launch the Nomad script with this command:  sudo sh
  4. Connect via ssh to the second node using the command vagrant ssh nomad-b-2
  5. Change to the code folder with the command cd /vagrant
  6. Launch the Nomad script with this command:  sudo sh
  7. Connect via ssh to the third node using the command vagrant ssh nomad-b-3
  8. Change to the code folder with the command cd /vagrant
  9. Launch the Nomad script with this command:  sudo sh

Step 7:  Confirming your Cluster Status

Checking the state of the cluster is also super easy. Just use the following command from any node in the cluster:

nomad server members

This is the output showing that you have a 3-node cluster active:

Step 8:  More learning!

There are lots of resources to begin your journey with HashiCorp Nomad.  Not least of which is my freshly released Pluralsight course Getting Started with Nomad which you can view as a Pluralsight member.  If you’re not already a member, you can also sign up for a free trial and get a sample of the course.

Hopefully this is helpful, and keep watching the blog for updates with more lab learning based on this handy local installation.

Rancher Part 4:  Using the Catalog Example with GlusterFS

As promised, it’s time to get to the catalog goodness. Since GlusterFS hasn’t gotten enough love from me lately, it’s time to bring that into the process as part of my Rancher demo. The goal with this exercise is to spin up a GlusterFS environment and to expand it using the built-in capabilities provided by the catalog.

Deploying an Application from the Rancher Catalog

Go to the Applications menu option and select Catalog from the secondary menu. That will bring you to the built-in application catalog screen:


There is a neat array of applications to choose from, targeting a number of typical core services. In the left-hand pane, you can see that they can be searched and sorted by category. The default view shows the full list.

We want to try out a GlusterFS demo because it will show us how a multi-container stack is deployed, plus we get to expand it as well using the Rancher UI. Find the GlusterFS link in the catalog and click the Launch button:


We have a few options available, as you would expect. We can choose the name of the stack, fill in a free-form description field, and configure the remaining launch options.


What’s even cooler is that you can click the Preview link to expand the window. This shows the two YML files that could be used from the command line interface to create this same cluster:


You can see the contents of the sample YML files for each of docker-compose and rancher-compose, which I’ve copied out to GitHub gists for better viewing:

Docker Compose:

Rancher Compose:

Once you complete the wizard, the GlusterFS stack will spin up and you will see the results within a couple of minutes:


You can look at your running stack by clicking on the name in the Stacks view. This will bring up the screen showing some details about the application, commands, status, and also the containers further down the page.


Yes, it is just that easy.


Scaling up the GlusterFS Cluster in Rancher

Now that we have our GlusterFS environment running, we can expand it by a couple of nodes to illustrate how easy it is to scale the application.

In the Stacks view, you simply click the +Scale Up button:


As you wait and watch, the view will update showing the network address, and container status. Once completed, it will look something like this:


You can do it again to show just how easy it is to add another node:


To confirm that the application itself has recognized the new nodes, let’s spark up a terminal shell and check the health of the GlusterFS cluster. You can do this easily by hovering over the container in the view and then clicking the context menu, which brings up the Execute Shell option:



We can use the gluster pool list command, which will tell us which servers are actively attached to the GlusterFS pool. As you can see from the example here, there are 5 nodes in the pool:


To manage the scale level, including reducing the scale manually, you can use the + and – buttons that appear in the Stack details view:


Was that both fun and easy? Why, yes it was.


Join us in our upcoming 5th post in the series as we explore the advanced container configuration options. This is a great opportunity to explore core Docker concepts as well, so put on your thinking caps for that one!

Building your own CoreOS Army with Vagrant, because Orchestration Rocks

I’m all about saving time, money, and making things simple to use. Luckily, CoreOS and Vagrant have become a big part of making that happen for me lately with a lot of work I’ve been doing in and out of my day-to-day lab experiences. It was especially exciting for me today as I had an opportunity to meet with Alex Polvi, CEO of CoreOS at a Turbonomic event in San Francisco.


I’m very pleased with the work happening at CoreOS, both on the core product and on the container side of the ecosystem with Rocket.

This is a quick primer to allow you to get some CoreOS goodness going in your lab using the simple and freely available Vagrant and VirtualBox tools that you may already be running.

We will need 3 things:

  1. Git
  2. Vagrant 1.6 or higher
  3. VirtualBox 4.3.10 or higher

The goal of this great little Vagrant build is to enable you to spin up a lab running one or more CoreOS instances so you can kick the tires on what you can do with CoreOS. Not only that, but you can also build clustered and distributed infrastructure using the very easy configuration available right in the Vagrant setup.

Step 1 – Pull down the Vagrant configuration to your system

It’s as simple as running a git clone as shown here:

git clone

cd coreos-vagrant

Now that you have the code on your system, we can edit the config.rb file to decide how many instances you want to run. By default, there will be a single instance launched.

In this case, let’s spin up a 3-node configuration just to see what a multi-node environment looks like. There is a lot that we can do, but it is important that we take the first steps to just see how to get to the start line.

Step 2 – Edit your Vagrantfile to set your network

I’m assuming that you’re aware of the available networks in your lab, so pick one to put in place for your CoreOS cluster. In my case, I’m editing the Vagrantfile and using the 10.200.1.0/24 network. You will find the line towards the bottom of the file, as you can see here: ip = "10.200.1.#{i+100}":
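To see what that interpolation produces, here is a quick sketch of the addresses the three instances will receive (instance number plus 100 in the last octet):

```shell
# Reproduce the Vagrantfile's ip = "10.200.1.#{i+100}" formula for 3 instances:
# instance i gets 10.200.1.(i+100), so core-01 is .101, core-02 is .102, etc.
for i in 1 2 3; do
  echo "core-0$i -> 10.200.1.$((i + 100))"
done
```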


Now that we have our network defined, we have one more simple step to get closer to our CoreOS cluster deployment.

Step 3 – Edit your config.rb file

Almost there! Just open up your config.rb file and set $num_instances=3 to define the number of CoreOS instances you are going to launch. I’ve done this a lot to confirm it works, and it’s really almost too simple. That’s a good thing 🙂
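For reference, the relevant line in config.rb is just this (a minimal fragment; the surrounding contents vary by repo version, and the line may ship commented out, in which case uncomment it):

```ruby
# config.rb -- number of CoreOS instances Vagrant will create
$num_instances = 3
```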


Step 4 – Vagrant up

You’ve made it! Now that we have our config.rb set for the number of instances, and the Vagrantfile configured for our network of choice, it’s time to launch with the vagrant up command. You’re going to see some messages scrolling by during the process, and then in the end we will have 3 brand new CoreOS instances in a matter of a couple of minutes (or less!).


We want to confirm what’s been done as usual, and luckily all of the vagrant commands are applicable for our CoreOS instances too. We can confirm the status with our vagrant status command:


Next we will run vagrant ssh core-01 to confirm we have an active network. This gives us an SSH connection to the first of the three instances so that we can run commands from there. Once connected, you can see that the shell prompt has changed to core@localhost ~ $, which is our first indicator. My preference is to always run ifconfig to ensure the network interfaces are showing the right IP address:


Let’s do one more quick test to confirm our IP connectivity to the gateway and external network by doing a ping to illustrate a working network connection.


As if by magic, or in this case Vagrant, we have a running set of CoreOS instances which we can now use for any purpose. This brings us to the start line, as I like to say.

Why CoreOS?

CoreOS has a lot of very interesting use cases. You may have one, two, or many reasons to use CoreOS as part of your infrastructure. In the next posts we will explore a few distinct use cases and how to address them using CoreOS, and from there you can get more comfortable with the abilities baked into CoreOS.

In what we’ve done here, these are a set of worker instances.  The important next step will be to create the cluster management configuration and put our little CoreOS army to work.

We will also do a little extended work with both Docker on CoreOS, as well as with the new Rocket offering from CoreOS. In fact, there was a brand new update at the time of this writing.

I hope that this is a helpful start, and I look forward to bringing more CoreOS content. Feel free to drop me a line by reaching out on Twitter (@DiscoPosse) or leaving a comment here so that I can help to answer any questions about what we can do.

PowerCLI – Add Multiple VLAN Port Groups to vSphere Cluster Standard vSwitches

A recent change to my networking environment added the requirement to put different VM guests onto different VLANs according to their role. This is a fairly common configuration for a lot of virtualized datacenters. What is also common is that we don’t have the design fully prepared when we build our virtual infrastructure.

Limitations of Standard vSwitches

One of the drawbacks of standard vSwitches is that there are as many of them as there are ESX hosts, and each must be created and managed separately. Add multiple port groups on multiple uplinks and you have a growing administrative challenge.

While we don’t make these types of changes often, it is a change that can take a significant effort. More importantly, it is prone to human error.

Enter PowerCLI

With PowerCLI we have lots of great options. Because our hosts are in a cluster, the process flow will be fairly simple:

  1. Query a file with our VLAN information
  2. Query our vSphere cluster for hosts
  3. Add vSwitch port groups to each host in each cluster

Three easy steps, but the amount of clicks and typing to manually configure all of these port groups would be a real challenge. With a script we ensure consistency of the process.

Your Input File

We are working with a file named MyVLANs.csv, located somewhere the script can read it from, as defined in the script itself. We assign the full path to the $InputFile variable in our script file.

The file has a header row and the data is organized as shown in our example here:
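A hypothetical MyVLANs.csv might look like this (the values are placeholders; the header names must match the properties the script reads: cluster, vSwitch, VLANname, and VLANid):

```
cluster,vSwitch,VLANname,VLANid
Production,vSwitch0,VLAN101,101
Production,vSwitch0,VLAN102,102
Lab,vSwitch1,VLAN201,201
```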


As we can see, we name the cluster, the vSwitch, the name to apply to the port group (VLANname) and the VLAN number to assign to that port group.

In your environment you could have numerous clusters, and you can use the same source file to manage additions to all of them at the same time.

The Script

I did say it was a three-step process, but one of those steps has a few more activities to perform. That being said, it still isn’t too much to do thanks to PowerCLI.

We assume that you are running your PowerCLI shell with privileges to administer your vCenter environment. This requires access to read the file and to add port groups to the vSphere vSwitches.

First we setup our input file:

$InputFile = "C:\Users\ewright\Documents\SCRIPTS-TEMP\MyVLANs.csv"

Next we import the file using the handy Import-CSV CmdLet:

$MyVLANFile = Import-CSV $InputFile

Now we just have to parse the contents. Because we have our header row, the columns are already named, and we just loop through each line using ForEach to read the info, build our PowerCLI command to add the port group, and execute it.

Because we have to read the cluster information for each line, there is another loop inside to add the port group to each host in the cluster.

ForEach ($VLAN in $MyVLANFile) {
$MyCluster = $VLAN.cluster
$MyvSwitch = $VLAN.vSwitch
$MyVLANname = $VLAN.VLANname
$MyVLANid = $VLAN.VLANid

We define variables for each column in the file, then query the cluster for hosts and assign it to the $MyVMHosts variable:

$MyVMHosts = Get-Cluster $MyCluster | Get-VMHost | sort Name | % {$_.Name}

Next we loop through each host, query the vSwitch and create a new port group with the New-VirtualPortGroup CmdLet:

ForEach ($VMHost in $MyVMHosts) {
Get-VirtualSwitch -VMHost $VMHost -Name $MyvSwitch | New-VirtualPortGroup -Name $MyVLANname -VLanId $MyVLANid
}
}

Here is the view of the whole script, and here is the link to the file:



About the Errors

One of the things I haven’t done with this script is any error handling. Because I’m only adding port groups on occasion, it doesn’t need to be bulletproof. If you re-run the same file against an existing set of port groups, it will throw an error because each port group already exists. If I get some extra time, I may add a little error housekeeping to clean things up.
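If you do want a light guard against re-runs, one hedged approach is to check for the port group before creating it. This is a sketch, not part of the original script, and assumes the standard PowerCLI Get-VirtualPortGroup cmdlet is available:

```powershell
# Sketch: only create the port group if it does not already exist on this vSwitch.
$existing = Get-VirtualSwitch -VMHost $VMHost -Name $MyvSwitch |
    Get-VirtualPortGroup -Name $MyVLANname -ErrorAction SilentlyContinue
if (-not $existing) {
    Get-VirtualSwitch -VMHost $VMHost -Name $MyvSwitch |
        New-VirtualPortGroup -Name $MyVLANname -VLanId $MyVLANid
}
```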

Hope this saves you some time. Happy scripting!