Rancher Part 4: Using the Catalog Example with GlusterFS

As promised, it’s time to get to the catalog goodness. Since GlusterFS hasn’t gotten enough love from me lately, it’s time to bring that into the process as part of my Rancher demo. The goal with this exercise is to spin up a GlusterFS environment and to expand it using the built-in capabilities provided by the catalog.

Deploying an Application from the Rancher Catalog

Go to the Applications menu option and select Catalog from the secondary menu. That will bring you to the built-in application catalog screen:

applications-catalog

There is a neat array of applications to choose from, targeting a number of typical core services. In the left-hand pane, you can see that they can be searched and sorted by category; the default view shows the full list.

We want to try out the GlusterFS demo because it shows us how a multi-container stack is deployed, and we also get to expand it using the Rancher UI. Find the GlusterFS entry in the catalog and click the Launch button:

glusterfs

We have a few options available, as you would expect: the name of the stack, a free-form description field, and the configuration options that the template exposes:

add-glusterfs-stack

What’s even cooler is that you can click the Preview link to expand the window. This shows us the two YML files that could be used from the command-line interface to create this same cluster:

compose-yml

You can see the contents of the sample YML files, one each for docker-compose and rancher-compose, which I’ve copied out to GitHub gists for better viewing:

Docker Compose: https://gist.github.com/discoposse/4cb03c3abfa60d1d9a40

Rancher Compose: https://gist.github.com/discoposse/2f7c1a24aca645a93567
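
If you haven’t worked with the two-file model before, the shape is roughly as follows. Note that this is an illustrative sketch with made-up values, not the catalog’s actual definition (the real files are in the gists above): the docker-compose file describes the containers themselves, while the rancher-compose file layers Rancher-specific deployment metadata, such as scale, onto the same service names.

# docker-compose.yml (illustrative sketch only)
glusterfs-server:
  image: example/glusterfs   # hypothetical image name
  volumes:
    - /data/glusterfs        # hypothetical data path

# rancher-compose.yml (illustrative sketch only)
glusterfs-server:
  scale: 3                   # how many containers Rancher keeps running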

Once you complete the wizard, the GlusterFS stack will spin up, and you will see the results within a couple of minutes:

glusterfs-running

You can look at your running stack by clicking on the name in the Stacks view. This will bring up the screen showing some details about the application, commands, status, and also the containers further down the page.

glusterfs-stack-details

Yes, it is just that easy.


Scaling up the GlusterFS Cluster in Rancher

Now that we have our GlusterFS environment running, we can expand it by a couple of nodes to illustrate how easy it is to scale the application.

In the Stacks view, you simply click the +Scale Up button:

scale-up

As you wait and watch, the view will update to show the network address and container status. Once completed, it will look something like this:

add-glusterfs-node-1

You can do it again to show just how easy it is to add another node:

add-glusterfs-node-2

To confirm that the application itself has recognized the new nodes, let’s open a terminal shell and check the health of the GlusterFS cluster. You can do this easily by hovering over the container in the view and then clicking the context menu, which brings up the Execute Shell option:

execute-shell


We can use the gluster pool list command, which tells us which servers are actively attached to the GlusterFS pool. As you can see from the example here, there are 5 nodes in the pool:

glusterfs-pool-list
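
Beyond the pool list, a couple of other stock gluster CLI commands are handy for a quick health check from the same shell (these are standard GlusterFS commands, nothing Rancher-specific):

gluster pool list     # servers attached to the trusted storage pool
gluster peer status   # connection state of every other peer, as seen from this node
gluster volume info   # volumes defined on the cluster and their brick layout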

To manage the scale level, including reducing the scale manually, you can use the + and – buttons that appear in the Stack details view:

scale-details
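
If you prefer the command line, rancher-compose mirrors the docker-compose syntax for scaling, so something like the following should do the same job. This is a hedged example: the stack and service names are assumptions (check your stack’s compose files for the real ones), and rancher-compose expects RANCHER_URL, RANCHER_ACCESS_KEY, and RANCHER_SECRET_KEY to be set in the environment.

# Hedged example — substitute your real stack and service names
rancher-compose -p glusterfs scale glusterfs-server=5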

Was that both fun and easy? Why, yes it was.


Join us for the upcoming 5th post in the series as we explore advanced container configuration options. This is a great opportunity to dig into core Docker concepts as well, so put on your thinking caps for that one!




Building your own CoreOS Army with Vagrant, because Orchestration Rocks

I’m all about saving time and money, and making things simple to use. Luckily, CoreOS and Vagrant have become a big part of making that happen for me lately, with a lot of work I’ve been doing in and out of my day-to-day lab experiences. It was especially exciting today, as I had an opportunity to meet with Alex Polvi, CEO of CoreOS, at a Turbonomic event in San Francisco.

coreos-alex-polvi

I’m very pleased with the work happening at CoreOS, both on the core product and on the container side of the ecosystem with Rocket.

This is a quick primer to allow you to get some CoreOS goodness going in your lab using the simple and freely available Vagrant and VirtualBox tools that you may already be running.

We will need 3 things:

  1. Git
  2. Vagrant 1.6 or higher
  3. VirtualBox 4.3.10 or higher

The goal of this great little Vagrant build is to enable you to spin up a lab running one or more CoreOS instances to kick the tires on what you can do with CoreOS. Not only that, but you can also build clustered and distributed infrastructure using the very easy options available right in the Vagrant configuration.

Step 1 – Pull down the Vagrant configuration to your system

It’s as simple as running a git clone as shown here:

git clone https://github.com/coreos/coreos-vagrant/

cd coreos-vagrant

Now that you have the code on your system, we can edit the config.rb file to decide how many instances to run. By default, a single instance will be launched.

In this case, let’s spin up a 3-node configuration just to see what a multi-node environment looks like. There is a lot we can do beyond that, but it is important to take the first steps and just get to the start line.

Step 2 – Edit your Vagrantfile to set your network

I’m assuming that you know which networks are available in your lab, so pick one to put in place for your CoreOS cluster. In my case, I’m editing the Vagrantfile to use the 10.200.1.0/24 network. You will find the line towards the bottom of the file, shown here as ip = "10.200.1.#{i+100}":

vagrantfile-network
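
For reference, the relevant lines in the stock Vagrantfile look roughly like this (paraphrased from the coreos-vagrant repository; the stock file ships with a 172.17.8.x range, which I have swapped for my own network):

ip = "10.200.1.#{i+100}"                    # core-01 gets .101, core-02 gets .102, and so on
config.vm.network :private_network, ip: ip  # attach each instance to the private network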

Now that we have our network defined, we have one more simple step to get closer to our CoreOS cluster deployment.

Step 3 – Edit your config.rb file

Almost there! Just open up your config.rb file and set $num_instances=3 to define the number of CoreOS instances you are going to launch. I’ve done this a lot to confirm it works, and it’s really almost too simple. That’s a good thing 🙂

config-rb-file-3-node
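
The change itself is a one-liner; everything else in config.rb can stay at its defaults:

# config.rb — number of CoreOS instances to launch (default is 1)
$num_instances=3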

Step 4 – Vagrant up

You’ve made it! Now that we have our config.rb set for the number of instances, and the Vagrantfile configured for our network of choice, it’s time to launch with the vagrant up command. You’re going to see some messages scrolling by during the process, and then in the end we will have 3 brand new CoreOS instances in a matter of a couple of minutes (or less!).

vagrant-up

We want to confirm what’s been done, as usual, and luckily all of the vagrant commands are applicable to our CoreOS instances too. We can confirm the state of the machines with the vagrant status command:

vagrant-status

Next we will run vagrant ssh core-01 to confirm we have an active network. This gives us an SSH connection to the first of the three instances so that we can run commands from there. Once connected, you can see that the shell prompt changes to core@localhost ~ $, which is our first indicator. My preference is to always run ifconfig to ensure the network interfaces show the right IP address:

vagrant-ssh

Let’s do one more quick test to confirm our IP connectivity to the gateway and the external network by running a ping 8.8.8.8 to illustrate a working network connection.

ping
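
Putting the whole verification pass together, the session looks something like this (the .101 address is assumed from the Vagrantfile edit in Step 2):

vagrant up            # build and boot all three CoreOS instances
vagrant status        # confirm core-01 through core-03 report "running"
vagrant ssh core-01   # open an SSH session to the first instance
ifconfig              # inside the VM: look for the 10.200.1.101 address
ping -c 3 8.8.8.8     # inside the VM: confirm outbound connectivity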

As if by magic, or in this case Vagrant, we have a running set of CoreOS instances which we can now use for any purpose. This brings us to the start line, as I like to say.

Why CoreOS?

CoreOS has a lot of very interesting use-cases. You may have one, two, or many reasons to use CoreOS as part of your infrastructure offering. In the next posts, we will explore a few distinct use-cases, how to tackle them using CoreOS as the solution, and from there you can get more comfortable with the capabilities baked into CoreOS.

In what we’ve done here, these are a set of worker instances. The important next step will be to create the cluster management configuration and put our little CoreOS army to work.

We will also do a little extended work with both Docker on CoreOS and the new Rocket offering from CoreOS. In fact, there was a brand new update to Rocket at the time of this writing.

I hope that this is a helpful start, and I look forward to bringing more CoreOS content. Feel free to drop me a line by reaching out on Twitter (@DiscoPosse) or leaving a comment here so that I can help to answer any questions about what we can do.




PowerCLI – Add Multiple VLAN Port Groups to vSphere Cluster Standard vSwitches

powercli-logo

A recent change to my networking environment added the requirement to put different VM guests onto different VLANs according to their role. This is a fairly common configuration for a lot of virtualized datacenters. What is also common is that we don’t have the design fully prepared when we build our virtual infrastructure.

Limitations of Standard vSwitches

One of the negative points of standard vSwitches is that there are as many of them as there are ESX hosts, and each must be created and managed separately. Add to that multiple port groups on multiple uplinks, and you have a growing administrative challenge.

While we don’t make these types of changes often, it is a change that can take significant effort. More importantly, it is prone to human error.

Enter PowerCLI

With PowerCLI we have lots of great options. Because our hosts are in a cluster, the process flow will be fairly simple:

  1. Query a file with our VLAN information
  2. Query our vSphere cluster for hosts
  3. Add vSwitch port groups to each host in each cluster

Three easy steps, but the number of clicks and the amount of typing needed to configure all of these port groups manually would be a real challenge. With a script, we ensure consistency in the process.

Your Input File

We are working with a file named MyVLANs.csv, located somewhere the script can read it. We assign the full path of the file to the $InputFile variable at the top of the script.

The file has a header row and the data is organized as shown in our example here:

inputfile

As we can see, we name the cluster, the vSwitch, the name to apply to the port group (VLANname), and the VLAN number to assign to that port group (VLANid).
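
Reconstructed as text, a minimal input file looks like this (the cluster, vSwitch, and port group values are placeholders; substitute your own):

cluster,vSwitch,VLANname,VLANid
Cluster01,vSwitch0,VLAN-Web,100
Cluster01,vSwitch0,VLAN-App,200
Cluster02,vSwitch1,VLAN-DB,300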

In your environment you could have numerous clusters, and you can use the same source file to manage additions to all of them at the same time.

The Script

I did say it was a three-step process, but one of those steps has a few more activities to perform. That being said, it still isn’t too much to do thanks to PowerCLI.

We assume that you are running your PowerCLI shell with an active connection to vCenter and privileges to administer the environment. This requires access to read the input file and to add port groups to the vSphere vSwitches.
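
If you haven’t connected yet, that part is a single CmdLet (the server name here is a placeholder):

Connect-VIServer -Server vcenter.mydomain.local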

First we set up our input file:

$InputFile = "C:\Users\ewright\Documents\SCRIPTS-TEMP\MyVLANs.csv"

Next we import the file using the handy Import-CSV CmdLet:

$MyVLANFile = Import-CSV $InputFile

Now we just have to parse the contents. Because we have our header row, the columns are already named, and we just loop through each line using ForEach to read the info, build our PowerCLI command to add the port group, and execute it.

Because we have to read the cluster information on each line, there is another loop inside to add the port group to each host in the cluster.

ForEach ($VLAN in $MyVLANFile) {
$MyCluster = $VLAN.cluster
$MyvSwitch = $VLAN.vSwitch
$MyVLANname = $VLAN.VLANname
$MyVLANid = $VLAN.VLANid

We define variables for each column in the file, then query the cluster for its hosts and assign the list to the $MyVMHosts variable:

$MyVMHosts = Get-Cluster $MyCluster | Get-VMHost | sort Name | % {$_.Name}

Next we loop through each host, query the vSwitch and create a new port group with the New-VirtualPortGroup CmdLet:

ForEach ($VMHost in $MyVMHosts) {
Get-VirtualSwitch -VMHost $VMHost -Name $MyvSwitch | New-VirtualPortGroup -Name $MyVLANname -VLanId $MyVLANid
}
} # closes the outer ForEach loop opened above

Here is the whole script in one view, and here is the link to the file: http://www.discoposse.com/wp-content/uploads/AddVLANsToClusterHosts.ps1
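
Assembled from the snippets above, with comments added and the closing brace for the outer loop included, the complete script reads:

$InputFile = "C:\Users\ewright\Documents\SCRIPTS-TEMP\MyVLANs.csv"
$MyVLANFile = Import-CSV $InputFile

ForEach ($VLAN in $MyVLANFile) {
    # Pull the four columns from this row of the CSV
    $MyCluster = $VLAN.cluster
    $MyvSwitch = $VLAN.vSwitch
    $MyVLANname = $VLAN.VLANname
    $MyVLANid = $VLAN.VLANid

    # Collect the names of all hosts in this cluster
    $MyVMHosts = Get-Cluster $MyCluster | Get-VMHost | sort Name | % {$_.Name}

    # Create the port group on the named vSwitch of every host
    ForEach ($VMHost in $MyVMHosts) {
        Get-VirtualSwitch -VMHost $VMHost -Name $MyvSwitch | New-VirtualPortGroup -Name $MyVLANname -VLanId $MyVLANid
    }
}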



About the Errors

One of the things I haven’t done with this script is any error handling. Because I’m only adding port groups on occasion, it doesn’t need a lot of polish. If you re-run the same file against an existing set of port groups, it will throw an error because each port group already exists. If I get some extra time, I may add a little error housekeeping to clean things up.
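
As a sketch of what that housekeeping could look like, the inner loop could test for an existing port group first and only create it when missing. This is a hedged example: Get-VirtualPortGroup with -ErrorAction SilentlyContinue simply suppresses the lookup error when the port group doesn’t exist yet.

# Sketch only — skip port groups that already exist instead of throwing
ForEach ($VMHost in $MyVMHosts) {
    $vSwitchObj = Get-VirtualSwitch -VMHost $VMHost -Name $MyvSwitch
    $Existing = Get-VirtualPortGroup -VirtualSwitch $vSwitchObj -Name $MyVLANname -ErrorAction SilentlyContinue
    if (-not $Existing) {
        $vSwitchObj | New-VirtualPortGroup -Name $MyVLANname -VLanId $MyVLANid
    }
}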

Hope this saves you some time. Happy scripting!