One Vault to Secure Them All: HashiCorp Releases Vault Enterprise 0.7

There are a few key reasons that you need to look at Vault by HashiCorp. If you’re in the business of IT on the Operations or the Development side of the aisle, you should already be looking at the entire HashiCorp ecosystem of tools. Vault is probably the one that has my eye the most lately, other than Terraform. Here is why I think it’s important:

  • Secret management is difficult
  • People are not good at secret management
  • Did I mention that secret management was difficult?

There are deeper technical reasons around handling secrets with automated deployments and introducing full multi-environment CI/CD, but the reality for many of the folks who read my blog and who I speak to in the community is that we are still very early in the evolution from traditional application management to next-generation application management. What I mean is that we are doing some things to enable better flow of applications and better management of infrastructure, but with some lingering bad practices.

Let’s get to the good stuff about HashiCorp Vault that we are talking about today.

Announcing HashiCorp Vault Enterprise version 0.7!

This is a very big deal as far as releases go, for a few reasons:

  • Secure multi-datacenter replication
  • Expanded granularity with Access Control policies
  • Enhanced UI to manage existing and new Vault capabilities

Many of the development and operations teams are struggling to find the right platform for secret management. Each public cloud provider has their own self-contained secret management tool. Many of the other platform providers such as Docker Datacenter also have their own version. The challenge with a solution that is vendor or platform specific is that you’re locked into the ecosystem.

Vault Enterprise as Your All-Around Secret Management Tool

The reason that I’ve been digging into lots of the HashiCorp tools over the last few years is that they provide a really important abstraction from the underlying vendor platforms, which are integrated through the open source providers. As I’ve moved up the stack from Vagrant for local builds and deployment to Terraform for IaaS and cloud provider builds, secret management has leapt to the fore as the important next step.

Vault has both the traditional open source version and also the Vault Enterprise offering. Enterprise gives you support, and a few nifty additions that the regular Vault product doesn’t have. This update includes the very easy-to-use UI:

Under the replication area in the UI we can see where our replicas are enabled and the status of each of them. The replication can be configured right in the UI by administrators, which eases the process quite a bit:

Replication across environments ensures that you have the resiliency of a distributed environment, and that you can keep the secret backends close to where they are being consumed by your applications and infrastructure. This is a big win over the standalone version, which required opening up VPNs or serving over HTTPS, which is how many have been doing it in the past. Or, worse, they were running multiple vaults in order to host one on each cloud or on-prem environment.

We have response wrapping very easily accessible in the UI:

As mentioned above, we also have the more granular policy management in Vault Enterprise 0.7 as you can see here:

If you want to get some more info on what HashiCorp is all about, I highly suggest that you have a listen to the recent podcasts I published over at the GC On-Demand site, including the first with founder Mitchell Hashimoto, and the second with co-founder Armon Dadgar. Both episodes will open up a lot of detail on what’s happening at HashiCorp, in the industry in general, and hopefully get you excited to kick the tires on some of these cool tools!

Congratulations to the HashiCorp team and community on the release of Vault Enterprise 0.7 today!  You can read up on the full press release of the Vault Enterprise update here at the HashiCorp website.




Customizing the Turbonomic HTML5 Login Screen Background

DISCLAIMER:  This is currently unsupported as any changes made to your Turbonomic login page may be removed with subsequent Turbonomic application updates.  This is meant to be a little bit of fun and can be easily repeated and reversed in the case of any updates or issues. Sometimes you want to spice up your web view for your application platforms.

This inspiration came from William Lam as a fun little add-on for when you have a chance to update your login screen imagery. With the new HTML5 UI in Turbonomic it is as easy as one simple line of code to add a nice background to your login screen. Here is the before:

Since I’m a bit of a space fanatic, I want to use a little star-inspired look:

To add your own custom flavor, you simply need to remotely attach to your TAP instance over SSH, browse to the /srv/www/htdocs/com.vmturbo.UX/app directory, and then modify the BODY tag in the index.html file.

Scroll down to the very bottom of the file because it’s the last few lines you need to access. Here is the before view:

Here is the updated code to use in your BODY tag:

<body style="background-image: url(BACKGROUNDIMAGEFILENAME);background-size: contain;background-repeat: no-repeat;background-color: #000000">

This is the code that I’ve used for a web-hosted image:

<body style="background-image: url(https://static.pexels.com/photos/107958/pexels-photo-107958.jpeg);background-size: contain;background-repeat: no-repeat;background-color: #000000">

Note the background-color property as well. That covers the overflow on the screen when your image doesn’t fill the full screen height and width. I’ve set the background to be black for the image I’ve chosen. You can also upload your own custom image to your Turbonomic instance into the same folder, but as warned above, you may find that this update has to happen manually as you do future application updates to the Turbonomic environment.

For custom local images, the code would be using a local directory reference.  For ease of use, upload the image file right to the same folder and you can simply use the filename in the CSS code. The real fun is when you get to share your result.
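If you go the local-image route, a minimal sketch of the upload might look like this over SSH; the image name stars.jpg and the host address are hypothetical, so adjust them for your own environment:

# copy a local image (hypothetical file name: stars.jpg) into the Turbonomic UI folder
scp stars.jpg root@your-turbonomic-host:/srv/www/htdocs/com.vmturbo.UX/app/

Once the file is sitting in that folder, url(stars.jpg) in the BODY tag’s background-image is all the CSS needs.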

I’d love to see your own version of the custom login screen. Drop a comment below with your example and show how you liven up your Turbonomic instance with a little personalized view.




Git Remove Multiple Deleted Files

When working in the Git version control system, you may find yourself handling large numbers of files in a single commit. The commit part is the easy part. Adding files is simple: the git add * command stages all of the files that appear as new since the most recent commit.

Running a git status shows a few files to be added. We add them all using a git add * command, and see that the files are added and ready for a commit:

git-status-add
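As a quick sketch, that add-and-commit flow looks like this (the commit message is just an example):

git status                      # shows the new files as untracked
git add *                       # stages everything the wildcard matches
git status                      # the same files now show as staged
git commit -m "Add new files"   # example commit message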

When you remove a large number of files, you would think that the same process would work for removing from the previous state of the repository. Removing a single file is done with the git rm filename command. You can use wildcards, but that’s going to do a lot more than you would hope.

WARNING: Seriously, don’t try this on a repository that you care about. If you run git rm * the same way you ran git add *, you may find that nothing is removed from the local copy of your repo, or, in worse situations, that far more is removed than you intended. A new commit will leave you with a rather unfortunate situation.

How to Safely Remove Deleted Local Files From a Git Repo

There is a simple one-liner that will help you safely remove your local deletions from your repository. It uses the git ls-files command with the --deleted -z parameters, piping the output through xargs into git rm so each deleted file’s full path is passed along.

The Magical One-Liner

This is the full one-liner:

git ls-files --deleted -z | xargs -0 git rm

This is the result:

git-rm-xargs

Using that command is much safer. It removes all of the files marked as deleted, so your next commit is cleaned of your deletions without anything being unexpectedly removed by a slip of a wildcard.
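After the one-liner runs, a quick check and commit wraps things up; something like:

git status                               # the deletions now show as staged removals
git commit -m "Remove deleted files"     # example commit message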




Getting Started with Kubernetes using Minikube

Kubernetes has become the next big thing lately. It didn’t happen overnight, even though it may seem like it, depending on which news trend site you follow. Kubernetes is the container orchestration system that came out of Google; it’s a result of the lessons learned from developing Borg, and subsequently Omega.

History aside, Kubernetes is one of the fastest growing open source platforms in the world today. There is a huge ecosystem wrapped around it on governance and development. It’s gaining momentum in every way. The only thing that makes it challenging for many of today’s virtualization admins and architects is that it is a new way to look at infrastructure. This brings up the classic questions about what problems Kubernetes solves, how to map it to business requirements, and then how to work out all of the architectural needs of a Kubernetes deployment and administration plan.

Before you even go down that road, you should at least get some early views into how to run a small Kubernetes environment as quickly and simply as possible. The community has solved that for us with Minikube!

Getting Started with Kubernetes Using Minikube

Minikube is a quick and easy way to kick the tires on using Kubernetes. It’s not designed for scalability or resiliency. It’s designed to let you try out the Kubernetes CLI and API tools on a small single-node lab. When you want to do your first couple of commands with Kubernetes, there really isn’t an easier way.

You need a couple of simple things. You’ll need VirtualBox running on OS X or Linux. There is an experimental Windows build, but I haven’t tested it out. This environment can run in a nested VM, or on your native system. For Windows hosts, you obviously need to use a nested VM as the source machine to launch Minikube. These instructions are on an OS X system.

Installing Kubectl

You also need the kubectl command line utility. It’s easy to get by following the instructions, which link to the latest build:

http://kubernetes.io/docs/getting-started-guides/minikube/#install-kubectl
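On OS X, the install boils down to something like the following; the download URL pattern and the move into /usr/local/bin are assumptions based on the documented approach of the time, so treat the linked page above as authoritative:

# fetch the latest stable kubectl build for OS X (check the linked page for the current method)
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client      # confirm the client is installed and on the PATH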

First, clone the GitHub repo of Minikube which you can find here by using the git clone https://github.com/kubernetes/minikube.git command:

01-minikube-clone

Change directory into the minikube folder and we are ready to get started. It’s as easy as running the minikube start command, which downloads the Minikube VM image on the first run and sets up the running machine:

02-minikube-start
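In command form, the clone-and-start sequence is simply:

git clone https://github.com/kubernetes/minikube.git
cd minikube
minikube start        # first run downloads the Minikube VM image, then boots it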

You can confirm the IP address of your Minikube system by running the minikube ip command, which returns the IP of your demo system:

03-minikube-ip

Running the Kubernetes Dashboard with Minikube

There is a nifty web dashboard that works along with the Kubernetes environment. Using Minikube also means that you have the dashboard available just by running the minikube dashboard command:

04-dashboard-cli

That also launches your default browser to the URL of the Kubernetes dashboard service which is running on port 30000 on the Minikube VM:

05-dashboard-web

There is not much to see here just yet, but it is good to have both the web and CLI access ready as we launch a quick test pod.

If we look at the current configuration with the kubectl get nodes command, you can see that the only node running is our Minikube VM:

06-getnodes

Running the kubectl get pods --all-namespaces command will show us the running pods, including our management and dashboard pods:

07-allpods
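For reference, the two status checks together:

kubectl get nodes                       # the single Minikube VM shows up as the only node
kubectl get pods --all-namespaces       # includes the management and dashboard pods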

Your Hello World Pod

Running the sample app is super easy. Here are the simple steps that will do the following:

  • Deploy the Hello Minikube application
  • Expose the Hello Minikube port to your local machine
  • Stop your Minikube VM

Start out by running the deployment using the kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080 command. This pulls the container image from the URL you see, including the version tag (1.4), and assigns port 8080 to it:

07-hello-minikube
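Here is that deployment command on its own for easy copying:

kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080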

You can check the status using the kubectl get pods command to see your hello-minikube pod instance:

09-getpods

Next, we will expose the application using the kubectl expose deployment hello-minikube --type=NodePort command:

10-expose

That creates a service which exposes external access via the port that is defined in the pod configuration. Because we defined it to run on port 8080, that is what will show as the exposed port if you run the kubectl get service command:

11-get-service
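The expose step and the check that follows, as plain commands:

kubectl expose deployment hello-minikube --type=NodePort   # NodePort publishes the pod's port on the VM
kubectl get service                                        # lists hello-minikube with its assigned node port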

Now we will check the details of the hello-minikube pod itself using the curl $(minikube service hello-minikube --url) command:

12-pod-details

What’s great about this is that we’ve simply queried the API and passed the parameters from the service. This gives us the details we need to test the exposed port and confirm everything is working. The highlighted red area in my particular example shows that we are mapped to port 31707 from the internal port 8080.
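If you want to see the two pieces separately, minikube service --url just prints the reachable address, and curl then hits it:

minikube service hello-minikube --url          # prints the http://<minikube-ip>:<node-port> address
curl $(minikube service hello-minikube --url)  # requests that URL and returns the echo server response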

Just open up your browser to the URL provided in your system:

13-hello-http

Voila! You’re now all connected. Now, let’s look at the dashboard view. Refresh your browser where the Kubernetes dashboard is running and you’ll see lots of details suddenly available. We will explore this more in the future, but feel free to click around as you wish in the meantime.

dashboard-view

Stopping your Minikube System

Halting your Minikube safely is the next step. You can spend more time digging around with your Minikube by using the minikube command, which includes start and stop plus many more subcommands:

14-minikube-command

Let’s stop our Minikube now so that we can make sure that it’s preserved for more experimentation later. It also makes sure that it is in a clean state and that it isn’t using up resources in the background when you aren’t using it. This is done with the minikube stop command:

15-stop

That’s your first look at the Minikube Kubernetes lab.  Hopefully this gives you a chance to experiment with your local environment on your own.  Look for more posts here on how to get some Kubernetes goodness under your belt which will prepare you for a journey into the world of container orchestration.




Installing HashiCorp Vault on Ubuntu 16.04

HashiCorp is all kinds of awesome. That’s the real story here, but this post is meant to highlight just one portion of the overall HashiCorp ecosystem. We are going to install Vault on Ubuntu in order to create a platform for storing secrets. This is part of the foundation of much of the 12-factor app concept, and the only way to truly get to a point of fully automated deployments without risking credential storage in code, or through manual input.

My example is running on a server in Digital Ocean. You can do the same with a simple 512 MB RAM/1 CPU instance which is the lowest cost alternative that they offer.

NOTE: This is a simple deployment in dev mode for a quick test. This is NOT a production-style deployment. Make sure to treat the server and any secrets you store inside it accordingly.

Installing from the Vault Binary

Many people are already cringing at the binary versus build from source decision. I’m going with the fast track just to get you up and running.

First you go to the Vault website to find the latest binary available. At the time of this writing, it is version 0.6.2 and is available by going to the download site here.

01-vault-downloadsite

We want the Linux 64-bit binary. Just right-click the link to get the source URL which we will download right into the Linux box using cURL:

02-get-address

Log into your Ubuntu system and download the zip file using the following command (replace the URL to match the latest build):

curl -O https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_linux_amd64.zip

03-download

We need to unzip the file. If you don’t have unzip already on the system, you can install it by using sudo apt-get install unzip (or just apt-get install unzip if you’re running as root).

Unzip the file using the unzip vault_0.6.2_linux_amd64.zip command:

04-unzip

Move the binary to a folder in the path by using the mv vault /usr/local/bin command, and then run the vault command to see that it’s working OK:

05-vault-command
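Putting the unpack-and-install steps together (assuming the 0.6.2 build downloaded above):

unzip vault_0.6.2_linux_amd64.zip    # leaves a single binary named vault
sudo mv vault /usr/local/bin/        # put it somewhere on the PATH
vault                                # prints the command help if the install worked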

In order to use the same terminal session for running the server and also running the client, we are using Screen.  If you aren’t already familiar with this nifty utility, I have a quick blog on it here.

Start up a Screen session named vault using the screen -S vault command. This launches a separate terminal session within the terminal window:

06-screen-vault

Launch the Vault server using the vault server -dev command:

07-start-vault
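The two commands back to back look like this; in dev mode the server prints its unseal key and root token to the console, so keep that window handy:

screen -S vault        # named Screen session to hold the server process
vault server -dev      # dev-mode server; listens on 127.0.0.1:8200 over plain HTTP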

You will see the console log as it launches:

08-screenserver-running

Use the Ctrl-A C sequence to open up a new interactive shell. That’s the Control key with A simultaneously, then the C key. That brings you to a regular shell. You can use the Ctrl-A key to switch back and forth as needed from now on.

Vault runs on an HTTP port instead of HTTPS when in dev mode. You’ll have to set the Vault address using an environment variable pointing at the localhost address: export VAULT_ADDR='http://127.0.0.1:8200'

09-export-address

Run the vault status command to make sure things are working as expected:

10-vault-status

We can see the cluster ID, cluster name, seal status, and other details in the results. We will write our test secret to the vault using the vault write secret/disco value=posse command as a quick sample. You can change the path after secret/, but you must keep the secret/ prefix, like this:

11-write-secret
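The path after secret/ is up to you; here is the example from this post plus a second, purely hypothetical one for illustration:

vault write secret/disco value=posse            # the example used in this post
vault write secret/myapp/db password=s3cr3t     # hypothetical: any path works as long as it starts with secret/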

Writing the secret is one thing. Let’s test the reverse process and read the secret. It is as easy as you imagine. Just use the vault read secret/disco command to see what value comes back:

12-read-secret

Boom! We have our secret stored in the vault and were able to pull it back out. This is the most basic sort of test, but will set us up to do more experimentation in future posts and to set up a proper production implementation.

Use Ctrl-A to switch back to the console of your Vault server. Once there, press Ctrl-C to halt the server. You’ll see the Vault shutdown triggered in the console, and then the sealing of the vault occurs. Even when it is shut down suddenly, the vault is safely sealed so that it can’t be compromised.

13-stop-server

Try the same command of vault read secret/disco and you will see the expected result:

14-stopped-server

As expected, the vault does not reply because it’s been shut down and is sealed. Like I mentioned, this was just a quick test and was not meant to do anything big. We just wanted to show how easy it is to get a basic implementation up.

This is just the start.




A Quick Intro to Screen on Linux

For fans of RDP on Windows, one of the great utilities that you have available on Linux is one called Screen. Using Screen lets you start up a terminal session within your Linux host that can be left persistently in the background for you to re-enter at another time from another location.

Screen Basics

This is even available on a Mac, but for my example, I’m using an Ubuntu Linux server. Launch a Screen session and give it a name so that you will be able to easily identify it as you reconnect later on. This is done using the screen -S session-name command:

01-screen-launch

Let’s launch vi to open a file. I’ll use my /etc/network/interfaces file in this example.

02-vi-file

We can see the file is now open in our Screen window. Normally you would have to exit the editor to return to a terminal session, which is what makes this a great use of Screen: we can jump out to the system as needed:

03-vi-editor

Use the Ctrl-A C sequence. That means Ctrl-A followed by the C key. This will bring you to another interactive shell. If we run the who command, you can see that there are two active sessions:

04-who

In order to switch back and forth between the different Screen windows, use the Ctrl-A Ctrl-A sequence; tapping the combination twice toggles between the two most recent windows, so sometimes you will find that you have to tap it more than once to land where you want. Try it a couple of times to see that you can go back and forth from your vi window to your interactive bash shell.

Pretty cool, right!

Now let’s test out the ultimate process. We are going to detach the Screen session. Use the screen -d command, which will detach us from Screen altogether (the Ctrl-A D key sequence does the same thing). You will see the detach message when it works:

05-detached

Exit from your terminal session altogether to prove out our concept. Re-enter a remote session into the same server and then we can re-attach to the screen session using the screen -R vi-test command:

06-reattach

This is why we choose meaningful names for our Screen sessions. It makes it much easier to know which sessions we are using as we re-attach later on. Now that we’ve re-attached, you will see a familiar looking session. Yay!

03-vi-editor

To close out each session, make sure to return to the shell and use the exit command. That ensures that you free up the unused session when you are done with it.
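To recap the whole lifecycle in one place (screen -ls is an extra I’m adding here; it simply lists your sessions):

screen -S vi-test     # start a new named session
# ...do your work, then detach with screen -d (or the Ctrl-A D key sequence)
screen -ls            # list sessions and see which are detached
screen -R vi-test     # re-attach to the named session
exit                  # from inside the session, exit closes it for good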

Screen sessions do not survive reboots. Just like a Windows RDP session, they will be usable as long as the system is in the current boot session.

You can use the man screen command to find out all about the Screen utility. This is hopefully a good start to enjoying Screen for folks who haven’t known about it in the past.