Getting Started with Kubernetes using Minikube

Kubernetes has become the next big thing lately, even if the rise didn’t happen quite as overnight as some of the news trend sites make it seem. Kubernetes is the container orchestration system that came out of the Google environment. It’s the result of lessons learned developing Borg, and subsequently Omega.

History aside, Kubernetes is one of the fastest growing open source platforms in the world today, with a huge ecosystem wrapped around it for governance and development. It’s gaining momentum in every way. The one thing that makes it challenging for many of today’s virtualization admins and architects is that it is a new way to look at infrastructure. This brings up the classic questions: what problems does Kubernetes solve, how does it map to business requirements, and how do you work out all of the architectural needs of a Kubernetes deployment and administration plan?

Before you even go down that road, you should at least get an early look at how to run a small Kubernetes environment as quickly and simply as possible. The community has solved that for us with Minikube!


Minikube is a quick and easy way to kick the tires on using Kubernetes. It’s not designed for scalability or resiliency. It’s designed to let you try out the Kubernetes CLI and API tools on a small single-node lab. When you want to do your first couple of commands with Kubernetes, there really isn’t an easier way.

You need a couple of simple things. You’ll need VirtualBox running on OS X or Linux. There is an experimental Windows build, but I haven’t tested it. You can run this in a nested VM or on your native system; for Windows hosts, you will need a nested VM as the source machine to launch Minikube. These instructions were run on an OS X system.

Installing Kubectl

You also need the kubectl command line utility. It is easy to get by following the instructions that link to the latest build:
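If you want the quick version, the OS X download looked something like this at the time; the release version in the URL is an assumption, so swap in the latest build:

```shell
# Download the kubectl binary for OS X (darwin/amd64); the version in the
# URL is an example from this era, so substitute the latest release.
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
```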

First, clone the Minikube GitHub repo (https://github.com/kubernetes/minikube) by using the git clone command:
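The clone itself is a one-liner:

```shell
git clone https://github.com/kubernetes/minikube.git
```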


Change directory into the minikube folder and we are ready to get started. It’s as easy as running the minikube start command, which downloads the Minikube VM image on the first run and sets up the running machine:
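Those two steps, roughly:

```shell
cd minikube
# The first run downloads the Minikube VM image before booting it.
minikube start
```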


You can confirm the IP address of your Minikube system by running the minikube ip command, which returns the IP of your demo system:
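For example:

```shell
# Returns the VirtualBox-assigned address of the Minikube VM; the exact
# address will vary from system to system.
minikube ip
```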


Running the Kubernetes Dashboard with Minikube

There is a nifty web dashboard that works along with the Kubernetes environment. Using Minikube also means that you have the dashboard available just by running the minikube dashboard command:
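Just run:

```shell
# Starts the dashboard addon and opens it in your default browser.
minikube dashboard
```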


That also launches your default browser to the URL of the Kubernetes dashboard service which is running on port 30000 on the Minikube VM:


There is not much to see here just yet, but it is good to have both the web and CLI access ready as we launch a quick test pod.

If we look at the current configuration with the kubectl get nodes command, you can see that the only node running is our Minikube VM:
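The node check is simply:

```shell
# Lists the cluster nodes; a Minikube lab shows a single node
# named minikube in the Ready state.
kubectl get nodes
```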


Running kubectl get pods --all-namespaces will show us the running pods, including our management and dashboard pods:
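For example:

```shell
# Lists pods across every namespace; the kube-system namespace holds
# the addon and dashboard pods.
kubectl get pods --all-namespaces
```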


Your Hello World Pod

Running the sample app is super easy. These simple steps will do the following:

  • Deploy the Hello Minikube application
  • Expose the Hello Minikube port to your local machine
  • Stop your Minikube VM

Start out by running the deployment using the kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080 command. This pulls the container image from the URL you see, including the version number (1.4), and assigns port 8080 to it:
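Assuming the echoserver example image from the Minikube docs of this era, the full command is:

```shell
# Creates a deployment running the 1.4 echoserver image, listening
# on container port 8080.
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
```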


You can check the status using the kubectl get pods command to see your hello-minikube pod instance:
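That looks like:

```shell
# The pod may show ContainerCreating briefly before it reaches Running.
kubectl get pods
```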


Next, we will expose the application using the kubectl expose deployment hello-minikube --type=NodePort command:
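The expose step:

```shell
# NodePort publishes the service on a high-numbered port
# of the Minikube VM itself.
kubectl expose deployment hello-minikube --type=NodePort
```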


That creates a service which exposes external access via the port defined in the pod configuration. Because we defined it to run on port 8080, that is what shows as the exposed port if you run the kubectl get service command:
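Check it with:

```shell
kubectl get service
```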


Now we will check the details of the hello-minikube pod itself using the curl $(minikube service hello-minikube --url) command:
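That is:

```shell
# minikube service --url prints the http://<minikube-ip>:<node-port>
# address of the service, which curl then fetches.
curl $(minikube service hello-minikube --url)
```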


What’s great about this is that we’ve simply queried the API and passed the parameters from the service. This gives us the details we need to test the exposed port and confirm everything is working. In my particular example, the internal port 8080 was mapped to NodePort 31707.

Just open up your browser to the URL provided in your system:


Voila! You’re now all connected. Now, let’s look at the dashboard view. Refresh the browser where the Kubernetes dashboard is running and you’ll see lots of details suddenly available. We will explore this more in a future post, but feel free to click around as you wish in the meantime.


Stopping your Minikube System

Halting your Minikube safely is the next step. You can spend more time digging around in your Minikube by using the minikube command, which includes start and stop plus many more subcommands:


Let’s stop our Minikube now so that it’s preserved for more experimentation later, left in a clean state, and not using up resources in the background when you aren’t using it. This is done with the minikube stop command:
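Simply:

```shell
# Shuts down the Minikube VM while preserving its state for next time.
minikube stop
```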


That’s your first look at the Minikube Kubernetes lab.  Hopefully this gives you a chance to experiment with your local environment on your own.  Look for more posts here on how to get some Kubernetes goodness under your belt which will prepare you for a journey into the world of container orchestration.

Installing Hashicorp Vault on Ubuntu 16.04

HashiCorp is all kinds of awesome. That’s the real story here, but this post highlights just one portion of the overall HashiCorp ecosystem. We are going to install Vault on Ubuntu in order to create a platform for storing secrets. Secrets management is part of the foundation of much of the 12-factor app concept, and the only way to get to truly automated deployments without risking credential storage in code, or through manual input.

My example is running on a server in Digital Ocean. You can do the same with a simple 512 MB RAM/1 CPU instance which is the lowest cost alternative that they offer.

NOTE: This is a simple deployment in dev mode for a quick test. This is NOT a production-style deployment. Make sure to treat the server and any secrets you store inside it accordingly.

Installing from the Vault Binary

Many people are already cringing at the binary versus build from source decision. I’m going with the fast track just to get you up and running.

First, go to the Vault website to find the latest binary available. At the time of this writing, it is version 0.6.2, available from the downloads page at vaultproject.io.


We want the Linux 64-bit binary. Just right-click the link to get the source URL which we will download right into the Linux box using cURL:


Log into your Ubuntu system and download the zip file using the following command (replace the URL to match the latest build):

curl -O https://releases.hashicorp.com/vault/0.6.2/vault_0.6.2_linux_amd64.zip


We need to unzip the file. If you don’t already have unzip on the system, install it using sudo apt-get install unzip, or just apt-get install unzip if you’re running as root.

Unzip the file using the unzip command:
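Assuming the 0.6.2 Linux 64-bit build downloaded above, that is:

```shell
# Extracts the single vault binary into the current directory.
unzip vault_0.6.2_linux_amd64.zip
```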


Move the binary to a folder in your path by using the mv vault /usr/local/bin command, then run the vault command to confirm that it’s working:
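Those two steps:

```shell
sudo mv vault /usr/local/bin
# Running vault with no arguments prints the usage summary, which is
# enough to confirm the binary is on the path.
vault
```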


In order to use the same terminal session for running the server and also running the client, we are using Screen. If you aren’t already familiar with this nifty utility, I have a quick intro to it below.

Start up a Screen session named vault using the screen -S vault command. This launches a separate terminal session within the terminal window:
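That is:

```shell
# Starts a new named session so it is easy to identify later.
screen -S vault
```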


Launch the Vault server using the vault server -dev command:
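The dev-mode launch:

```shell
# -dev runs a single in-memory, automatically unsealed server over HTTP.
# Never use dev mode for real secrets.
vault server -dev
```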


You will see the console log as it launches:


Use the Ctrl-A C sequence to open up a new interactive shell. That’s the Control key with A simultaneously, then the C key. That brings you to a regular shell. You can use the Ctrl-A Ctrl-A sequence to switch back and forth between the two windows from now on.

Vault runs on an HTTP port instead of HTTPS when in dev mode. You’ll have to set up the Vault URL using an environment variable pointing at the localhost address: export VAULT_ADDR='http://127.0.0.1:8200'


Run the vault status command to make sure things are working as expected:
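Simply:

```shell
vault status
```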


We can see the cluster ID, cluster name, seal status, and other details in the results. We will write our test secret to the vault using the vault write secret/disco value=posse command as a quick sample. You can change the path after the secret/, but you have to make sure to prepend it with secret/ like this:
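The write looks like:

```shell
# The secret/ prefix routes the write to the generic secret backend.
vault write secret/disco value=posse
```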


Writing the secret is one thing. Let’s test the reverse process and read the secret. It is as easy as you imagine. Just use the vault read secret/disco command to see what value comes back:
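The read back:

```shell
# Returns the stored value along with its lease metadata.
vault read secret/disco
```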


Boom! We have our secret stored in the vault and were able to pull it back out. This is the most basic sort of test, but will set us up to do more experimentation in future posts and to set up a proper production implementation.

Use Ctrl-A Ctrl-A to get back to the console of your Vault server. Once there, you can press Ctrl-C to halt the server. You’ll see the Vault shutdown triggered in the console and then the sealing of the vault occurs. Even when it is shut down suddenly, the vault is safely sealed so that it can’t be compromised.


Try the same vault read secret/disco command and you will see the expected result:


As expected, the vault does not reply because it’s been shut down and is sealed. Like I mentioned, this was just a quick test and was not meant to do anything big. We just wanted to show how easy it is to get a basic implementation up.

This is just the start.

A Quick Intro to Screen on Linux

For fans of RDP on Windows, one of the great utilities you have available on Linux is called Screen. Using Screen lets you start a terminal session within your Linux host that persists in the background for you to re-enter at another time, even from another location.

Screen Basics

This is even available on a Mac, but for my example, I’m using an Ubuntu Linux server. Launch a Screen session and give it a name so that you will be able to easily identify it as you reconnect later on. This is done using the screen -S sessionname command:
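I’ll name mine vi-test to match the re-attach example later on:

```shell
screen -S vi-test
```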


Let’s launch vi to open a file. I’ll use the /etc/network/interfaces file in this example.


We can see the file is open now on our screen. This opens up a window that you know you’ll have to exit in order to return to a terminal session. It’s a great use of Screen because we can jump out to the system as needed:


Use the Ctrl-A C sequence. That means Ctrl-A followed by the C key. This will bring you back to an interactive shell. If we run a who command, you can see that there are two active sessions:
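For example:

```shell
# Each screen window shows up as its own pseudo-terminal (pts) entry.
who
```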


In order to switch back and forth between the different Screen windows, use the Ctrl-A Ctrl-A sequence. Sometimes you will find that you have to tap the key sequence more than once to get the sessions to switch. Try it a couple of times to see that you can go back and forth between your vi window and your interactive bash shell.

Pretty cool, right!

Now let’s test out the ultimate process. We are going to detach the Screen session. Use the screen -d command which will detach us altogether from Screen.  You will see the detach message when it works:
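The detach step:

```shell
# Detaches the current session; pressing Ctrl-A D inside the session
# does the same thing.
screen -d
```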


Exit from your terminal session altogether to prove out our concept. Re-enter a remote session into the same server and then we can re-attach to the screen session using the screen -R vi-test command:
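The re-attach:

```shell
# -R re-attaches to the named session if it exists.
screen -R vi-test
```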


This is why we choose meaningful names for our Screen sessions. It makes it much easier to know which sessions we are using as we re-attach later on. Now that you’ve re-attached, you will see a familiar looking session. Yay!


To close out each session, make sure to return to the shell and use the exit command. That ensures that you free up the unused session when you are done with it.

Screen sessions do not survive reboots. Just like a Windows RDP session, they are only usable for as long as the current boot session lasts.

You can use the man screen command to find out all about the Screen utility. This is hopefully a good start to enjoying Screen for folks who haven’t known about it in the past.

Migrating MySQL to AWS RDS Aurora

Let’s just say that you have a standalone MySQL instance that you want to put on something more resilient. You’ve got a few choices for how to do that, and Amazon Web Services RDS using Aurora DB is a great place to host it. Here are the steps that I took to migrate from a Digital Ocean one-click WordPress instance to running the data on Aurora DB.

Things to think about during this transition include:

  • Single AZ (Availability Zone) or Multi-AZ deployment
  • RDS instance size (price and performance will matter)

One of the great things about AWS is that you can scale dynamically to meet your needs.  There is always a tradeoff (price/performance/resiliency) in your architecture, but that’s a different discussion that we can have in another post.

Cost and performance of operating RDS

AWS makes it super easy to run infrastructure, but my shift from $10 a month on Digital Ocean to a Multi-AZ RDS instance favors performance over cost. It’s a tradeoff that I chose to make. Make sure that you are fully aware of the implications of your database hosting choice.

Prerequisites Needed:

  • AWS account
  • AWS RDS Cluster configured
  • Root credentials for source and target databases

Migrating MySQL to RDS Aurora DB using mysqldump

The full instructions as provided by AWS are in their documentation, but these are my quick notes on the transition to prove that it works as simply as AWS says.

First, find out your current RDS cluster endpoint address by going to your RDS console:


We can see that in this case, there is a writer endpoint and a second reader endpoint. We will use the writer endpoint to migrate the data:


I’m using the root account on both the source and target, so make sure you have the credentials for both instances to be able to do the same.

The export/import one-liner is as follows. Replace the CAPITALIZED sections with the appropriate information:

mysqldump -u root -pSOURCEPASSWORD --databases SOURCEDATABASE --single-transaction --compress --order-by-primary | mysql -u root -pTARGETPASSWORD --port=3306 --host=TARGETENDPOINT

Once you’ve created the database by populating it from the source data, you have to create a user and allow access to the database. Launch the MySQL client to attach to your target database:

mysql -u root -pTARGETPASSWORD --host=TARGETENDPOINT

Now you can create the user and give the appropriate admin privileges on the database needed. Replace the CAPITALIZED sections with the appropriate information:

grant all privileges on YOURDATABASE.* to 'YOURUSER'@'%' identified by 'YOURPASSWORD';

Once you’ve done that, simply point your application towards the new database using the configuration file. For a WordPress database connection, this is found in your wp-config.php file in the root folder of your site.
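In wp-config.php, the database settings to update look something like this; the capitalized values are placeholders for your own database name, user, password, and RDS writer endpoint:

```php
// Point WordPress at the Aurora writer endpoint instead of localhost.
define('DB_NAME', 'YOURDATABASE');
define('DB_USER', 'YOURUSER');
define('DB_PASSWORD', 'YOURPASSWORD');
define('DB_HOST', 'TARGETENDPOINT');
```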

I know it works, because you’re reading this from my site which was transferred from an all-in-one WordPress deployment in Digital Ocean and is now running on RDS inside AWS.