Using Terraform to Install DevStack on DigitalOcean

There are times when having a persistent OpenStack lab on shared infrastructure is handy. I’ve been revisiting DevStack a lot lately in order to help a few folks get their labs up and running. DevStack is the OpenStack project which lets you run non-production OpenStack using either a single-node or a multi-node configuration. Running on DigitalOcean means that I can have a lab that spins up quickly (about 40 minutes) and also gives me another handy use for Terraform.

NOTE: This uses an $80/month DigitalOcean droplet, so please keep that in mind as you experiment.

Requirements for this are:

  • A DigitalOcean account with an API token
  • An SSH key pair, with the public key uploaded to DigitalOcean and its fingerprint handy
  • Terraform and git installed on your local machine

Getting the Code

All of the scripts and configuration are on GitHub for free use and are also open for contributions and updates if you see anything that you’re keen to add. Remember that Terraform uses state files to manage your environment, so when you pull down the GitHub repo and launch your environment, it will create the terraform.tfstate and terraform.tfstate.backup files after you launch for the first time.

Grab the code using git clone https://github.com/discoposse/terraform-samples to bring it down locally:
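git clone https://github.com/discoposse/terraform-samples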

Change directory into the /terraform-samples/DigitalOcean/devstack folder where we will be working:
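cd terraform-samples/DigitalOcean/devstack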

Make sure you have the environment variables set up, including the DigitalOcean API token, the SSH key file locations, and your SSH fingerprint. These can be exported into your environment using a script or as one-off commands:
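For example, as one-off commands (the digitalocean_token variable name and the key paths here are my assumptions, so match them to the variables the repo actually declares):

export TF_VAR_digitalocean_token=YOUR-DO-API-TOKEN
export TF_VAR_digitalocean_pub_key=$HOME/.ssh/id_rsa.pub
export TF_VAR_digitalocean_private_key=$HOME/.ssh/id_rsa
export TF_VAR_digitalocean_ssh_fingerprint=YOUR-SSH-FINGERPRINT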

The process that is run by the code is to:

  • Pull in the DigitalOcean environment details (API token and SSH info)
  • Launch an 8 GB RAM droplet in the NYC2 region and attach your SSH fingerprint
  • Insert the DevStack build script (files/devstack-install.sh) as a cloud-init script
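In Terraform terms, the build boils down to roughly the following sketch (the resource name, image slug, and token variable here are my placeholders, so check the repo for the exact configuration):

provider "digitalocean" {
  token = "${var.digitalocean_token}"
}

resource "digitalocean_droplet" "devstack" {
  # 8 GB droplet in NYC2 with our SSH key attached
  image     = "ubuntu-16-04-x64"
  name      = "devstack"
  region    = "nyc2"
  size      = "8gb"
  ssh_keys  = ["${var.digitalocean_ssh_fingerprint}"]

  # DevStack build script injected as cloud-init user data
  user_data = "${file("files/devstack-install.sh")}"
}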

Those are the prerequisites. Now it’s time to get started!

Launching the DevStack Build on DigitalOcean with Terraform

It’s always good to use a health-check flow for your Terraform builds: validate first, then run the plan, and then launch. This ensures that you have a good environment configuration and that the process will work smoothly.

terraform validate

No news is good news. The code validated fine and we are ready to run the terraform plan command to see what will transpire when we launch the build:

We can see a single droplet will be created because we have nothing to start with. There are a number of parameters that are dynamic and will be populated when the environment launches. Time to go for it!

terraform apply

This is where you need a little bit of patience: the build takes approximately 45-60 minutes. We know the IP address of the environment because we requested it via the Terraform outputs, and you can confirm it at any time by running the terraform output command.
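The output definition looks something along these lines (the output and resource names here are illustrative):

output "droplet_ip_address" {
  # Public IPv4 address of the DevStack droplet
  value = "${digitalocean_droplet.devstack.ipv4_address}"
}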

Checking the DevStack Install Progress using the Cloud-Init Log

Let’s connect via SSH to our DigitalOcean droplet so we can monitor the build progress. We use the build script as a cloud-init script so that it launches as root during the deployment. This means you can keep track of the results using the /var/log/cloud-init.log and the /var/log/cloud-init-output.log files.
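For example, once you have the IP address from the Terraform outputs:

ssh root@YOUR-DROPLET-IP
tail -f /var/log/cloud-init-output.log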

Install completion is indicated by a set of results at the end of the cloud-init output log, including the Horizon dashboard URL.

Let’s confirm by browsing to the OpenStack Horizon dashboard URL indicated in the cloud-init output. There are two accounts created by the script, admin and demo, both of which have secret-do as the default password.

NOTE: Please change your OpenStack passwords right away! These are simple passwords that ship in plain text with the build script, which leaves you vulnerable to attack.

That gets us up and running. You are incurring charges as long as the environment is up, so when you’re ready to bring the environment down and destroy the droplet, it’s as easy as it was to launch it.

Destroying the DevStack DigitalOcean Build Using Terraform Destroy

In just two quick words and a confirmation we can remove all of the environment: terraform destroy

Just like that, we have installed an all-in-one OpenStack DevStack node on DigitalOcean and learned another nifty way to leverage HashiCorp Terraform to do it.




Adding SSH Access for DigitalOcean when Using Terraform

We’ve been looking at how to add a little Terraform to your IT infrastructure provisioning toolkit lately. DigitalOcean is also a super easy and inexpensive platform for testing out processes and doing things like repetitive builds with Terraform.

The first post where we saw how to do a simple Terraform environment build on DigitalOcean appeared at my ON:Technology blog hosted at Turbonomic. That gave us the initial steps for a quick droplet deployment.

We also talked here about how to access your DigitalOcean droplets via the command line using SSH keys, which is very important. Without SSH keys, you are relying on the root account with a password. DigitalOcean will create a complex root password for you when deploying your droplet, and you cannot find it out without resetting the root password and restarting the droplet. That is both insecure (reverting to password access instead of an SSH key pair) and disruptive, because you are rebooting the instance to do the password reset.

Now it’s time to merge these two things together!

Adding SSH key details to the Terraform DigitalOcean provider

We are going to add a few things to what we have already done in those two other posts. You will need the following:

  • Your DigitalOcean API token from the first post
  • Your SSH key fingerprint from the DigitalOcean console
  • The paths to your local SSH public and private key files

Getting your SSH fingerprint is a simple process. Start by going to the top right of your DigitalOcean console to the icon which has a dropdown for your account settings:

In the profile page, choose the Settings option from the menu on the left-hand panel:

The SSH fingerprint that you’ll need is in the security settings page. Keep this somewhere as safe as you would your SSH keys themselves because this is an important piece of security information.

Using the SSH Details in Environment Variables

Our settings are going to be stored using local environment variables, just like our DigitalOcean key was in the first example blog. Because we have a few other things to keep track of now, we will see the changes in the provider.tf file:
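Here is a sketch of how provider.tf shapes up (digitalocean_token is the name I am using for the API token variable; confirm the exact declarations in the repo):

variable "digitalocean_token" {}
variable "digitalocean_ssh_fingerprint" {}
variable "digitalocean_pub_key" {}
variable "digitalocean_private_key" {}

provider "digitalocean" {
  token = "${var.digitalocean_token}"
}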

Our environment variables follow the same format: TF_VAR_digitalocean_ssh_fingerprint holds the fingerprint you got from the security settings, while TF_VAR_digitalocean_pub_key and TF_VAR_digitalocean_private_key are the paths to your local SSH key files.

NOTE: The file locations are actually not needed for basic key configuration using Terraform. I just thought we should set them up now because they will come into use later in other blogs about using Terraform with DigitalOcean.

Use the export command to set up your variables:
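export TF_VAR_digitalocean_ssh_fingerprint=YOUR-SSH-FINGERPRINT
export TF_VAR_digitalocean_pub_key=$HOME/.ssh/id_rsa.pub
export TF_VAR_digitalocean_private_key=$HOME/.ssh/id_rsa

Our Terraform file also contains an extra config parameter now: the ssh_keys attribute on the droplet resource, which attaches the key by its fingerprint. A minimal sketch (the resource name, image, and size here are placeholders):

resource "digitalocean_droplet" "web" {
  image    = "ubuntu-16-04-x64"
  name     = "terraform-test"
  region   = "nyc2"
  size     = "512mb"

  # Attach the SSH key registered in DigitalOcean by its fingerprint
  ssh_keys = ["${var.digitalocean_ssh_fingerprint}"]
}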

These new parameters will read in all that we need to launch a new droplet, attach the appropriate SSH key by its fingerprint in DigitalOcean, and allow us to manage the infrastructure with Terraform.

Time for our routine, which should always be terraform validate to confirm our syntax is good, followed by terraform plan to test the environment:

Now we run our terraform apply to launch the droplet:

Now we have launched a droplet on DigitalOcean with Terraform. Use the SSH command line to connect to the droplet as the root account. Make sure you’ve done all the steps in the previous blog to set up your ssh-agent and then you should be all set:
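ssh root@YOUR-DROPLET-IP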

This is the next step in making more secure, repeatable, and composable infrastructure using Terraform on DigitalOcean. These same methods will also show up as we walk through future, more complex examples on DigitalOcean and other providers.

Let’s clean up after ourselves to make sure that we take advantage of the disposable and elastic nature of our public cloud infrastructure by very easily running the terraform destroy command to remove the droplet:

Hopefully this is helpful!




Accessing DigitalOcean Droplets via Command Line Using SSH Keys on OSX

As you get rolling with DigitalOcean and other VPS providers, one of the features that many folks see in the configuration is the option to use an SSH key to access your instance. The trick is that many newcomers to cloud instances aren’t totally comfortable with, or don’t fully understand, setting up an SSH key for password-less access to an instance.

Is it Secure Without a Password?

A resounding yes! In fact, it’s much more secure. You’ve uploaded the public side of your key to the instance already from within the cloud infrastructure, and you’re now using the private side to match up for access. By not using a password, you’re removing a guessable credential from the equation entirely. Brute-force attacks that succeed against passwords are far less effective against public/private key pairs.

It’s assumed that you’ve already uploaded your key. I won’t dig into all the different providers and ways to upload the keys. Make sure to do that for your individual provider to create and upload a key from your machine.
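If you still need to create a key pair locally on OSX, it’s a single command (the file path here is just the custom location I use in the example below, so adjust to taste):

ssh-keygen -t rsa -b 4096 -f ~/Documents/keys/id_rsa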

Adding your key to the SSH agent from the command line for OSX

When you launch your instance through the GUI, make sure that you have an SSH key selected that matches the private key you have on your local machine. I’ve nicknamed mine Eric-MacbookPro. For extra safety, I also keep copies of the keys in an offsite vault to ensure that I never lose access to the instances that are attached to that key.

When your DigitalOcean droplet is launched, the key is added as part of the init process. Once you have your IP address, you just have a quick process to run to set your key up. Because I use a key that is stored in a folder that isn’t the default, it has to be added to the ssh agent.

Run the eval `ssh-agent -s` command. NOTE: those are backticks, not apostrophes. That character is found on the same key as the tilde (~) symbol.

The second command you run is ssh-add [yourkeyname], where [yourkeyname] is the full filename and path of your private key. In my case, I have it stored in my Documents folder under a keys subfolder. This is my process:

ssh-add ~/Documents/keys/id_rsa
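You can confirm the key is loaded by listing the agent’s identities:

ssh-add -l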

Connecting to your DigitalOcean Droplet via SSH with your Private Key

Now we simply run ssh using the administrative account. For CentOS and Ubuntu on DigitalOcean, it is the root account. For CoreOS instances, you use the core account.

My Ubuntu instance is accessible now with ssh root@ip-address:

Now you’re in! Keep your keys safe, and keep your DigitalOcean droplets safe with those keys. Happy SSHing!




Platform9 Announces General Availability of Managed Kubernetes and Fission Project!

I was pleased to be able to cover the launch of Platform9, and there have been a lot of exciting changes and growth over the last couple of years since. What began as an OpenStack-as-a-Service focus has expanded to embrace both feature additions within the OpenStack offering as well as the addition of Docker and Kubernetes management.

Platform9 Announces General Availability of Managed Kubernetes

Kubernetes is gaining momentum in a way that has been unseen since Docker stormed onto the containerization scene. From what I’ve seen in the market and among customers and community members investigating container orchestration, Kubernetes has emerged as the de facto standard at this point.

So, what does Platform9 bring to the table with managed Kubernetes? This is the ideal merger: bringing the k8s platform to an organization without the pain and overhead that comes with the:

  • complexity of architecting the infrastructure
  • operational overhead and engineering for resiliency
  • operational processes to maintain and upgrade the k8s control plane
  • risk of embracing the k8s platform

In the same way that Platform9 has simplified and delivered OpenStack using a SaaS model, we are seeing the same opportunity arise for folks to put container orchestration into their IT portfolio. The folks who have been actively using the beta program for managed Kubernetes were a combination of traditional virtualization shops and more forward-leaning, container- and cloud-friendly organizations.

Having taken a few test drives with alternative products like the Amazon Elastic Container Service (ECS), I can easily see the attractiveness of Kubernetes, and even more so with a managed service approach. ECS gives the option for containerized workloads on your AWS environment, but it also means:

  • IAM integration that can be challenging (or poorly implemented)
  • proprietary nature of the container lifecycle on ECS
  • “lock-in” which is a result of the proprietary stack and workflows
  • a single destination for your infrastructure (build on ECS…for ECS)

Container- and cloud-friendly organizations are already embracing the value of automation, and are likely to have many more open technologies as part of their IT portfolio.

The full details on the new offering are available here at the Platform9 website.

Platform9 Announces the Fission Project

Serverless infrastructure is getting a lot of attention these days. Many push back on its importance, mostly out of fear that it will only be available as a cloud-based service, or that running the infrastructure yourself requires so much care and feeding that it offsets the benefits.

What if we just want to run code, and not have to worry about all of the tooling underneath the covers? With K8s already on board, there is now an excellent option to provide Functions-as-a-Service (FaaS), or what we know as Serverless infrastructure, using Fission for Kubernetes.

When you installed Docker the first time and typed docker run, you saw a little magic happening. We call that the “Aha!” moment, where you realize that this is something very cool and also only the beginning of what’s possible. Kubernetes takes a lot more care and feeding to get to that point, but once you’re there, you will realize how easy it is to consume as an abstraction layer above the infrastructure.

Now that Kubernetes is under the covers, you can also add Fission into the mix and bring another very interesting open source platform into your arsenal of tools.

The challenges being solved by Fission include:

  • moving to code-only deployments for development
  • Lambda-like functionality on-premises or wherever you have k8s running
  • Bring Your Own REST functions capability

Because this is open source, we are already seeing innovation leading up to the official launch. Python and Node.js are supported out of the box, and soon after the project was published to GitHub there was a Pull Request to add C# support for .NET Core. That, my friends, is the power of community!

The full details on the Fission project are available at the Platform9 site and we can look forward to lots of activity in this area in the coming months if my predictions are correct. I sure know that I’ll be digging into it myself!

Thanks to Sirish Raghuram for the briefing on the announcement, and congratulations to the Platform9 team on this very cool release.

Keep watching here as we take a deep-dive into the managed Kubernetes offering over the next couple of weeks.




Fixing URL-based Redirect Errors for AWS Route 53 and S3

Many of the typical DNS providers offer what we call URL-based redirects. Despite the name, the redirect doesn’t happen inside DNS itself: the DNS query resolves to a small web service, and that service sends back an HTTP 301 response pointing at the new URL in order to forward you there.

This is not something that is natively available in AWS Route 53, but I’ve written about the solution here in the past where we can use S3 buckets and the website hosting and redirection option. The flow that the client will see is:

  • request URL (e.g. gcondemand.io)
  • DNS responds with record from Route 53 with an S3 alias
  • S3 redirects to the new URL using an HTTP 301 response (see the sketch below)
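For reference, the S3 side of this is just the bucket’s static website configuration set to redirect all requests. One way to apply it from the command line (the target hostname is a placeholder; the bucket is named for the domain being redirected):

aws s3api put-bucket-website --bucket gcondemand.io --website-configuration '{"RedirectAllRequestsTo":{"HostName":"YOUR-TARGET-DOMAIN","Protocol":"https"}}'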

Domains No Longer Forwarding using Route 53 and S3

The only challenge comes when you set up a domain on Route 53 and are asked which DNS servers should be the authoritative NS records. New zones work automatically: Route 53 injects a set of AWS name servers for you, which are assigned dynamically within the AWS environment.

When a zone is transferred, you will be asked whether you want to keep your existing NS settings from the original registrar or specify your own, which means setting up some NS records that you assume are all good. Here’s the trick: your NS records must contain the same entry as the SOA (Start of Authority), or else bad things will happen in time.
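A quick way to spot trouble is to compare the SOA and NS answers side by side (using the domain from this example):

dig +short SOA gcondemand.io
dig +short NS gcondemand.io

The name server listed at the front of the SOA answer should also appear in the NS results; if it doesn’t, you have this mismatch.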

This is an example of a domain that was transferred over, given NS entries, and worked for quite a while before suddenly failing to resolve.

But, it worked for a while…

DNS is a magical thing (spoiler: it’s not really magic) and will work for quite a while as the internet continues to find your zone redirect on the previous name servers. At some point, though, those records will age out on other servers, and when downstream DNS servers go hunting for your records, they find a zone pointing to differing NS records.

Fixing your Simple Redirects using Route 53 and S3

Fire up your Route 53 console, choose your hosted zone, and then select the Go To Record Sets button to edit the zone.

You can see from our entries here that we have a mismatch between the SOA and the NS records:

I have four AWS DNS servers that I will use here:

ns-1881.awsdns-43.co.uk
ns-875.awsdns-45.net
ns-134.awsdns-16.com
ns-1457.awsdns-54.org

That will fix the first issue of the NS records and the SOA being different.

For the second part of the fix, go to the Registered Domains section in the Route 53 console, and select the Add or Edit Name Servers section under your zone.

Now, make sure to replace the four records with the matching set of four NS records you’ve used within the Hosted Zone section:

That will get you all sorted out. Don’t forget that DNS is cached both locally and on remote DNS servers, so it may take 5-15 minutes for your local cache to expire and up to a few hours for the remote entries to be corrected.

Hopefully that gets you all fixed up if you’ve had a similar issue!




EC2Instances.info – A Handy Interactive Guide to AWS EC2 Instance Sizing and Pricing

One of the most challenging aspects of the AWS ecosystem is navigating the pricing and sizing options when looking at EC2 instances. Luckily, there is a rather nifty tool out there, created by a community member and hosted on GitHub, which you can find at http://ec2instances.info.

The ec2instances.info site lets you dig around in all of the different configuration options, including (at the time of this blog):

  • EC2 Instance types by region
  • Reserved Instance options
  • RDS Instance types (also at http://rdsinstances.info)
  • Pricing for On-Demand licenses such as Windows and SQL
  • Hourly/Daily/Weekly/Monthly/Yearly pricing detail

You can also see and contribute to the code directly on GitHub by visiting the source repository.

This is a very helpful resource that you should bookmark for reference. The project is being updated by 53 contributors (at the time of this blog) and has well over 1,000 stars on GitHub.

You can see from the column selector that there is a lot of potential data to show.

Big thanks go out to Garret Heaton for putting this together and sharing it with the community. Nicely done!