In the first post (OpenStack Havana All-in-One Lab on VMware Workstation) we built our VM which is running OpenStack Havana from the standard repository on Ubuntu 12.04 LTS server using the handy dandy DiscoPosse Github script repo: https://github.com/discoposse/OpenStack-All-in-One-Havana
As promised, we will not just get it running; we will actually test it out and learn how to use OpenStack, taking some first steps with your OpenStack Havana All-in-One VM.
I like to start things here with the command line booting of instances so that you can see the back-end in action while it works. The next post will jump into the OpenStack Dashboard (Horizon) usage.
STEP 1 – Download PuTTY
We will use PuTTY for remote access to the VM because it makes the interaction easier: we don’t have to click in and out of the console in VMware Workstation, and we gain the ability to copy and paste. Trust me, you will see very quickly why we need that as we boot our system.
Go here and download the appropriate version of PuTTY for your Windows version: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
I use the standalone EXE and just put a shortcut on my Taskbar. This link will get you what you need:
Launch PuTTY and type in the IP information of your VM. In my case, this was 192.168.79.50 as shown here:
Use your credentials to log in. Once you are authenticated, enter a sudo session, which elevates your privileges. Cover your eyes if you’re a Unix admin: this is a no-no of administration, because every command you run in the session runs with elevated privileges. For newcomers, though, it is easier than remembering to type sudo in front of every command.
I equate running sudo for each command to Simon Says for admins. Trust me, you will forget it somewhere along the way, and that will create some challenges, so we take the shortcut. Tread carefully!
sudo su -
Now we can use some of our Nova command line tools (the python-novaclient, often just called the Nova client) to gather the information we need to boot an instance.
We need to know a few things to boot our instance:
- Instance flavor – not quite Baskin-Robbins; we only have 5 flavors in our default build
- SSH Key name – this was created in our first post, but we will confirm again here
- Image name – This is the boot image that we will use for our VM. We loaded a CirrOS Linux machine image in the install script
- Security Group – this will be default in our case because we only have one group created which has our access rules applied
- Instance name – we get to make this one up. I’ll use something totally creative like myinstance
Gathering the info for our Nova boot command
Here is the sequence we will go in:
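If the screenshots are hard to read, the commands themselves are short. Here is a sketch of the sequence using the standard nova client subcommands:

```shell
# List the five default instance flavors; note the ID column
nova flavor-list

# Confirm the SSH keypair created in the first post; note the Name column
nova keypair-list

# List the available boot images (the install script loaded CirrOS);
# note the ID column
nova image-list

# Show the security groups; "default" is the only one in our build
nova secgroup-list
```

Each of these only reads information, so you can run them as often as you like.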
You can see each command and its output. Some of the info will differ from the example, including the SSH Keypair Fingerprint and the Nova image ID field; these are dynamically generated for each installation.
For our Flavor we will use the ID field, for our SSH Keypair we use the Name field, and for the Nova image we use the ID field. This is the part where having PuTTY is much easier.
Our command (remember, the Nova ID is different for yours) will be as follows:
nova boot --flavor 1 --key_name mykey --image 3d9ab4a4-f172-4158-8fff-6ec6f136f3d1 --security_group default myinstance
To copy the ID for your Nova image, just highlight the text in your PuTTY window (which copies it), and then right-click when you are ready to paste it:
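If you would rather skip the copy and paste entirely, you can capture the image ID into a shell variable. This is just a sketch: the sample table below is hypothetical stand-in output for nova image-list (your ID will differ), and the awk pattern assumes the image is named cirros.

```shell
# Hypothetical "nova image-list" output, stored as sample text so the
# parsing can be shown end to end; in a live session you would pipe the
# real command instead:  IMAGE_ID=$(nova image-list | awk '/cirros/ {print $2}')
sample='+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 3d9ab4a4-f172-4158-8fff-6ec6f136f3d1 | cirros | ACTIVE |
+--------------------------------------+--------+--------+'

# $2 is the second whitespace-separated field on the matching row,
# which in the nova table layout is the ID column
IMAGE_ID=$(printf '%s\n' "$sample" | awk '/cirros/ {print $2}')
echo "$IMAGE_ID"
# → 3d9ab4a4-f172-4158-8fff-6ec6f136f3d1
```

The boot command then becomes nova boot --flavor 1 --key_name mykey --image "$IMAGE_ID" --security_group default myinstance, with no manual copying of the UUID.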
Your full command line should look something like this:
So, we are ready to press Enter and launch our very first Nova instance. When you do, there will be a bunch of information that displays about the instance that is launching:
That is a lot of information, and we will dive much more into what it all means in further posts. But first, let’s track the Nova boot process while it is working. You can do this by listing your Nova instances with the nova list command:
In the nova list output, we can see the name of our instance which we chose from our nova boot command. We also see the Status and Task State fields which will change as the instance is spawned and then booted. Lastly, we can see the Networks field which shows us what the IP address is of our instance.
Keep running nova list until the Status changes to ACTIVE and the Power State shows as Running:
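If you get tired of re-running nova list by hand, you can poll in a loop. The sketch below stubs out the status check with a counter (pretending the instance goes ACTIVE on the third poll) just to show the loop shape; in a real session you would set status from the real client, e.g. status=$(nova list | awk '/myinstance/ {print $6}'), and add a sleep between polls.

```shell
polls=0
status=BUILD
while [ "$status" != "ACTIVE" ]; do
    polls=$((polls + 1))
    # Stand-in for querying the real instance status; here we simply
    # pretend the instance becomes ACTIVE on the third poll
    if [ "$polls" -ge 3 ]; then status=ACTIVE; else status=BUILD; fi
    echo "poll $polls: $status"
done
```

The loop exits as soon as the status reads ACTIVE, which is your cue to try connecting.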
Congratulations! You have now launched your first Nova instance in your very own OpenStack private cloud 🙂
Connecting to your Nova Instance
To confirm that our instance is actually running, we will SSH from our host VM into the new machine. This is a CirrOS boot image, so the default username is cirros with no password. Our command line will be as follows (make sure to use the vmnet IP address that shows up for your instance):
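The command itself is a one-liner; 10.10.100.2 is the address my instance received, so substitute the one from your own nova list output:

```shell
# CirrOS accepts the "cirros" user; this image needs no password
ssh cirros@10.10.100.2
```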
You will be prompted to accept the SSH fingerprint. Just type yes and press Enter and it will be added to your SSH host list and the remote login will continue.
Notice that you are now at a $ prompt instead of the root@aio-havana prompt. This is the first indicator of success, but to be sure, let’s run a few commands:
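What you run here is up to you; a couple of quick, harmless sanity checks (standard Linux commands, nothing specific to this build):

```shell
hostname    # should report the instance's own hostname, not aio-havana
uname -a    # kernel and architecture of the nested CirrOS guest
```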
Great googly moogly, it worked! Now we can run one more test to check the IP connection from the nested instance out to the public internet. This is as easy as pinging a well-known public address such as 8.8.8.8 (Google’s public DNS). Remember that this is Linux, so the ping command will run until we stop it; I like to add the -c 4 option, which sets the ICMP packet count to 4:

ping 8.8.8.8 -c 4
It worked! Now that we know we can successfully launch our instance, we should clean up after ourselves and delete the instance to free up resources. This is easily done by exiting out of the SSH session to our nested guest and then using the nova delete command:
nova delete myinstance
Once the nova delete myinstance command is issued, it only takes a few seconds for our running instance to be terminated and removed altogether. This is confirmed by our nova list command which shows no results because we have no remaining running instances.
SSH Fingerprint conflicts
If you perform this process more than once, you may receive an error in your SSH client because the fingerprint of the instance differs from one that is cached for that IP address. You will get the following error if this happens:
Don’t worry, this isn’t the NSA snooping into your private cloud. They wouldn’t give you a warning 😉
This is a common issue, and the error gives you the fix right on the screen. The command to clean it up is:
ssh-keygen -f "/root/.ssh/known_hosts" -R 10.10.100.2
Once you have cleaned up the SSH known_hosts entry, you can re-issue your SSH session request and you will be prompted to accept the new fingerprint for that host.
What’s Next for us?
In the next post, we will log in to the OpenStack Dashboard (Horizon) environment and see how to use the nifty OpenStack GUI.
Feel free to comment on how this is going, and I hope that you’re finding these posts helpful!