In the first post (OpenStack Havana All-in-One Lab on VMware Workstation), we built our VM running OpenStack Havana from the standard repository on Ubuntu 12.04 LTS Server, using the handy dandy DiscoPosse GitHub script repo: https://github.com/discoposse/OpenStack-All-in-One-Havana
As promised, we won't just get it running; we will actually test it out and take some first steps in using your OpenStack Havana All-in-One VM.
I like to start with command-line booting of instances so that you can see the back end in action while it works. The next post will jump into using the OpenStack Dashboard (Horizon).
STEP 1 – Download PuTTY
We will use PuTTY for remote access to the VM because it makes the interaction a bit easier: we don't have to click in and out of the console in VMware Workstation, and we gain the ability to copy and paste. Trust me, you will see very quickly why we need that as we boot our system.
Go here and download the appropriate version of PuTTY for your version of Windows: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
I use the standalone EXE and just put a shortcut on my Taskbar.
Launch PuTTY and type in the IP address of your VM. In my case, this was 192.168.79.50, as shown here:
Use your credentials to log in. Once you are authenticated, we will enter a sudo session, which elevates your privileges. Cover your eyes if you're a Unix admin: this is the no-no of administration because it means that every command you run in this session runs with elevated privileges. For newcomers, though, it is easier than remembering to type sudo in front of every command.
I equate running sudo for each command to Simon Says for admins. Trust me, you will forget somewhere along the way, and that will create some challenges, so we take the shortcut. Tread carefully!
sudo su -
Now we can use some of our Nova command-line tools (often referred to as the python Nova client) to gather the information we need to boot an instance.
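One quick note before we run anything: the nova client reads its Keystone credentials from environment variables, which the install script should have set up for us. If any nova command comes back with an authentication error, source your credentials file first. The file name below is an assumption on my part; use whichever rc file your build created:
# Load the OpenStack credentials into this shell session (path is an assumption)
source ~/openrc
# Quick sanity check: prints user and token details if authentication works
nova credentials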
We need to know a few things to boot our instance:
- Instance flavor – not quite Baskin Robbins; we only have 5 flavors in our default build
- SSH Key name – this was created in our first post, but we will confirm again here
- Image name – This is the boot image that we will use for our VM. We loaded a CirrOS Linux machine image in the install script
- Security Group – this will be default in our case because we only have one group created which has our access rules applied
- Instance name – we get to make this one up. I’ll use something totally creative like myinstance
Gathering the info for our Nova boot command
Here is the sequence of commands we will run:
nova flavor-list
nova keypair-list
nova image-list
You can see each command and its output. Some of the info will be different for you than in the example, including the SSH keypair Fingerprint and the Nova image ID field; these are dynamically generated for each installation.
For our flavor, we will use the ID field; for our SSH keypair, we use the Name field; and for the Nova image, we use the ID field. This is the part where having PuTTY is much easier.
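As a rough guide, the default flavor table looks something like this (a sketch of the typical Havana defaults with a few columns trimmed for width; trust your own nova flavor-list output over mine):
+----+-----------+-----------+------+-------+
| ID | Name      | Memory_MB | Disk | VCPUs |
+----+-----------+-----------+------+-------+
| 1  | m1.tiny   | 512       | 1    | 1     |
| 2  | m1.small  | 2048      | 20   | 1     |
| 3  | m1.medium | 4096      | 40   | 2     |
| 4  | m1.large  | 8192      | 80   | 4     |
| 5  | m1.xlarge | 16384     | 160  | 8     |
+----+-----------+-----------+------+-------+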
Our command (remember, the image ID will be different for you) will be as follows:
nova boot --flavor 1 --key_name mykey --image 3d9ab4a4-f172-4158-8fff-6ec6f136f3d1 --security_group default myinstance
To copy the ID for your Nova image, just highlight the text in your PuTTY window (highlighting copies it), and then right-click where you want to type it, which pastes it:
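If you would rather skip the copy and paste entirely, you can also capture the image ID into a shell variable. This is just a sketch; it assumes your image name contains cirros, so check your nova image-list output first:
# Pull the image ID out of the nova image-list table (assumes the name contains "cirros")
IMAGE_ID=$(nova image-list | awk '/cirros/ {print $2; exit}')
# Sanity check before booting
echo $IMAGE_ID
nova boot --flavor 1 --key_name mykey --image "$IMAGE_ID" --security_group default myinstance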
Your full command line should look something like this:
So, we are ready to press Enter and launch our very first Nova instance. When you do, a bunch of information about the launching instance will be displayed:
That is a lot of information, and we will dive much more into what it all means in further posts. But first, let’s track the Nova boot process while it is working. You can do this by listing your Nova instances with the nova list command:
nova list
In the nova list output, we can see the name of our instance, which we chose in our nova boot command. We also see the Status and Task State fields, which will change as the instance is spawned and then booted. Lastly, we can see the Networks field, which shows us the IP address of our instance.
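By the way, if you want the full property dump for a single instance again later (similar to what we saw when we ran nova boot), you can ask for it at any time:
nova show myinstance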
Keep running nova list until you see the status changes to Active and the Power State shows as Running:
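Rather than re-typing nova list over and over, you can let the watch utility re-run it for you (press Ctrl+C to stop once the instance goes Active):
watch -n 2 nova list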
Congratulations! You have now launched your first Nova instance in your very own OpenStack private cloud 🙂
Connecting to your Nova Instance
To confirm that our instance is actually running, we will use SSH from our host VM to remote into the new machine. This is a CirrOS boot image, so the default username is cirros, and because we injected our SSH key at boot, no password is needed. Our command line will be as follows (make sure to use the vmnet IP address that shows up for your instance):
ssh cirros@10.10.100.2
You will be prompted to accept the SSH fingerprint. Just type yes and press Enter and it will be added to your SSH host list and the remote login will continue.
Notice that you are now at a $ prompt instead of the root@aio-havana prompt. This is the first indicator of success, but to be sure, let’s run a few commands:
whoami
ifconfig
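For reference, whoami should come back with cirros, and ifconfig should show eth0 holding the same vmnet address that nova list reported:
$ whoami
cirros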
Great googly moogly, it worked! Now we can run one more test to verify IP connectivity from the nested instance to the public internet. This is easy: just issue a ping to 8.8.8.8. Remember that this is Linux, so the ping command will run until we stop it; I like to add the -c 4 option, which sets the ICMP packet count to 4:
ping -c 4 8.8.8.8
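A successful run looks something like this (the ttl and time values here are just illustrative):
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=45 time=22.1 ms
…
4 packets transmitted, 4 packets received, 0% packet loss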
It worked! Now that we know we can successfully launch our instance, we should clean up after ourselves and delete the instance to free up resources. This is easily done by exiting out of the SSH session to our nested guest and then using the nova delete command:
exit
nova delete myinstance
nova list
Once the nova delete myinstance command is issued, it only takes a few seconds for our running instance to be terminated and removed altogether. This is confirmed by our nova list command which shows no results because we have no remaining running instances.
SSH Fingerprint conflicts
If you perform this process more than once, you may receive an error from your SSH client because the fingerprint of the new instance differs from the one cached for that IP address. You will see the following error if this happens:
Don’t worry, this isn’t the NSA snooping into your private cloud. They wouldn’t give you a warning 😉
This is actually a common issue, and the error gives you the fix right on the screen. The command to fix it is the following:
ssh-keygen -f "/root/.ssh/known_hosts" -R 10.10.100.2
Once you have cleaned up the SSH known_hosts entry, you can re-issue your SSH session request, and you will be prompted to accept the new fingerprint for that host.
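As a side note, if you leave off the -f option, ssh-keygen defaults to the current user's known_hosts file, so this shorter form does the same thing from our root session:
ssh-keygen -R 10.10.100.2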
What’s Next for us?
The next step we will dive into in our upcoming post is to log in to the OpenStack Dashboard (Horizon) environment to see how to use the nifty OpenStack GUI.
Feel free to comment on how this is going, and I hope that you’re finding these posts helpful!
I have successfully created an instance but am unable to connect to it.
root@aio-havana:/# nova list
+--------------------------------------+------------+--------+------------+-------------+-------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks          |
+--------------------------------------+------------+--------+------------+-------------+-------------------+
| d1ee111d-0902-446b-a8cf-431a980d331a | myinstance | ACTIVE | None       | Running     | vmnet=10.10.100.2 |
| 714a9ad5-2961-46fb-85c1-bb3d201040bb | test       | ACTIVE | None       | Running     | vmnet=10.10.100.3 |
+--------------------------------------+------------+--------+------------+-------------+-------------------+
I am also unable to ping 10.10.100.2 or 10.10.100.3 on vmnet, and I don't see any IP address on vnet0 or vnet1:
vnet0 Link encap:Ethernet HWaddr fe:16:3e:bc:71:74
inet6 addr: fe80::fc16:3eff:febc:7174/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
vnet1 Link encap:Ethernet HWaddr fe:16:3e:63:69:57
inet6 addr: fe80::fc16:3eff:fe63:6957/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Please guide me…
Hi Nazrul,
Would you also be able to send me the output of these commands:
nova network-list
nova secgroup-list-rules default
Thanks…Eric
Thanks for your help, Eric. Actually, after I re-installed the server, the problem was solved.
Great news Nazrul!
For some reason I cannot ping 8.8.8.8 from my instance… Any ideas? (Or, better yet, what do you need to see to help me out?)
If you could attach the output from these commands within your OpenStack Host, and then your instance we can confirm some stuff:
ON THE HOST:
nova network-list
nova list
nova secgroup-list-rules default
ON THE GUEST:
traceroute 8.8.8.8
ifconfig
Thanks!
Hi Eric,
I am at the stage of creating an instance, but when I type 'nova list', it gives me this:
root@aio-havana:/home/giant# nova list
+--------------------------------------+------------+--------+------------+-------------+----------+
| ID                                   | Name       | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+----------+
| 02ce12ed-f271-43bc-b93c-b4e7a2267894 | myinstance | ERROR  | None       | NOSTATE     |          |
+--------------------------------------+------------+--------+------------+-------------+----------+
I am wondering if my computer's physical CPU does not support Intel VT-x, because my computer has just one CPU with two cores and I can only give one core to the VM. When I boot my VM, I always get a prompt saying that my computer does not support Intel VT-x. Does this matter?
I would be glad to receive your advice!
The VT-x emulation should be happening in the virtual CPU presentation by VMware workstation. That said, I am running on a fairly new laptop. I’ll check further and see if there are any limits.
Would you be able to let me know what model/make your machine is?
It looks like I missed out on this issue. There is a requirement for the physical CPU to pass through Intel VT-x or AMD-V virtualization support. Sorry for not including that in the requirements. It seems that there is no workaround without the underlying hardware support.
The advantage that Workstation or Fusion gives is presenting HCL-compatible hardware to guest VMs, but there is still a requirement to have the feature at the hardware layer. Wish I had a better answer to this for you.
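For anyone who wants to verify up front, a quick check from inside any Linux guest (or host) shows whether the virtualization extensions are being exposed; a result of 0 means no VT-x/AMD-V support is visible to that OS:
# Counts CPU cores advertising VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo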
Hi,
Eric,
When I try to log in to the CirrOS instance, it requires a password.
I don't know what the password is.
Is there a password for this image?
The user name is cirros and the password is cubswin:) for that one.
Hi,
Eric
I think I have found the password!
user:cirros
password:cubswin:)
Hope it will help the other friends!
Hi,
Can you please help??
The following is the execution and response of the AIO system. Nova does not accept --flavor, --key_name, or --image…
root@aio-havana:~# nova boot --flavor 1 --key_name mykey -image 26efa398-5d4b-4f9c-8119-6ba989afdcb6 -security_group default myinstance
usage: nova [--version] [--debug] [--os-cache] [--timings]
[--timeout ] [--os-username ]
[--os-password ]
[--os-tenant-name ]
[--os-tenant-id ] [--os-auth-url ]
[--os-region-name ] [--os-auth-system ]
[--service-type ] [--service-name ]
[--volume-service-name ]
[--endpoint-type ]
[--os-compute-api-version ]
[--os-cacert ] [--insecure]
[--bypass-url ]
…
error: unrecognized arguments: 1 --key_name mykey -image 26efa398-5d4b-4f9c-8119-6ba989afdcb6 -security_group default myinstance
Try 'nova help ' for more information.
Hi There,
Can you make sure that you use a double hyphen for the parameters? It doesn't always show in the web format on my blog, but you will need --key_name and --flavor. Give that a try and let me know if there are still issues.
Thanks!
I'm sorry, but I did use double hyphens; I just also tried single hyphens, and neither works.
root@aio-havana:~# nova boot --flavor 1 --key_name mykey --image 26efa398-5d4b-4f9c-8119-6ba989afdcb6 --security_group default myinstance
usage: nova [--version] [--debug] [--os-cache] [--timings]
[--timeout ] [--os-username ]
[--os-password ]
[--os-tenant-name ]
[--os-tenant-id ] [--os-auth-url ]
[--os-region-name ] [--os-auth-system ]
[--service-type ] [--service-name ]
[--volume-service-name ]
[--endpoint-type ]
[--os-compute-api-version ]
[--os-cacert ] [--insecure]
[--bypass-url ]
…
error: unrecognized arguments: 1 --key_name mykey myinstance
Try 'nova help ' for more information.
Please note that all the single hyphens in the previous post were doubles.
And just wondering: your post was based on Havana, but the newest release is Icehouse. Does that matter?
I followed all your steps for the lab (your first post) and got the lab ready.
Thanks for the update. I just wanted to double check about the hyphens as I’ve had that issue reported before.
I’ll spin up my lab with a new build to see if there are any updated libraries that are coming in and changing the parameters on us.
BTW, my nova --version is 2.15.0.
Hi Eric,
I have found the problem.
The parameters have been changed.
--key_name => --key-name
--security_group => --security-groups
New:
nova boot --flavor 1 --key-name mykey --image 26efa398-5d4b-4f9c-8119-6ba989afdcb6 --security-groups default myinstance
Hope this helps.
However, after the instance has booted, I encounter a non-stop spamming message:
kvm [3858]: vcpu0 unhandled wrmsr: 0x38f data f
in my VM console.
This did NOT happen in my PuTTY session, though.
Thanks so much for sharing this!! It has been a super busy week and I apologize for not being able to confirm. Your feedback is greatly appreciated! 🙂
Can you please help me check:
kvm [3858]: vcpu0 unhandled wrmsr: 0x38f data f
?
This is spamming my VM console 24/7 (not PuTTY).
It happens after I launch the Nova instance and disappears after I kill the instance.
This looks like it is pending a bug fix from Red Hat, which will also get rolled into the CentOS core: https://bugzilla.redhat.com/show_bug.cgi?id=874627
But it seems they are going to update Fedora, not Ubuntu >.<
BTW, do you think you might be able to make AIO labs for Ubuntu 14.04 and/or OpenStack Icehouse?
The Icehouse on 14.04 is my next project. I got things started and hope to have a tested model up within a couple of weeks. I’ve had a few other big projects that have pulled back some of my time.
Thank you for the great feedback and info on the current build. I anticipate Icehouse will be easier, as it is quite a bit more stable and there are fewer steps needed to get the system started up.
Actually, it would be even nicer if you set up a multi-node lab: a controller node, a network node, and compute nodes. Since I am still trying to learn how OpenStack works and how to build it, a detailed guide (not a '3 shell scripts do it all' guide) would really help me understand the key steps and key configs.
Thanks again for this AIO guide.
Hello Eric,
I am facing an error:
putty fatal error: connection timed out
How do I resolve this?
Have you been able to ping the instance from the host? There may be an issue with the IP address and routing.
Thanks for such good instruction, Eric.
I can't ping 8.8.8.8 from my instance.
The output of these commands:
nova network-list
nova list
nova secgroup-list-rules default
ifconfig
traceroute 8.8.8.8
is as follows:
root@aio-havana:~# nova network-list
+--------------------------------------+-------+----------------+
| ID                                   | Label | Cidr           |
+--------------------------------------+-------+----------------+
| 25b7763b-94d4-408c-8e7e-999f1a3edebd | vmnet | 10.10.100.0/24 |
+--------------------------------------+-------+----------------+
root@aio-havana:~# nova list
+--------------------------------------+-------+--------+------------+-------------+-------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks          |
+--------------------------------------+-------+--------+------------+-------------+-------------------+
| eebac127-c893-4102-a37f-0d6106617f6e | test2 | ACTIVE | None       | Running     | vmnet=10.10.100.2 |
+--------------------------------------+-------+--------+------------+-------------+-------------------+
root@aio-havana:~# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
root@aio-havana:~# ssh cirros@10.10.100.2
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA:16:3E:C0:14:49
inet addr:10.10.100.2 Bcast:10.10.100.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fec0:1449/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5770 errors:0 dropped:0 overruns:0 frame:0
TX packets:1038 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:691358 (675.1 KiB) TX bytes:136510 (133.3 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$ traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
1 10.10.100.1 (10.10.100.1) 0.372 ms 0.253 ms 0.292 ms
2 * * *
3 * * *
4 * * *
5 * *