OpenStack Havana All-in-One Lab: Cinder addition for Block Storage

It seems like forever ago that we began, so it's time to keep the ball rolling with our OpenStack Havana All-in-One build on VMware Workstation series, which has been idle for a little while.

Lots of great feedback has been coming in, and one of the key things we wanted to do for our next step was to enable OpenStack Block Storage, also known as Cinder.

I’m going to assume that you’ve gotten through the first few steps, and just in case you haven’t already gotten on board with our lab build, here are the posts to use to get caught up:

Now that we are all up to the same point, let’s get some block storage happening!

We can enable Cinder internally on our existing machine, but I wanted to show you how to do this using a secondary volume. Since we only have one virtual disk on our virtual machine, we need to power down the virtual machine, and then we will follow these simple steps to add our volume before installing Cinder.

Once powered down, open your Virtual Machine Settings (right click: Edit Settings) and click the Add… button to add a new device:


We select Hard Disk as the hardware type and click Next


Leave the default option of SCSI and click Next


Choose Create a new virtual disk and click Next


Increase the Maximum disk size to 40 GB, and choose Store virtual disk as a single file before clicking Next


Let’s name this AIO-HAVANA-CINDER01.vmdk so that we know what it’s for, and then click Finish


Now we will boot our AIO-HAVANA virtual machine and do the following:

  • Log in to the console using your regular account
  • Elevate access with sudo su -
  • Change directory into our GitHub source folder with cd OpenStack-All-in-One-Havana
  • Update our GitHub source by running git pull


What we have now with our updated source code is the addition of the shell script which will deploy and configure our Cinder services and dependencies. You can confirm the file is there with the ls command.

Launch the script by typing this:



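In case the screenshot doesn't render, the session looks roughly like this. Note that the script filename below is my assumption, not confirmed from the repository; use the ls output to find the actual name of the Cinder deployment script:

```
# ls *.sh            <- confirm the Cinder deployment script is present
# ./AIO_Cinder.sh    <- hypothetical filename; launch the deployment script
```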
The installation will take a few minutes, and when it is all done you will see the final commands which show the services being restarted as shown here:


Now we need to test that our Cinder services are working, which we will do with the Python Cinder client that was deployed as part of our script.

The command to create a Cinder volume is cinder create --display_name <volumename> <size>, which for our quick test example is going to be a 1 GB volume named test:

cinder create --display_name test 1


Provided everything goes properly, you will see the results above and you can confirm the results with an easy command:

cinder list


Now we have confirmed our volume has been created; the status will show as available once it is fully built. Because we want to clean up after ourselves, we will use the following command to delete our test volume:

cinder delete test


By running cinder list we can see the volume as it is being deleted. This happens quickly with a 1 GB volume, so you may not see it as above, but as we continue to use our environment these are the typical commands for creating and listing volumes.
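Once we start scripting against the environment, it helps to parse the cinder list table rather than reading it by eye. Here is a minimal sketch; the volume_status helper is my own name, not part of the client, and it assumes the Havana-era pipe-delimited table layout with ID, Status, and Display Name as the first three data columns:

```shell
# volume_status NAME: read a `cinder list` style table on stdin and print
# the Status column for the volume with the given display name.
# Assumes the Havana pipe-delimited layout: | ID | Status | Display Name | ...
volume_status() {
  awk -F'|' -v name="$1" '
    NF > 3 && $2 !~ /ID/ {          # skip the +---+ borders and header row
      gsub(/^ +| +$/, "", $3)       # trim the Status column
      gsub(/^ +| +$/, "", $4)       # trim the Display Name column
      if ($4 == name) print $3
    }'
}

# In practice: cinder list | volume_status test
# Demonstrated here against a captured table:
volume_status test <<'EOF'
+----------+-----------+--------------+------+-------------+----------+-------------+
|    ID    |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+----------+-----------+--------------+------+-------------+----------+-------------+
| abc-123  | available |     test     |  1   |     None    |  false   |             |
+----------+-----------+--------------+------+-------------+----------+-------------+
EOF
# prints: available
```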

Horizon and Cinder

Let’s recap where we have come from. We installed our virtual lab on the all-in-one virtual machine, created a Nova instance, viewed our OpenStack cloud in our Horizon dashboard, and now we have installed our OpenStack Cinder Block Storage environment. Not bad with only a few configuration steps thanks to our deployment scripts.

The command line is only one part of our Cinder work of course, so next we will look at the same process of creating a Cinder volume in the Horizon environment.

We will quickly go through a few steps in our dashboard by browsing to the dashboard site and logging in with our Admin account (remember, our password is openstack for our lab build).


In your dashboard, click on the Project tab, then click the Volumes link in the left-hand pane. This brings you to the Volumes screen, where you will click the Create Volume button


We don’t need much information to create a volume, as we saw in the command line version of this process. For our simple example, we will set just the Volume Name, the Size (GB), and the Volume Source:

  • Name: test
  • Size: 2 GB
  • Source: No source, empty volume

Once you select those options you will click the Create Volume button. We will revisit all of the other options in a later post, but our initial goal is just getting our environment up and running to start exploring more 🙂


In your Volumes view you will see the volume show up in the list with a status of Creating


In a few short seconds (that’s why we use small volumes to test) you will see the status change to Available, and once that happens you will have a button in the Actions column which lets you perform either a Create Snapshot or a Delete Volume action.
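In a script, you would typically poll for that same status change rather than watching the screen. A rough sketch follows; the wait_available function is my own, not part of the cinder client, and it uses a crude grep match against the cinder list table rather than a full parse:

```shell
# wait_available NAME [TRIES]: poll `cinder list` until the named volume
# shows a status of "available". Checks once every two seconds; returns
# non-zero if the volume never becomes available within TRIES attempts.
wait_available() {
  local name="$1" tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    # crude match: a row mentioning both "available" and the volume name
    if cinder list | grep "available" | grep -qw "$name"; then
      return 0
    fi
    sleep 2
    tries=$((tries - 1))
  done
  return 1
}

# Usage: wait_available test && echo "volume test is ready"
```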


We also have an option here to Edit Attachment but I’m going to leave you in suspense a little tiny bit longer because that is the subject of our next post 😉

12 thoughts on “OpenStack Havana All-in-One Lab: Cinder addition for Block Storage”

  1. So far, to use Juju on top of OpenStack:
    Export your OpenStack settings by
    What’s missing is
    export OS_REGION_NAME=RegionOne
    But then it stops because it also expects an object-store service 🙁
    Any chance of making a tutorial for that as well? This probably also means an additional VMware disk…

    • I do have a plan for putting Swift in the system also. We can use the same host still, and we create virtual stores for the replication. It isn’t quite as full featured as a proper multi-node Swift deployment obviously, but it keeps us within the all-in-one lab and able to see the functionality.

      Next up is some use case work on putting images together with block storage. I’ll see what I can do to accelerate the object storage components for us.

  2. I managed to add the object-store service by following these instructions.
    There are some tweaks to make, but the comments on those pages describe them.

    I also managed to install Juju on top of it.
    The trick is that you create an image for the Juju machines from (change the m1.tiny flavor to have more resources or it will fail; I use 20 GB disk, 1024 MB RAM, 1 vCPU),
    then you use the image ID when generating the config JSONs:
    juju metadata generate-image -i 17d48210-2287-4861-969e-197c227eb0a7 -r "RegionOne" -u

    The config JSONs must be copied to the proper place before you start
    juju bootstrap -v
    Be careful: juju destroy-environment deletes the whole juju-xxx content, so you must redeploy your config JSONs again with
    swift upload juju-xxxx streams (create the local directory structure streams/v1/*.json and then upload it to Swift as juju-xxxx)
    from the local streams directory (the name is in the Juju settings as control-bucket: juju-xxxx)
    before you start a new juju bootstrap -v

    I then tried some juju deploy work, installing MongoDB and a node app. It is working well.
    What is missing in the OpenStack environment is an external IP pool,
    so I can’t expose juju-gui to be accessible outside of the VMware machine.
    Will try to fix this as well.

    Hope that helps.
    I probably forgot some tweaks that were needed. Feel free to send me an email and I’ll help you out if you try to build the same environment.

  3. Eric,
    I can’t find the “Volumes” menu in Horizon, but running the cinder commands succeeds.

    Can you give any advice? Thanks

    • Is the Volumes menu missing altogether? If so, you may need to log out and back in to refresh the session. Horizon will not rebuild the menu dynamically, so hopefully that is all that’s required.

  4. In the script I see that there is a command like “cinder-manager” that returns an error, because the right command is “cinder-manage”… right??

