OpenStack Havana All-in-One Lab: Cinder addition for Block Storage


It seems like forever ago that we began, so it’s time to keep the ball rolling on our OpenStack Havana All-in-One build on VMware Workstation series, which has sat idle for a little while.

Lots of great feedback has been coming in, and one of the key things we wanted to do to get to our next step was to enable OpenStack Block Storage, also known as Cinder.

I’m going to assume that you’ve gotten through the first few steps, and just in case you haven’t already gotten on board with our lab build, here are the posts to use to get caught up:

Now that we are all up to the same point, let’s get some block storage happening!

We could enable Cinder internally on our existing machine, but I wanted to show you how to do this using a secondary volume. Since we only have one virtual disk on our virtual machine, we need to power down the virtual machine and then follow these simple steps to add our volume before installing Cinder.

Once powered down, open your Virtual Machine Settings (right click: Edit Settings) and click the Add… button to add a new device:


We select Hard Disk as the hardware type and click Next


Leave the default option of SCSI and click Next


Choose Create a new virtual disk and click Next


Increase the Maximum disk size to 40 GB, and choose Store virtual disk as a single file before clicking Next


Let’s name it AIO-HAVANA-CINDER01.vmdk so that we know what it’s for, then click Finish
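If you prefer to skip the wizard, VMware Workstation also ships a command-line tool that can create an equivalent disk. This is just a sketch (run from the virtual machine’s folder; the adapter type here is an assumption matching a typical Workstation SCSI disk):

```shell
# Create a 40 GB growable SCSI disk stored as a single file (-t 0),
# matching the options we chose in the wizard above.
vmware-vdiskmanager -c -s 40GB -a lsilogic -t 0 AIO-HAVANA-CINDER01.vmdk
```

You would still attach the resulting disk to the VM through Virtual Machine Settings afterwards.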


Now we will boot our AIO-HAVANA virtual machine and do the following:

  • Log in to the console using your regular account
  • Enter elevated access with sudo su -
  • Change directory into our GitHub source folder with cd OpenStack-All-in-One-Havana
  • Update our GitHub source by running git pull
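Put together, the steps above look like this at the console:

```shell
sudo su -                        # elevate to root
cd OpenStack-All-in-One-Havana   # move into the cloned GitHub folder
git pull                         # refresh the scripts from the repository
```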


What we have now with our updated source code is the addition of the cinder.sh shell script which will deploy and configure our Cinder services and dependencies. You can confirm the file is there with the ls command.
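For reference, the storage preparation inside a script like this typically boils down to turning our new disk into an LVM volume group for the Cinder LVM driver. This is only a sketch of the usual approach, assuming the new 40 GB disk showed up as /dev/sdb and the default cinder-volumes group name; the actual cinder.sh in the repo is the authority:

```shell
# Mark the new disk as an LVM physical volume, then build the
# volume group that cinder-volume carves logical volumes out of.
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
```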

Launch the script by typing this:

sh cinder.sh


The installation will take a few minutes, and when it is all done you will see the final commands which show the services being restarted as shown here:


Now we need to verify that our Cinder services are working, which we will do with the Python Cinder client that was deployed as part of our script.

The command to create a Cinder volume is cinder create --display_name <volumename> <size>, which for our quick test will be a 1 GB volume named test:

cinder create --display_name test 1


Provided everything goes properly, you will see the results above and you can confirm the results with an easy command:

cinder list


Now we have confirmed our volume has been created; the status will show as available once it is fully built. Because we want to clean up after ourselves, we will use the following command to delete our test volume:

cinder delete test


By running cinder list we can see the volume as it is being deleted. This happens quickly with a 1 GB volume, so you may not see it as above, but as we continue to use our environment these are the typical commands for creating and listing volumes.
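If you want to poke a little deeper than cinder list while a volume exists, two follow-up commands are handy. The cinder-volumes volume group name here is an assumption based on the default LVM backend in our lab:

```shell
cinder show test      # full details for the volume, including its UUID
lvs cinder-volumes    # the backing LVM logical volume, named volume-<UUID>
```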

Horizon and Cinder

Let’s recap where we have come from. We installed our virtual lab on the all-in-one virtual machine, created a Nova instance, viewed our OpenStack cloud in our Horizon dashboard, and now we have installed our OpenStack Cinder Block Storage environment. Not bad with only a few configuration steps thanks to our deployment scripts.

The command line is only one part of our Cinder work of course, so next we will look at the same process of creating a Cinder volume in the Horizon environment.

We will quickly go through a few steps in our dashboard by browsing to the site and logging in with our Admin account (remember our password is openstack for our lab build)


In your dashboard, click on the Project tab, then click the Volumes link in the left hand pane. This brings you to the Volumes screen where you will click on the Create Volume button


We don’t need much information to create a volume, as we saw in the command-line version of this process. For our simple example, we will set just the Volume Name, the Size (GB), and the Volume Source:

  • Name: test
  • Size: 2 GB
  • Source: No source, empty volume

Once you have set those options, click the Create Volume button. We will revisit all of the other options in a later post, but our initial goal is just getting our environment up and running so we can start exploring more 🙂


In your Volumes view you will see the volume show up in the list with a status of Creating


In a few short seconds (that’s why we use small volumes to test) you will see the status change to Available, and once that happens a button appears in the Actions column that lets you run either a Create Snapshot or a Delete Volume action.
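The same snapshot and delete actions are available from the command line as well; as a sketch with our Havana-era client (the test-snap name is just an example):

```shell
cinder snapshot-create --display_name test-snap test   # snapshot the 'test' volume
cinder snapshot-list                                   # confirm it was created
cinder snapshot-delete test-snap                       # clean it up again
```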


We also have an option here to Edit Attachment but I’m going to leave you in suspense a little tiny bit longer because that is the subject of our next post 😉

DiscoPosse

People, Process, and Technology. Powered by Community!

12 Comments

  • DragoMlakar
    April 3, 2014 at 8:38 am

    It works like a charm! Thank you.
    Now I will try to put juju on top of it.
    Now I will try to put juju on top of it.

    • DiscoPosse
      Eric
      April 3, 2014 at 8:54 am

      Very cool! That’s what I love to see 🙂

  • DragoMlakar
    April 4, 2014 at 4:01 am

    So far, to use juju on top of openstack:
    Export the openstack settings via
    Project -> Access & Security -> API Access -> Download OpenStack RC File
    What is missing is
    export OS_REGION_NAME=RegionOne
    But then it stops because it also expects an object-store service 🙁
    Any chance to make a tutorial for that too? This probably also means an additional vmware disk…

    • DiscoPosse
      Eric
      April 4, 2014 at 7:09 am

      I do have a plan for putting Swift in the system also. We can use the same host still, and we create virtual stores for the replication. It isn’t quite as full featured as a proper multi-node Swift deployment obviously, but it keeps us into the all-in-one lab and able to see the functionality.

      Next up is some use case work on putting images together with block storage. I’ll see what I can do to accelerate the object storage components for us.

  • DragoMlakar
    April 7, 2014 at 4:16 am

    I managed to add the object-store service by following these instructions:
    http://docs.openstack.org/trunk/install-guide/install/apt/content/installing-openstack-object-storage.html
    There are some tweaks to be made, but the comments on the pages describe them.

    I also managed to install juju on top of it.
    The trick is that you create an image for the juju machines from
    http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img (change the m1.tiny flavor to have more resources or it will fail; I use 20G disk, 1024M RAM, 1 CPU)
    then you use the image id when generating the config json’s
    juju metadata generate-image -i 17d48210-2287-4861-969e-197c227eb0a7 -r "RegionOne" -u http://192.168.131.128:5000/v2.0/

    https://juju.ubuntu.com/docs/howto-privatecloud.html
    http://askubuntu.com/questions/327177/cannot-bootstrap-due-to-precise-images-in-regionone-with-arches-amd64-i386

    The config json’s must be copied into the proper place before you start
    juju bootstrap -v
    Be careful: juju destroy-environment deletes the whole juju-xxx content, so you must redeploy your config json’s again with
    swift upload juju-xxxx streams (create the local directory structure streams/v1/*.json and then upload it to swift juju-xxxx)
    from the local stream directory (the name is in the juju settings as control-bucket: juju-xxxx)
    before you start a new juju bootstrap -v

    I then tried some juju deploy stuff, installing mongodb and a node app. It is working well.
    What is still missing in the openstack environment is an external IP pool,
    so that I could expose juju-gui to be accessible outside of the vmware machine.
    I will try to fix this as well.

    Hope that helps.
    I probably forgot some tweaks that were needed. Feel free to send me mail and I will help you out if you try to build the same environment.

    • DiscoPosse
      Eric
      April 7, 2014 at 9:24 am

      Great work! I’ll definitely look at integrating some of this with the build where possible. Thanks for sharing 🙂

  • tioman
    April 19, 2014 at 2:23 pm

    Eric,
    I can’t find the “Volumes” menu in Horizon, but running the cinder command succeeds.

    Can you give advice? Thanks

    • DiscoPosse
      Eric
      April 21, 2014 at 8:38 am

      Is the Volumes menu missing altogether? If so, you may need to log out and back in to refresh the session. Horizon will not rebuild the menu dynamically, so hopefully that is all that’s required.

  • Roberto
    June 23, 2014 at 8:00 am

    in the cinder.sh script I see that there is a command like “cinder-manager” that returns an error, because the right command is “cinder-manage”… right??

    • DiscoPosse
      Eric
      June 24, 2014 at 1:00 pm

      Absolutely right! I’ll take a look and fix that. Thanks for spotting it Roberto 🙂

  • Roberto
    June 25, 2014 at 3:36 am

    no problem! 😉
