Adding a second Cinder-volume with Folsom

Eleven Drive Monstrosity (Flickr, Creative Commons 2.0)

Most of the time, you have to deal with disk space. Most of the time, it’s growing. And most of the time, you run (or plan to run) out of space. When dealing with OpenStack and Cinder, the question is: can you grow your existing backend (maybe you can add a new PV to your LVM volume group), possibly accepting some downtime to do so, or not?

If the answer is no, then add a second cinder-volume node to your existing Cinder setup. Below is how to do it. These steps correspond to Ubuntu 12.04, an LVM storage backend and RabbitMQ as the AMQP broker, and assume you already have a working Cinder setup with a single cinder-volume.

First of all, create the LVM volume group on the new server.

sylvain@folsom03:~# sudo pvcreate /dev/sdc1 && sudo pvcreate /dev/sdd1 && sudo vgcreate cinder-volumes /dev/sdc1 /dev/sdd1
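
You can quickly check that the volume group exists and has the expected size (vgs comes with the same lvm2 tools as pvcreate and vgcreate):

sylvain@folsom03:~$ sudo vgs cinder-volumes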

Once done, install only the cinder-volume package. It automatically pulls in the cinder-common package, which contains all the configuration files and tools.

sylvain@folsom03:~# sudo apt-get -y install cinder-volume
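
You can verify that both packages landed and that the service is registered:

sylvain@folsom03:~$ dpkg -l | grep cinder
sylvain@folsom03:~$ sudo service cinder-volume status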

Now you have to modify the configuration files to fit your installation. Here, 172.16.0.1 (aka folsom01) is my controller node IP, which also hosts my current all-in-one Cinder setup. 172.16.0.3 (aka folsom03) is the new cinder-volume backend and 172.16.0.4 (aka folsom04) is one of my compute nodes (for testing purposes).

sylvain@folsom03:~# sudo vi /etc/cinder/cinder.conf
[DEFAULT]
sql_connection = mysql://root:nova@172.16.0.1/cinder
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
iscsi_ip_address = 172.16.0.3

rabbit_host = 172.16.0.1
rabbit_port = 5672
glance_api_servers = 172.16.0.1:9292
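
Before going further, you can quickly check that the new node actually reaches the MySQL database and the RabbitMQ broker declared above. This assumes the mysql-client and netcat packages are installed (they are not pulled in by the Cinder packages):

sylvain@folsom03:~$ mysql -h 172.16.0.1 -u root -p cinder -e "SHOW TABLES;"
sylvain@folsom03:~$ nc -zv 172.16.0.1 5672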


sylvain@folsom03:~# sudo vi /etc/cinder/api-paste.ini
#############
# Openstack #
#############

[composite:osapi_volume]
use = call:cinder.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_volume_api_v1]
use = call:cinder.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = cinder.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = cinder.api.openstack.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_volume_app_v1]
paste.app_factory = cinder.api.openstack.volume:APIRouter.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.openstack.volume.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 172.16.0.1
service_port = 5000
auth_host = 172.16.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %TENANTNAME%
admin_user = %ADMINUSERNAME%
admin_password = %ADMINPASSWORD%

You can now safely restart both tgt and cinder-volume services (they are already started at installation time).

sylvain@folsom03:~# sudo service tgt restart
sylvain@folsom03:~# sudo service cinder-volume restart
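
If something goes wrong, the cinder-volume log (by default /var/log/cinder/cinder-volume.log on Ubuntu) tells you whether the service managed to reach MySQL and RabbitMQ:

sylvain@folsom03:~$ sudo tail -f /var/log/cinder/cinder-volume.log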

Let’s check that everything is working correctly.

sylvain@folsom03:~# sudo cinder-manage host list
host zone
folsom01 nova
folsom03 nova
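
You can also check on the controller node that the new host registered its own queue on RabbitMQ; a per-host queue named something like cinder-volume.folsom03 should show up:

sylvain@folsom01:~$ sudo rabbitmqctl list_queues name | grep cinder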

You can create a new volume and attach it to an instance:

sylvain@folsom03:~$ cinder create --display_name TEST_VOLUME 50
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| created_at | 2013-05-31T13:56:14.206466 |
| display_description | None |
| display_name | TEST_VOLUME |
| id | 40e6e60f-2b38-4938-875a-9f3831430c2c |
| metadata | {} |
| size | 50 |
| snapshot_id | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
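
The volume is created asynchronously, so its status should move from creating to available after a few seconds; you can poll it with the client:

sylvain@folsom03:~$ cinder list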

You can see the iSCSI target here:

sylvain@folsom03:~$ sudo tgt-admin -s
Target 1: iqn.2010-10.org.openstack:volume-40e6e60f-2b38-4938-875a-9f3831430c2c
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 53687 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
Backing store flags:
Account information:
ACL information:
ALL

You can also check that a logical volume has been created:

sylvain@folsom03:~$ sudo lvdisplay cinder-volumes
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
VG Name cinder-volumes
LV UUID DKLHUm-vXjy-BsDX-C6fU-PO2l-WbL3-BSesDd
LV Write Access read/write
LV Status available
# open 1
LV Size 50,00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1
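
If you want to double-check which backend host the scheduler elected for this volume, you can also look at the host column in the Cinder database (same MySQL credentials as in cinder.conf):

sylvain@folsom03:~$ mysql -h 172.16.0.1 -u root -p cinder -e "SELECT host, status FROM volumes WHERE id = '40e6e60f-2b38-4938-875a-9f3831430c2c';"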

And if you want to attach it to an instance:

sylvain@folsom03:~$ nova volume-attach OSTOOLS01 40e6e60f-2b38-4938-875a-9f3831430c2c /dev/vdb
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdb |
| id | 40e6e60f-2b38-4938-875a-9f3831430c2c |
| serverId | a2dac743-99d3-4b72-9c71-232de23cb5ec |
| volumeId | 40e6e60f-2b38-4938-875a-9f3831430c2c |
+----------+--------------------------------------+

sylvain@folsom03:~$ sudo tgt-admin -s
Target 1: iqn.2010-10.org.openstack:volume-40e6e60f-2b38-4938-875a-9f3831430c2c
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 2
Initiator: iqn.1993-08.org.debian:01:71f5e64213fe
Connection: 0
IP Address: 172.16.0.4

LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 53687 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
Backing store flags:
Account information:
ACL information:
ALL
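
Finally, from inside the instance (OSTOOLS01 here), the new disk should show up as /dev/vdb. The guest prompt and device name below are only illustrative, as they depend on your image:

ubuntu@ostools01:~$ grep vd /proc/partitions
ubuntu@ostools01:~$ sudo mkfs.ext4 /dev/vdb && sudo mount /dev/vdb /mnt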

Now you’re done for today.

Note (as of 2013/06/18): I just saw Giulio Fidente’s blog post about adding more volume nodes. My bad, I hadn’t checked whether my blog entry was redundant. Anyway, this one covers Folsom and Ubuntu, which is pretty different. Giulio, if you’re reading me, please accept this blog entry as a vibrant homage to your own blog 😉


6 thoughts on “Adding a second Cinder-volume with Folsom”

  1. Hi Sylvain, we are trying to deploy a multi-backend Cinder solution with Grizzly but we ran into many problems. I have a couple of questions for you about this multi-host Cinder deployment:
    - Can you choose where to create a volume, or do you just manage this by creating the volume on the Cinder node you want to host it?
    - I assume you can’t use Horizon to create volumes on the different nodes. Is that correct, since you would have two APIs (one per Cinder node)?

    • Hi Juan,

      My tutorial only covers Folsom and has only been tested on Folsom. I assume these instructions would also work with Grizzly, but you have to know that Grizzly added new multi-backend capabilities. Maybe this official doc can help you?

      Dealing with Cinder backend hosts is the job of the cinder-scheduler service. Should you want to tune the election of a Cinder host, you would have to play with the Cinder scheduler filters; I don’t know yet how to do that, though.
      I recently saw a blog post from Mirantis about the Nova scheduler and how to spawn instances close to their Cinder backend here. Maybe that could also help you.

      Anyway, you can still create volumes with Horizon (luckily). There is still only one cinder-api node and one cinder-scheduler service, so there is no need to change anything. The only thing you can’t control is which Cinder backend host gets chosen, that’s it.
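
      For what it’s worth, here is a rough, untested sketch of what tuning that could look like in /etc/cinder/cinder.conf on the node running cinder-scheduler (the driver paths below are the Folsom-era ones and may differ on Grizzly):

      [DEFAULT]
      # hypothetical example: elect the least loaded host (Simple) instead of a random one (Chance)
      scheduler_driver = cinder.scheduler.simple.SimpleScheduler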

  2. I tried the Grizzly multi-backend solution but there’s a bug, and the fix hasn’t made it to the official Ubuntu repos yet. Have you tried booting an instance from one of these volumes?

    • Hi Juan,
      Sorry for the late reply. Nope, I only tried to attach a volume to a running instance, not to boot from one directly. Nevertheless, unless there is an OpenStack bug, it should be possible to boot from a volume hosted on the second cinder-volume node; that’s just a matter of a correct libvirt.xml and a running tgtd.
      What’s your hypervisor? Have you checked whether tgt is exposing your volume correctly? Is it mapped to an iSCSI connection when you try to boot from it?
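
      For reference, and untested on my side, a Folsom-era boot-from-volume call would look something like the line below (the flavor, image ID and volume ID are placeholders):

      nova boot --flavor 2 --image <image-id> --block_device_mapping vda=<volume-id>:::0 BFV_TEST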

    • As said above, I tested this tutorial against Folsom, not Grizzly. The doc you mentioned is related to Grizzly, but afaik the Cinder scheduler is also “Chance” for Folsom. So, yep, this is my scheduler.
