Adding a second Cinder-volume with Folsom


Eleven Drive Monstrosity (Flickr, Creative Commons 2.0)

Most of the time, you have to deal with disk space. Most of the time, it’s growing. And most of the time, you run (or plan to run) out of it. When dealing with OpenStack and Cinder, the question is: can you increase your backend size (maybe you can add a new PV to your LVM volume group?), or can you not (and do you accept downtime?)?

If the answer is no, then add a second cinder-volume node to your current Cinder setup. Below is how to do that. The steps correspond to Ubuntu 12.04, an LVM storage backend, and RabbitMQ as the AMQP broker, and assume you already have a single Cinder setup with one cinder-volume.

First of all, create the LVM volume group on the new server.

sylvain@folsom03:~$ sudo pvcreate /dev/sdc1 /dev/sdd1
sylvain@folsom03:~$ sudo vgcreate cinder-volumes /dev/sdc1 /dev/sdd1

Once done, install only the cinder-volume package. It will automatically pull in the cinder-common package, which contains all the configuration files and tools.

sylvain@folsom03:~$ sudo apt-get -y install cinder-volume

Now you have to modify the configuration files to fit your installation. Here, 172.16.0.1 (aka folsom01) is my controller node IP, which also hosts my current all-in-one Cinder setup. 172.16.0.3 (aka folsom03) is the new cinder-volume backend and 172.16.0.4 (aka folsom04) is one of my compute nodes (for testing purposes).

sylvain@folsom03:~$ sudo vi /etc/cinder/cinder.conf
[DEFAULT]
sql_connection = mysql://root:nova@172.16.0.1/cinder
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
iscsi_ip_address = 172.16.0.3

rabbit_host = 172.16.0.1
rabbit_port = 5672
glance_api_servers = 172.16.0.1:9292
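Most of this file can be copied verbatim from the controller. The one setting that must differ on each volume node is iscsi_ip_address, which is the address advertised to iSCSI initiators; here it is the node’s own IP, 172.16.0.3. A quick way to sanity-check the value is to extract it with sed — a minimal sketch using a throwaway sample file (the /tmp path below is illustrative, not part of the real setup):

```shell
# Illustrative sample standing in for /etc/cinder/cinder.conf
cat > /tmp/cinder.conf.sample <<'EOF'
[DEFAULT]
volume_group = cinder-volumes
iscsi_ip_address = 172.16.0.3
EOF

# sed: print only the value after '=' on the iscsi_ip_address line
sed -n 's/^iscsi_ip_address *= *//p' /tmp/cinder.conf.sample
```

Run against the real /etc/cinder/cinder.conf, this should print the IP of the node you are configuring, not the controller’s.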


sylvain@folsom03:~$ sudo vi /etc/cinder/api-paste.ini
#############
# Openstack #
#############

[composite:osapi_volume]
use = call:cinder.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_volume_api_v1]
use = call:cinder.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = cinder.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = cinder.api.openstack.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_volume_app_v1]
paste.app_factory = cinder.api.openstack.volume:APIRouter.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.openstack.volume.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 172.16.0.1
service_port = 5000
auth_host = 172.16.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %TENANTNAME%
admin_user = %ADMINUSERNAME%
admin_password = %ADMINPASSWORD%

You can now safely restart both the tgt and cinder-volume services (they were already started at installation time).

sylvain@folsom03:~$ sudo service tgt restart
sylvain@folsom03:~$ sudo service cinder-volume restart

Let’s check that everything is working properly.

sylvain@folsom03:~$ sudo cinder-manage host list
host       zone
folsom01   nova
folsom03   nova

You can now create a new volume:

sylvain@folsom03:~$ cinder create --display_name TEST_VOLUME 50
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| created_at          | 2013-05-31T13:56:14.206466           |
| display_description | None                                 |
| display_name        | TEST_VOLUME                          |
| id                  | 40e6e60f-2b38-4938-875a-9f3831430c2c |
| metadata            | {}                                   |
| size                | 50                                   |
| snapshot_id         | None                                 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
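Note how the volume_name_template option from cinder.conf ties the pieces together: Cinder substitutes the volume ID into volume-%s to derive the LV name, and hence its path inside the volume group. A sketch of the mapping, using the volume ID from the output above:

```shell
# volume_name_template = volume-%s  =>  LV name derived from the volume ID
vol_id="40e6e60f-2b38-4938-875a-9f3831430c2c"
lv_name="$(printf 'volume-%s' "$vol_id")"

# Path of the backing logical volume inside the cinder-volumes VG
echo "/dev/cinder-volumes/${lv_name}"
```

This is exactly the name you will see again in the tgt-admin and lvdisplay outputs below.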

You can see the iSCSI target here:

sylvain@folsom03:~$ sudo tgt-admin -s
Target 1: iqn.2010-10.org.openstack:volume-40e6e60f-2b38-4938-875a-9f3831430c2c
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 53687 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
            Backing store flags:
    Account information:
    ACL information:
        ALL
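The target name is nothing mysterious either: as far as I know it is just Cinder’s IQN prefix (the iscsi_target_prefix option, defaulting to iqn.2010-10.org.openstack: in Folsom — check your own cinder.conf if you changed it) followed by the volume name. A sketch of the construction:

```shell
# Assumed default prefix (iscsi_target_prefix) — verify in your cinder.conf
prefix="iqn.2010-10.org.openstack:"
vol_name="volume-40e6e60f-2b38-4938-875a-9f3831430c2c"

# Concatenation gives the IQN shown by tgt-admin -s
echo "${prefix}${vol_name}"
```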

You can also check that a logical volume has been created:

sylvain@folsom03:~$ sudo lvdisplay cinder-volumes
  --- Logical volume ---
  LV Name                /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
  VG Name                cinder-volumes
  LV UUID                DKLHUm-vXjy-BsDX-C6fU-PO2l-WbL3-BSesDd
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                50,00 GiB
  Current LE             12800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1
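The "Current LE 12800" line is consistent with the 50 GiB size: with LVM’s default physical extent size of 4 MiB, 50 GiB divided by 4 MiB gives 12800 extents. A quick check of the arithmetic:

```shell
# 50 GiB expressed in 4 MiB extents (LVM's default extent size)
lv_size_gib=50
extent_mib=4
echo $(( lv_size_gib * 1024 / extent_mib ))
```

If your VG was created with a non-default extent size (vgcreate -s), the LE count will differ accordingly.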

And if you want to attach it to an instance:

sylvain@folsom03:~$ nova volume-attach OSTOOLS01 40e6e60f-2b38-4938-875a-9f3831430c2c /dev/vdb
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 40e6e60f-2b38-4938-875a-9f3831430c2c |
| serverId | a2dac743-99d3-4b72-9c71-232de23cb5ec |
| volumeId | 40e6e60f-2b38-4938-875a-9f3831430c2c |
+----------+--------------------------------------+

sylvain@folsom03:~$ sudo tgt-admin -s
Target 1: iqn.2010-10.org.openstack:volume-40e6e60f-2b38-4938-875a-9f3831430c2c
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 2
            Initiator: iqn.1993-08.org.debian:01:71f5e64213fe
            Connection: 0
                IP Address: 172.16.0.4
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 53687 MB, Block size: 512
            Online: Yes
            Removable media: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
            Backing store flags:
    Account information:
    ACL information:
        ALL

Now you’re done for today.

Note (as of 2013/06/18): I just saw Giulio Fidente’s blog post about adding more volume nodes. My bad, I hadn’t checked whether my blog entry was redundant. Anyway, this one covers Folsom and Ubuntu, which is quite different. Giulio, if you’re reading this, please accept this blog entry as a vibrant homage to your own blog 😉
