How to compare two patchsets in Gerrit?

Apples & Oranges – They Don't Compare (Flickr, CC 2.0)

Reviewing is one of my daily duties. I try to dedicate around two hours each day to reading code, understanding its principles, checking that everything is OK from a Python perspective, verifying the test coverage, and finally deciding whether the change is good for the project I'm supporting and doesn't break anything, even when CI is happy.

All that can take time. And as I'm lazy, I really dislike reviewing again a change I previously approved when I'm sure the new patchset is only a rebase. So the question quickly came to my mind: how can I check whether two patchsets really differ?

Actually, there are many ways to do so with Gerrit and Git. The obvious one is to use the Gerrit UI and ask it to compare two patchsets.

The main problem is that it shows all the differences, including the changes coming from the rebase, so it's not really handy unless the change is very small.

Another way is to use the magical "-m" option of git-review, which rebases each patchset on master and compares them:

git review -m <CHANGE_NUMBER>,<OLD_PS>[-<NEW_PS>]

That's a pretty nice tool because it makes sure the diffs are computed against the same base (here, master), so it works fine provided your local master is up to date. That said, if rebasing your old patchset raises a conflict, you can get stuck, and the output is sometimes confusing.

I finally ended up doing something different: I fetch each patchset into a local branch and compare the diff each branch introduces against its own parent commit. Something like this:

vimdiff <(git diff ${MY_PS1_BRANCH}^ ${MY_PS1_BRANCH}) <(git diff ${MY_PS2_BRANCH}^ ${MY_PS2_BRANCH})

Here, MY_PS1_BRANCH is the local branch holding the previous patchset, and MY_PS2_BRANCH the local branch holding the new one.

By comparing each commit against its own parent (i.e. the change itself) on both branches, I'm quite sure I won't have to carry all the rebasing noise from my local master.
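To make the trick concrete, here is a sketch on a throwaway repository: the two local branches ps1 and ps2 (hypothetical names) stand in for two patchsets of the same change, which in real life you would fetch with git-review (`git review -d <change>,<ps>`) or a plain `git fetch` of the refs/changes/... ref:

```shell
# Throwaway repo with a base commit and two "patchsets" of the same change.
repo=$(mktemp -d) && cd "$repo" && git init -q
git -c user.name=me -c user.email=me@example.com commit -q --allow-empty -m "base"
echo "version 1" > file && git add file
git -c user.name=me -c user.email=me@example.com commit -q -m "my change (PS1)"
git branch ps1
git reset -q --hard HEAD~1
echo "version 2" > file && git add file
git -c user.name=me -c user.email=me@example.com commit -q -m "my change (PS2)"
git branch ps2

# The interdiff: what each patchset changes relative to its own parent.
# (vimdiff shows the same thing side by side; plain diff keeps it scriptable.)
diff <(git diff ps1^ ps1) <(git diff ps2^ ps2) || true
```

If the second patchset is a pure rebase of the first, the interdiff is essentially empty apart from blob-hash (index line) noise.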

Hook for tox compliance before a git-review

When PEP8 meets 300…

Shamelessly copied from OpenstackReactions

Everyone makes mistakes, in particular under pressure: you do a quick change-and-push to Gerrit without checking PEP8 compliance… How many times has Jenkins given a -1 to a patch because the PEP8 check failed?

The usual solution is to add a git hook that checks PEP8 compliance at commit time. I personally prefer running the check right *before* submitting the patch, which is what the hook below does.

Nothing original here: create a Python script in <your_project>/.git/hooks, name it pre-review, make it executable and fill it with:

#!/usr/bin/env python
import tox

# Run the "pep8" tox environment; tox exits non-zero when the check
# fails, which aborts the submission.
tox.cmdline(['-e', 'pep8'])
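Installing it boils down to dropping the script in place and making it executable; here is a sketch against a scratch repository (the mktemp playground is just a stand-in for <your_project>):

```shell
# Scratch checkout standing in for <your_project>.
project=$(mktemp -d) && cd "$project" && git init -q

# Drop the hook in .git/hooks and make it executable.
cat > .git/hooks/pre-review <<'EOF'
#!/usr/bin/env python
import tox

tox.cmdline(['-e', 'pep8'])
EOF
chmod +x .git/hooks/pre-review
```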

Hope it helps.



Baremetal driver and Devstack

(Reposting the article from

You may know that the Baremetal driver is quite experimental and planned to be replaced by the Ironic project. That said, recent improvements from the community make the Baremetal driver still very interesting to test. In order to get the latest updates, I tried to configure a Devstack for provisioning real baremetal hosts. Here are my notes from the install; I hope they can help some of you.

Configure your Devstack

$ git clone
$ cd devstack

Edit your localrc as below. Make sure to change the network and baremetal settings to match your own environment, of course.

# Credentials

# Logging

# Services
disable_service n-net
enable_service q-svc
enable_service q-agt
disable_service q-dhcp
disable_service q-l3
disable_service q-meta
enable_service neutron


#Neutron settings if FlatNetwork for Baremetal

# Baremetal Network settings

# Global baremetal settings for real nodes

# Change at least BM_FIRST_MAC to match the MAC address of the baremetal node to deploy

# IPMI credentials for the baremetal node to deploy

# Make sure to match your Devstack hostname

Start the stack.

$ ./

Prevent dnsmasq from serving other leases

As there is probably another DHCP server on the same subnet, we need to make sure the local dnsmasq won't serve other PXE or DHCP requests. One workaround is to deploy 75-filter-bootps-cronjob and filter-bootps from TripleO, which use iptables to blacklist all DHCP requests except the ones set up by the baremetal driver.
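I haven't reproduced the TripleO scripts here, but the idea behind them can be sketched as an iptables chain that lets DHCP (bootps, UDP/67) requests through only for the MAC addresses we manage. The function name and the MAC list are hypothetical; the real scripts keep the whitelist in sync via a cron job:

```shell
# Sketch of the filter-bootps idea: drop incoming DHCP requests except
# from the baremetal MACs we expect to deploy (needs root to take effect).
filter_bootps() {
    iptables -N BOOTPS 2>/dev/null || iptables -F BOOTPS
    for mac in "$@"; do
        iptables -A BOOTPS -m mac --mac-source "$mac" -j RETURN
    done
    iptables -A BOOTPS -j DROP
    iptables -C INPUT -p udp --dport 67 -j BOOTPS 2>/dev/null ||
        iptables -I INPUT -p udp --dport 67 -j BOOTPS
}
```

For instance, `filter_bootps $BM_FIRST_MAC` with the MAC address from your localrc.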

Create a single Ubuntu image with a few additions and add it to Glance

As Devstack only provides a CirrOS image, there is much benefit in deploying a custom Ubuntu image. Thanks to diskimage-builder, again provided by the TripleO folks (thanks, by the way!), we can add as many elements as we want.

$ git clone
$ git clone
$ export ELEMENTS_PATH=~/tripleo-image-elements/elements
$ diskimage-builder/bin/disk-image-create -u base local-config stackuser heat-cfntools -o ubuntu_xlcloud
$ diskimage-builder/bin/disk-image-get-kernel -d ./ -o ubuntu_xlcloud -i $(pwd)/ubuntu_xlcloud.qcow2
$ glance image-create --name ubuntu_xlcloud-vmlinuz --public --disk-format aki < ubuntu_xlcloud-vmlinuz
$ glance image-create --name ubuntu_xlcloud-initrd --public --disk-format ari < ubuntu_xlcloud-initrd
$ glance image-create --name ubuntu_xlcloud --public --disk-format qcow2 --container-format bare \
--property kernel_id=$UBUNTU_XLCLOUD_VMLINUZ_UUID --property ramdisk_id=$UBUNTU_XLCLOUD_INITRD_UUID < ubuntu_xlcloud.qcow2
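The UBUNTU_XLCLOUD_VMLINUZ_UUID and UBUNTU_XLCLOUD_INITRD_UUID variables in the last command are not set anywhere above. One way to fill them is a small helper that parses the glance client's table output; the helper name is mine, and the "| ID | Name | ..." column layout is assumed:

```shell
# Hypothetical helper: look up a Glance image ID by exact name
# (assumes "glance image-list" prints "| <ID> | <Name> | ..." rows).
image_uuid() {
    glance image-list | awk -v name="$1" '$4 == name {print $2}'
}

# Intended use, against a live Glance:
#   UBUNTU_XLCLOUD_VMLINUZ_UUID=$(image_uuid ubuntu_xlcloud-vmlinuz)
#   UBUNTU_XLCLOUD_INITRD_UUID=$(image_uuid ubuntu_xlcloud-initrd)
```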

Boot the stack!

Of course, we could provide a Heat template, but let's keep it simple for now:

$ nova keypair-add --pub-key ~/.ssh/ sylvain
$ nova boot --flavor bm.small --image ubuntu_xlcloud --key-name sylvain mynewhost

Puppet as a service in OpenStack: how to manage different VMs with the same hostname

Note: this doc applies to (and has been tested with) Puppet 3, but should also be compatible with Puppet 2.7. Let me know if you notice any differences with that version.

I came to Puppet while playing with OpenStack, trying to provide multiple lab environments for my company peers. The basic idea was to fire up a new project, start the VMs, change the configs, et voilà! Generating the manifests and modules was not too complicated, but I ran into an unexpected issue.

You may all know that Puppet's security handling is based on OpenSSL certificates, automatically generated on your Puppet agent and signed on your Puppet master node. The point is, the signed certificate name is the hostname of the machine the Puppet agent runs on.

That's no big deal in a classical baremetal environment where all nodes have different hostnames. Unfortunately, in a cloud environment the challenge is much more interesting: provided you have multiple tenants (or projects) but only one Puppet master, who can guarantee that two distinct VMs will have different hostnames?

Luckily, the Puppet documentation gives the solution: use Facter instead. Facter allows you to choose which fact to use for identifying your node. On my setup, I chose ipaddress. Take a look at the documentation first; there are some things to consider before using it.

Last but not least, using Facter for identifying the node doesn't prevent you from signing certificates on the master side; it's a matter of choosing which level of security you want. I decided to use only one certificate for all my instances. That means you have to deliver the certificate and the public and private keys to all your VMs yourself, either with a userdata script or by making sure they're baked into your images.

Enough talking. First, generate the global certificate (plus the private and public keys):

[root@mymaster ~]# puppet cert --generate globalcert.openstacklocal

Finding out where the SSL certs are stored on a Puppet agent is pretty straightforward:

[root@mynode ~]# puppet agent --configprint ssldir

So, by copying:
– /var/lib/puppet/ssl/ca/signed/globalcert.openstacklocal.pem from the master node to /var/lib/puppet/ssl/certs/ on the agent node,
– /var/lib/puppet/ssl/private_keys/globalcert.openstacklocal.pem from the master node to /var/lib/puppet/ssl/private_keys/ on the agent node,
you'll end up with a Puppet agent whose certificate is already signed.
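Those two copies can be scripted; below is a hedged sketch assuming the default ssldir (/var/lib/puppet/ssl) on both sides and SSH access from the master to the agent. The function name and the agent hostname argument are hypothetical:

```shell
# Push the shared certificate and private key from the master to an agent.
copy_puppet_cert() {
    local agent=$1
    local cert=globalcert.openstacklocal.pem
    local ssl=/var/lib/puppet/ssl
    # Signed certificate: master CA store -> agent certs directory.
    scp "$ssl/ca/signed/$cert" "$agent:$ssl/certs/$cert"
    # Private key: master private_keys -> agent private_keys.
    scp "$ssl/private_keys/$cert" "$agent:$ssl/private_keys/$cert"
}
```

Run `copy_puppet_cert mynode` from the master for each new agent.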

By now, you have a basic agent-to-master authentication model still relying on the certificate name. Let's change it to use Facter's ipaddress fact instead.

On the master, edit /etc/puppet/puppet.conf as follows:

node_name = facter

Once done, allow the global certificate to retrieve information from the master by editing /etc/puppet/auth.conf:

# allow nodes to retrieve their own catalog
path ~ ^/catalog/([^/]+)$
method find
allow $1
allow globalcert.openstacklocal

# allow nodes to retrieve their own node definition
path ~ ^/node/([^/]+)$
method find
allow $1
allow globalcert.openstacklocal

# allow all nodes to access the certificates services
path /certificate_revocation_list/ca
method find
allow *

# allow all nodes to store their own reports
path ~ ^/report/([^/]+)$
method save
allow $1
allow globalcert.openstacklocal

Everything is now ready on the Puppet master; don't forget to restart it:

[root@mymaster ~]# service puppetmaster restart

Now, change /etc/puppet/puppet.conf on the Puppet agent so it identifies itself with Facter's ipaddress fact instead of its certificate name:

certname = globalcert.openstacklocal
node_name = facter
node_name_fact = 'ipaddress'

You're done. You can now use the private IP address to specify a node in a manifest, like this:

node '' inherits trunk {


(where is the Puppet agent IP address)

Hope you enjoy it.

Adding a second Cinder-volume with Folsom


Eleven Drive Monstrosity (Flickr, Creative Commons 2.0)

Most of the time, you have to deal with disk space. Most of the time, it's growing. And most of the time, you run (or plan to run) out of space. When dealing with OpenStack and Cinder, the question is: can you increase your backend size (maybe by adding a new PV to your LVM volume group), or not (can you afford downtime)?

If the answer is no, then add a second cinder-volume to your current Cinder setup. Below is how to do that. The steps correspond to an Ubuntu 12.04 OS, an LVM storage backend and RabbitMQ as the AMQP broker, provided you already have a single Cinder setup with one cinder-volume.

First of all, create the LVM volume group on the new server.

sylvain@folsom03:~# sudo pvcreate /dev/sdc1 && sudo pvcreate /dev/sdd1 && sudo vgcreate cinder-volumes /dev/sdc1 /dev/sdd1

Once done, install only the cinder-volume package. It will automatically pull in the cinder-common package, which contains all the configuration files and tools.

sylvain@folsom03:~# sudo apt-get -y install cinder-volume

Now you have to modify the configuration files to fit your installation. Here, (aka folsom01) is my controller node IP, which also hosts my current all-in-one Cinder setup; (aka folsom03) is the new cinder-volume backend and (aka folsom04) is one of my compute nodes (for testing purposes).

sylvain@folsom03:~# sudo vi /etc/cinder/cinder.conf
sql_connection = mysql://root:nova@
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
iscsi_ip_address =

rabbit_host =
rabbit_port = 5672
glance_api_servers =

sylvain@folsom03:~# sudo vi /etc/cinder/api-paste.ini
# Openstack #

[composite:osapi_volume]
use = call:cinder.api.openstack.urlmap:urlmap_factory
/: osvolumeversions
/v1: openstack_volume_api_v1

[composite:openstack_volume_api_v1]
use = call:cinder.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth osapi_volume_app_v1
keystone = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1

[filter:faultwrap]
paste.filter_factory = cinder.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = cinder.api.openstack.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_volume_app_v1]
paste.app_factory = cinder.api.openstack.volume:APIRouter.factory

[pipeline:osvolumeversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.openstack.volume.versions:Versions.factory

# Shared #

[filter:keystonecontext]
paste.filter_factory = cinder.api.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host =
service_port = 5000
auth_host =
auth_port = 35357
auth_protocol = http
admin_tenant_name = %TENANTNAME%
admin_user = %ADMINUSERNAME%
admin_password = %ADMINPASSWORD%

You can now safely restart both the tgt and cinder-volume services (they were already started at installation time).

sylvain@folsom03:~# sudo service tgt restart
sylvain@folsom03:~# sudo service cinder-volume restart

Let's check that everything is working:

sylvain@folsom03:~# sudo cinder-manage host list
host zone
folsom01 nova
folsom03 nova

You can create a new volume (we'll attach it to an instance in a moment):

sylvain@folsom03:~$ cinder create --display_name TEST_VOLUME 50
| Property | Value |
| attachments | [] |
| availability_zone | nova |
| created_at | 2013-05-31T13:56:14.206466 |
| display_description | None |
| display_name | TEST_VOLUME |
| id | 40e6e60f-2b38-4938-875a-9f3831430c2c |
| metadata | {} |
| size | 50 |
| snapshot_id | None |
| status | creating |
| volume_type | None |

You can see the iSCSI target here:

sylvain@folsom03:~$ sudo tgt-admin -s
Target 1:
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 53687 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
Backing store flags:
Account information:
ACL information:

You can also check that a logical volume has been created:

sylvain@folsom03:~$ sudo lvdisplay cinder-volumes
--- Logical volume ---
LV Name /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
VG Name cinder-volumes
LV Write Access read/write
LV Status available
# open 1
LV Size 50,00 GiB
Current LE 12800
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:1

And if you want to attach it to an instance:

sylvain@folsom03:~$ nova volume-attach OSTOOLS01 40e6e60f-2b38-4938-875a-9f3831430c2c /dev/vdb
| Property | Value |
| device | /dev/vdb |
| id | 40e6e60f-2b38-4938-875a-9f3831430c2c |
| serverId | a2dac743-99d3-4b72-9c71-232de23cb5ec |
| volumeId | 40e6e60f-2b38-4938-875a-9f3831430c2c |

sylvain@folsom03:~$ sudo tgt-admin -s
Target 1:
System information:
Driver: iscsi
State: ready
I_T nexus information:
I_T nexus: 2
Connection: 0
IP Address:

LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Readonly: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 53687 MB, Block size: 512
Online: Yes
Removable media: No
Readonly: No
Backing store type: rdwr
Backing store path: /dev/cinder-volumes/volume-40e6e60f-2b38-4938-875a-9f3831430c2c
Backing store flags:
Account information:
ACL information:

That's all for today.

Note (as of 2013/06/18): I just saw Giulio Fidente's blog post about adding more volume nodes. My bad, I hadn't checked whether my blog entry was redundant. Anyway, this one talks about Folsom and Ubuntu, which is pretty different. Giulio, if you're reading me, please accept this blog entry as a vibrant homage to your own blog 😉