Puppet as a service in OpenStack: how to manage different VMs with the same hostname

Note: this doc applies to (and has been tested with) Puppet 3, and should also be compatible with Puppet 2.7. Let me know if you run into differences with that version.

I came to Puppet while playing with OpenStack, trying to provide multiple lab environments for my colleagues. The basic idea was to fire up a new project, start the VMs, tweak the configs, et voilà! Generating the manifests and modules was not too complicated, but I ran into an unexpected issue.

As you may know, Puppet handles security with OpenSSL certificates: one is automatically generated on each Puppet agent and must be signed on the Puppet master node. The catch is that the certificate name defaults to the hostname of the machine the Puppet agent runs on.

That’s no big deal in a classical bare-metal environment where every node has a distinct hostname. In a cloud environment, however, things get much more interesting: with multiple tenants (or projects) sharing a single Puppet master, who can guarantee that two distinct VMs will have different hostnames?

Luckily, the Puppet doc gives the solution: use Facter instead. Facter lets you choose which fact identifies your node; on my setup, I chose ipaddress. Take a look at the documentation first, as there are a few things to consider before using it.

Last but not least, using Facter to identify the node doesn’t exempt you from signing certificates on the master side; it’s a matter of choosing which level of security you want. I decided to use a single certificate for all my instances. That means you have to deliver the certificate plus the public and private keys to every VM yourself, either through a userdata script or by making sure they’re baked into your images.
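For the userdata route, a minimal cloud-init sketch could look like the following (assuming your images run cloud-init; the PEM bodies are placeholders for your real files):

```yaml
#cloud-config
# Sketch only: drop the pre-signed certificate and private key into the
# agent's ssldir before the first puppet run. Replace the placeholder
# PEM bodies with your real globalcert.openstacklocal files.
write_files:
  - path: /var/lib/puppet/ssl/certs/globalcert.openstacklocal.pem
    permissions: '0644'
    content: |
      -----BEGIN CERTIFICATE-----
      ...your signed certificate here...
      -----END CERTIFICATE-----
  - path: /var/lib/puppet/ssl/private_keys/globalcert.openstacklocal.pem
    permissions: '0600'
    content: |
      -----BEGIN RSA PRIVATE KEY-----
      ...your private key here...
      -----END RSA PRIVATE KEY-----
```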

Enough talking. First, generate the global certificate (plus the private and public keys):

[root@mymaster ~]# puppet cert --generate globalcert.openstacklocal

Finding out where the SSL certs are stored on a Puppet agent is pretty straightforward:

[root@mynode ~]# puppet agent --configprint ssldir

So, by copying:
- /var/lib/puppet/ssl/ca/signed/globalcert.openstacklocal.pem from the master node to /var/lib/puppet/ssl/certs/ on the agent node,
- /var/lib/puppet/ssl/private_keys/globalcert.openstacklocal.pem from the master node to /var/lib/puppet/ssl/private_keys/ on the agent node,
you’ll end up with an already-signed Puppet agent.
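Since a stray copy/paste can easily pair the wrong key with the certificate, here is a small sanity-check helper (my own addition, not part of Puppet) that uses openssl to compare the public moduli of the copied certificate and private key:

```shell
#!/bin/sh
# check_cert_key_match CERT KEY
# Returns 0 when the certificate and the RSA private key share the same
# public modulus, i.e. they actually belong together.
check_cert_key_match() {
    cert_mod=$(openssl x509 -noout -modulus -in "$1" 2>/dev/null) || return 1
    key_mod=$(openssl rsa -noout -modulus -in "$2" 2>/dev/null) || return 1
    [ -n "$cert_mod" ] && [ "$cert_mod" = "$key_mod" ]
}

# On the agent, after copying the two files:
# check_cert_key_match \
#     /var/lib/puppet/ssl/certs/globalcert.openstacklocal.pem \
#     /var/lib/puppet/ssl/private_keys/globalcert.openstacklocal.pem \
#     && echo "certificate and key match"
```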

By now, you have a basic agent-to-master authentication model still relying on the certificate name. Let’s switch it to Facter’s ipaddress instead.

On the master, please edit /etc/puppet/puppet.conf as follows (in the [master] section):

[master]
node_name = facter

Once that’s done, grant the global certificate the right to retrieve information from the master by editing /etc/puppet/auth.conf:

# allow nodes to retrieve their own catalog
path ~ ^/catalog/([^/]+)$
method find
allow $1
allow globalcert.openstacklocal

# allow nodes to retrieve their own node definition
path ~ ^/node/([^/]+)$
method find
allow $1
allow globalcert.openstacklocal

# allow all nodes to access the certificates services
path /certificate_revocation_list/ca
method find
allow *

# allow all nodes to store their own reports
path ~ ^/report/([^/]+)$
method save
allow $1
allow globalcert.openstacklocal

Everything is now ready on the Puppet master; don’t forget to restart it:

[root@mymaster ~]# service puppetmaster restart

Now, change /etc/puppet/puppet.conf on the Puppet agent so that it identifies itself with Facter’s ipaddress instead of the certificate name:

[agent]
certname = globalcert.openstacklocal
node_name = facter
node_name_fact = ipaddress

You’re done. You can now use the agent’s private IP address to identify a node in a manifest, like this:

node '' inherits trunk {
  # node-specific configuration goes here
}

(where the empty quotes hold the Puppet agent’s IP address)
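For illustration, with a made-up address (10.0.0.12 is purely hypothetical, as are the common and apache classes), such a manifest could look like:

```puppet
# 'trunk' is the base node that the per-VM nodes inherit from.
node 'trunk' {
  include common
}

# This node is matched on its Facter-reported ipaddress,
# not on its (possibly duplicated) hostname.
node '10.0.0.12' inherits trunk {
  include apache
}
```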

Enjoy!