Creating a virtual grid with libvirt + debian preseeds + puppet + IceGrid
This recipe explains how to build a grid composed of 6 Debian virtual machines running IceGrid. Once the setup is done, the whole process can be executed with absolutely no user interaction. The process takes advantage of libvirt (for virtual machine installation), Debian preseeds (for unattended installation) and puppet (for configuration management).
The resulting virtual grid is suitable for testing and continuous integration of distributed applications (using ZeroC IceGrid in this case).
WARNING: there is pending work in this post.
ingredients
Install the following packages:
- libvirt-bin
- virtinst
- icegrid-gui
- libguestfs-tools
- guestfish
guestfs
Some scripts need guestfs, which requires a supermin appliance. To build it, execute:
# update-guestfs-appliance
virtual network
We will use the following network definition. Write it to a file named net-grid.xml:
[ net-grid.xml ]
<network>
  <name>grid</name>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:6F:5E:D8'/>
  <dns>
    <host ip='192.168.1.1'>
      <hostname>node0</hostname>
      <hostname>puppet</hostname>
    </host>
  </dns>
  <ip address='192.168.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.2' end='192.168.1.254'/>
      <host mac='00:AA:00:00:00:11' name='node1' ip='192.168.1.11'/>
      <host mac='00:AA:00:00:00:12' name='node2' ip='192.168.1.12'/>
      <host mac='00:AA:00:00:00:13' name='node3' ip='192.168.1.13'/>
      <host mac='00:AA:00:00:00:14' name='node4' ip='192.168.1.14'/>
      <host mac='00:AA:00:00:00:15' name='node5' ip='192.168.1.15'/>
      <host mac='00:AA:00:00:00:16' name='node6' ip='192.168.1.16'/>
    </dhcp>
  </ip>
</network>
# virsh net-define net-grid.xml
# virsh net-start grid
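Optionally, you can check that the network is up and make libvirt start it automatically on boot (both are standard virsh subcommands):
# virsh net-list --all
# virsh net-autostart grid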
node1 installation
We make a minimal Debian installation on node1 and then create 5 clones from it. The next script automates the installation process. Remember that you need the preseed.cfg file in the current directory (the script serves it over HTTP using Python).
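The preseed.cfg itself is not included in this post. As a reference only, a minimal unattended install could start from directives like these (the locale, passwords and partitioning choices below are my assumptions, not the file the author used):
[ preseed.cfg ]
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string babel.esi.uclm.es
d-i mirror/http/directory string /debian
d-i passwd/root-password password secret
d-i passwd/root-password-again password secret
d-i passwd/make-user boolean false
d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select atomic
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i pkgsel/include string puppet openssh-server
d-i grub-installer/only_debian boolean true
d-i finish-install/reboot_in_progress note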
[ install.sh ]
#!/bin/bash --
export LIBVIRT_DEFAULT_URI=qemu:///system
name=node1

# remove any previous definition of the domain
virsh destroy $name
virsh undefine $name

# serve preseed.cfg from the current directory (port 80 requires root);
# remember to kill this server once the installation has finished
python -m SimpleHTTPServer 80 &

# the guest fetches preseed.cfg from the host, which is 192.168.1.1
# on the grid network defined above
virt-install \
  --debug \
  --name=$name \
  --ram=512 \
  --disk ./$name.img,size=1.2,sparse=true \
  --location=http://babel.esi.uclm.es/debian/dists/testing/main/installer-i386 \
  --network=network=grid,mac=00:AA:00:00:00:11 \
  --extra-args="\
auto=true priority=critical vga=normal hostname=$name \
url=http://192.168.1.1/preseed.cfg"
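If you want to watch the unattended installation progress, you can attach to the guest console (virt-viewer is a separate package; this step is optional):
$ virt-viewer --connect qemu:///system node1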
cloning nodes
Once the installation has finished, we can make the clones. To build clean clones, some settings established during installation must be removed. Thanks to guestfs, it is easy to remove those settings using libguestfs-tools:
# export LIBVIRT_DEFAULT_URI=qemu:///system
# virt-sysprep -d node1 --enable udev-persistent-net
This is a good moment to set up the puppet agent execution using cron. To do this, just copy a crontab file into the guest filesystem:
# export LIBVIRT_DEFAULT_URI=qemu:///system
# guestfish --rw -i -d node1 copy-in puppet/puppet /etc/cron.d/
You need the puppet directory containing the crontab file (named puppet); a sketch of it appears below.
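The contents of that file are not shown in the post. A minimal sketch, assuming you want the agent to run every 5 minutes against the master aliased as puppet (the schedule is my assumption; the flags are standard puppet agent options):
[ puppet/puppet ]
# /etc/cron.d entry: run the puppet agent every 5 minutes
*/5 * * * * root /usr/bin/puppet agent --onetime --no-daemonize --server puppet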
It is time to clone:
export LIBVIRT_DEFAULT_URI=qemu:///system
for i in $(seq 2 6); do
    virt-clone --original node1 --name node$i --file ./node$i.img --mac 00:AA:00:00:00:1$i
    virt-sysprep -d node$i --enable hostname --hostname node$i
done
The script sets the MAC address and hostname for node2 to node6. Due to the network definition, these hosts will get predefined IP addresses.
starting nodes
export LIBVIRT_DEFAULT_URI=qemu:///system
for dom in $(virsh list --all --name); do
    virsh start $dom
done
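After a few seconds you can check that all six domains are running:
# virsh list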
puppetmaster
You need to install puppetmaster on the host machine. The next commands configure the master to autosign agent certificates. That is obviously not secure, but it is convenient for our purposes here. As soon as the nodes send their certificate requests, the master will sign them, and then all of them are ready to apply the manifest.
# hostname puppet
# apt-get install puppetmaster
# echo '*' > /etc/puppet/autosign.conf
# service puppetmaster restart
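Once the agents have checked in, you can verify that their certificates were signed (puppet cert is the standard certificate management subcommand):
# puppet cert --list --all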
writing the puppet manifest
Our initial manifest is pretty simple. We want to install a couple of packages (ice34-services and psmisc) and configure the icegridnode service. Copy the following into /etc/puppet/manifests/site.pp:
package { 'ice34-services':
  ensure => present,
}

package { 'psmisc':
  ensure => present,
}

file { '/etc/icegrid':
  ensure => 'directory',
  mode   => '0755',
  owner  => root,
}

file { ['/var/icegrid', '/var/icegrid/db']:
  ensure => 'directory',
  mode   => '0755',
  owner  => root,
}

file { '/etc/icegrid/node.config':
  mode    => '0644',
  owner   => root,
  content => "
Ice.Default.Locator=IceGrid/Locator -t:tcp -h node0 -p 4061
IceGrid.Node.Name=${hostname}
IceGrid.Node.Data=/var/icegrid/db
IceGrid.Node.Endpoints=tcp
",
}

service { 'icegridnode':
  ensure   => running,
  provider => 'base',
  start    => '/usr/bin/icegridnode --Ice.Config=/etc/icegrid/node.config &',
  stop     => 'killall icegridnode',
  status   => 'pgrep icegridnode',
  require  => Package['ice34-services', 'psmisc'],
}
This is a very rough manifest, but it is enough for this case.
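If you do not want to wait for the cron entry, you can trigger a run by hand from any guest to check that the manifest applies cleanly (standard puppet agent flags):
# puppet agent --onetime --no-daemonize --verbose --server puppet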
IceGrid configuration
You can see in the site.pp manifest that we are configuring the icegrid nodes to connect to the IceGrid registry on node0. node0 is an alias for the host computer's private IP address (see net-grid.xml). The manifest creates the directories and files required to get icegridnode running. Note that this is not the right way to execute the icegridnode service: it should have an init.d script so that puppet can manage it easily.
You must execute an IceGrid registry on the host computer. You will need the file registry.config; a sketch of it appears below.
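The post does not include registry.config. A minimal sketch, assuming an icegridnode with a collocated registry acting as node0 (the data paths and the null permissions verifiers are my assumptions; the locator endpoint matches the manifest above):
[ registry.config ]
IceGrid.Node.Name=node0
IceGrid.Node.Data=/var/icegrid/db
IceGrid.Node.Endpoints=tcp
IceGrid.Node.CollocateRegistry=1
IceGrid.Registry.Client.Endpoints=tcp -p 4061
IceGrid.Registry.Server.Endpoints=tcp
IceGrid.Registry.Internal.Endpoints=tcp
IceGrid.Registry.Data=/var/icegrid/registry
IceGrid.Registry.PermissionsVerifier=IceGrid/NullPermissionsVerifier
IceGrid.Registry.AdminPermissionsVerifier=IceGrid/NullPermissionsVerifier
IceGrid.Registry.DynamicRegistration=1
Ice.Default.Locator=IceGrid/Locator -t:tcp -h node0 -p 4061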
$ icegridnode --Ice.Config=icegrid/registry.config
Then, you may connect to the registry with:
$ icegrid-gui --Ice.Config=icegrid/locator.config
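locator.config only needs the default locator; a one-line sketch (it uses the host's address on the grid network, since the host itself may not resolve the node0 alias):
[ locator.config ]
Ice.Default.Locator=IceGrid/Locator -t:tcp -h 192.168.1.1 -p 4061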
If everything is working fine, you will see the 6 nodes in the “Live Deployment” tab of icegrid-gui.
references
- Thanks to magmax for the help with puppet.