Projekt KVM Cluster
Latest revision as of 12:59, 5 May 2022
KVM Cluster
The IDG cluster is a 2-node heartbeat/DRBD cluster running on SLES. It is an active/active configuration for two service stacks, with a DB2 database on top of each.
Testcluster
There is a 3-node test cluster showing what heartbeat and DRBD can do. It was originally set up as SUSE 9.3 VMware images. I transferred it to KVM, since VMware on Linux is a pain (kernel updates). Cluster status can be accessed here (if it is running): https://banzhaf.chickenkiller.com:444/job1/jobs1/crm_mon.cgi
vmware files
suse-9.3-c1: total 3.1G
-rw------- 1 joachim users 8.5K Dec 13  2005 nvram
-rw------- 1 joachim users 8.5K Nov 27  2005 nvram.sav
-rw------- 1 joachim users 1.4G Nov 27  2005 SUSE-Linux-9.3-s001.vmdk
-rw------- 1 joachim users 1.4G Nov 27  2005 SUSE-Linux-9.3-s002.vmdk
-rw------- 1 joachim users  37M Nov 27  2005 SUSE-Linux-9.3-s003.vmdk
-rw------- 1 joachim users 277M Nov 27  2005 SUSE-Linux-9.3-s004.vmdk
-rw------- 1 joachim users 9.2M Nov 27  2005 SUSE-Linux-9.3-s005.vmdk
-rw------- 1 joachim users  64K Nov 27  2005 SUSE-Linux-9.3-s006.vmdk
-rw------- 1 joachim users  586 Nov 27  2005 SUSE-Linux-9.3.vmdk
-rwxr-xr-x 1 joachim users 2.7K Feb 26 07:57 SUSE-Linux-9.3.vmx
-rw-r--r-- 1 joachim users  269 Feb 26 07:57 SUSE-Linux-9.3.vmxf
-rw------- 1 joachim users 2.6K Dec 13  2005 SUSE-Linux-9.3.vmx.sav
-rw-r--r-- 1 joachim users  30K Dec 13  2005 vmware.log
vmware image config
$ cat suse-9.3-c1/SUSE-Linux-9.3.vmx
#!/usr/bin/vmware
.encoding = "UTF-8"
config.version = "7"
virtualHW.version = "3"
scsi0.present = "TRUE"
memsize = "384"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "SUSE-Linux-9.3.vmdk"
ide1:0.present = "TRUE"
ide1:0.fileName = "/dev/dvd"
ide1:0.deviceType = "atapi-cdrom"
floppy0.startConnected = "FALSE"
floppy0.fileName = "/dev/fd0"
Ethernet0.present = "TRUE"
Ethernet0.connectionType = "custom"
displayName = "SUSE Linux 9.3 Cluster Node 1"
guestOS = "suse"
priority.grabbed = "normal"
priority.ungrabbed = "normal"
powerType.powerOff = "soft"
powerType.powerOn = "soft"
powerType.suspend = "soft"
powerType.reset = "soft"
Ethernet0.vnet = "/dev/vmnet1"
Ethernet1.present = "TRUE"
Ethernet1.connectionType = "custom"
Ethernet1.vnet = "/dev/vmnet2"
Ethernet2.present = "TRUE"
Ethernet2.connectionType = "custom"
Ethernet2.vnet = "/dev/vmnet3"
Ethernet3.present = "TRUE"
Ethernet3.connectionType = "custom"
Ethernet3.vnet = "/dev/vmnet4"
usb.present = "FALSE"
#serial0.present = "TRUE"
#serial0.fileType = "pipe"
#serial0.fileName = "/tmp/vmware-suse-9.3-cluster-serial-1"
#serial1.present = "TRUE"
#serial1.fileType = "pipe"
#serial1.fileName = "/tmp/vmware-suse-9.3-cluster-serial-3"
#serial1.pipe.endPoint = "client"
undopoint.action = "prompt"
annotation = "Minimal Suse 9.3 pro for Cluster Nodes.|0AYou date: 2005-10-19|0AHOWTO Clone:|0ACopy everything to new directory|0Acustomize SUSE-Linux-9.3.vmx:|0A change Display Name|0A change Annotation|0A change ethernetX.address to unique values|0Astart vm|0A change /etc/HOSTNAME|0A change /etc/hosts if host ips change|0A change /etc/sysconfig/network/ifcfg-eth-X|0A"
uuid.location = "56 4d e7 5d 52 90 b5 59-b1 00 58 b7 15 2f 14 1a"
uuid.bios = "56 4d e7 5d 52 90 b5 59-b1 00 58 b7 15 2f 14 1a"
# vmwareid cl nd if
ethernet0.address = "00:50:56:01:01:00"
ethernet1.address = "00:50:56:01:01:01"
ethernet2.address = "00:50:56:01:01:02"
ethernet3.address = "00:50:56:01:01:03"
tools.remindInstall = "FALSE"
undopoints.seqNum = "0"
scsi0:0.mode = "undoable"
undopoint.restoreFromCheckpoint = "FALSE"
undopoint.checkpointedOnline = "FALSE"
scsi0:0.redo = "./SUSE-Linux-9.3.vmdk.REDO_a2m6ht"
ide1:0.startConnected = "FALSE"
tools.syncTime = "TRUE"
undopoint.protected = "FALSE"
gui.restricted = "FALSE"
Ethernet0.addressType = "static"
Ethernet1.addressType = "static"
Ethernet2.addressType = "static"
Ethernet3.addressType = "static"
isolation.tools.hgfs.disable = "TRUE"
extendedConfigFile = "SUSE-Linux-9.3.vmxf"
scsi0:0.deviceType = "scsi-hardDisk"
virtualHW.productCompatibility = "hosted"
sound.present = "FALSE"
Convert the image from vmdk to raw format
Easy as that
qemu-img convert SUSE-Linux-9.3.vmdk SUSE-Linux-9.3.raw
Setup host networking
- Enable the KVM host via the YaST module "Install Hypervisor and Tools"
- Enable bridging with the public interface of the host (here eth0 -> br0)
- For each VMware EthernetX configuration, create a virtual network with virt-manager (connection details):
- vmnet1 -> virbr1 (same subnet and type, here 192.168.100.0/24 NAT)
- vmnet2 -> virbr2 (same subnet and type, here 192.168.101.0/24 isolated)
- vmnet3 -> virbr3 (same subnet and type, here 192.168.102.0/24 isolated)
- vmnet4 -> virbr4 (same subnet and type, here 192.168.103.0/24 isolated)
- Enable access to the bridges for qemu in /etc/qemu/bridge.conf (e.g. add the line "allow all")
- Enable IP forwarding on the host to let the VMs access the outside world (if wanted)
Warning: do not restart networking on the VM host; that destroys the bridge config and isolates the VMs.
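For the bridge-access step, /etc/qemu/bridge.conf only needs a single rule. "allow all" is the coarse variant used here; per-bridge lines such as "allow virbr1" would be stricter:

```
allow all
```

IP forwarding can be toggled at runtime with sysctl -w net.ipv4.ip_forward=1; depending on the SUSE release, persistence goes into /etc/sysctl.conf (net.ipv4.ip_forward = 1) or IP_FORWARD in /etc/sysconfig/sysctl.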
Make kvm startup script
#!/bin/bash # start cluster node 1 pass=somepw port=someport if ! ps auxwww | grep -q "[5]64de75d-5290-b559-b100-58b7152f141a"; then qemu-kvm \ -drive file=SUSE-Linux-9.3.raw,if=ide \ -uuid 564de75d-5290-b559-b100-58b7152f141a \ -m 384 \ -rtc base=localtime \ -soundhw ac97 \ -usb -usbdevice tablet \ -net nic,vlan=0,model=e1000,macaddr=00:50:56:01:01:00 \ -net bridge,vlan=0,br=virbr1 \ -net nic,vlan=1,model=e1000,macaddr=00:50:56:01:01:01 \ -net bridge,vlan=1,br=virbr2 \ -net nic,vlan=2,model=e1000,macaddr=00:50:56:01:01:02 \ -net bridge,vlan=2,br=virbr3 \ -net nic,vlan=3,model=e1000,macaddr=00:50:56:01:01:03 \ -net bridge,vlan=3,br=virbr4 \ -vga qxl \ -enable-kvm \ -device virtio-serial \ -chardev spicevmc,id=vdagent,name=vdagent \ -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 \ -spice port=$port,addr=localhost,password=$pass \ -name suse-9.3-c1 & fi exec spicec -h localhost -p $port -t 'Suse-9.3-C1' -w $pass
- The uuid (in the grep pattern and the -uuid option) comes from the vmx file
- The macaddr values also come from the vmx file; this way the guest network will just work
- The [5] bracket in the grep pattern keeps grep from matching its own command line, so the script only detects an actually running instance
The drive interface is IDE, because the SCSI drivers of the guests are not compatible with the KVM SCSI controller. That means references to the disk sda need to be changed to hda (on the GRUB screen when the guest boots, and later in /boot/grub/menu.lst and /etc/fstab).
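The sda -> hda renames are mechanical and can be scripted; a minimal sketch (replace_sda is a hypothetical helper name, not part of the cluster setup):

```shell
#!/bin/sh
# Sketch: rename sda to hda in the given config files.
replace_sda() {
    for f in "$@"; do
        # keep a .bak copy in case the rename hits something unexpected
        sed -i.bak 's/sda/hda/g' "$f"
    done
}

# In the guest this would be:
# replace_sda /boot/grub/menu.lst /etc/fstab
```

This only covers the files on disk; the first boot still needs the manual edit on the GRUB screen, since the image has not been changed yet at that point.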
Image config
- edit /boot/grub/menu.lst
- change sda to hda everywhere
- change to vga=0 if another mode is set and the spice display looks awful
- edit /etc/fstab
- change sda to hda everywhere
- edit /etc/resolv.conf
- use the VM host's nameserver (or none, if you disabled IP forwarding on the host)
- edit /etc/ntp.conf
- use a valid, reachable NTP server (usually at least the VM host works: 192.168.1.1)
- collect syslog warnings on the host
- edit /etc/syslog-ng/syslog-ng.conf.in: add udp(...);
destination warn { file("/var/log/warn" fsync(yes)); udp("192.168.1.1" port(514)); };
- use the new config
SuSEconfig
rcsyslog restart
- on the target host, using rsyslog
- enable udp in /etc/rsyslog.d/remote.conf
- add the vm bridges virbr1-4 to the internal network in /etc/sysconfig/SuSEfirewall2
- restart firewall and syslog
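On the rsyslog side, enabling UDP reception in /etc/rsyslog.d/remote.conf amounts to the following two legacy-syntax directives (often already present and only commented out):

```
$ModLoad imudp
$UDPServerRun 514
```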
Update heartbeat
Source tar ball
wget http://pkgs.fedoraproject.org/repo/pkgs/heartbeat/heartbeat-2.1.3.tar.gz/bca53530a3802f7677772323047405cd/heartbeat-2.1.3.tar.gz
Package source
http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/
Packages needed for compile
wget ftp://bo.mirror.garr.it/pub/1/suse/discontinued/i386/9.3/suse/i586/python-devel-2.4-14.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/discontinued/i386/9.3/suse/i586/python-tk-2.4-14.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/discontinued/i386/9.3/suse/i586/blt-2.4z-205.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/ftp.suse.com/projects/tcl/8.4.12/9.2-i386/tk-8.4.12-2.1.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/ftp.suse.com/projects/tcl/8.4.12/9.2-i386/tcl-8.4.12-5.1.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/discontinued/i386/9.3/suse/i586/gdbm-devel-1.8.3-230.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/gnutls-1.2.0-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/gnutls-devel-1.2.0-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/lzo-1.08-107.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/libopencdk-0.5.5-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/libgcrypt-1.2.1-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/libgcrypt-devel-1.2.1-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/libgpg-error-1.0-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/libgpg-error-devel-1.0-3.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/e2fsprogs-devel-1.36-5.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/pam-devel-0.78-8.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/discontinued/i386/9.3/suse/i586/readline-devel-5.0-7.2.i586.rpm
wget ftp://bo.mirror.garr.it/pub/1/suse/discontinued/i386/9.3/suse/i586/readline-5.0-7.2.i586.rpm
Make rpms
tar xzf heartbeat-2.1.3.tar.gz
cd heartbeat-2.1.3
./ConfigureMe package
Install new heartbeat version
Disable automatic heartbeat startup for now; otherwise you can get trapped in a reboot loop, because the nodes restart themselves on config errors with the new version.
The install itself is the usual rpm -Uhv. The non-devel packages from the build preparation above need to be installed on the other nodes, too.
After that, expect lots of warnings and errors in the CIB, because the DTD has changed. Most important: boot and cluster configuration now live in separate tags, and every object needs an id.
The RAs also need to be more OCF-compliant than before: heartbeat now checks the resource status at startup. The db2 script in particular failed on stop/status when the underlying filesystem was not available (which is normal at startup); it needs to simply report success/stopped.
Therefore I copied the original db2 resource script from /usr/lib/ocf/resource.d/heartbeat to /usr/lib/ocf/resource.d/listec, implemented the necessary behavior and changed the CIB to use the listec provider for db2.
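The required behavior can be sketched as follows. This is not the actual listec script; DB2_FS is an assumed example mount point and the real probe/shutdown logic is elided:

```shell
#!/bin/sh
# Sketch: OCF-style status/stop for db2 when the backing filesystem
# may be absent (which is normal before the Filesystem resource starts).
DB2_FS="${DB2_FS:-/db2/jobs1}"   # assumed path, not the real cluster config

db2_status() {
    # No filesystem -> the database is simply not running
    [ -d "$DB2_FS" ] || return 7   # OCF_NOT_RUNNING
    # ... probe the running instance here ...
    return 0                       # OCF_SUCCESS
}

db2_stop() {
    # Stopping an already-stopped resource must report success,
    # otherwise heartbeat flags the resource as failed at startup
    db2_status || return 0
    # ... real shutdown would go here ...
    return 0
}
```

The point is only the exit codes: 7 (OCF_NOT_RUNNING) from status and 0 from stop when the filesystem is missing, instead of a generic error.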
Web Status
How about querying the cluster status via a browser?
One apache on each node
- Install apache2, apache2-worker (or prefork) on each node
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/apache2-2.0.53-9.i586.rpm
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/apache2-worker-2.0.53-9.i586.rpm
rpm -ihv apache2-2.0.53-9.i586.rpm apache2-worker-2.0.53-9.i586.rpm
- Edit /etc/sysconfig/apache2 (add email)
- Create simple /srv/www/cgi-bin/crm_mon.cgi
#!/bin/sh
exec /usr/sbin/crm_mon --web-cgi
- Add wwwrun user to group haclient in /etc/group
- Optional: provide /srv/www/htdocs/favicon.ico (avoids messages in apache error log)
- Enable apache autostart on boot.
chkconfig --add apache2
- Optional: make a clone resource with one floating IP; then the same URL works as long as the cluster has quorum.
- Optional: Make a default welcome page with a link to the status page
One active apache at a time
If you want to run apache as a clone resource (it is then only active while heartbeat is active!):
- Install wget on each node
wget http://ftp.hosteurope.de/mirror/ftp.suse.com/pub/suse/discontinued/i386/9.3/suse/i586/wget-1.9.1-57.3.i586.rpm
rpm -ihv wget-1.9.1-57.3.i586.rpm
- Enable the server-status page (in /etc/sysconfig/apache2, add the status module and define STATUS)
- Disable apache autostart on boot
Proxy from VM Host
Since the virtual machines are usually not directly accessible from outside, you need another apache on the VM host that acts as a proxy.
Add the file /etc/apache2/conf.d/heartbeat.conf:
ProxyPass /jobs1 http://jobs1/cgi-bin
ProxyPassReverse /jobs1 http://jobs1/cgi-bin
ProxyPass /jobn1 http://jobn1/cgi-bin
ProxyPassReverse /jobn1 http://jobn1/cgi-bin
ProxyPass /jobn2 http://jobn2/cgi-bin
ProxyPassReverse /jobn2 http://jobn2/cgi-bin
ProxyPass /jobn3 http://jobn3/cgi-bin
ProxyPassReverse /jobn3 http://jobn3/cgi-bin
Enable the proxy modules: in /etc/sysconfig/apache2, add " proxy proxy_http" to the APACHE_MODULES variable.
Backup
Backup is on qnap:/share/BackupJob1/suse-9.3-cluster.tar.bz2
Cleanup /etc/hosts
Some entries were outdated or plain wrong (e.g. job4).
Password
root has listec password now. Others can be changed if needed.
Outside Access
- Configure the firewall on the VM host in /etc/sysconfig/SuSEfirewall2:
FW_DEV_EXT="br0 eth0"
FW_DEV_INT="virbr1 virbr2 virbr3 virbr4"
FW_ROUTE="yes"
FW_MASQUERADE="yes"
FW_PROTECT_FROM_INT="yes"
FW_CONFIGURATIONS_EXT="apache2 apache2-ssl sshd"
FW_FORWARD_MASQ="0.0.0.0/0,192.168.100.101,tcp,221,22 0.0.0.0/0,192.168.100.102,tcp,222,22 0.0.0.0/0,192.168.100.103,tcp,223,22"
Each FW_FORWARD_MASQ entry reads source network, target IP, protocol, port on the VM host, port on the target; e.g. connections to VM host port 221 are forwarded to 192.168.100.101:22.
- Configure port forwarding on the internet router from ports 221x to vmhost:22x
Todo
- Maybe update the self-compiled DRBD 0.7 to 0.8, which is available as a distro RPM