Install workflow
[[Projekt_Lis.Tec_Cluster]]
----
main workflow
- parse the command line for create/modify/reconstruct mode, the config file and config file option replacements (see the sketch after this list)
- create: set up and start new VMs as cluster nodes and configure them according to the config file
- modify: reuse existing nodes
- reconstruct: generate a config file from the existing servers
- read the config file
- validate the config file
- act according to the command mode
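Sketch of the command-line front end: a minimal Python example assuming argparse and an INI-style config file; the option names, config sections and dispatch functions are illustrative, not the tool's actual interface.

 import argparse
 import configparser

 def run_create(cfg):      print("create mode")       # placeholder
 def run_modify(cfg):      print("modify mode")       # placeholder
 def run_reconstruct(cfg): print("reconstruct mode")  # placeholder

 def parse_args():
     p = argparse.ArgumentParser(description="cluster install tool (sketch)")
     p.add_argument("mode", choices=["create", "modify", "reconstruct"])
     p.add_argument("--config", required=True, help="path to the config file")
     # ad-hoc replacement of single config options, e.g. -o network.gateway=10.0.0.1
     p.add_argument("-o", "--override", action="append", default=[],
                    metavar="SECTION.KEY=VALUE")
     return p.parse_args()

 def load_config(path, overrides):
     cfg = configparser.ConfigParser()
     if not cfg.read(path):
         raise SystemExit("cannot read config file: %s" % path)
     for item in overrides:                       # command-line option replacements
         key, value = item.split("=", 1)
         section, option = key.split(".", 1)
         if not cfg.has_section(section):
             cfg.add_section(section)
         cfg.set(section, option, value)
     return cfg

 def validate(cfg):
     for section in ("cluster", "nodes", "storage"):   # assumed section names
         if not cfg.has_section(section):
             raise SystemExit("config file is missing section [%s]" % section)

 if __name__ == "__main__":
     args = parse_args()
     cfg = load_config(args.config, args.override)
     validate(cfg)
     # act according to the command mode
     {"create": run_create, "modify": run_modify,
      "reconstruct": run_reconstruct}[args.mode](cfg)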
create workflow
- copy an existing VM stub (if not using real machines)
- modify the VM stub to contain new UUIDs and MAC addresses (if not using real machines)
- generate SSH keys for inter-node communication (see the SSH key sketch after this list)
- customize the config file and build an ISO that provides it, the SSH keys, the RPMs and the node prep command to the new node via CD image (see the ISO sketch after this list)
- start the VMs (or ask the admin to start the real machines) if they cannot be pinged (see the ping/start sketch after this list)
- wait for the callback from the new nodes (may be automatic if an autostart mechanism for the CD ISOs is in place; see the callback listener sketch after this list)
- the new node configures IP, hostname, routing and SSH authentication according to the config file on the ISO, if necessary
- the new node calls back the management node with its prep status
- the callback provides the node prep status and triggers node customization
- check for the required packages (ntp, partitioner, lvm, drbd, heartbeat, db2) on the nodes and install/update them if necessary (see the package check sketch after this list)
- check and configure NTP if necessary
- check and configure syslog for remote logging if necessary
- check for the required partitions and create them if necessary
- check for the required VGs and LVs and create them if necessary
- check for the required filesystems and create them if necessary (see the LVM/filesystem sketch after this list)
- check for the required DRBD devices and create them if necessary (the config file defines for each DRBD device which node's data shall be destroyed)
- check for a basic Heartbeat config and create it if necessary (use a custom UDP port to allow for more clusters in one subnet, use a unique cluster auth key; see the Heartbeat sketch after this list)
- on the secondary, check for DB2 instances and create stubs if necessary (fail if a non-HA instance already exists!)
- on the primary, check for DB2 instances and create them if necessary (see the DB2 instance sketch after this list)
- on the primary, check whether the DB2 instances are on DRBD devices and move them if necessary (includes adding the Heartbeat resource config)
- on the primary, check whether the databases exist and create them if necessary (backup or DDL from the ISO)
- check for the database backup tools and install them if necessary
- check whether the databases are covered by the DB2 backup tool config and add them if necessary
- wait for the other node to appear and be in sync (see the DRBD sync sketch after this list)
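SSH key sketch: a minimal example that generates a passwordless RSA key pair with ssh-keygen for inter-node communication; the key directory is an assumption.

 import os
 import subprocess

 def generate_node_keys(keydir="/var/lib/cluster-install/keys"):   # path is illustrative
     os.makedirs(keydir, exist_ok=True)
     key = os.path.join(keydir, "id_rsa")
     if not os.path.exists(key):
         # passwordless key, used only for inter-node communication
         subprocess.check_call(["ssh-keygen", "-t", "rsa", "-N", "", "-f", key,
                                "-C", "cluster-internode"])
     with open(key + ".pub") as f:
         return f.read()   # the public key goes into authorized_keys via the ISO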
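ISO sketch: stages the customized config file, the SSH keys, the RPMs and the node prep command and wraps them into a CD image; it assumes genisoimage (or a compatible mkisofs) is installed, and the layout is only an example.

 import os
 import shutil
 import subprocess
 import tempfile

 def build_node_iso(config_path, keydir, rpm_dir, prep_script, iso_path):
     stage = tempfile.mkdtemp(prefix="nodeiso-")
     shutil.copy(config_path, stage)                        # per-node customized config
     shutil.copytree(keydir, os.path.join(stage, "keys"))   # inter-node SSH keys
     shutil.copytree(rpm_dir, os.path.join(stage, "rpms"))  # required packages
     shutil.copy(prep_script, stage)                        # the node prep command
     subprocess.check_call(["genisoimage", "-quiet", "-J", "-R",
                            "-o", iso_path, stage])
     return iso_path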
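Ping/start sketch: pings each node and, purely as an example, starts missing nodes as Xen guests via "xm create"; real machines are left to the admin.

 import subprocess

 def is_reachable(host):
     # one ICMP echo request, 2 second timeout (Linux ping options)
     return subprocess.call(["ping", "-c", "1", "-W", "2", host]) == 0

 def ensure_running(name, address, domain_config, use_vms=True):
     if is_reachable(address):
         return
     if use_vms:
         subprocess.check_call(["xm", "create", domain_config])   # Xen example
     else:
         input("please power on machine %s and press enter " % name)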
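Callback listener sketch: a small HTTP listener on the management node that waits for the prep status of each new node; the URL scheme, port and the way customization is triggered are assumptions.

 from http.server import BaseHTTPRequestHandler, HTTPServer
 from urllib.parse import urlparse, parse_qs

 pending = {"node1", "node2"}           # illustrative node names from the config

 class CallbackHandler(BaseHTTPRequestHandler):
     def do_GET(self):
         # expected callback from the node prep script: /callback?node=<name>&status=ok
         query = parse_qs(urlparse(self.path).query)
         node = query.get("node", [""])[0]
         status = query.get("status", [""])[0]
         if node in pending and status == "ok":
             pending.discard(node)
             # the real tool would now trigger customization of this node
         self.send_response(200)
         self.end_headers()

 def wait_for_callbacks(port=8421):     # port is an arbitrary example
     server = HTTPServer(("", port), CallbackHandler)
     while pending:
         server.handle_request()        # handle one callback at a time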
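Package check sketch: check-and-install on an RPM-based node; the package names and the use of yum are examples and depend on the distribution.

 import subprocess

 REQUIRED = ["ntp", "parted", "lvm2", "drbd", "heartbeat"]   # example package names

 def ensure_packages():
     # rpm -q exits non-zero for packages that are not installed
     missing = [p for p in REQUIRED if subprocess.call(["rpm", "-q", p]) != 0]
     if missing:
         subprocess.check_call(["yum", "-y", "install"] + missing)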
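LVM/filesystem sketch: the check-and-create pattern using the standard LVM tools; device names, sizes and the filesystem type are placeholders.

 import subprocess

 def ensure_lv(vg, lv, size, pv="/dev/sdb1", fstype="ext3"):
     if subprocess.call(["vgs", vg]) != 0:                    # VG missing?
         subprocess.check_call(["vgcreate", vg, pv])
     dev = "/dev/%s/%s" % (vg, lv)
     if subprocess.call(["lvs", dev]) != 0:                   # LV missing?
         subprocess.check_call(["lvcreate", "-L", size, "-n", lv, vg])
     if subprocess.call(["blkid", dev]) != 0:                 # no filesystem yet?
         subprocess.check_call(["mkfs", "-t", fstype, dev])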
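Heartbeat sketch: writes a minimal v1-style ha.cf and authkeys file with a per-cluster UDP port and a unique auth key; the port, interface and directives shown are examples.

 import os
 import uuid

 def write_hb_config(nodes, udpport=6941, confdir="/etc/ha.d"):
     with open(os.path.join(confdir, "ha.cf"), "w") as f:
         f.write("udpport %d\n" % udpport)     # custom port: several clusters per subnet
         f.write("bcast eth0\n")
         f.write("auto_failback off\n")
         for node in nodes:
             f.write("node %s\n" % node)
     authkeys = os.path.join(confdir, "authkeys")
     with open(authkeys, "w") as f:
         f.write("auth 1\n1 sha1 %s\n" % uuid.uuid4().hex)    # unique cluster auth key
     os.chmod(authkeys, 0o600)                 # heartbeat rejects world-readable keys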
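DB2 instance sketch: uses the db2ilist and db2icrt commands to check for and create an instance; the installation path and the fenced user name are assumptions, and the real tool would run this remotely on the node.

 import os
 import subprocess

 def ensure_db2_instance(instance, fenced_user="db2fenc1",
                         db2dir="/opt/ibm/db2/V9.5"):          # path is an example
     db2ilist = os.path.join(db2dir, "instance", "db2ilist")
     existing = subprocess.check_output([db2ilist], text=True).split()
     if instance not in existing:
         subprocess.check_call([os.path.join(db2dir, "instance", "db2icrt"),
                                "-u", fenced_user, instance])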
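DRBD sync sketch: polls /proc/drbd until the peer is connected and both sides report UpToDate; the timeout values are arbitrary.

 import time

 def wait_for_sync(timeout=3600, interval=10):
     deadline = time.time() + timeout
     while time.time() < deadline:
         with open("/proc/drbd") as f:
             status = f.read()
         # DRBD 8 reports e.g. "cs:Connected ... ds:UpToDate/UpToDate" when in sync
         if "cs:Connected" in status and "ds:UpToDate/UpToDate" in status:
             return True
         time.sleep(interval)
     raise RuntimeError("DRBD did not reach a connected, in-sync state in time")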
modify workflow
runs on the management node
like the create workflow, except that the first five steps are skipped
it must be stored somewhere which resources are under cluster manager control, so they can be removed if they no longer appear in a modified config file (see the state file sketch below)
it must be checked whether a change is possible (e.g. renaming a VG used by the HA DB2 instances)
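State file sketch: one way to record which resources the tool has put under cluster manager control, assuming a simple JSON file on the management node; path and format are assumptions.

 import json
 import os

 STATE_FILE = "/var/lib/cluster-install/managed-resources.json"   # illustrative path

 def load_managed():
     if os.path.exists(STATE_FILE):
         with open(STATE_FILE) as f:
             return set(json.load(f))
     return set()

 def save_managed(resources):
     os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
     with open(STATE_FILE, "w") as f:
         json.dump(sorted(resources), f, indent=2)

 def resources_to_remove(config_resources):
     # everything under cluster manager control that the modified config no longer lists
     return load_managed() - set(config_resources)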
reconstruct workflow
runs on a cluster node
- gathers all the info needed to recreate the cluster (except for the database data); see the sketch after this list
- writes the config file
- generates the ISO that must be provided to new nodes
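Sketch of the gathering step: probes the local node and writes an INI-style config file; the probed items and section names are examples, not the real config schema.

 import configparser
 import socket
 import subprocess

 def reconstruct_config(path):
     cfg = configparser.ConfigParser()
     cfg["cluster"] = {"name": socket.gethostname()}
     cfg["nodes"] = {"local": socket.getfqdn()}
     # volume groups currently present on this node
     vgs = subprocess.check_output(["vgs", "--noheadings", "-o", "vg_name"],
                                   text=True).split()
     cfg["storage"] = {"volume_groups": ",".join(vgs)}
     with open(path, "w") as f:
         cfg.write(f)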