All examples assume two nodes that are reachable by their short name and IP address:
- node1 - 192.168.1.1
- node2 - 192.168.1.2
The convention followed is that [ALL] # denotes a command that must be run on all cluster machines, while [ONE] # indicates a command that needs to be run on only one cluster node.
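For example (the commands shown here are only placeholders to illustrate the prompts):

```
[ALL] # date        # run this on node1 AND node2
[ONE] # crm_mon -1  # run this on just one node, e.g. node1
```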
We have a quickstart edition for each major distro. To continue, select the distribution you'll be using:
- RHEL 7 (and clones such as CentOS),
- RHEL 6 (and clones such as CentOS),
- openSUSE and SLES 12,
- SLES 11, or
- Ubuntu Precise LTS
Why Does Each Distribution Have its Own Quickstart?
Instead of re-inventing the wheel, Pacemaker makes use of the messaging, membership and quorum capabilities of other projects (such as Heartbeat or Corosync).
Pacemaker is fully functional with all three current Corosync release series (1.2.x, 1.4.x and 2.0.x) as well as Heartbeat. However this has been a source of confusion because Pacemaker needs to be set up differently depending on what each distribution ships. We call each combination of Pacemaker + Corosync (or Heartbeat) a "stack".
For example, on RHEL 6 the supported stack is based on CMAN, which has APIs Pacemaker can use to obtain the membership and quorum information it needs. Although CMAN uses Corosync underneath, it is configured via cluster.conf, and Pacemaker is started as a separate init script.
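On such a system, a minimal cluster.conf for the two example nodes might look like the sketch below (the cluster name and version number are illustrative, and a real cluster would also need fencing configured):

```xml
<cluster name="mycluster" config_version="1">
  <!-- two_node/expected_votes let a two-node cluster keep quorum
       when one node fails -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
</cluster>
```

With this stack, CMAN and Pacemaker are started separately, e.g. `service cman start` followed by `service pacemaker start` on each node.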
However, SLES 11 doesn't ship CMAN, so its users configure corosync.conf directly and enable a custom plugin that gets loaded into Corosync (because Corosync 1.4 doesn't have the quorum and membership APIs needed by Pacemaker). This plugin also starts Pacemaker automatically when Corosync is started.
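The plugin is enabled with a service section added to an existing corosync.conf, along the lines of:

```
service {
    # Load the Pacemaker plugin into Corosync 1.x.
    # ver: 0 tells the plugin to start Pacemaker itself
    # when Corosync starts.
    name: pacemaker
    ver: 0
}
```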
To confuse things further, SLES users start Corosync with the openAIS init script, because Corosync used to be part of that project.
Eventually everyone will move to Corosync 2, which removes support for CMAN and custom plugins BUT natively includes the APIs Pacemaker needs for quorum and membership. In this case, users configure corosync.conf and use the Pacemaker init script to start Pacemaker after Corosync.
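On Corosync 2, the quorum service is configured directly in corosync.conf rather than through CMAN or a plugin, roughly like this (a complete file would also need a totem section with the cluster's network settings):

```
quorum {
    # Corosync 2's built-in vote-based quorum service replaces
    # both CMAN and the old Pacemaker plugin.
    provider: corosync_votequorum
    # Special-case a two-node cluster so it retains quorum
    # when one node fails.
    two_node: 1
}
```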
There are some architectural differences between the different stacks, and some are more elegant than others, but the most important thing by far is that everyone is getting membership and quorum information from the same place.
See this post for a longer discussion on the different stack options and how they relate to cluster filesystems in particular.