Update documentation #3911

Open · wants to merge 7 commits into base: main
Changes from all commits
2 changes: 1 addition & 1 deletion doc/sphinx/Clusters_from_Scratch/active-active.rst
@@ -258,7 +258,7 @@ being fenced every time quorum is lost.

To address this situation, set ``no-quorum-policy`` to ``freeze`` when GFS2 is
in use. This means that when quorum is lost, the remaining partition will do
nothing until quorum is regained.
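Setting this property is done through ``pcs``; a minimal sketch, assuming a running cluster and a recent ``pcs`` CLI (the guide's own example may differ):

.. code-block:: console

    # pcs property set no-quorum-policy=freeze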

.. code-block:: console

16 changes: 8 additions & 8 deletions doc/sphinx/Clusters_from_Scratch/ap-configuration.rst
@@ -85,7 +85,7 @@ Final Cluster Configuration
pcmk-1 pcmk-2
Pacemaker Nodes:
pcmk-1 pcmk-2

Resources:
Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
Attributes: cidr_netmask=24 ip=192.168.122.120
@@ -121,13 +121,13 @@ Final Cluster Configuration
Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
start interval=0s timeout=60s (WebFS-start-interval-0s)
stop interval=0s timeout=60s (WebFS-stop-interval-0s)

Stonith Devices:
Resource: fence_dev (class=stonith type=some_fence_agent)
Attributes: pcmk_delay_base=pcmk-1:5s;pcmk-2:0s pcmk_host_map=pcmk-1:almalinux9-1;pcmk-2:almalinux9-2
Operations: monitor interval=60s (fence_dev-monitor-interval-60s)
Fencing Levels:

Location Constraints:
Resource: WebSite
Enabled on:
@@ -143,17 +143,17 @@ Final Cluster Configuration
WebSite with WebFS-clone (score:INFINITY) (id:colocation-WebSite-WebFS-INFINITY)
WebFS-clone with dlm-clone (score:INFINITY) (id:colocation-WebFS-dlm-clone-INFINITY)
Ticket Constraints:

Alerts:
No alerts defined

Resources Defaults:
Meta Attrs: build-resource-defaults
resource-stickiness=100
Operations Defaults:
Meta Attrs: op_defaults-meta_attributes
timeout=240s

Cluster Properties:
cluster-infrastructure: corosync
cluster-name: mycluster
@@ -162,10 +162,10 @@ Final Cluster Configuration
last-lrm-refresh: 1658896047
no-quorum-policy: freeze
stonith-enabled: true

Tags:
No tags defined

Quorum:
Options:

6 changes: 3 additions & 3 deletions doc/sphinx/Clusters_from_Scratch/cluster-setup.rst
@@ -50,7 +50,7 @@ that will make our lives easier:
.. code-block:: console

# dnf install -y pacemaker pcs psmisc policycoreutils-python3

.. NOTE::

This document uses ``pcs`` for cluster management. Other alternatives,
@@ -206,10 +206,10 @@ Start by taking some time to familiarize yourself with what ``pcs`` can do.
.. code-block:: console

[root@pcmk-1 ~]# pcs

Usage: pcs [-f file] [-h] [commands]...
Control and configure pacemaker and corosync.

Options:
-h, --help Display usage and exit.
-f file Perform actions on file instead of active CIB.
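Beyond the help text, a few broadly useful subcommands to try first (a sketch; the names assume the ``pcs`` 0.10+ CLI, and the output depends on cluster state):

.. code-block:: console

    # pcs status            # full cluster, node, and resource overview
    # pcs cluster status    # brief cluster and daemon status
    # pcs resource status   # resource state only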
13 changes: 2 additions & 11 deletions doc/sphinx/Clusters_from_Scratch/index.rst
@@ -4,8 +4,6 @@ Clusters from Scratch
*Step-by-Step Instructions for Building Your First High-Availability Cluster*


Abstract
--------
This document provides a step-by-step guide to building a simple high-availability
cluster using Pacemaker.

@@ -22,9 +20,6 @@ included. However, the guide is primarily composed of commands, the reasons for
executing them, and their expected outputs.


Table of Contents
-----------------

.. toctree::
:maxdepth: 3
:numbered:
@@ -41,9 +36,5 @@ Table of Contents
ap-configuration
ap-corosync-conf
ap-reading

Index
-----

:ref:`genindex`
:ref:`search`
30 changes: 15 additions & 15 deletions doc/sphinx/Clusters_from_Scratch/installation.rst
@@ -7,7 +7,7 @@ Install |CFS_DISTRO| |CFS_DISTRO_VER|
Boot the Install Image
______________________

Download the latest |CFS_DISTRO| |CFS_DISTRO_VER| DVD ISO by navigating to
the |CFS_DISTRO| `mirrors list <https://mirrors.almalinux.org/isos.html>`_,
selecting the latest 9.x version for your machine's architecture, selecting a
download mirror that's close to you, and finally selecting the latest .iso file
@@ -192,13 +192,13 @@ Ensure that the machine has the static IP address you configured earlier.
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:32:cf:a9 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.101/24 brd 192.168.122.255 scope global noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::c3e1:3ba:959:fa96/64 scope link noprefixroute
valid_lft forever preferred_lft forever

.. NOTE::
@@ -219,7 +219,7 @@ Next, ensure that the routes are as expected:
.. code-block:: console

[root@pcmk-1 ~]# ip route
default via 192.168.122.1 dev enp1s0 proto static metric 100
192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.101 metric 100

If there is no line beginning with ``default via``, then use ``nmcli`` to add a
@@ -238,7 +238,7 @@ testing whether we can reach the gateway we configured.
[root@pcmk-1 ~]# ping -c 1 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.492 ms

--- 192.168.122.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms
@@ -250,7 +250,7 @@ Now try something external; choose a location you know should be available.
[root@pcmk-1 ~]# ping -c 1 www.clusterlabs.org
PING mx1.clusterlabs.org (95.217.104.78) 56(84) bytes of data.
64 bytes from mx1.clusterlabs.org (95.217.104.78): icmp_seq=1 ttl=54 time=134 ms

--- mx1.clusterlabs.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 133.987/133.987/133.987/0.000 ms
@@ -269,11 +269,11 @@ From another host, check whether we can see the new host at all:
[gchin@gchin ~]$ ping -c 1 192.168.122.101
PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data.
64 bytes from 192.168.122.101: icmp_seq=1 ttl=64 time=0.344 ms

--- 192.168.122.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms

Next, log in as ``root`` via SSH.

.. code-block:: console
@@ -283,9 +283,9 @@ Next, login as ``root`` via SSH.
ECDSA key fingerprint is SHA256:NBvcRrPDLIt39Rf0Tz4/f2Rd/FA5wUiDOd9bZ9QWWjo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts.
root@192.168.122.101's password:
Last login: Tue Jan 10 20:46:30 2021
[root@pcmk-1 ~]#

Apply Updates
_____________
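The collapsed steps under this heading amount to bringing both nodes fully up to date; a hedged sketch, assuming ``dnf`` on |CFS_DISTRO| (reboot only if a new kernel was installed):

.. code-block:: console

    # dnf update -y
    # reboot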
@@ -351,7 +351,7 @@ Confirm that you can communicate between the two new nodes:
64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=1.22 ms
64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.795 ms
64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.751 ms

--- 192.168.122.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2054ms
rtt min/avg/max/mdev = 0.751/0.923/1.224/0.214 ms
@@ -378,7 +378,7 @@ We can now verify the setup by again using ``ping``:
64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=1 ttl=64 time=0.295 ms
64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=2 ttl=64 time=0.616 ms
64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=3 ttl=64 time=0.809 ms

--- pcmk-2.localdomain ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2043ms
rtt min/avg/max/mdev = 0.295/0.573/0.809/0.212 ms
@@ -444,10 +444,10 @@ Install the key on the other node:
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@pcmk-2's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'pcmk-2'"
and check to make sure that only the key(s) you wanted were added.

16 changes: 8 additions & 8 deletions doc/sphinx/Clusters_from_Scratch/shared-storage.rst
@@ -90,16 +90,16 @@ which is more than sufficient for a single HTML file and (later) GFS2 metadata.
.. code-block:: console

[root@pcmk-1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
almalinux_pcmk-1 1 2 0 wz--n- <19.00g <13.00g

[root@pcmk-1 ~]# lvcreate --name drbd-demo --size 512M almalinux_pcmk-1
Logical volume "drbd-demo" created.
[root@pcmk-1 ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
drbd-demo almalinux_pcmk-1 -wi-a----- 512.00m
root almalinux_pcmk-1 -wi-ao---- 4.00g
swap almalinux_pcmk-1 -wi-ao---- 2.00g

Repeat for the second node, making sure to use the same size:

@@ -210,9 +210,9 @@ Run them on one node:
The server's response is:

you are the 25212th user to install this version

We can confirm DRBD's status on this node:

.. code-block:: console

[root@pcmk-1 ~]# drbdadm status
@@ -596,7 +596,7 @@ it can no longer host resources, and eventually all the resources will move.
* Promoted: [ pcmk-1 ]
* Stopped: [ pcmk-2 ]
* WebFS (ocf:heartbeat:Filesystem): Started pcmk-1

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
@@ -630,7 +630,7 @@ eligible to host resources again.
* Promoted: [ pcmk-1 ]
* Unpromoted: [ pcmk-2 ]
* WebFS (ocf:heartbeat:Filesystem): Started pcmk-1

Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
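The drain-and-return cycle shown in the two status listings above is normally driven with ``pcs``; a sketch, assuming the ``pcs node`` subcommand syntax of pcs 0.10+:

.. code-block:: console

    # pcs node standby pcmk-1     # stop hosting resources on pcmk-1
    # pcs node unstandby pcmk-1   # make pcmk-1 eligible again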
8 changes: 4 additions & 4 deletions doc/sphinx/Clusters_from_Scratch/verification.rst
@@ -68,17 +68,17 @@ Next, check the membership and quorum APIs:

.. code-block:: console

[root@pcmk-1 ~]# corosync-cmapctl | grep members
runtime.members.1.config_version (u64) = 0
runtime.members.1.ip (str) = r(0) ip(192.168.122.101)
runtime.members.1.join_count (u32) = 1
runtime.members.1.status (str) = joined
runtime.members.2.config_version (u64) = 0
runtime.members.2.ip (str) = r(0) ip(192.168.122.102)
runtime.members.2.join_count (u32) = 1
runtime.members.2.status (str) = joined

[root@pcmk-1 ~]# pcs status corosync

Membership information
----------------------
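Corosync's own link status can also be inspected directly with ``corosync-cfgtool`` (a sketch; the output format varies with the transport and corosync version):

.. code-block:: console

    # corosync-cfgtool -s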