diff --git a/doc/sphinx/Clusters_from_Scratch/active-active.rst b/doc/sphinx/Clusters_from_Scratch/active-active.rst index 0d271746375..d12dfa47569 100644 --- a/doc/sphinx/Clusters_from_Scratch/active-active.rst +++ b/doc/sphinx/Clusters_from_Scratch/active-active.rst @@ -258,7 +258,7 @@ being fenced every time quorum is lost. To address this situation, set ``no-quorum-policy`` to ``freeze`` when GFS2 is in use. This means that when quorum is lost, the remaining partition will do -nothing until quorum is regained. +nothing until quorum is regained. .. code-block:: console diff --git a/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst b/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst index b71e9af67c5..0bd92a1f9fb 100644 --- a/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst +++ b/doc/sphinx/Clusters_from_Scratch/ap-configuration.rst @@ -85,7 +85,7 @@ Final Cluster Configuration pcmk-1 pcmk-2 Pacemaker Nodes: pcmk-1 pcmk-2 - + Resources: Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: cidr_netmask=24 ip=192.168.122.120 @@ -121,13 +121,13 @@ Final Cluster Configuration Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s) start interval=0s timeout=60s (WebFS-start-interval-0s) stop interval=0s timeout=60s (WebFS-stop-interval-0s) - + Stonith Devices: Resource: fence_dev (class=stonith type=some_fence_agent) Attributes: pcmk_delay_base=pcmk-1:5s;pcmk-2:0s pcmk_host_map=pcmk-1:almalinux9-1;pcmk-2:almalinux9-2 Operations: monitor interval=60s (fence_dev-monitor-interval-60s) Fencing Levels: - + Location Constraints: Resource: WebSite Enabled on: @@ -143,17 +143,17 @@ Final Cluster Configuration WebSite with WebFS-clone (score:INFINITY) (id:colocation-WebSite-WebFS-INFINITY) WebFS-clone with dlm-clone (score:INFINITY) (id:colocation-WebFS-dlm-clone-INFINITY) Ticket Constraints: - + Alerts: No alerts defined - + Resources Defaults: Meta Attrs: build-resource-defaults resource-stickiness=100 Operations Defaults: Meta Attrs: op_defaults-meta_attributes timeout=240s - + Cluster Properties: cluster-infrastructure: corosync cluster-name: mycluster @@ -162,10 +162,10 @@ Final Cluster Configuration last-lrm-refresh: 1658896047 no-quorum-policy: freeze stonith-enabled: true - + Tags: No tags defined - + Quorum: Options: diff --git a/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst b/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst index 437b5f8556a..5cdbe2f64c2 100644 --- a/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst +++ b/doc/sphinx/Clusters_from_Scratch/cluster-setup.rst @@ -50,7 +50,7 @@ that will make our lives easier: .. code-block:: console # dnf install -y pacemaker pcs psmisc policycoreutils-python3 - + .. NOTE:: This document uses ``pcs`` for cluster management. Other alternatives, @@ -206,10 +206,10 @@ Start by taking some time to familiarize yourself with what ``pcs`` can do. .. code-block:: console [root@pcmk-1 ~]# pcs - + Usage: pcs [-f file] [-h] [commands]... Control and configure pacemaker and corosync. - + Options: -h, --help Display usage and exit. -f file Perform actions on file instead of active CIB. 
diff --git a/doc/sphinx/Clusters_from_Scratch/index.rst b/doc/sphinx/Clusters_from_Scratch/index.rst index 74fe2503af6..3477ccd5385 100644 --- a/doc/sphinx/Clusters_from_Scratch/index.rst +++ b/doc/sphinx/Clusters_from_Scratch/index.rst @@ -4,8 +4,6 @@ Clusters from Scratch *Step-by-Step Instructions for Building Your First High-Availability Cluster* -Abstract --------- This document provides a step-by-step guide to building a simple high-availability cluster using Pacemaker. @@ -22,9 +20,6 @@ included. However, the guide is primarily composed of commands, the reasons for executing them, and their expected outputs. -Table of Contents ------------------ - .. toctree:: :maxdepth: 3 :numbered: @@ -41,9 +36,5 @@ Table of Contents ap-configuration ap-corosync-conf ap-reading - -Index ------ - -* :ref:`genindex` -* :ref:`search` + :ref:`genindex` + :ref:`search` diff --git a/doc/sphinx/Clusters_from_Scratch/installation.rst b/doc/sphinx/Clusters_from_Scratch/installation.rst index e7f9e2d8d57..02f5be975f4 100644 --- a/doc/sphinx/Clusters_from_Scratch/installation.rst +++ b/doc/sphinx/Clusters_from_Scratch/installation.rst @@ -7,7 +7,7 @@ Install |CFS_DISTRO| |CFS_DISTRO_VER| Boot the Install Image ______________________ -Download the latest |CFS_DISTRO| |CFS_DISTRO_VER| DVD ISO by navigating to +Download the latest |CFS_DISTRO| |CFS_DISTRO_VER| DVD ISO by navigating to the |CFS_DISTRO| `mirrors list `_, selecting the latest 9.x version for your machine's architecture, selecting a download mirror that's close to you, and finally selecting the latest .iso file @@ -192,13 +192,13 @@ Ensure that the machine has the static IP address you configured earlier. link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever - inet6 ::1/128 scope host + inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp1s0: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:32:cf:a9 brd ff:ff:ff:ff:ff:ff inet 192.168.122.101/24 brd 192.168.122.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever - inet6 fe80::c3e1:3ba:959:fa96/64 scope link noprefixroute + inet6 fe80::c3e1:3ba:959:fa96/64 scope link noprefixroute valid_lft forever preferred_lft forever .. NOTE:: @@ -219,7 +219,7 @@ Next, ensure that the routes are as expected: .. code-block:: console [root@pcmk-1 ~]# ip route - default via 192.168.122.1 dev enp1s0 proto static metric 100 + default via 192.168.122.1 dev enp1s0 proto static metric 100 192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.101 metric 100 If there is no line beginning with ``default via``, then use ``nmcli`` to add a @@ -238,7 +238,7 @@ testing whether we can reach the gateway we configured. [root@pcmk-1 ~]# ping -c 1 192.168.122.1 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data. 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.492 ms - + --- 192.168.122.1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms @@ -250,7 +250,7 @@ Now try something external; choose a location you know should be available. [root@pcmk-1 ~]# ping -c 1 www.clusterlabs.org PING mx1.clusterlabs.org (95.217.104.78) 56(84) bytes of data. 
64 bytes from mx1.clusterlabs.org (95.217.104.78): icmp_seq=1 ttl=54 time=134 ms - + --- mx1.clusterlabs.org ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 133.987/133.987/133.987/0.000 ms @@ -269,11 +269,11 @@ From another host, check whether we can see the new host at all: [gchin@gchin ~]$ ping -c 1 192.168.122.101 PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data. 64 bytes from 192.168.122.101: icmp_seq=1 ttl=64 time=0.344 ms - + --- 192.168.122.101 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms - + Next, login as ``root`` via SSH. .. code-block:: console @@ -283,9 +283,9 @@ Next, login as ``root`` via SSH. ECDSA key fingerprint is SHA256:NBvcRrPDLIt39Rf0Tz4/f2Rd/FA5wUiDOd9bZ9QWWjo. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts. - root@192.168.122.101's password: + root@192.168.122.101's password: Last login: Tue Jan 10 20:46:30 2021 - [root@pcmk-1 ~]# + [root@pcmk-1 ~]# Apply Updates _____________ @@ -351,7 +351,7 @@ Confirm that you can communicate between the two new nodes: 64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=1.22 ms 64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.795 ms 64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.751 ms - + --- 192.168.122.102 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2054ms rtt min/avg/max/mdev = 0.751/0.923/1.224/0.214 ms @@ -378,7 +378,7 @@ We can now verify the setup by again using ``ping``: 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=1 ttl=64 time=0.295 ms 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=2 ttl=64 time=0.616 ms 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=3 ttl=64 time=0.809 ms - + --- pcmk-2.localdomain ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2043ms rtt min/avg/max/mdev = 0.295/0.573/0.809/0.212 ms @@ -444,10 +444,10 @@ Install the key on the other node: Are you sure you want to continue connecting (yes/no/[fingerprint])? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys - root@pcmk-2's password: - + root@pcmk-2's password: + Number of key(s) added: 1 - + Now try logging into the machine, with: "ssh 'pcmk-2'" and check to make sure that only the key(s) you wanted were added. diff --git a/doc/sphinx/Clusters_from_Scratch/shared-storage.rst b/doc/sphinx/Clusters_from_Scratch/shared-storage.rst index dea3e58027b..898e921b0c0 100644 --- a/doc/sphinx/Clusters_from_Scratch/shared-storage.rst +++ b/doc/sphinx/Clusters_from_Scratch/shared-storage.rst @@ -90,16 +90,16 @@ which is more than sufficient for a single HTML file and (later) GFS2 metadata. .. code-block:: console [root@pcmk-1 ~]# vgs - VG #PV #LV #SN Attr VSize VFree + VG #PV #LV #SN Attr VSize VFree almalinux_pcmk-1 1 2 0 wz--n- <19.00g <13.00g [root@pcmk-1 ~]# lvcreate --name drbd-demo --size 512M almalinux_pcmk-1 Logical volume "drbd-demo" created. 
[root@pcmk-1 ~]# lvs LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert - drbd-demo almalinux_pcmk-1 -wi-a----- 512.00m - root almalinux_pcmk-1 -wi-ao---- 4.00g - swap almalinux_pcmk-1 -wi-ao---- 2.00g + drbd-demo almalinux_pcmk-1 -wi-a----- 512.00m + root almalinux_pcmk-1 -wi-ao---- 4.00g + swap almalinux_pcmk-1 -wi-ao---- 2.00g Repeat for the second node, making sure to use the same size: @@ -210,9 +210,9 @@ Run them on one node: The server's response is: you are the 25212th user to install this version - + We can confirm DRBD's status on this node: - + .. code-block:: console [root@pcmk-1 ~]# drbdadm status @@ -596,7 +596,7 @@ it can no longer host resources, and eventually all the resources will move. * Promoted: [ pcmk-1 ] * Stopped: [ pcmk-2 ] * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1 - + Daemon Status: corosync: active/disabled pacemaker: active/disabled @@ -630,7 +630,7 @@ eligible to host resources again. * Promoted: [ pcmk-1 ] * Unpromoted: [ pcmk-2 ] * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1 - + Daemon Status: corosync: active/disabled pacemaker: active/disabled diff --git a/doc/sphinx/Clusters_from_Scratch/verification.rst b/doc/sphinx/Clusters_from_Scratch/verification.rst index 08fab3148ca..d6b35eac599 100644 --- a/doc/sphinx/Clusters_from_Scratch/verification.rst +++ b/doc/sphinx/Clusters_from_Scratch/verification.rst @@ -68,17 +68,17 @@ Next, check the membership and quorum APIs: .. code-block:: console - [root@pcmk-1 ~]# corosync-cmapctl | grep members + [root@pcmk-1 ~]# corosync-cmapctl | grep members runtime.members.1.config_version (u64) = 0 - runtime.members.1.ip (str) = r(0) ip(192.168.122.101) + runtime.members.1.ip (str) = r(0) ip(192.168.122.101) runtime.members.1.join_count (u32) = 1 runtime.members.1.status (str) = joined runtime.members.2.config_version (u64) = 0 - runtime.members.2.ip (str) = r(0) ip(192.168.122.102) + runtime.members.2.ip (str) = r(0) ip(192.168.122.102) runtime.members.2.join_count (u32) = 1 runtime.members.2.status (str) = joined - [root@pcmk-1 ~]# pcs status corosync + [root@pcmk-1 ~]# pcs status corosync Membership information ---------------------- diff --git a/doc/sphinx/Pacemaker_Administration/agents.rst b/doc/sphinx/Pacemaker_Administration/agents.rst index f6df901cdf7..c85c14d6af2 100644 --- a/doc/sphinx/Pacemaker_Administration/agents.rst +++ b/doc/sphinx/Pacemaker_Administration/agents.rst @@ -39,7 +39,7 @@ overwritten by) the agents shipped by existing providers. So, for example, if you choose the provider name of big-corp and want a new resource named big-app, you would create a resource agent called ``/usr/lib/ocf/resource.d/big-corp/big-app`` and define a resource: - + .. code-block: xml @@ -55,7 +55,7 @@ All OCF resource agents are required to implement the following actions. .. list-table:: **Required Actions for OCF Agents** :class: longtable - :widths: 1 4 3 + :widths: 15 25 60 :header-rows: 1 * - Action @@ -113,7 +113,7 @@ only with advanced resource types such as clones. .. list-table:: **Optional Actions for OCF Resource Agents** :class: longtable: - :widths: 1 4 3 + :widths: 15 45 40 :header-rows: 1 * - Action @@ -211,7 +211,7 @@ There are three types of failure recovery: .. list-table:: **Types of Recovery Performed by the Cluster** :class: longtable - :widths: 1 5 5 + :widths: 10 45 45 :header-rows: 1 * - Type @@ -256,7 +256,7 @@ have failed, if ``OCF_SUCCESS`` was not the expected return value. .. 
list-table:: **OCF Exit Codes and Their Recovery Types** :class: longtable - :widths: 1 3 6 2 + :widths: 8 32 50 10 :header-rows: 1 * - Exit Code @@ -438,7 +438,7 @@ listed in the table below. .. list-table:: **OCF Environment Variables** :class: longtable - :widths: 1 6 + :widths: 50 50 :header-rows: 1 * - Environment Variable @@ -720,7 +720,7 @@ what role it currently believes it to be in. .. list-table:: **Role Implications of OCF Return Codes** :class: longtable - :widths: 1 3 + :widths: 50 50 :header-rows: 1 * - Monitor Return Code @@ -769,7 +769,7 @@ cluster and what is about to happen to it. .. list-table:: **Environment Variables Supplied with Clone Notify Actions** :class: longtable - :widths: 1 1 + :widths: 50 50 :header-rows: 1 * - Variable @@ -914,7 +914,7 @@ Extra Notifications for Promotable Clones .. list-table:: **Extra Environment Variables Supplied for Promotable Clones** :class: longtable - :widths: 1 1 + :widths: 50 50 :header-rows: 1 * - Variable @@ -1101,13 +1101,13 @@ ______________ The relevant part of the `LSB specifications `_ includes a description of all the return codes listed here. - + Assuming `some_service` is configured correctly and currently inactive, the following sequence will help you determine if it is LSB-compatible: #. Start (stopped): - + .. code-block:: none # /etc/init.d/some_service start ; echo "result: $?" @@ -1117,7 +1117,7 @@ LSB-compatible: usual output)? #. Status (running): - + .. code-block:: none # /etc/init.d/some_service status ; echo "result: $?" @@ -1128,7 +1128,7 @@ LSB-compatible: usual output)? #. Start (running): - + .. code-block:: none # /etc/init.d/some_service start ; echo "result: $?" @@ -1138,7 +1138,7 @@ LSB-compatible: script's usual output)? #. Stop (running): - + .. code-block:: none # /etc/init.d/some_service stop ; echo "result: $?" @@ -1148,7 +1148,7 @@ LSB-compatible: script's usual output)? #. Status (stopped): - + .. code-block:: none # /etc/init.d/some_service status ; echo "result: $?" @@ -1159,7 +1159,7 @@ LSB-compatible: script's usual output)? #. Stop (stopped): - + .. code-block:: none # /etc/init.d/some_service stop ; echo "result: $?" diff --git a/doc/sphinx/Pacemaker_Administration/alerts.rst b/doc/sphinx/Pacemaker_Administration/alerts.rst index 05424dca0b8..7a421efc410 100644 --- a/doc/sphinx/Pacemaker_Administration/alerts.rst +++ b/doc/sphinx/Pacemaker_Administration/alerts.rst @@ -9,14 +9,14 @@ Alert Agents Using the Sample Alert Agents ############################# - + Pacemaker provides several sample alert agents, installed in ``/usr/share/pacemaker/alerts`` by default. - + While these sample scripts may be copied and used as-is, they are provided mainly as templates to be edited to suit your purposes. See their source code for the full set of instance attributes they support. - + .. topic:: Sending cluster events as SNMP v2c traps .. code-block:: xml @@ -105,21 +105,21 @@ for the full set of instance attributes they support. Writing an Alert Agent ###################### - + .. index:: single: alert; environment variables single: environment variable; alert agents -.. list-table:: **Environment variables passed to alert agents** +.. list-table:: **Environment Variables Passed to Alert Agents** :class: longtable - :widths: 1 3 1 + :widths: 30 50 20 :header-rows: 1 - + * - Environment Variable - Description - Alert Types * - .. _CRM_alert_kind: - + .. 
index:: single: environment variable; CRM_alert_kind single: CRM_alert_kind @@ -138,11 +138,11 @@ Writing an Alert Agent - Name of affected node - all * - .. _CRM_alert_node_sequence: - + .. index:: single: environment variable; CRM_alert_node_sequence single: CRM_alert_node_sequence - + CRM_alert_node_sequence - A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts @@ -151,20 +151,20 @@ Writing an Alert Agent events. This number has no cluster-wide meaning. - all * - .. _CRM_alert_recipient: - + .. index:: single: environment variable; CRM_alert_recipient single: CRM_alert_recipient - + CRM_alert_recipient - The configured recipient - all * - .. _CRM_alert_timestamp: - + .. index:: single: environment variable; CRM_alert_timestamp single: CRM_alert_timestamp - + CRM_alert_timestamp - A timestamp created prior to executing the agent, in the format specified by the ``timestamp-format`` meta-attribute. This allows the @@ -173,11 +173,11 @@ Writing an Alert Agent potentially be delayed due to system load, etc.). - all * - .. _CRM_alert_timestamp_epoch: - + .. index:: single: environment variable; CRM_alert_timestamp_epoch single: CRM_alert_timestamp_epoch - + CRM_alert_timestamp_epoch - The same time as ``CRM_alert_timestamp``, expressed as the integer number of seconds since January 1, 1970. This (along with @@ -185,30 +185,30 @@ Writing an Alert Agent to format time in a specific way rather than let the user configure it. - all * - .. _CRM_alert_timestamp_usec: - + .. index:: single: environment variable; CRM_alert_timestamp_usec single: CRM_alert_timestamp_usec - + CRM_alert_timestamp_usec - The same time as ``CRM_alert_timestamp``, expressed as the integer number of microseconds since ``CRM_alert_timestamp_epoch``. - all * - .. _CRM_alert_version: - + .. index:: single: environment variable; CRM_alert_version single: CRM_alert_version - + CRM_alert_version - The version of Pacemaker sending the alert - all * - .. _CRM_alert_desc: - + .. index:: single: environment variable; CRM_alert_desc single: CRM_alert_desc - + CRM_alert_desc - Detail about event. For ``node`` alerts, this is the node's current state (``member`` or ``lost``). For ``fencing`` alerts, this is a @@ -217,38 +217,38 @@ Writing an Alert Agent is a readable string equivalent of ``CRM_alert_status``. - ``node``, ``fencing``, ``resource`` * - .. _CRM_alert_nodeid: - + .. index:: single: environment variable; CRM_alert_nodeid single: CRM_alert_nodeid - + CRM_alert_nodeid - ID of node whose status changed - ``node`` * - .. _CRM_alert_rc: - + .. index:: single: environment variable; CRM_alert_rc single: CRM_alert_rc - + CRM_alert_rc - The numerical return code of the fencing or resource operation - ``fencing``, ``resource`` * - .. _CRM_alert_task: - + .. index:: single: environment variable; CRM_alert_task single: CRM_alert_task - + CRM_alert_task - The requested fencing or resource operation - ``fencing``, ``resource`` * - .. _CRM_alert_exec_time: - + .. index:: single: environment variable; CRM_alert_exec_time single: CRM_alert_exec_time - + CRM_alert_exec_time - The (wall-clock) time, in milliseconds, that it took to execute the action. If the action timed out, ``CRM_alert_status`` will be 2, @@ -256,84 +256,84 @@ Writing an Alert Agent action timeout. May not be supported on all platforms. *(since 2.0.1)* - ``resource`` * - .. _CRM_alert_interval: - + .. 
index:: single: environment variable; CRM_alert_interval single: CRM_alert_interval - + CRM_alert_interval - The interval of the resource operation - ``resource`` * - .. _CRM_alert_rsc: - + .. index:: single: environment variable; CRM_alert_rsc single: CRM_alert_rsc - + CRM_alert_rsc - The name of the affected resource - ``resource`` * - .. _CRM_alert_status: - + .. index:: single: environment variable; CRM_alert_status single: CRM_alert_status - + CRM_alert_status - A numerical code used by Pacemaker to represent the operation result - ``resource`` * - .. _CRM_alert_target_rc: - + .. index:: single: environment variable; CRM_alert_target_rc single: CRM_alert_target_rc - + CRM_alert_target_rc - The expected numerical return code of the operation - ``resource`` * - .. _CRM_alert_attribute_name: - + .. index:: single: environment variable; CRM_alert_attribute_name single: CRM_alert_attribute_name - + CRM_alert_attribute_name - The name of the node attribute that changed - ``attribute`` * - .. _CRM_alert_attribute_value: - + .. index:: single: environment variable; CRM_alert_attribute_value single: CRM_alert_attribute_value - + CRM_alert_attribute_value - The new value of the node attribute that changed - ``attribute`` Special concerns when writing alert agents: - + * Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. (Users may modify the configuration in stages, and add a recipient later.) - + * If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. - + * When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured, and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queueing resource-intensive actions into some other instance, instead of directly executing them. - + * Alert agents are run as the |CRM_DAEMON_USER| user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure ``sudo`` to allow the agent to run the necessary commands as another user with the appropriate privileges. - + * As always, take care to validate and sanitize user-configured parameters, such as ``CRM_alert_timestamp`` (whose content is specified by the user-configured ``timestamp-format``), ``CRM_alert_recipient,`` and all diff --git a/doc/sphinx/Pacemaker_Administration/index.rst b/doc/sphinx/Pacemaker_Administration/index.rst index c8fd7220b58..1b071e05568 100644 --- a/doc/sphinx/Pacemaker_Administration/index.rst +++ b/doc/sphinx/Pacemaker_Administration/index.rst @@ -4,14 +4,8 @@ Pacemaker Administration *Managing Pacemaker Clusters* -Abstract --------- -This document has instructions and tips for system administrators who -manage high-availability clusters using Pacemaker. - - -Table of Contents ------------------ +This document has instructions and tips for system administrators who manage +high-availability clusters using Pacemaker. .. 
toctree:: :maxdepth: 3 @@ -30,10 +24,5 @@ Table of Contents alerts agents pcs-crmsh - - -Index ------ - -* :ref:`genindex` -* :ref:`search` + :ref:`genindex` + :ref:`search` diff --git a/doc/sphinx/Pacemaker_Administration/intro.rst b/doc/sphinx/Pacemaker_Administration/intro.rst index 067e293849e..aa1c2da6969 100644 --- a/doc/sphinx/Pacemaker_Administration/intro.rst +++ b/doc/sphinx/Pacemaker_Administration/intro.rst @@ -6,7 +6,7 @@ The Scope of this Document The purpose of this document is to help system administrators learn how to manage a Pacemaker cluster. - + System administrators may be interested in other parts of the `Pacemaker documentation set `_ such as *Clusters from Scratch*, a step-by-step guide to setting up an example diff --git a/doc/sphinx/Pacemaker_Administration/moving.rst b/doc/sphinx/Pacemaker_Administration/moving.rst index 3d6a92af510..2c3c4449a75 100644 --- a/doc/sphinx/Pacemaker_Administration/moving.rst +++ b/doc/sphinx/Pacemaker_Administration/moving.rst @@ -158,37 +158,35 @@ Normally, the ping resource should run on all cluster nodes, which means that you'll need to create a clone. A template for this can be found below, along with a description of the most interesting parameters. -.. table:: **Commonly Used ocf:pacemaker:ping Resource Parameters** - :widths: 1 4 - - +--------------------+--------------------------------------------------------------+ - | Resource Parameter | Description | - +====================+==============================================================+ - | dampen | .. index:: | - | | single: ocf:pacemaker:ping resource; dampen parameter | - | | single: dampen; ocf:pacemaker:ping resource parameter | - | | | - | | The time to wait (dampening) for further changes to occur. | - | | Use this to prevent a resource from bouncing around the | - | | cluster when cluster nodes notice the loss of connectivity | - | | at slightly different times. | - +--------------------+--------------------------------------------------------------+ - | multiplier | .. index:: | - | | single: ocf:pacemaker:ping resource; multiplier parameter | - | | single: multiplier; ocf:pacemaker:ping resource parameter | - | | | - | | The number of connected ping nodes gets multiplied by this | - | | value to get a score. Useful when there are multiple ping | - | | nodes configured. | - +--------------------+--------------------------------------------------------------+ - | host_list | .. index:: | - | | single: ocf:pacemaker:ping resource; host_list parameter | - | | single: host_list; ocf:pacemaker:ping resource parameter | - | | | - | | The machines to contact in order to determine the current | - | | connectivity status. Allowed values include resolvable DNS | - | | connectivity host names, IPv4 addresses, and IPv6 addresses. | - +--------------------+--------------------------------------------------------------+ +.. list-table:: **Commonly Used ocf:pacemaker:ping Resource Parameters** + :widths: 20 80 + :header-rows: 1 + + * - Resource Parameter + - Description + * - dampen + - .. index:: + single: ocf:pacemaker:ping resource; dampen parameter + single: dampen; ocf:pacemaker:ping resource parameter + + The time to wait (dampening) for further changes to occur. Use this to + prevent a resource from bouncing around the cluster when cluster nodes + notice the loss of connectivity at slightly different times. + * - multiplier + - .. 
index:: + single: ocf:pacemaker:ping resource; multiplier parameter + single: multiplier; ocf:pacemaker:ping resource parameter + + The number of connected ping nodes gets multiplied by this value to get + a score. Useful when there are multiple ping nodes configured. + * - host_list + - .. index:: + single: ocf:pacemaker:ping resource; host_list parameter + single: host_list; ocf:pacemaker:ping resource parameter + + The machines to contact in order to determine the current connectivity + status. Allowed values include resolvable DNS connectivity host names, + IPv4 addresses, and IPv6 addresses. .. topic:: Example ping resource that checks node connectivity once every minute diff --git a/doc/sphinx/Pacemaker_Administration/options.rst b/doc/sphinx/Pacemaker_Administration/options.rst index 776bb3606c6..ea339dd8f18 100644 --- a/doc/sphinx/Pacemaker_Administration/options.rst +++ b/doc/sphinx/Pacemaker_Administration/options.rst @@ -11,7 +11,7 @@ Pacemaker uses several environment variables set on the client side. .. list-table:: **Client-side Environment Variables** :class: longtable - :widths: 2 4 5 + :widths: 20 30 50 :header-rows: 1 * - Environment Variable @@ -79,7 +79,7 @@ Pacemaker uses several environment variables set on the client side. single: environment variable; CIB_ca_file CIB_ca_file - - + - - If this, :ref:`CIB_cert_file `, and :ref:`CIB_key_file ` are set, remote CIB administration will be encrypted using X.509 (SSL/TLS) certificates, with this root @@ -93,7 +93,7 @@ Pacemaker uses several environment variables set on the client side. single: environment variable; CIB_cert_file CIB_cert_file - - + - - If this, :ref:`CIB_ca_file `, and :ref:`CIB_key_file ` are set, remote CIB administration will be encrypted using X.509 (SSL/TLS) certificates, with this @@ -107,7 +107,7 @@ Pacemaker uses several environment variables set on the client side. single: environment variable; CIB_key_file CIB_key_file - - + - - If this, :ref:`CIB_ca_file `, and :ref:`CIB_cert_file ` are set, remote CIB administration will be encrypted using X.509 (SSL/TLS) certificates, with this @@ -121,7 +121,7 @@ Pacemaker uses several environment variables set on the client side. single: environment variable; CIB_crl_file CIB_crl_file - - + - - If this, :ref:`CIB_ca_file `, :ref:`CIB_cert_file `, and :ref:`CIB_key_file ` are all set, then certificates listed diff --git a/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst b/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst index 06fb24fb310..d0718dcfeb2 100644 --- a/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst +++ b/doc/sphinx/Pacemaker_Administration/pcs-crmsh.rst @@ -26,7 +26,7 @@ Show Cluster Configuration and Status crmsh # crm configure show pcs # pcs config - + .. topic:: Show Cluster Status .. code-block:: none @@ -224,7 +224,7 @@ edited and verified before committing to the live configuration: crmsh # crm configure rsc_defaults resource-stickiness=100 pcs # pcs resource defaults resource-stickiness=100 - + .. topic:: List Current Operation Defaults .. code-block:: none @@ -321,7 +321,7 @@ Manage Constraints crmsh # crm configure location prefer-pcmk-1 WebSite 50: pcmk-1 pcs # pcs constraint location WebSite prefers pcmk-1=50 - + .. topic:: Create a Location Constraint Based on Role .. code-block:: none @@ -336,7 +336,7 @@ Manage Constraints crmsh # crm resource move WebSite pcmk-1 pcs # pcs resource move WebSite pcmk-1 pacemaker # crm_resource -r WebSite --move -N pcmk-1 - + .. 
topic:: Move a Resource Away from Its Current Node (by Creating a Location Constraint) .. code-block:: none diff --git a/doc/sphinx/Pacemaker_Administration/tools.rst b/doc/sphinx/Pacemaker_Administration/tools.rst index de9ee85607f..69e0ffc7c63 100644 --- a/doc/sphinx/Pacemaker_Administration/tools.rst +++ b/doc/sphinx/Pacemaker_Administration/tools.rst @@ -59,7 +59,7 @@ and see how it responds when you cause or simulate failures. See the manual page or the output of ``crm_mon --help`` for a full description of its many options. - + .. topic:: Sample output from crm_mon -1 .. code-block:: none @@ -78,7 +78,7 @@ of its many options. * Active resources: * Fencing (stonith:fence_xvm): Started node1 * IP (ocf:heartbeat:IPaddr2): Started node2 - + .. topic:: Sample output from crm_mon -n -1 .. code-block:: none @@ -183,7 +183,7 @@ operating system distribution and how you installed the software. If you want to modify just one section of the configuration, you can query and replace just that section to avoid modifying any others. - + .. topic:: Safely using an editor to modify only the resources section .. code-block:: none @@ -195,7 +195,7 @@ query and replace just that section to avoid modifying any others. To quickly delete a part of the configuration, identify the object you wish to delete by XML tag and id. For example, you might search the CIB for all STONITH-related configuration: - + .. topic:: Searching for STONITH-related configuration items .. code-block:: none @@ -249,7 +249,7 @@ a name to make it possible to have more than one. Read this section and the on-screen instructions carefully; failure to do so could result in destroying the cluster's active configuration! - + .. topic:: Creating and displaying the active sandbox .. code-block:: none @@ -257,7 +257,7 @@ a name to make it possible to have more than one. # crm_shadow --create test Setting up shadow instance Type Ctrl-D to exit the crm_shadow shell - shadow[test]: + shadow[test]: shadow[test] # crm_shadow --which test @@ -266,20 +266,20 @@ instead of talking to the cluster's active configuration. Once you have finished experimenting, you can either make the changes active via the ``--commit`` option, or discard them using the ``--delete`` option. Again, be sure to follow the on-screen instructions carefully! - + For a full list of ``crm_shadow`` options and commands, invoke it with the ``--help`` option. .. topic:: Use sandbox to make multiple changes all at once, discard them, and verify real configuration is untouched .. code-block:: none - + shadow[test] # crm_failcount -r rsc_c001n01 -G scope=status name=fail-count-rsc_c001n01 value=0 shadow[test] # crm_standby --node c001n02 -v on shadow[test] # crm_standby --node c001n02 -G scope=nodes name=standby value=on - + shadow[test] # cibadmin --erase --force shadow[test] # cibadmin --query @@ -402,7 +402,7 @@ dependencies. ``$FILENAME.svg`` will be the same information in a standard graphical format that you can view in your browser or other app of choice. You could, of course, use other ``dot`` options to generate other formats. - + How to interpret the graphical output: * Bubbles indicate actions, and arrows indicate ordering dependencies @@ -424,7 +424,7 @@ How to interpret the graphical output: blue, the cluster does not feel the action needs to be executed. If the dashed border is red, the cluster would like to execute the action but cannot. Any actions depending on an action with a dashed border will not be - able to execute. + able to execute. 
* Loops should not happen, and should be reported as a bug if found. .. topic:: Small Cluster Transition @@ -488,19 +488,34 @@ defaults, and operation defaults. To understand the differences, it helps to understand the various types of node attribute. -.. table:: **Types of Node Attributes** - - +-----------+----------+-------------------+------------------+----------------+----------------+ - | Type | Recorded | Recorded in | Survive full | Manageable by | Manageable by | - | | in CIB? | attribute manager | cluster restart? | crm_attribute? | attrd_updater? | - | | | memory? | | | | - +===========+==========+===================+==================+================+================+ - | permanent | yes | no | yes | yes | no | - +-----------+----------+-------------------+------------------+----------------+----------------+ - | transient | yes | yes | no | yes | yes | - +-----------+----------+-------------------+------------------+----------------+----------------+ - | private | no | yes | no | no | yes | - +-----------+----------+-------------------+------------------+----------------+----------------+ +.. list-table:: **Types of Node Attributes** + :widths: 20 16 16 16 16 16 + :header-rows: 1 + + * - Type + - Recorded in CIB? + - Recorded in attribute manager memory? + - Survive full cluster restart? + - Manageable by crm_attribute? + - Manageable by attrd_updater? + * - permanent + - yes + - no + - yes + - yes + - no + * - transient + - yes + - yes + - no + - yes + - yes + * - private + - no + - yes + - no + - no + - yes As you can see from the table above, ``crm_attribute`` can manage permanent and transient node attributes, while ``attrd_updater`` can manage transient and diff --git a/doc/sphinx/Pacemaker_Administration/troubleshooting.rst b/doc/sphinx/Pacemaker_Administration/troubleshooting.rst index 4f24725979f..ac1b8106116 100644 --- a/doc/sphinx/Pacemaker_Administration/troubleshooting.rst +++ b/doc/sphinx/Pacemaker_Administration/troubleshooting.rst @@ -92,7 +92,7 @@ The log messages immediately before the "saving inputs" message will include any actions that the scheduler thinks need to be done. .. important:: - + Any actions that have already been initiated must complete (or time out) before a new transition can be calculated. diff --git a/doc/sphinx/Pacemaker_Administration/upgrading.rst b/doc/sphinx/Pacemaker_Administration/upgrading.rst index b23c65ea89f..93fcdd8ec57 100644 --- a/doc/sphinx/Pacemaker_Administration/upgrading.rst +++ b/doc/sphinx/Pacemaker_Administration/upgrading.rst @@ -82,24 +82,38 @@ Upgrading Cluster Software There are three approaches to upgrading a cluster, each with advantages and disadvantages. -.. 
table:: **Upgrade Methods** - - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ - | Method | Available| Can be | Service| Service | Exercises| Allows | - | | between | used with| outage | recovery| failover | change of| - | | all | Pacemaker| during | during | logic | messaging| - | | versions | Remote | upgrade| upgrade | | layer | - | | | nodes | | | | [#]_ | - +===================================================+==========+==========+========+=========+==========+==========+ - | Complete cluster shutdown | yes | yes | always | N/A | no | yes | - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ - | Rolling (node by node) | no | yes | always | yes | yes | no | - | | | | [#]_ | | | | - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ - | Detach and reattach | yes | no | only | no | no | yes | - | | | | due to | | | | - | | | | failure| | | | - +---------------------------------------------------+----------+----------+--------+---------+----------+----------+ +.. list-table:: **Upgrade Methods** + :widths: 16 14 14 14 14 14 14 + :header-rows: 1 + + * - Method + - Available between all versions + - Can be used with Pacemaker Remote nodes + - Service outage during upgrade + - Service recovery during upgrade + - Exercises failover logic + - Allows change of messaging layer [#]_ + * - Complete cluster shutdown + - yes + - yes + - always + - N/A + - no + - yes + * - Rolling (node by node) + - no + - yes + - always [#]_ + - yes + - yes + - no + * - Detach and reattach + - yes + - no + - only due to failure + - no + - no + - yes .. index:: @@ -194,7 +208,7 @@ when upgrading a cluster node. .. list-table:: **Version Compatibility for Cluster Nodes** :class: longtable - :widths: 1 1 + :widths: 50 50 :header-rows: 1 * - Version Being Installed @@ -213,7 +227,7 @@ least the minimum version listed in the table below. .. list-table:: **Cluster Node Version Compatibility for Pacemaker Remote Nodes** :class: longtable - :widths: 1 1 + :widths: 50 50 :header-rows: 1 * - Pacemaker Remote Version @@ -377,7 +391,7 @@ A more cautious approach would proceed like this: code. These will often be installed in a location such as ``/usr/share/pacemaker``, or may be obtained from the `source repository `_. - + #. Run the conversion scripts that apply to your older version, for example: .. code-block:: none diff --git a/doc/sphinx/Pacemaker_Development/c.rst index 8d879617f12..66a918c7806 100644 --- a/doc/sphinx/Pacemaker_Development/c.rst +++ b/doc/sphinx/Pacemaker_Development/c.rst @@ -20,17 +20,20 @@ Code Organization Pacemaker's C code is organized as follows: -+-----------------+-----------------------------------------------------------+ -| Directory | Contents | -+=================+===========================================================+ -| daemons | the Pacemaker daemons (pacemakerd, pacemaker-based, etc.) | -+-----------------+-----------------------------------------------------------+ -| include | header files for library APIs | -+-----------------+-----------------------------------------------------------+ -| lib | libraries | -+-----------------+-----------------------------------------------------------+ -| tools | command-line tools | -+-----------------+-----------------------------------------------------------+ +.. 
list-table:: **C Code Organization** + :widths: 25 75 + :header-rows: 1 + + * - Directory + - Contents + * - daemons + - the Pacemaker daemons (pacemakerd, pacemaker-based, etc.) + * - include + - header files for library APIs + * - lib + - libraries + * - tools + - command-line tools Source file names should be unique across the entire project, to allow for individual tracing via ``PCMK_trace_files``. @@ -43,70 +46,103 @@ individual tracing via ``PCMK_trace_files``. Pacemaker Libraries ################### -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| Library | Symbol | Source | API Headers | Description | -| | prefix | location | | | -+===============+=========+===============+===========================+=====================================+ -| libcib | cib | lib/cib | | include/crm/cib.h | .. index:: | -| | | | | include/crm/cib/* | single: C library; libcib | -| | | | | single: libcib | -| | | | | | -| | | | | API for pacemaker-based IPC and | -| | | | | the CIB | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libcrmcluster | pcmk | lib/cluster | | include/crm/cluster.h | .. index:: | -| | | | | include/crm/cluster/* | single: C library; libcrmcluster | -| | | | | single: libcrmcluster | -| | | | | | -| | | | | Abstract interface to underlying | -| | | | | cluster layer | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libcrmcommon | pcmk | lib/common | | include/crm/common/* | .. index:: | -| | | | | some of include/crm/* | single: C library; libcrmcommon | -| | | | | single: libcrmcommon | -| | | | | | -| | | | | Everything else | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libcrmservice | svc | lib/services | | include/crm/services.h | .. index:: | -| | | | | single: C library; libcrmservice | -| | | | | single: libcrmservice | -| | | | | | -| | | | | Abstract interface to supported | -| | | | | resource types (OCF, LSB, etc.) | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| liblrmd | lrmd | lib/lrmd | | include/crm/lrmd*.h | .. index:: | -| | | | | single: C library; liblrmd | -| | | | | single: liblrmd | -| | | | | | -| | | | | API for pacemaker-execd IPC | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libpacemaker | pcmk | lib/pacemaker | | include/pacemaker*.h | .. index:: | -| | | | | include/pcmki/* | single: C library; libpacemaker | -| | | | | single: libpacemaker | -| | | | | | -| | | | | High-level APIs equivalent to | -| | | | | command-line tool capabilities | -| | | | | (and high-level internal APIs) | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libpe_rules | pe | lib/pengine | | include/crm/pengine/* | .. index:: | -| | | | | single: C library; libpe_rules | -| | | | | single: libpe_rules | -| | | | | | -| | | | | Deprecated APIs related to | -| | | | | evaluating rules | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libpe_status | pe | lib/pengine | | include/crm/pengine/* | .. 
index:: | -| | | | | single: C library; libpe_status | -| | | | | single: libpe_status | -| | | | | | -| | | | | Low-level scheduler functionality | -+---------------+---------+---------------+---------------------------+-------------------------------------+ -| libstonithd | stonith | lib/fencing | | include/crm/stonith-ng.h| .. index:: | -| | | | | include/crm/fencing/* | single: C library; libstonithd | -| | | | | single: libstonithd | -| | | | | | -| | | | | API for pacemaker-fenced IPC | -+---------------+---------+---------------+---------------------------+-------------------------------------+ +.. list-table:: **C Libraries** + :class: longtable + :widths: 15 10 15 25 35 + :header-rows: 1 + + * - Library + - Symbol Prefix + - Source Location + - API Headers + - Description + * - libcib + - cib + - lib/cib + - | include/crm/cib.h + | include/crm/cib/ + - .. index:: + single: C library; libcib + single: libcib + + API for pacemaker-based IPC and the CIB + * - libcrmcluster + - pcmk + - lib/cluster + - | include/crm/cluster.h + | include/crm/cluster/ + - .. index:: + single: C library; libcrmcluster + single: libcrmcluster + + Abstract interface to underlying cluster layer + * - libcrmcommon + - pcmk + - lib/common + - | include/crm/common/ + | some of include/crm/* + - .. index:: + single: C library; libcrmcommon + single: libcrmcommon + + Everything else + * - libcrmservice + - svc + - lib/services + - include/crm/services.h + - .. index:: + single: C library; libcrmservice + single: libcrmservice + + Abstract interface to supported resource types (OCF, LSB, etc.) + * - liblrmd + - lrmd + - lib/lrmd + - include/crm/lrmd*.h + - .. index:: + single: C library; liblrmd + single: liblrmd + + API for pacemaker-execd IPC + * - libpacemaker + - pcmk + - lib/pacemaker + - | include/pacemaker*.h + | include/pcmki/ + - .. index:: + single: C library; libpacemaker + single: libpacemaker + + High-level APIs equivalent to command-line tool capabilities + (and high-level internal APIs) + * - libpe_rules + - pe + - lib/pengine + - include/crm/pengine/ + - .. index:: + single: C library; libpe_rules + single: libpe_rules + + Deprecated APIs related to evaluating rules + * - libpe_status + - pe + - lib/pengine + - include/crm/pengine/ + - .. index:: + single: C library; libpe_status + single: libpe_status + + Low-level scheduler functionality + * - libstonithd + - stonith + - lib/fencing + - | include/crm/stonith-ng.h + | include/crm/fencing/ + - .. index:: + single: C library; libstonithd + single: libstonithd + + API for pacemaker-fenced IPC Public versus Internal APIs @@ -1020,39 +1056,39 @@ A custom message can be defined with a unique string identifier, plus implementation functions for each supported format. The caller invokes the message using the identifier. The user selects the output format via ``--output-as``, and the output code automatically calls the appropriate -implementation function. Custom messages are useful when you want to output -messages that are more complex than a one-line error or informational message, -reproducible, and automatically handled by the output formatting system. +implementation function. Custom messages are useful when you want to output +messages that are more complex than a one-line error or informational message, +reproducible, and automatically handled by the output formatting system. Custom messages can contain other custom messages. 
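For illustration, here is a minimal sketch of defining, registering, and invoking a custom message using the macro and member functions described in the following paragraphs (``PCMK__OUTPUT_ARGS``, ``register_message``, ``message``, and ``list_item``). The header names, argument types, and return conventions shown are assumptions for the example only; check ``include/crm/common/output*h`` and the existing messages in ``lib/pacemaker/pcmk_output.c`` for the authoritative API.

.. code-block:: c

   /* Sketch only -- signatures approximate the internal output API */
   #include <stdarg.h>
   #include <crm/common/results.h>          /* assumed: pcmk_rc_ok */
   #include <crm/common/output_internal.h>  /* assumed: pcmk__output_t, PCMK__OUTPUT_ARGS */

   /* Declare the message name and the types of its arguments */
   PCMK__OUTPUT_ARGS("my-summary", "const char *", "int")
   static int
   my_summary_default(pcmk__output_t *out, va_list args)
   {
       const char *name = va_arg(args, const char *);
       int count = va_arg(args, int);

       /* Default implementation, used for any output format that has no
        * more specific (xml, html, ...) implementation registered
        */
       out->list_item(out, NULL, "%s: %d", name, count);
       return pcmk_rc_ok;
   }

   static void
   emit_summary(pcmk__output_t *out)
   {
       /* Register the message at runtime, then invoke it by name */
       out->register_message(out, "my-summary", my_summary_default);
       out->message(out, "my-summary", "resources", 3);
   }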
-Custom message functions are implemented as follows: Start with the macro -``PCMK__OUTPUT_ARGS``, whose arguments are the message name, followed by the -arguments to the message. Then there is the function declaration, for which the -arguments are the pointer to the current output object, then a variable argument +Custom message functions are implemented as follows: Start with the macro +``PCMK__OUTPUT_ARGS``, whose arguments are the message name, followed by the +arguments to the message. Then there is the function declaration, for which the +arguments are the pointer to the current output object, then a variable argument list. -To output a custom message, you first need to create, i.e. register, the custom -message that you want to output. Either call ``register_message``, which -registers a custom message at runtime, or make use of the collection of -predefined custom messages in ``fmt_functions``, which is defined in -``lib/pacemaker/pcmk_output.c``. Once you have the message to be outputted, +To output a custom message, you first need to create, i.e. register, the custom +message that you want to output. Either call ``register_message``, which +registers a custom message at runtime, or make use of the collection of +predefined custom messages in ``fmt_functions``, which is defined in +``lib/pacemaker/pcmk_output.c``. Once you have the message to be outputted, output it by calling ``message``. -Note: The ``fmt_functions`` functions accommodate all of the output formats; -the default implementation accommodates any format that isn't explicitly -accommodated. The default output provides valid output for any output format, -but you may still want to implement a specific output, i.e. xml, text, or html. -The ``message`` function automatically knows which implementation to use, +Note: The ``fmt_functions`` functions accommodate all of the output formats; +the default implementation accommodates any format that isn't explicitly +accommodated. The default output provides valid output for any output format, +but you may still want to implement a specific output, i.e. xml, text, or html. +The ``message`` function automatically knows which implementation to use, because the ``pcmk__output_s`` contains this information. The interface (most importantly ``pcmk__output_t``) is declared in ``include/crm/common/output*h``. See the API comments and existing tools for examples. -Some of its important member functions are ``err``, which formats error messages -and ``info``, which formats informational messages. Also, ``list_item``, -which formats list items, ``begin_list``, which starts lists, and ``end_list``, -which ends lists, are important because lists can be useful, yet differently +Some of its important member functions are ``err``, which formats error messages +and ``info``, which formats informational messages. Also, ``list_item``, +which formats list items, ``begin_list``, which starts lists, and ``end_list``, +which ends lists, are important because lists can be useful, yet differently handled by the different output types. .. index:: diff --git a/doc/sphinx/Pacemaker_Development/components.rst b/doc/sphinx/Pacemaker_Development/components.rst index bff62d49dfc..f886eb25688 100644 --- a/doc/sphinx/Pacemaker_Development/components.rst +++ b/doc/sphinx/Pacemaker_Development/components.rst @@ -40,7 +40,7 @@ progresses from the DC's point of view as follows: * The DC sends the node a `join offer` (``CRM_OP_JOIN_OFFER``), and the node proceeds to ``controld_join_welcomed``. 
This can happen in three ways: - + * The joining node will send a `join announce` (``CRM_OP_JOIN_ANNOUNCE``) at its controller startup, and the DC will reply to that with a join offer. * When the DC's peer status callback notices that the node has joined the diff --git a/doc/sphinx/Pacemaker_Development/faq.rst b/doc/sphinx/Pacemaker_Development/faq.rst index b1b1e5ac90a..94deb63734a 100644 --- a/doc/sphinx/Pacemaker_Development/faq.rst +++ b/doc/sphinx/Pacemaker_Development/faq.rst @@ -72,7 +72,7 @@ Frequently Asked Questions :Q: How should I format my Git commit messages? :A: An example is "Feature: scheduler: wobble the frizzle better". - + * The first part is the type of change, used to automatically generate the change log for the next release. Commit messages with the following will be included in the change log: @@ -91,15 +91,15 @@ Frequently Asked Questions change log entry * **Refactor** for refactoring-only code changes * **Build** for build process changes - + * The next part is the name of the component(s) being changed, for example, **controller** or **libcrmcommon** (it's more free-form, so don't sweat getting it exact). - + * The rest briefly describes the change. The git project recommends the entire summary line stay under 50 characters, but more is fine if needed for clarity. - + * Except for the most simple and obvious of changes, the summary should be followed by a blank line and a longer explanation of *why* the change was made. diff --git a/doc/sphinx/Pacemaker_Development/general.rst b/doc/sphinx/Pacemaker_Development/general.rst index 94015c9b8f1..99b89a6ebef 100644 --- a/doc/sphinx/Pacemaker_Development/general.rst +++ b/doc/sphinx/Pacemaker_Development/general.rst @@ -14,7 +14,7 @@ When copyright notices are added to a file, they should look like this: .. note:: **Copyright Notice Format** | Copyright *YYYY[-YYYY]* the Pacemaker project contributors - | + | | The version control history for this file may have further details. The first *YYYY* is the year the file was *originally* published. The original diff --git a/doc/sphinx/Pacemaker_Development/index.rst b/doc/sphinx/Pacemaker_Development/index.rst index a3f624f65b2..b74c9f8cf4f 100644 --- a/doc/sphinx/Pacemaker_Development/index.rst +++ b/doc/sphinx/Pacemaker_Development/index.rst @@ -4,16 +4,11 @@ Pacemaker Development *Working with the Pacemaker Code Base* -Abstract --------- This document has guidelines and tips for developers interested in editing Pacemaker source code and submitting changes for inclusion in the project. Start with the FAQ; the rest is optional detail. -Table of Contents ------------------ - .. toctree:: :maxdepth: 3 :numbered: @@ -27,9 +22,5 @@ Table of Contents helpers evolution glossary - -Index ------ - -* :ref:`genindex` -* :ref:`search` + :ref:`genindex` + :ref:`search` diff --git a/doc/sphinx/Pacemaker_Explained/acls.rst b/doc/sphinx/Pacemaker_Explained/acls.rst index 878f8f64b37..1ceff36d9e0 100644 --- a/doc/sphinx/Pacemaker_Explained/acls.rst +++ b/doc/sphinx/Pacemaker_Explained/acls.rst @@ -9,7 +9,7 @@ Access Control Lists (ACLs) By default, the ``root`` user or any user in the |CRM_DAEMON_GROUP| group can modify Pacemaker's CIB without restriction. Pacemaker offers *access control lists (ACLs)* to provide more fine-grained authorization. - + .. important:: Being able to modify the CIB's resource section allows a user to run any @@ -18,7 +18,7 @@ lists (ACLs)* to provide more fine-grained authorization. 
ACL Prerequisites ################# - + In order to use ACLs: * The ``enable-acl`` :ref:`cluster option ` must be set to @@ -35,7 +35,7 @@ In order to use ACLs: with ACL support. If you are using an older release, your installation supports ACLs only if the output of the command ``pacemakerd --features`` contains ``acls``. In newer versions, ACLs are always enabled. - + .. important:: ``enable-acl`` should be set either by the root user, or as part of a batch @@ -53,7 +53,7 @@ ACL Configuration ACLs are specified within an ``acls`` element of the CIB. The ``acls`` element may contain any number of ``acl_role``, ``acl_target``, and ``acl_group`` elements. - + .. index:: single: Access Control List (ACL); acl_role @@ -65,9 +65,9 @@ ACL Roles An ACL *role* is a collection of permissions allowing or denying access to particular portions of the CIB. A role is configured with an ``acl_role`` element in the CIB ``acls`` section. - -.. table:: **Properties of an acl_role element** - :widths: 1 3 + +.. table:: **Properties of an acl_role Element** + :widths: 25 75 +------------------+-----------------------------------------------------------+ | Attribute | Description | @@ -88,13 +88,13 @@ element in the CIB ``acls`` section. +------------------+-----------------------------------------------------------+ An ``acl_role`` element may contain any number of ``acl_permission`` elements. - + .. index:: single: Access Control List (ACL); acl_permission pair: acl_permission; XML element -.. table:: **Properties of an acl_permission element** - :widths: 1 3 +.. table:: **Properties of an acl_permission Element** + :widths: 25 75 +------------------+-----------------------------------------------------------+ | Attribute | Description | @@ -171,31 +171,31 @@ An ``acl_role`` element may contain any number of ``acl_permission`` elements. * Permissions are applied to the selected XML element's entire XML subtree (all elements enclosed within it). - + * Write permission grants the ability to create, modify, or remove the element and its subtree, and also the ability to create any "scaffolding" elements (enclosing elements that do not have attributes other than an ID). - + * Permissions for more specific matches (more deeply nested elements) take precedence over more general ones. - + * If multiple permissions are configured for the same match (for example, in different roles applied to the same user), any ``deny`` permission takes precedence, then ``write``, then lastly ``read``. - + ACL Targets and Groups ###################### - + ACL targets correspond to user accounts on the system. .. index:: single: Access Control List (ACL); acl_target pair: acl_target; XML element -.. table:: **Properties of an acl_target element** - :widths: 1 3 +.. table:: **Properties of an acl_target Element** + :widths: 25 75 +------------------+-----------------------------------------------------------+ | Attribute | Description | @@ -221,13 +221,13 @@ ACL targets correspond to user accounts on the system. ACL groups correspond to groups on the system. Any role configured for these groups apply to all users in that group *(since 2.1.5)*. - + .. index:: single: Access Control List (ACL); acl_group pair: acl_group; XML element -.. table:: **Properties of an acl_group element** - :widths: 1 3 +.. table:: **Properties of an acl_group Element** + :widths: 25 75 +------------------+-----------------------------------------------------------+ | Attribute | Description | @@ -264,8 +264,8 @@ elements. 
single: Access Control List (ACL); role pair: role; XML element -.. table:: **Properties of a role element** - :widths: 1 3 +.. table:: **Properties of a role Element** + :widths: 25 75 +------------------+-----------------------------------------------------------+ | Attribute | Description | @@ -285,7 +285,7 @@ elements. the CIB, regardless of ACLs. For all other user accounts, when ``enable-acl`` is true, permission to all parts of the CIB is denied by default (permissions must be explicitly granted). - + ACLs and Pacemaker Remote Nodes ############################### @@ -298,117 +298,117 @@ and ``pacemaker-remote`` as the role. ACL Examples ############ - + .. code-block:: xml - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + - + In the above example, the user ``alice`` has the minimal permissions necessary diff --git a/doc/sphinx/Pacemaker_Explained/alerts.rst b/doc/sphinx/Pacemaker_Explained/alerts.rst index 27000ed9410..63b8a6f3110 100644 --- a/doc/sphinx/Pacemaker_Explained/alerts.rst +++ b/doc/sphinx/Pacemaker_Explained/alerts.rst @@ -45,7 +45,7 @@ each event. Alert agents will be called only on cluster nodes. They will be called for events involving Pacemaker Remote nodes, but they will never be called *on* those nodes. - + For more information about sample alert agents provided by Pacemaker and about developing custom alert agents, see the *Pacemaker Administration* document. @@ -56,11 +56,11 @@ developing custom alert agents, see the *Pacemaker Administration* document. Alert Recipients ################ - + Usually, alerts are directed towards a recipient. Thus, each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. - + .. topic:: Alert configuration with recipient .. code-block:: xml @@ -72,68 +72,71 @@ agent separately for each recipient. - + In the above example, the cluster will call ``my-script.sh`` for each event, passing the recipient ``some-address`` as an environment variable. The recipient may be anything the alert agent can recognize -- an IP address, an e-mail address, a file name, whatever the particular agent supports. - - + + .. index:: single: alert; meta-attributes single: meta-attribute; alert meta-attributes Alert Meta-Attributes ##################### - + As with resources, meta-attributes can be configured for alerts to change whether and how Pacemaker calls them. - -.. table:: **Meta-Attributes of an Alert or Recipient** + +.. list-table:: **Meta-Attributes of an Alert or Recipient** :class: longtable - :widths: 1 1 3 - - +------------------+---------------+-----------------------------------------------------+ - | Meta-Attribute | Default | Description | - +==================+===============+=====================================================+ - | description | | .. index:: | - | | | single: acl_permission; description (attribute) | - | | | single: description; acl_permission attribute | - | | | single: attribute; description (acl_permission) | - | | | | - | | | Arbitrary text for user's use (ignored by Pacemaker)| - +------------------+---------------+-----------------------------------------------------+ - | enabled | true | .. index:: | - | | | single: alert; meta-attribute, enabled | - | | | single: meta-attribute; enabled (alert) | - | | | single: enabled; alert meta-attribute | - | | | | - | | | If false for an alert, the alert will not be used. 
| - | | | If true for an alert and false for a particular | - | | | recipient of that alert, that recipient will not be | - | | | used. *(since 2.1.6)* | - +------------------+---------------+-----------------------------------------------------+ - | timestamp-format | %H:%M:%S.%06N | .. index:: | - | | | single: alert; meta-attribute, timestamp-format | - | | | single: meta-attribute; timestamp-format (alert) | - | | | single: timestamp-format; alert meta-attribute | - | | | | - | | | Format the cluster will use when sending the | - | | | event's timestamp to the agent. This is a string as | - | | | used with the ``date(1)`` command. | - +------------------+---------------+-----------------------------------------------------+ - | timeout | 30s | .. index:: | - | | | single: alert; meta-attribute, timeout | - | | | single: meta-attribute; timeout (alert) | - | | | single: timeout; alert meta-attribute | - | | | | - | | | If the alert agent does not complete within this | - | | | amount of time, it will be terminated. | - +------------------+---------------+-----------------------------------------------------+ - + :widths: 20 20 60 + :header-rows: 1 + + * - Meta-Attribute + - Default + - Description + * - description + - + - .. index:: + single: acl_permission; description (attribute) + single: description; acl_permission attribute + single: attribute; description (acl_permission) + + Arbitrary text for user's use (ignored by Pacemaker) + * - enabled + - true + - .. index:: + single: alert; meta-attribute, enabled + single: meta-attribute; enabled (alert) + single: enabled; alert meta-attribute + + If false for an alert, the alert will not be used. If true for an alert + and false for a particular recipient of that alert, that recipient will + not be used. *(since 2.1.6)* + * - timestamp-format + - %H:%M:%S.%06N + - .. index:: + single: alert; meta-attribute, timestamp-format + single: meta-attribute; timestamp-format (alert) + single: timestamp-format; alert meta-attribute + + Format the cluster will use when sending the event's timestamp to the + agent. This is a string as used with the ``date(1)`` command. + * - timeout + - 30s + - .. index:: + single: alert; meta-attribute, timeout + single: meta-attribute; timeout (alert) + single: timeout; alert meta-attribute + + If the alert agent does not complete within this amount of time, it + will be terminated. + Meta-attributes can be configured per alert and/or per recipient. - + .. topic:: Alert configuration with meta-attributes .. code-block:: xml @@ -160,26 +163,26 @@ Meta-attributes can be configured per alert and/or per recipient. - + In the above example, the ``my-script.sh`` will get called twice for each event, with each call using a 15-second timeout. One call will be passed the recipient ``someuser@example.com`` and a timestamp in the format ``%D %H:%M``, while the other call will be passed the recipient ``otheruser@example.com`` and a timestamp in the format ``%c``. - - + + .. index:: single: alert; instance attributes single: instance attribute; alert instance attributes Alert Instance Attributes ######################### - + As with resource agents, agent-specific configuration values may be configured as instance attributes. These will be passed to the agent as additional environment variables. The number, names and allowed values of these instance attributes are completely up to the particular agent. - + .. topic:: Alert configuration with instance attributes .. 
code-block:: xml @@ -200,8 +203,8 @@ attributes are completely up to the particular agent. - - + + .. index:: single: alert; filters pair: XML element; select @@ -213,7 +216,7 @@ attributes are completely up to the particular agent. Alert Filters ############# - + By default, an alert agent will be called for node events, fencing events, and resource events. An agent may choose to ignore certain types of events, but there is still the overhead of calling it for those events. To eliminate that @@ -222,9 +225,9 @@ overhead, you may select which types of events the agent should receive. Alert filters are configured within a ``select`` element inside an ``alert`` element. -.. list-table:: **Possible alert filters** +.. list-table:: **Possible Alert Filters** :class: longtable - :widths: 1 3 + :widths: 25 75 :header-rows: 1 * - Name @@ -256,11 +259,11 @@ element. - + With ```` (the only event type not enabled by default), the agent will receive alerts when a node attribute changes. If you wish the agent to be called only when certain attributes change, you can configure that as well. - + .. topic:: Alert configuration to be called when certain node attributes change .. code-block:: xml @@ -278,7 +281,7 @@ to be called only when certain attributes change, you can configure that as well - + Node attribute alerts are currently considered experimental. Alerts may be limited to attributes set via ``attrd_updater``, and agents may be called multiple times with the same attribute value. diff --git a/doc/sphinx/Pacemaker_Explained/ap-samples.rst b/doc/sphinx/Pacemaker_Explained/ap-samples.rst index 35188a87506..e618ef9314c 100644 --- a/doc/sphinx/Pacemaker_Explained/ap-samples.rst +++ b/doc/sphinx/Pacemaker_Explained/ap-samples.rst @@ -3,7 +3,7 @@ Sample Configurations Empty ##### - + .. topic:: An Empty Configuration .. code-block:: xml @@ -17,10 +17,10 @@ Empty - + Simple ###### - + .. topic:: A simple configuration with two nodes, some cluster options and a resource .. code-block:: xml @@ -65,14 +65,14 @@ Simple - + In the above example, we have one resource (an IP address) that we check every five minutes and will run on host ``c001n01`` until either the resource fails 10 times or the host shuts down. - + Advanced Configuration ###################### - + .. topic:: An advanced configuration with groups, clones and STONITH .. code-block:: xml diff --git a/doc/sphinx/Pacemaker_Explained/cluster-options.rst b/doc/sphinx/Pacemaker_Explained/cluster-options.rst index 6ebe5f38ebd..22329142e5d 100644 --- a/doc/sphinx/Pacemaker_Explained/cluster-options.rst +++ b/doc/sphinx/Pacemaker_Explained/cluster-options.rst @@ -73,7 +73,7 @@ We will refer to a set of options and its enclosing element as a *block*. .. list-table:: **Properties of an Option Block's Enclosing Element** :class: longtable - :widths: 2 2 3 5 + :widths: 15 15 15 55 :header-rows: 1 * - Name @@ -171,7 +171,7 @@ holds. So the decision was made to place them in an easy-to-find location. .. list-table:: **CIB Properties** :class: longtable - :widths: 2 2 2 5 + :widths: 20 15 10 55 :header-rows: 1 * - Name @@ -179,10 +179,10 @@ holds. So the decision was made to place them in an easy-to-find location. - Default - Description * - .. _admin_epoch: - + .. index:: pair: admin_epoch; cib - + admin_epoch - :ref:`nonnegative integer ` - 0 @@ -192,30 +192,30 @@ holds. So the decision was made to place them in an easy-to-find location. very important. 
``admin_epoch`` is never modified by the cluster; you can use this to make the configurations on any inactive nodes obsolete. * - .. _epoch: - + .. index:: pair: epoch; cib - + epoch - :ref:`nonnegative integer ` - 0 - The cluster increments this every time the CIB's configuration section is updated. * - .. _num_updates: - + .. index:: pair: num_updates; cib - + num_updates - :ref:`nonnegative integer ` - 0 - The cluster increments this every time the CIB's configuration or status sections are updated, and resets it to 0 when epoch changes. * - .. _validate_with: - + .. index:: pair: validate-with; cib - + validate-with - :ref:`enumeration ` - @@ -225,10 +225,10 @@ holds. So the decision was made to place them in an easy-to-find location. names of schema files installed on the local machine (for example, "pacemaker-3.9") * - .. _remote_tls_port: - + .. index:: pair: remote-tls-port; cib - + remote-tls-port - :ref:`port ` - @@ -237,10 +237,10 @@ holds. So the decision was made to place them in an easy-to-find location. the cluster. No key is used, so this should be used only on a protected network where man-in-the-middle attacks can be avoided. * - .. _remote_clear_port: - + .. index:: pair: remote-clear-port; cib - + remote-clear-port - :ref:`port ` - @@ -249,20 +249,20 @@ holds. So the decision was made to place them in an easy-to-find location. in the cluster. No encryption is used, so this should be used only on a protected network. * - .. _cib_last_written: - + .. index:: pair: cib-last-written; cib - + cib-last-written - :ref:`date/time ` - - Indicates when the configuration was last written to disk. Maintained by the cluster; for informational purposes only. * - .. _have_quorum: - + .. index:: pair: have-quorum; cib - + have-quorum - :ref:`boolean ` - @@ -270,20 +270,20 @@ holds. So the decision was made to place them in an easy-to-find location. response is determined by ``no-quorum-policy`` (see below). Maintained by the cluster. * - .. _dc_uuid: - + .. index:: pair: dc-uuid; cib - + dc-uuid - :ref:`text ` - - Node ID of the cluster's current designated controller (DC). Used and maintained by the cluster. * - .. _execution_date: - + .. index:: pair: execution-date; cib - + execution-date - :ref:`epoch time ` - @@ -310,7 +310,7 @@ values, by running the ``man pacemaker-schedulerd`` and .. list-table:: **Cluster Options** :class: longtable - :widths: 2 2 2 5 + :widths: 25 13 12 50 :header-rows: 1 * - Name @@ -318,10 +318,10 @@ values, by running the ``man pacemaker-schedulerd`` and - Default - Description * - .. _cluster_name: - + .. index:: pair: cluster option; cluster-name - + cluster-name - :ref:`text ` - @@ -333,20 +333,20 @@ values, by running the ``man pacemaker-schedulerd`` and by certain resource agents (for example, the ``ocf:heartbeat:GFS2`` agent stores the cluster name in filesystem meta-data). * - .. _dc_version: - + .. index:: pair: cluster option; dc-version - + dc-version - :ref:`version ` - *detected* - Version of Pacemaker on the cluster's designated controller (DC). Maintained by the cluster, and intended for diagnostic purposes. * - .. _cluster_infrastructure: - + .. index:: pair: cluster option; cluster-infrastructure - + cluster-infrastructure - :ref:`text ` - *detected* @@ -354,15 +354,15 @@ values, by running the ``man pacemaker-schedulerd`` and Maintained by the cluster, and intended for informational and diagnostic purposes. * - .. _no_quorum_policy: - + .. 
index:: pair: cluster option; no-quorum-policy - + no-quorum-policy - :ref:`enumeration ` - stop - What to do when the cluster does not have quorum. Allowed values: - + * ``ignore:`` continue all resource management * ``freeze:`` continue resource management, but don't recover resources from nodes not in the affected partition @@ -373,10 +373,10 @@ values, by running the ``man pacemaker-schedulerd`` and *(since 2.1.9)* * ``suicide:`` same as ``fence`` *(deprecated since 2.1.9)* * - .. _batch_limit: - + .. index:: pair: cluster option; batch-limit - + batch-limit - :ref:`integer ` - 0 @@ -386,10 +386,10 @@ values, by running the ``man pacemaker-schedulerd`` and dynamically calculated limit only when any node has high load. If -1, the cluster will not impose any limit. * - .. _migration_limit: - + .. index:: pair: cluster option; migration-limit - + migration-limit - :ref:`integer ` - -1 @@ -397,10 +397,10 @@ values, by running the ``man pacemaker-schedulerd`` and cluster is allowed to execute in parallel on a node. A value of -1 means unlimited. * - .. _load_threshold: - + .. index:: pair: cluster option; load-threshold - + load-threshold - :ref:`percentage ` - 80% @@ -408,10 +408,10 @@ values, by running the ``man pacemaker-schedulerd`` and cluster will slow down its recovery process when the amount of system resources used (currently CPU) approaches this limit. * - .. _node_action_limit: - + .. index:: pair: cluster option; node-action-limit - + node-action-limit - :ref:`integer ` - 0 @@ -420,10 +420,10 @@ values, by running the ``man pacemaker-schedulerd`` and per node. :ref:`PCMK_node_action_limit ` overrides this option on a per-node basis. * - .. _symmetric_cluster: - + .. index:: pair: cluster option; symmetric-cluster - + symmetric-cluster - :ref:`boolean ` - true @@ -431,20 +431,20 @@ values, by running the ``man pacemaker-schedulerd`` and is allowed to run on a node only if a :ref:`location constraint ` enables it. * - .. _stop_all_resources: - + .. index:: pair: cluster option; stop-all-resources - + stop-all-resources - :ref:`boolean ` - false - Whether all resources should be disallowed from running (can be useful during maintenance or troubleshooting) * - .. _stop_orphan_resources: - + .. index:: pair: cluster option; stop-orphan-resources - + stop-orphan-resources - :ref:`boolean ` - true @@ -453,20 +453,20 @@ values, by running the ``man pacemaker-schedulerd`` and :ref:`is-managed ` (that is, even unmanaged resources will be stopped when orphaned if this value is ``true``). * - .. _stop_orphan_actions: - + .. index:: pair: cluster option; stop-orphan-actions - + stop-orphan-actions - :ref:`boolean ` - true - Whether recurring :ref:`operations ` that have been deleted from the configuration should be cancelled * - .. _start_failure_is_fatal: - + .. index:: pair: cluster option; start-failure-is-fatal - + start-failure-is-fatal - :ref:`boolean ` - true @@ -475,20 +475,20 @@ values, by running the ``man pacemaker-schedulerd`` and decide whether the node is still eligible based on the resource's current failure count and ``migration-threshold``. * - .. _enable_startup_probes: - + .. index:: pair: cluster option; enable-startup-probes - + enable-startup-probes - :ref:`boolean ` - true - Whether the cluster should check the pre-existing state of resources when the cluster starts * - .. _maintenance_mode: - + .. 
index:: pair: cluster option; maintenance-mode - + maintenance-mode - :ref:`boolean ` - false @@ -500,19 +500,19 @@ values, by running the ``man pacemaker-schedulerd`` and resource meta-attributes, and :ref:`enabled ` operation meta-attribute. * - .. _stonith_enabled: - + .. index:: pair: cluster option; stonith-enabled - + stonith-enabled - :ref:`boolean ` - true - Whether the cluster is allowed to fence nodes (for example, failed nodes and nodes with resources that can't be stopped). - + If true, at least one fence device must be configured before resources are allowed to run. - + If false, unresponsive nodes are immediately assumed to be running no resources, and resource recovery on online nodes starts without any further protection (which can mean *data loss* if the unresponsive node @@ -523,30 +523,30 @@ values, by running the ``man pacemaker-schedulerd`` and requests initiated externally (such as with the ``stonith_admin`` command-line tool). * - .. _stonith_action: - + .. index:: pair: cluster option; stonith-action - + stonith-action - :ref:`enumeration ` - reboot - Action the cluster should send to the fence agent when a node must be fenced. Allowed values are ``reboot`` and ``off``. * - .. _stonith_timeout: - + .. index:: pair: cluster option; stonith-timeout - + stonith-timeout - :ref:`duration ` - 60s - How long to wait for ``on``, ``off``, and ``reboot`` fence actions to complete by default. * - .. _stonith_max_attempts: - + .. index:: pair: cluster option; stonith-max-attempts - + stonith-max-attempts - :ref:`score ` - 10 @@ -570,17 +570,17 @@ values, by running the ``man pacemaker-schedulerd`` and performed via SBD without requiring a fencing resource explicitly configured. * - .. _stonith_watchdog_timeout: - + .. index:: pair: cluster option; stonith-watchdog-timeout - + stonith-watchdog-timeout - :ref:`timeout ` - 0 - If nonzero, and the cluster detects ``have-watchdog`` as ``true``, then watchdog-based self-fencing will be performed via SBD when fencing is required. - + If this is set to a positive value, lost nodes are assumed to achieve self-fencing within this much time. @@ -594,19 +594,19 @@ values, by running the ``man pacemaker-schedulerd`` and If this is set to a negative value, the cluster will use twice the local value of the ``SBD_WATCHDOG_TIMEOUT`` environment variable if that is positive, or otherwise treat this as 0. - + **Warning:** When used, this timeout must be larger than ``SBD_WATCHDOG_TIMEOUT`` on all nodes that use watchdog-based SBD, and Pacemaker will refuse to start on any of those nodes where this is not true for the local value or SBD is not active. When this is set to a negative value, ``SBD_WATCHDOG_TIMEOUT`` must be set to the same value on all nodes that use SBD, otherwise data corruption or loss could occur. - + * - .. _concurrent-fencing: - + .. index:: pair: cluster option; concurrent-fencing - + concurrent-fencing - :ref:`boolean ` - false @@ -616,10 +616,10 @@ values, by running the ``man pacemaker-schedulerd`` and itself such as recurring device monitors and ``status`` and ``list`` commands, are not limited by this option. * - .. _fence_reaction: - + .. index:: pair: cluster option; fence-reaction - + fence-reaction - :ref:`enumeration ` - stop @@ -632,10 +632,10 @@ values, by running the ``man pacemaker-schedulerd`` and failure. The default is likely to be changed to ``panic`` in a future release. *(since 2.0.3)* * - .. _priority_fencing_delay: - + .. 
index:: pair: cluster option; priority-fencing-delay - + priority-fencing-delay - :ref:`duration ` - 0 @@ -651,10 +651,10 @@ values, by running the ``man pacemaker-schedulerd`` and than (safely twice) the maximum delay from those parameters. *(since 2.0.4)* * - .. _node_pending_timeout: - + .. index:: pair: cluster option; node-pending-timeout - + node-pending-timeout - :ref:`duration ` - 0 @@ -663,10 +663,10 @@ values, by running the ``man pacemaker-schedulerd`` and managing resources. A value of 0 means never fence pending nodes. Setting the value to 2h means fence nodes after 2 hours. *(since 2.1.7)* * - .. _cluster_delay: - + .. index:: pair: cluster option; cluster-delay - + cluster-delay - :ref:`duration ` - 60s @@ -675,10 +675,10 @@ values, by running the ``man pacemaker-schedulerd`` and node within this time (beyond the action's own timeout). The ideal value will depend on the speed and load of your network and cluster nodes. * - .. _dc_deadtime: - + .. index:: pair: cluster option; dc-deadtime - + dc-deadtime - :ref:`duration ` - 20s @@ -686,10 +686,10 @@ values, by running the ``man pacemaker-schedulerd`` and ideal value will depend on the speed and load of your network and cluster nodes. * - .. _cluster_ipc_limit: - + .. index:: pair: cluster option; cluster-ipc-limit - + cluster-ipc-limit - :ref:`nonnegative integer ` - 500 @@ -699,10 +699,10 @@ values, by running the ``man pacemaker-schedulerd`` and of nodes. The default of 500 is also the minimum. Raise this if you see "Evicting client" log messages for cluster daemon process IDs. * - .. _pe_error_series_max: - + .. index:: pair: cluster option; pe-error-series-max - + pe-error-series-max - :ref:`integer ` - -1 @@ -710,10 +710,10 @@ values, by running the ``man pacemaker-schedulerd`` and can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none. * - .. _pe_warn_series_max: - + .. index:: pair: cluster option; pe-warn-series-max - + pe-warn-series-max - :ref:`integer ` - 5000 @@ -721,10 +721,10 @@ values, by running the ``man pacemaker-schedulerd`` and inputs can be helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none. * - .. _pe_input_series_max: - + .. index:: pair: cluster option; pe-input-series-max - + pe-input-series-max - :ref:`integer ` - 4000 @@ -732,20 +732,20 @@ values, by running the ``man pacemaker-schedulerd`` and helpful during troubleshooting and when reporting issues. A negative value means save all inputs, and 0 means save none. * - .. _enable_acl: - + .. index:: pair: cluster option; enable-acl - + enable-acl - :ref:`boolean ` - false - Whether :ref:`access control lists ` should be used to authorize CIB modifications * - .. _placement_strategy: - + .. index:: pair: cluster option; placement-strategy - + placement-strategy - :ref:`enumeration ` - default @@ -753,10 +753,10 @@ values, by running the ``man pacemaker-schedulerd`` and :ref:`utilization`). Allowed values are ``default``, ``utilization``, ``balanced``, and ``minimal``. * - .. _node_health_strategy: - + .. index:: pair: cluster option; node-health-strategy - + node-health-strategy - :ref:`enumeration ` - none @@ -764,20 +764,20 @@ values, by running the ``man pacemaker-schedulerd`` and attributes. Allowed values are ``none``, ``migrate-on-red``, ``only-green``, ``progressive``, and ``custom``. * - .. _node_health_base: - + .. 
index:: pair: cluster option; node-health-base - + node-health-base - :ref:`score ` - 0 - The base health score assigned to a node. Only used when ``node-health-strategy`` is ``progressive``. * - .. _node_health_green: - + .. index:: pair: cluster option; node-health-green - + node-health-green - :ref:`score ` - 0 @@ -785,10 +785,10 @@ values, by running the ``man pacemaker-schedulerd`` and Only used when ``node-health-strategy`` is ``progressive`` or ``custom``. * - .. _node_health_yellow: - + .. index:: pair: cluster option; node-health-yellow - + node-health-yellow - :ref:`score ` - 0 @@ -796,10 +796,10 @@ values, by running the ``man pacemaker-schedulerd`` and Only used when ``node-health-strategy`` is ``progressive`` or ``custom``. * - .. _node_health_red: - + .. index:: pair: cluster option; node-health-red - + node-health-red - :ref:`score ` - -INFINITY @@ -807,10 +807,10 @@ values, by running the ``man pacemaker-schedulerd`` and Only used when ``node-health-strategy`` is ``progressive`` or ``custom``. * - .. _cluster_recheck_interval: - + .. index:: pair: cluster option; cluster-recheck-interval - + cluster-recheck-interval - :ref:`duration ` - 15min @@ -818,7 +818,7 @@ values, by running the ``man pacemaker-schedulerd`` and recheck the cluster for failure-timeout settings and most time-based rules *(since 2.0.3)*. However, it will also recheck the cluster after this amount of inactivity. This has three main effects: - + * :ref:`Rules ` using ``date_spec`` are guaranteed to be checked only this often. * If :ref:`fencing ` fails enough to reach @@ -828,13 +828,13 @@ values, by running the ``man pacemaker-schedulerd`` and scheduler incorrectly determines only some of the actions needed to react to a particular event, it will often correctly determine the rest after at most this time. - + A value of 0 disables this polling. * - .. _shutdown_lock: - + .. index:: pair: cluster option; shutdown-lock - + shutdown-lock - :ref:`boolean ` - false @@ -855,10 +855,10 @@ values, by running the ``man pacemaker-schedulerd`` and not if Pacemaker Remote is stopped on the remote node without disabling the connection resource). *(since 2.0.4)* * - .. _shutdown_lock_limit: - + .. index:: pair: cluster option; shutdown-lock-limit - + shutdown-lock-limit - :ref:`duration ` - 0 @@ -868,10 +868,10 @@ values, by running the ``man pacemaker-schedulerd`` and not rejoined. (This works with remote nodes only if their connection resource's ``target-role`` is set to ``Stopped``.) *(since 2.0.4)* * - .. _startup_fencing: - + .. index:: pair: cluster option; startup-fencing - + startup-fencing - :ref:`boolean ` - true @@ -881,10 +881,10 @@ values, by running the ``man pacemaker-schedulerd`` and acts as a grace period before this fencing, since a DC must be elected to schedule fencing. * - .. _election_timeout: - + .. index:: pair: cluster option; election-timeout - + election-timeout - :ref:`duration ` - 2min @@ -892,40 +892,40 @@ values, by running the ``man pacemaker-schedulerd`` and of starting an election, the node that initiated the election will declare itself the winner. * - .. _shutdown_escalation: - + .. index:: pair: cluster option; shutdown-escalation - + shutdown-escalation - :ref:`duration ` - 20min - *Advanced Use Only:* The controller will exit immediately if a shutdown does not complete within this much time. * - .. _join_integration_timeout: - + .. 
index:: pair: cluster option; join-integration-timeout - + join-integration-timeout - :ref:`duration ` - 3min - *Advanced Use Only:* If you need to adjust this value, it probably indicates the presence of a bug. * - .. _join_finalization_timeout: - + .. index:: pair: cluster option; join-finalization-timeout - + join-finalization-timeout - :ref:`duration ` - 30min - *Advanced Use Only:* If you need to adjust this value, it probably indicates the presence of a bug. * - .. _transition_delay: - + .. index:: pair: cluster option; transition-delay - + transition-delay - :ref:`duration ` - 0s diff --git a/doc/sphinx/Pacemaker_Explained/collective.rst b/doc/sphinx/Pacemaker_Explained/collective.rst index 73fb7a7e4a4..c68e5942e20 100644 --- a/doc/sphinx/Pacemaker_Explained/collective.rst +++ b/doc/sphinx/Pacemaker_Explained/collective.rst @@ -22,7 +22,7 @@ One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, we support the concept of groups. - + .. topic:: A group of two primitive resources .. code-block:: xml @@ -34,26 +34,26 @@ of groups. - - + + Although the example above contains only two resources, there is no limit to the number of resources a group can contain. The example is also sufficient to explain the fundamental properties of a group: - + * Resources are started in the order they appear in (**Public-IP** first, then **Email**) * Resources are stopped in the reverse order to which they appear in (**Email** first, then **Public-IP**) - + If a resource in the group can't run anywhere, then nothing after that is allowed to run, too. - + * If **Public-IP** can't run anywhere, neither can **Email**; * but if **Email** can't run anywhere, this does not affect **Public-IP** in any way - + The group above is logically equivalent to writing: - + .. topic:: How the cluster sees a group resource .. code-block:: xml @@ -71,7 +71,7 @@ The group above is logically equivalent to writing: - + Obviously as the group grows bigger, the reduced configuration effort can become significant. @@ -85,26 +85,26 @@ mount, an IP address, and an application that uses them. Group Properties ________________ -.. table:: **Properties of a Group Resource** - :widths: 1 4 - - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. index:: | - | | single: group; property, id | - | | single: property; id (group) | - | | single: id; group property | - | | | - | | A unique name for the group | - +-------------+------------------------------------------------------------------+ - | description | .. index:: | - | | single: group; attribute, description | - | | single: attribute; description (group) | - | | single: description; group attribute | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ +.. list-table:: **Properties of a Group Resource** + :widths: 25 75 + :header-rows: 1 + + * - Field + - Description + * - id + - .. index:: + single: group; property, id + single: property; id (group) + single: id; group property + + A unique name for the group + * - description + - .. 
index:: + single: group; attribute, description + single: attribute; description (group) + single: description; group attribute + + Arbitrary text for user's use (ignored by Pacemaker) Group Options _____________ @@ -112,26 +112,26 @@ _____________ Groups inherit the ``priority``, ``target-role``, and ``is-managed`` properties from primitive resources. See :ref:`resource_options` for information about those properties. - + Group Instance Attributes _________________________ Groups have no instance attributes. However, any that are set for the group object will be inherited by the group's children. - + Group Contents ______________ Groups may only contain a collection of cluster resources (see :ref:`primitive-resource`). To refer to a child of a group resource, just use the child's ``id`` instead of the group's. - + Group Constraints _________________ - + Although it is possible to reference a group's children in constraints, it is usually preferable to reference the group itself. - + .. topic:: Some constraints involving groups .. code-block:: xml @@ -140,14 +140,14 @@ constraints, it is usually preferable to reference the group itself. - + .. index:: pair: resource-stickiness; group Group Stickiness ________________ - + Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the @@ -158,7 +158,7 @@ current location with a score of 500. .. index:: single: clone single: resource; clone - + .. _s-resource-clone: Clones - Resources That Can Have Multiple Active Instances @@ -167,16 +167,16 @@ Clones - Resources That Can Have Multiple Active Instances *Clone* resources are resources that can have more than one copy active at the same time. This allows you, for example, to run a copy of a daemon on every node. You can clone any primitive or group resource [#]_. - + Anonymous versus Unique Clones ______________________________ - + A clone resource is configured to be either *anonymous* or *globally unique*. - + Anonymous clones are the simplest. These behave completely identically everywhere they are running. Because of this, there can be only one instance of an anonymous clone active per node. - + The instances of globally unique clones are distinct entities. All instances are launched identically, but one instance of the clone is not identical to any other instance, whether running on the same node or a different node. As an @@ -200,37 +200,37 @@ Services that support such a special role have various terms for the special role and the default role: primary and secondary, master and replica, controller and worker, etc. Pacemaker uses the terms *promoted* and *unpromoted* to be agnostic to what the service calls them or what they do. - + All that Pacemaker cares about is that an instance comes up in the unpromoted role when started, and the resource agent supports the ``promote`` and ``demote`` actions to manage entering and exiting the promoted role. .. index:: pair: XML element; clone - + Clone Properties ________________ - -.. table:: **Properties of a Clone Resource** - :widths: 1 4 - - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. 
index:: | - | | single: clone; property, id | - | | single: property; id (clone) | - | | single: id; clone property | - | | | - | | A unique name for the clone | - +-------------+------------------------------------------------------------------+ - | description | .. index:: | - | | single: clone; attribute, description | - | | single: attribute; description (clone) | - | | single: description; clone attribute | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ + +.. list-table:: **Properties of a Clone Resource** + :widths: 25 75 + :header-rows: 1 + + * - Field + - Description + * - id + - .. index:: + single: clone; property, id + single: property; id (clone) + single: id; clone property + + A unique name for the clone + * - description + - .. index:: + single: clone; attribute, description + single: attribute; description (clone) + single: description; clone attribute + + Arbitrary text for user's use (ignored by Pacemaker) .. index:: pair: options; clone @@ -240,116 +240,116 @@ _____________ :ref:`Options ` inherited from primitive resources: ``priority, target-role, is-managed`` - -.. table:: **Clone-specific configuration options** + +.. list-table:: **Clone-Specific Configuration Options** :class: longtable - :widths: 1 1 3 - - +-------------------+-----------------+-------------------------------------------------------+ - | Field | Default | Description | - +===================+=================+=======================================================+ - | globally-unique | **true** if | .. index:: | - | | clone-node-max | single: clone; option, globally-unique | - | | is greater than | single: option; globally-unique (clone) | - | | 1 *(since* | single: globally-unique; clone option | - | | *3.0.0)*, | | - | | otherwise | If **true**, each clone instance performs a | - | | **false** | distinct function, such that a single node can run | - | | | more than one instance at the same time | - +-------------------+-----------------+-------------------------------------------------------+ - | clone-max | 0 | .. index:: | - | | | single: clone; option, clone-max | - | | | single: option; clone-max (clone) | - | | | single: clone-max; clone option | - | | | | - | | | The maximum number of clone instances that can | - | | | be started across the entire cluster. If 0, the | - | | | number of nodes in the cluster will be used. | - +-------------------+-----------------+-------------------------------------------------------+ - | clone-node-max | 1 | .. index:: | - | | | single: clone; option, clone-node-max | - | | | single: option; clone-node-max (clone) | - | | | single: clone-node-max; clone option | - | | | | - | | | If the clone is globally unique, this is the maximum | - | | | number of clone instances that can be started | - | | | on a single node | - +-------------------+-----------------+-------------------------------------------------------+ - | clone-min | 0 | .. index:: | - | | | single: clone; option, clone-min | - | | | single: option; clone-min (clone) | - | | | single: clone-min; clone option | - | | | | - | | | Require at least this number of clone instances | - | | | to be runnable before allowing resources | - | | | depending on the clone to be runnable. A value | - | | | of 0 means require all clone instances to be | - | | | runnable. | - +-------------------+-----------------+-------------------------------------------------------+ - | notify | false | .. 
index:: | - | | | single: clone; option, notify | - | | | single: option; notify (clone) | - | | | single: notify; clone option | - | | | | - | | | Call the resource agent's **notify** action for | - | | | all active instances, before and after starting | - | | | or stopping any clone instance. The resource | - | | | agent must support this action. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | ordered | false | .. index:: | - | | | single: clone; option, ordered | - | | | single: option; ordered (clone) | - | | | single: ordered; clone option | - | | | | - | | | If **true**, clone instances must be started | - | | | sequentially instead of in parallel. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | interleave | false | .. index:: | - | | | single: clone; option, interleave | - | | | single: option; interleave (clone) | - | | | single: interleave; clone option | - | | | | - | | | When this clone is ordered relative to another | - | | | clone, if this option is **false** (the default), | - | | | the ordering is relative to *all* instances of | - | | | the other clone, whereas if this option is | - | | | **true**, the ordering is relative only to | - | | | instances on the same node. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | promotable | false | .. index:: | - | | | single: clone; option, promotable | - | | | single: option; promotable (clone) | - | | | single: promotable; clone option | - | | | | - | | | If **true**, clone instances can perform a | - | | | special role that Pacemaker will manage via the | - | | | resource agent's **promote** and **demote** | - | | | actions. The resource agent must support these | - | | | actions. | - | | | Allowed values: **false**, **true** | - +-------------------+-----------------+-------------------------------------------------------+ - | promoted-max | 1 | .. index:: | - | | | single: clone; option, promoted-max | - | | | single: option; promoted-max (clone) | - | | | single: promoted-max; clone option | - | | | | - | | | If ``promotable`` is **true**, the number of | - | | | instances that can be promoted at one time | - | | | across the entire cluster | - +-------------------+-----------------+-------------------------------------------------------+ - | promoted-node-max | 1 | .. index:: | - | | | single: clone; option, promoted-node-max | - | | | single: option; promoted-node-max (clone) | - | | | single: promoted-node-max; clone option | - | | | | - | | | If the clone is promotable and globally unique, this | - | | | is the number of instances that can be promoted at | - | | | one time on a single node (up to ``clone-node-max``) | - +-------------------+-----------------+-------------------------------------------------------+ - + :widths: 20 20 60 + :header-rows: 1 + + * - Field + - Default + - Description + * - globally-unique + - **true** if clone-node-max is greater than 1 *(since 3.0.0)*, otherwise + **false** + - .. index:: + single: clone; option, globally-unique + single: option; globally-unique (clone) + single: globally-unique; clone option + + If **true**, each clone instance performs a distinct function, such that + a single node can run more than one instance at the same time + * - clone-max + - 0 + - .. 
index:: + single: clone; option, clone-max + single: option; clone-max (clone) + single: clone-max; clone option + + The maximum number of clone instances that can be started across the + entire cluster. If 0, the number of nodes in the cluster will be used. + * - clone-node-max + - 1 + - .. index:: + single: clone; option, clone-node-max + single: option; clone-node-max (clone) + single: clone-node-max; clone option + + If the clone is globally unique, this is the maximum number of clone + instances that can be started on a single node + * - clone-min + - 0 + - .. index:: + single: clone; option, clone-min + single: option; clone-min (clone) + single: clone-min; clone option + + Require at least this number of clone instances to be runnable before + allowing resources depending on the clone to be runnable. A value of + 0 means require all clone instances to be runnable. + * - notify + - false + - .. index:: + single: clone; option, notify + single: option; notify (clone) + single: notify; clone option + + Call the resource agent's **notify** action for all active instances, + before and after starting or stopping any clone instance. The + resource agent must support this action. Allowed values: **false**, + **true** + * - ordered + - false + - .. index:: + single: clone; option, ordered + single: option; ordered (clone) + single: ordered; clone option + + If **true**, clone instances must be started sequentially instead of + in parallel. Allowed values: **false**, **true** + * - interleave + - false + - .. index:: + single: clone; option, interleave + single: option; interleave (clone) + single: interleave; clone option + + When this clone is ordered relative to another clone, if this option is + **false** (the default), the ordering is relative to *all* instances of + the other clone, whereas if this option is **true**, the ordering is + relative only to instances on the same node. Allowed values: **false**, + **true** + * - promotable + - false + - .. index:: + single: clone; option, promotable + single: option; promotable (clone) + single: promotable; clone option + + If **true**, clone instances can perform a special role that Pacemaker + will manage via the resource agent's **promote** and **demote** actions. + The resource agent must support these actions. Allowed values: + **false**, **true** + * - promoted-max + - 1 + - .. index:: + single: clone; option, promoted-max + single: option; promoted-max (clone) + single: promoted-max; clone option + + If ``promotable`` is **true**, the number of instances that can be + promoted at one time across the entire cluster + * - promoted-node-max + - 1 + - .. index:: + single: clone; option, promoted-node-max + single: option; promoted-node-max (clone) + single: promoted-node-max; clone option + + If the clone is promotable and globally unique, this is the number of + instances that can be promoted at one time on a single node (up to + ``clone-node-max``) + .. note:: **Deprecated Terminology** In older documentation and online examples, you may see promotable clones @@ -363,12 +363,12 @@ _____________ * Using ``Master`` as a role name instead of ``Promoted`` * Using ``Slave`` as a role name instead of ``Unpromoted`` - + Clone Contents ______________ - + Clones must contain exactly one primitive or group resource. - + .. topic:: A clone that runs a web server on all nodes .. code-block:: xml @@ -379,32 +379,32 @@ Clones must contain exactly one primitive or group resource. - + .. 
warning:: You should never reference the name of a clone's child (the primitive or group resource being cloned). If you think you need to do this, you probably need to re-evaluate your design. - + Clone Instance Attribute ________________________ - + Clones have no instance attributes; however, any that are set here will be inherited by the clone's child. - + .. index:: single: clone; constraint Clone Constraints _________________ - + In most cases, a clone will have a single instance on each active cluster node. If this is not the case, you can indicate which nodes the cluster should preferentially assign copies to with resource location constraints. These constraints are written no differently from those for primitive resources except that the clone's **id** is used. - + .. topic:: Some constraints involving clones .. code-block:: xml @@ -413,8 +413,8 @@ for primitive resources except that the clone's **id** is used. - - + + Ordering constraints behave slightly differently for clones. In the example above, ``apache-stats`` will wait until all copies of ``apache-clone`` that need to be started have done so before being started itself. @@ -431,7 +431,7 @@ Colocation between clones is also possible. If one clone **A** is colocated with another clone **B**, the set of allowed locations for **A** is limited to nodes on which **B** is (or will be) active. Placement is then performed normally. - + .. index:: single: promotable clone; constraint @@ -439,13 +439,13 @@ normally. Promotable Clone Constraints ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - + For promotable clone resources, the ``first-action`` and/or ``then-action`` fields for ordering constraints may be set to ``promote`` or ``demote`` to constrain the promoted role, and colocation constraints may contain ``rsc-role`` and/or ``with-rsc-role`` fields. -.. topic:: Constraints involving promotable clone resources +.. topic:: Constraints involving promotable clone resources .. code-block:: xml @@ -458,7 +458,7 @@ promoted role, and colocation constraints may contain ``rsc-role`` and/or - + In the example above, **myApp** will wait until one of the database copies has been started and promoted before being started @@ -479,7 +479,7 @@ possible. In such cases, the set of allowed locations for the **rsc** clone is (after role filtering) limited to nodes on which the ``with-rsc`` promotable clone resource is (or will be) in the specified role. Placement is then performed as normal. - + Using Promotable Clone Resources in Colocation Sets ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -489,7 +489,7 @@ inside a colocation constraint, the resource set may take a ``role`` attribute. In the following example, an instance of **B** may be promoted only on a node where **A** is in the promoted role. Additionally, resources **C** and **D** must be located on a node where both **A** and **B** are promoted. - + .. topic:: Colocate C and D with A's and B's promoted instances .. code-block:: xml @@ -506,7 +506,7 @@ must be located on a node where both **A** and **B** are promoted. - + Using Promotable Clone Resources in Ordered Sets ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -530,25 +530,25 @@ attribute. - + In the above example, **B** cannot be promoted until **A** has been promoted. Additionally, resources **C** and **D** must wait until **A** and **B** have been promoted before they can start. .. index:: pair: resource-stickiness; clone - + .. 
_s-clone-stickiness: Clone Stickiness ________________ - + To achieve stable assignments, clones are slightly sticky by default. If no value for ``resource-stickiness`` is provided, the clone will use a value of 1. Being a small value, it causes minimal disturbance to the score calculations of other resources but is enough to prevent Pacemaker from needlessly moving instances around the cluster. - + .. note:: For globally unique clones, this may result in multiple instances of the @@ -557,7 +557,7 @@ instances around the cluster. If you do not want this behavior, specify a ``resource-stickiness`` of 0 for the clone temporarily and let the cluster adjust, then set it back to 1 if you want the default behavior to apply again. - + .. important:: If ``resource-stickiness`` is set in the ``rsc_defaults`` section, it will @@ -574,7 +574,7 @@ active, but also that its actual role matches its intended one. Define two monitoring actions: the usual one will cover the unpromoted role, and an additional one with ``role="Promoted"`` will cover the promoted role. - + .. topic:: Monitoring both states of a promotable clone resource .. code-block:: xml @@ -589,8 +589,8 @@ and an additional one with ``role="Promoted"`` will cover the promoted role. - - + + .. important:: It is crucial that *every* monitor operation has a different interval! @@ -599,7 +599,7 @@ and an additional one with ``role="Promoted"`` will cover the promoted role. had the same monitor interval for both roles, Pacemaker would ignore the role when checking the status -- which would cause unexpected return codes, and therefore unnecessary complications. - + .. _s-promotion-scores: Determining Which Instance is Promoted @@ -618,7 +618,7 @@ ways: * Constraints: Location constraints can indicate which nodes are most preferred to be promoted. - + .. topic:: Explicitly preferring node1 to be promoted .. code-block:: xml @@ -627,14 +627,14 @@ ways: - + .. index: single: bundle single: resource; bundle pair: container; Docker pair: container; podman - + .. _s-resource-bundle: Bundles - Containerized Resources @@ -643,10 +643,10 @@ Bundles - Containerized Resources Pacemaker supports a special syntax for launching a service inside a `container `_ with any infrastructure it requires: the *bundle*. - + Pacemaker bundles support `Docker `_ and `podman `_ *(since 2.0.1)* container technologies. [#]_ - + .. topic:: A bundle for a containerized web server .. code-block:: xml @@ -677,7 +677,7 @@ Pacemaker bundles support `Docker `_ and Bundle Prerequisites ____________________ - + Before configuring a bundle in Pacemaker, the user must install the appropriate container launch technology (Docker or podman), and supply a fully configured container image, on every node allowed to run the bundle. @@ -689,30 +689,30 @@ the bundle. .. index:: pair: XML element; bundle - + Bundle Properties _________________ - -.. table:: **XML Attributes of a bundle Element** - :widths: 1 4 - - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. index:: | - | | single: bundle; attribute, id | - | | single: attribute; id (bundle) | - | | single: id; bundle attribute | - | | | - | | A unique name for the bundle (required) | - +-------------+------------------------------------------------------------------+ - | description | .. 
index:: | - | | single: bundle; attribute, description | - | | single: attribute; description (bundle) | - | | single: description; bundle attribute | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ + +.. list-table:: **XML Attributes of a bundle Element** + :widths: 25 75 + :header-rows: 1 + + * - Field + - Description + * - id + - .. index:: + single: bundle; attribute, id + single: attribute; id (bundle) + single: id; bundle attribute + + A unique name for the bundle (required) + * - description + - .. index:: + single: bundle; attribute, description + single: attribute; description (bundle) + single: description; bundle attribute + + Arbitrary text for user's use (ignored by Pacemaker) A bundle must contain exactly one ``docker`` or ``podman`` element. @@ -720,188 +720,192 @@ A bundle must contain exactly one ``docker`` or ``podman`` element. .. index:: pair: XML element; docker pair: XML element; podman - + Bundle Container Properties ___________________________ - -.. table:: **XML attributes of a docker or podman Element** + +.. list-table:: **XML Attributes of a docker or podman Element** :class: longtable - :widths: 2 3 4 - - +-------------------+------------------------------------+---------------------------------------------------+ - | Attribute | Default | Description | - +===================+====================================+===================================================+ - | image | | .. index:: | - | | | single: docker; attribute, image | - | | | single: attribute; image (docker) | - | | | single: image; docker attribute | - | | | single: podman; attribute, image | - | | | single: attribute; image (podman) | - | | | single: image; podman attribute | - | | | | - | | | Container image tag (required) | - +-------------------+------------------------------------+---------------------------------------------------+ - | replicas | Value of ``promoted-max`` | .. index:: | - | | if that is positive, else 1 | single: docker; attribute, replicas | - | | | single: attribute; replicas (docker) | - | | | single: replicas; docker attribute | - | | | single: podman; attribute, replicas | - | | | single: attribute; replicas (podman) | - | | | single: replicas; podman attribute | - | | | | - | | | A positive integer specifying the number of | - | | | container instances to launch | - +-------------------+------------------------------------+---------------------------------------------------+ - | replicas-per-host | 1 | .. index:: | - | | | single: docker; attribute, replicas-per-host | - | | | single: attribute; replicas-per-host (docker) | - | | | single: replicas-per-host; docker attribute | - | | | single: podman; attribute, replicas-per-host | - | | | single: attribute; replicas-per-host (podman) | - | | | single: replicas-per-host; podman attribute | - | | | | - | | | A positive integer specifying the number of | - | | | container instances allowed to run on a | - | | | single node | - +-------------------+------------------------------------+---------------------------------------------------+ - | promoted-max | 0 | .. 
index:: | - | | | single: docker; attribute, promoted-max | - | | | single: attribute; promoted-max (docker) | - | | | single: promoted-max; docker attribute | - | | | single: podman; attribute, promoted-max | - | | | single: attribute; promoted-max (podman) | - | | | single: promoted-max; podman attribute | - | | | | - | | | A non-negative integer that, if positive, | - | | | indicates that the containerized service | - | | | should be treated as a promotable service, | - | | | with this many replicas allowed to run the | - | | | service in the promoted role | - +-------------------+------------------------------------+---------------------------------------------------+ - | network | | .. index:: | - | | | single: docker; attribute, network | - | | | single: attribute; network (docker) | - | | | single: network; docker attribute | - | | | single: podman; attribute, network | - | | | single: attribute; network (podman) | - | | | single: network; podman attribute | - | | | | - | | | If specified, this will be passed to the | - | | | ``docker run`` or ``podman run`` command as the | - | | | network setting for the container. | - +-------------------+------------------------------------+---------------------------------------------------+ - | run-command | ``/usr/sbin/pacemaker-remoted`` if | .. index:: | - | | bundle contains a **primitive**, | single: docker; attribute, run-command | - | | otherwise none | single: attribute; run-command (docker) | - | | | single: run-command; docker attribute | - | | | single: podman; attribute, run-command | - | | | single: attribute; run-command (podman) | - | | | single: run-command; podman attribute | - | | | | - | | | This command will be run inside the container | - | | | when launching it ("PID 1"). If the bundle | - | | | contains a **primitive**, this command *must* | - | | | start ``pacemaker-remoted`` (but could, for | - | | | example, be a script that does other stuff, too). | - +-------------------+------------------------------------+---------------------------------------------------+ - | options | | .. index:: | - | | | single: docker; attribute, options | - | | | single: attribute; options (docker) | - | | | single: options; docker attribute | - | | | single: podman; attribute, options | - | | | single: attribute; options (podman) | - | | | single: options; podman attribute | - | | | | - | | | Extra command-line options to pass to the | - | | | ``docker run`` or ``podman run`` command | - +-------------------+------------------------------------+---------------------------------------------------+ - + :widths: 15 40 45 + :header-rows: 1 + + * - Attribute + - Default + - Description + * - image + - + - .. index:: + single: docker; attribute, image + single: attribute; image (docker) + single: image; docker attribute + single: podman; attribute, image + single: attribute; image (podman) + single: image; podman attribute + + Container image tag (required) + * - replicas + - Value of ``promoted-max`` is that is positive, else 1 + - .. index:: + single: docker; attribute, replicas + single: attribute; replicas (docker) + single: replicas; docker attribute + single: podman; attribute, replicas + single: attribute; replicas (podman) + single: replicas; podman attribute + + A positive integer specifying the number of container instances to launch + * - replicas-per-host + - 1 + - .. 
index:: + single: docker; attribute, replicas-per-host + single: attribute; replicas-per-host (docker) + single: replicas-per-host; docker attribute + single: podman; attribute, replicas-per-host + single: attribute; replicas-per-host (podman) + single: replicas-per-host; podman attribute + + A positive integer specifying the number of container instances allowed + to run on a single node + * - promoted-max + - 0 + - .. index:: + single: docker; attribute, promoted-max + single: attribute; promoted-max (docker) + single: promoted-max; docker attribute + single: podman; attribute, promoted-max + single: attribute; promoted-max (podman) + single: promoted-max; podman attribute + + A non-negative integer that, if positive, indicates that the containerized + service should be treated as a promotable service, with this many replicas + allowed to run the service in the promoted role + * - network + - + - .. index:: + single: docker; attribute, network + single: attribute; network (docker) + single: network; docker attribute + single: podman; attribute, network + single: attribute; network (podman) + single: network; podman attribute + + If specified, this will be passed to the ``docker run`` or ``podman run`` + command as the network setting for the container. + * - run-command + - ``/usr/sbin/pacemaker-remoted`` if bundle contains a **primitive**, + otherwise none + - .. index:: + single: docker; attribute, run-command + single: attribute; run-command (docker) + single: run-command; docker attribute + single: podman; attribute, run-command + single: attribute; run-command (podman) + single: run-command; podman attribute + + This command will be run inside the container when launching it ("PID 1"). + If the bundle contains a **primitive**, this command *must* start + ``pacemaker-remoted`` (but could, for example, be a script that does + other stuff, too). + * - options + - + - .. index:: + single: docker; attribute, options + single: attribute; options (docker) + single: options; docker attribute + single: podman; attribute, options + single: attribute; options (podman) + single: options; podman attribute + + Extra command-line options to pass to the ``docker run`` or + ``podman run`` command + .. note:: Considerations when using cluster configurations or container images from Pacemaker 1.1: - + * If the container image has a pre-2.0.0 version of Pacemaker, set ``run-command`` to ``/usr/sbin/pacemaker_remoted`` (note the underbar instead of dash). - + * ``masters`` is accepted as an alias for ``promoted-max``, but is deprecated since 2.0.0, and support for it will be removed in a future version. Bundle Network Properties _________________________ - + A bundle may optionally contain one ```` element. .. index:: pair: XML element; network single: bundle; network - -.. table:: **XML attributes of a network Element** - :widths: 2 1 5 - - +----------------+---------+------------------------------------------------------------+ - | Attribute | Default | Description | - +================+=========+============================================================+ - | add-host | TRUE | .. index:: | - | | | single: network; attribute, add-host | - | | | single: attribute; add-host (network) | - | | | single: add-host; network attribute | - | | | | - | | | If TRUE, and ``ip-range-start`` is used, Pacemaker will | - | | | automatically ensure that ``/etc/hosts`` inside the | - | | | containers has entries for each | - | | | :ref:`replica name ` | - | | | and its assigned IP. 
| - +----------------+---------+------------------------------------------------------------+ - | ip-range-start | | .. index:: | - | | | single: network; attribute, ip-range-start | - | | | single: attribute; ip-range-start (network) | - | | | single: ip-range-start; network attribute | - | | | | - | | | If specified, Pacemaker will create an implicit | - | | | ``ocf:heartbeat:IPaddr2`` resource for each container | - | | | instance, starting with this IP address, using up to | - | | | ``replicas`` sequential addresses. These addresses can be | - | | | used from the host's network to reach the service inside | - | | | the container, though it is not visible within the | - | | | container itself. Only IPv4 addresses are currently | - | | | supported. | - +----------------+---------+------------------------------------------------------------+ - | host-netmask | 32 | .. index:: | - | | | single: network; attribute; host-netmask | - | | | single: attribute; host-netmask (network) | - | | | single: host-netmask; network attribute | - | | | | - | | | If ``ip-range-start`` is specified, the IP addresses | - | | | are created with this CIDR netmask (as a number of bits). | - +----------------+---------+------------------------------------------------------------+ - | host-interface | | .. index:: | - | | | single: network; attribute; host-interface | - | | | single: attribute; host-interface (network) | - | | | single: host-interface; network attribute | - | | | | - | | | If ``ip-range-start`` is specified, the IP addresses are | - | | | created on this host interface (by default, it will be | - | | | determined from the IP address). | - +----------------+---------+------------------------------------------------------------+ - | control-port | 3121 | .. index:: | - | | | single: network; attribute; control-port | - | | | single: attribute; control-port (network) | - | | | single: control-port; network attribute | - | | | | - | | | If the bundle contains a ``primitive``, the cluster will | - | | | use this integer TCP port for communication with | - | | | Pacemaker Remote inside the container. Changing this is | - | | | useful when the container is unable to listen on the | - | | | default port, for example, when the container uses the | - | | | host's network rather than ``ip-range-start`` (in which | - | | | case ``replicas-per-host`` must be 1), or when the bundle | - | | | may run on a Pacemaker Remote node that is already | - | | | listening on the default port. Any ``PCMK_remote_port`` | - | | | environment variable set on the host or in the container | - | | | is ignored for bundle connections. | - +----------------+---------+------------------------------------------------------------+ - + +.. list-table:: **XML Attributes of a network Element** + :class: longtable + :widths: 20 20 60 + :header-rows: 1 + + * - Attribute + - Default + - Description + * - add-host + - TRUE + - .. index:: + single: network; attribute, add-host + single: attribute; add-host (network) + single: add-host; network attribute + + If TRUE, and ``ip-range-start`` is used, Pacemaker will automatically + ensure that ``/etc/hosts`` inside the containers has entries for each + :ref:`replica name ` and its + assigned IP. + * - ip-range-start + - + - .. 
index:: + single: network; attribute, ip-range-start + single: attribute; ip-range-start (network) + single: ip-range-start; network attribute + + If specified, Pacemaker will create an implicit ``ocf:heartbeat:IPaddr2`` + resource for each container instance, starting with this IP address, + using up to ``replicas`` sequential addresses. These addresses can be + used from the host's network to reach the service inside the container, + though it is not visible within the container itself. Only IPv4 + addresses are currently supported. + * - host-netmask + - 32 + - .. index:: + single: network; attribute; host-netmask + single: attribute; host-netmask (network) + single: host-netmask; network attribute + + If ``ip-range-start`` is specified, the IP addresses are created with + this CIDR netmask (as a number of bits). + * - host-interface + - + - .. index:: + single: network; attribute; host-interface + single: attribute; host-interface (network) + single: host-interface; network attribute + + If ``ip-range-start`` is specified, the IP addresses are created on this + host interface (by default, it will be determined from the IP address). + * - control-port + - 3121 + - .. index:: + single: network; attribute; control-port + single: attribute; control-port (network) + single: control-port; network attribute + + If the bundle contains a ``primitive``, the cluster will use this integer + TCP port for communication with Pacemaker Remote inside the container. + Changing this is useful when the container is unable to listen on the + default port, for example, when the container uses the host's network + rather than ``ip-range-start`` (in which case ``replicas-per-host`` must + be 1), or when the bundle may run on a Pacemaker Remote node that is + already listening on the default port. Any ``PCMK_remote_port`` + environment variable set on the host or in the container is ignored for + bundle connections. + .. _s-resource-bundle-note-replica-names: .. note:: @@ -912,57 +916,58 @@ A bundle may optionally contain one ```` element. .. index:: pair: XML element; port-mapping - + Additionally, a ``network`` element may optionally contain one or more ``port-mapping`` elements. - -.. table:: **Attributes of a port-mapping Element** - :widths: 2 1 5 - - +---------------+-------------------+------------------------------------------------------+ - | Attribute | Default | Description | - +===============+===================+======================================================+ - | id | | .. index:: | - | | | single: port-mapping; attribute, id | - | | | single: attribute; id (port-mapping) | - | | | single: id; port-mapping attribute | - | | | | - | | | A unique name for the port mapping (required) | - +---------------+-------------------+------------------------------------------------------+ - | port | | .. index:: | - | | | single: port-mapping; attribute, port | - | | | single: attribute; port (port-mapping) | - | | | single: port; port-mapping attribute | - | | | | - | | | If this is specified, connections to this TCP port | - | | | number on the host network (on the container's | - | | | assigned IP address, if ``ip-range-start`` is | - | | | specified) will be forwarded to the container | - | | | network. Exactly one of ``port`` or ``range`` | - | | | must be specified in a ``port-mapping``. | - +---------------+-------------------+------------------------------------------------------+ - | internal-port | value of ``port`` | .. 
index:: | - | | | single: port-mapping; attribute, internal-port | - | | | single: attribute; internal-port (port-mapping) | - | | | single: internal-port; port-mapping attribute | - | | | | - | | | If ``port`` and this are specified, connections | - | | | to ``port`` on the host's network will be | - | | | forwarded to this port on the container network. | - +---------------+-------------------+------------------------------------------------------+ - | range | | .. index:: | - | | | single: port-mapping; attribute, range | - | | | single: attribute; range (port-mapping) | - | | | single: range; port-mapping attribute | - | | | | - | | | If this is specified, connections to these TCP | - | | | port numbers (expressed as *first_port*-*last_port*) | - | | | on the host network (on the container's assigned IP | - | | | address, if ``ip-range-start`` is specified) will | - | | | be forwarded to the same ports in the container | - | | | network. Exactly one of ``port`` or ``range`` | - | | | must be specified in a ``port-mapping``. | - +---------------+-------------------+------------------------------------------------------+ + +.. list-table:: **Attributes of a port-mapping Element** + :class: longtable + :widths: 20 20 60 + :header-rows: 1 + + * - Attribute + - Default + - Description + * - id + - + - .. index:: + single: port-mapping; attribute, id + single: attribute; id (port-mapping) + single: id; port-mapping attribute + + A unique name for the port mapping (required) + * - port + - + - .. index:: + single: port-mapping; attribute, port + single: attribute; port (port-mapping) + single: port; port-mapping attribute + + If this is specified, connections to this TCP port number on the host + network (on the container's assigned IP address, if ``ip-range-start`` + is specified) will be forwarded to the container network. Exactly one + of ``port`` or ``range`` must be specified in a ``port-mapping``. + * - internal-port + - value of ``port`` + - .. index:: + single: port-mapping; attribute, internal-port + single: attribute; internal-port (port-mapping) + single: internal-port; port-mapping attribute + + If ``port`` and this are specified, connections to ``port`` on the host's + network will be forwarded to this port on the container network. + * - range + - + - .. index:: + single: port-mapping; attribute, range + single: attribute; range (port-mapping) + single: range; port-mapping attribute + + If this is specified, connections to these TCP port numbers (expressed as + *first_port*-*last_port*) on the host network (on the container's + assigned IP address, if ``ip-range-start`` is specified) will be forwarded + to the same ports in the container network. Exactly one of ``port`` or + ``range`` must be specified in a ``port-mapping``. .. note:: @@ -974,75 +979,80 @@ Additionally, a ``network`` element may optionally contain one or more pair: XML element; storage pair: XML element; storage-mapping single: bundle; storage - + .. _s-bundle-storage: Bundle Storage Properties _________________________ - + A bundle may optionally contain one ``storage`` element. A ``storage`` element has no properties of its own, but may contain one or more ``storage-mapping`` elements. - -.. table:: **Attributes of a storage-mapping Element** - :widths: 2 1 5 - - +-----------------+---------+-------------------------------------------------------------+ - | Attribute | Default | Description | - +=================+=========+=============================================================+ - | id | | .. 
index:: | - | | | single: storage-mapping; attribute, id | - | | | single: attribute; id (storage-mapping) | - | | | single: id; storage-mapping attribute | - | | | | - | | | A unique name for the storage mapping (required) | - +-----------------+---------+-------------------------------------------------------------+ - | source-dir | | .. index:: | - | | | single: storage-mapping; attribute, source-dir | - | | | single: attribute; source-dir (storage-mapping) | - | | | single: source-dir; storage-mapping attribute | - | | | | - | | | The absolute path on the host's filesystem that will be | - | | | mapped into the container. Exactly one of ``source-dir`` | - | | | and ``source-dir-root`` must be specified in a | - | | | ``storage-mapping``. | - +-----------------+---------+-------------------------------------------------------------+ - | source-dir-root | | .. index:: | - | | | single: storage-mapping; attribute, source-dir-root | - | | | single: attribute; source-dir-root (storage-mapping) | - | | | single: source-dir-root; storage-mapping attribute | - | | | | - | | | The start of a path on the host's filesystem that will | - | | | be mapped into the container, using a different | - | | | subdirectory on the host for each container instance. | - | | | The subdirectory will be named the same as the | - | | | :ref:`replica name `. | - | | | Exactly one of ``source-dir`` and ``source-dir-root`` | - | | | must be specified in a ``storage-mapping``. | - +-----------------+---------+-------------------------------------------------------------+ - | target-dir | | .. index:: | - | | | single: storage-mapping; attribute, target-dir | - | | | single: attribute; target-dir (storage-mapping) | - | | | single: target-dir; storage-mapping attribute | - | | | | - | | | The path name within the container where the host | - | | | storage will be mapped (required) | - +-----------------+---------+-------------------------------------------------------------+ - | options | | .. index:: | - | | | single: storage-mapping; attribute, options | - | | | single: attribute; options (storage-mapping) | - | | | single: options; storage-mapping attribute | - | | | | - | | | A comma-separated list of file system mount | - | | | options to use when mapping the storage | - +-----------------+---------+-------------------------------------------------------------+ - + +.. list-table:: **Attributes of a storage-mapping Element** + :class: longtable + :widths: 20 20 60 + :header-rows: 1 + + * - Attribute + - Default + - Description + * - id + - + - .. index:: + single: storage-mapping; attribute, id + single: attribute; id (storage-mapping) + single: id; storage-mapping attribute + + A unique name for the storage mapping (required) + * - source-dir + - + - .. index:: + single: storage-mapping; attribute, source-dir + single: attribute; source-dir (storage-mapping) + single: source-dir; storage-mapping attribute + + The absolute path on the host's filesystem that will be mapped into the + container. Exactly one of ``source-dir`` and ``source-dir-root`` must be + specified in a ``storage-mapping``. + * - source-dir-root + - + - .. index:: + single: storage-mapping; attribute, source-dir-root + single: attribute; source-dir-root (storage-mapping) + single: source-dir-root; storage-mapping attribute + + The start of a path on the host's filesystem that will be mapped into the + container, using a different subdirectory on the host for each container + instance. 
The subdirectory will be named the same as the + :ref:`replica name `. Exactly one + of ``source-dir`` and ``source-dir-root`` must be specified in a + ``storage-mapping``. + * - target-dir + - + - .. index:: + single: storage-mapping; attribute, target-dir + single: attribute; target-dir (storage-mapping) + single: target-dir; storage-mapping attribute + + The path name within the container where the host storage will be mapped + (required) + * - options + - + - .. index:: + single: storage-mapping; attribute, options + single: attribute; options (storage-mapping) + single: options; storage-mapping attribute + + A comma-separated list of file system mount options to use when mapping + the storage + .. note:: Pacemaker does not define the behavior if the source directory does not already exist on the host. However, it is expected that the container technology and/or its resource agent will create the source directory in that case. - + .. note:: If the bundle contains a ``primitive``, @@ -1051,12 +1061,12 @@ elements. and ``source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log`` into the container, so it is not necessary to specify those paths in a ``storage-mapping``. - + .. important:: The ``PCMK_authkey_location`` environment variable must not be set to anything other than the default of ``/etc/pacemaker/authkey`` on any node in the cluster. - + .. important:: If SELinux is used in enforcing mode on the host, you must ensure the container @@ -1066,10 +1076,10 @@ elements. .. index:: single: bundle; primitive - + Bundle Primitive ________________ - + A bundle may optionally contain one :ref:`primitive ` resource. The primitive may have operations, instance attributes, and meta-attributes defined, as usual. @@ -1085,12 +1095,12 @@ If the bundle has more than one container instance (replica), the primitive resource will function as an implicit :ref:`clone ` -- a :ref:`promotable clone ` if the bundle has ``promoted-max`` greater than zero. - + .. note:: If you want to pass environment variables to a bundle's Pacemaker Remote connection or primitive, you have two options: - + * Environment variables whose value is the same regardless of the underlying host may be set using the container element's ``options`` attribute. * If you want variables to have host-specific values, you can use the @@ -1099,12 +1109,12 @@ greater than zero. Pacemaker Remote will parse this file as a shell-like format, with variables set as NAME=VALUE, ignoring blank lines and comments starting with "#". - + .. important:: When a bundle has a ``primitive``, Pacemaker on all cluster nodes must be able to contact Pacemaker Remote inside the bundle's containers. - + * The containers must have an accessible network (for example, ``network`` should not be set to "none" with a ``primitive``). * The default, using a distinct network space inside the container, works in @@ -1114,7 +1124,7 @@ greater than zero. ``network`` to "host"), a unique ``control-port`` should be specified for each bundle. Any firewall must allow access from all cluster nodes to the ``control-port`` on all cluster and remote node IPs. - + .. index:: single: bundle; node attributes @@ -1122,7 +1132,7 @@ greater than zero. Bundle Node Attributes ______________________ - + If the bundle has a ``primitive``, the primitive's resource agent may want to set node attributes such as :ref:`promotion scores `. However, with containers, it is not apparent which node should get the attribute. 
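+
+The ``container-attribute-target`` resource meta-attribute described below is
+Pacemaker's way of resolving this. As a minimal, hypothetical sketch (the
+bundle, image, and resource names here are illustrative only), a bundled
+promotable resource whose agent should record promotion scores on the
+underlying host rather than on the container's node name could be configured
+as:
+
+.. code-block:: xml
+
+   <bundle id="stateful-bundle">
+     <podman image="pcmk:stateful" replicas="3" promoted-max="1"/>
+     <network ip-range-start="192.168.122.131" host-netmask="24"/>
+     <primitive id="bundled-stateful" class="ocf" provider="pacemaker"
+                type="Stateful">
+       <meta_attributes id="bundled-stateful-meta_attributes">
+         <nvpair id="bundled-stateful-attribute-target"
+                 name="container-attribute-target" value="host"/>
+       </meta_attributes>
+     </primitive>
+   </bundle>
+
+Because meta-attributes set on a bundle are inherited by its primitive, the
+same ``nvpair`` could equally be placed in a ``meta_attributes`` block of the
+``bundle`` element itself.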
@@ -1148,31 +1158,31 @@ environment variables to the primitive's resource agent that allow it to set node attributes appropriately: ``CRM_meta_container_attribute_target`` (identical to the meta-attribute value) and ``CRM_meta_physical_host`` (the name of the underlying host). - + .. note:: When called by a resource agent, the ``attrd_updater`` and ``crm_attribute`` commands will automatically check those environment variables and set attributes appropriately. - + .. index:: single: bundle; meta-attributes Bundle Meta-Attributes ______________________ - + Any meta-attribute set on a bundle will be inherited by the bundle's primitive and any resources implicitly created by Pacemaker for the bundle. This includes options such as ``priority``, ``target-role``, and ``is-managed``. See :ref:`resource_options` for more information. - + Bundles support clone meta-attributes including ``notify``, ``ordered``, and ``interleave``. Limitations of Bundles ______________________ - + Restarting pacemaker while a bundle is unmanaged or the cluster is in maintenance mode may cause the bundle to fail. diff --git a/doc/sphinx/Pacemaker_Explained/constraints.rst b/doc/sphinx/Pacemaker_Explained/constraints.rst index 7d2f39c58e8..316aa60cbb8 100644 --- a/doc/sphinx/Pacemaker_Explained/constraints.rst +++ b/doc/sphinx/Pacemaker_Explained/constraints.rst @@ -40,7 +40,7 @@ ___________________ .. list-table:: **Attributes of a rsc_location Element** :class: longtable - :widths: 1 1 1 4 + :widths: 15 15 10 60 :header-rows: 1 * - Name @@ -48,23 +48,23 @@ ___________________ - Default - Description * - .. rsc_location_id: - + .. index:: single: rsc_location; attribute, id single: attribute; id (rsc_location) single: id; rsc_location attribute - + id - :ref:`id ` - - A unique name for the constraint (required) * - .. rsc_location_rsc: - + .. index:: single: rsc_location; attribute, rsc single: attribute; rsc (rsc_location) single: rsc; rsc_location attribute - + rsc - :ref:`id ` - @@ -72,12 +72,12 @@ ___________________ constraint must either have a ``rsc``, have a ``rsc-pattern``, or contain at least one resource set. * - .. rsc_pattern: - + .. index:: single: rsc_location; attribute, rsc-pattern single: attribute; rsc-pattern (rsc_location) single: rsc-pattern; rsc_location attribute - + rsc-pattern - :ref:`text ` - @@ -93,12 +93,12 @@ ___________________ must either have a ``rsc``, have a ``rsc-pattern``, or contain at least one resource set. * - .. rsc_location_node: - + .. index:: single: rsc_location; attribute, node single: attribute; node (rsc_location) single: node; rsc_location attribute - + node - :ref:`text ` - @@ -106,12 +106,12 @@ ___________________ constraint must either have a ``node`` and ``score``, or contain at least one rule. * - .. rsc_location_score: - + .. index:: single: rsc_location; attribute, score single: attribute; score (rsc_location) single: score; rsc_location attribute - + score - :ref:`score ` - @@ -122,12 +122,12 @@ ___________________ constraint must either have a ``node`` and ``score``, or contain at least one rule. * - .. rsc_location_role: - + .. index:: single: rsc_location; attribute, role single: attribute; role (rsc_location) single: role; rsc_location attribute - + role - :ref:`enumeration ` - ``Started`` @@ -135,7 +135,7 @@ ___________________ :ref:`promotable clones `, is allowed only if ``rsc`` or ``rsc-pattern`` is set, and is ignored if the constraint contains a rule. 
Allowed values: - + * ``Started`` or ``Unpromoted``: The constraint affects the location of all instances of the resource. (A promoted instance must start in the unpromoted role before being promoted, so any location requirement for @@ -145,12 +145,12 @@ ___________________ promoted. * - .. resource_discovery: - + .. index:: single: rsc_location; attribute, resource-discovery single: attribute; resource-discovery (rsc_location) single: resource-discovery; rsc_location attribute - + resource-discovery - :ref:`enumeration ` - always @@ -165,7 +165,7 @@ ___________________ Pacemaker Remote is used to scale a cluster to hundreds of nodes, limiting resource discovery to allowed nodes can significantly boost performance. Allowed values: - + * ``always:`` Always perform resource discovery for the specified resource on this node. * ``never:`` Never perform resource discovery for the specified resource @@ -319,91 +319,94 @@ resource actions should occur. Ordering Properties ___________________ -.. table:: **Attributes of a rsc_order Element** +.. list-table:: **Attributes of a rsc_order Element** :class: longtable - :widths: 1 2 4 - - +--------------+----------------------------+-------------------------------------------------------------------+ - | Field | Default | Description | - +==============+============================+===================================================================+ - | id | | .. index:: | - | | | single: rsc_order; attribute, id | - | | | single: attribute; id (rsc_order) | - | | | single: id; rsc_order attribute | - | | | | - | | | A unique name for the constraint | - +--------------+----------------------------+-------------------------------------------------------------------+ - | first | | .. index:: | - | | | single: rsc_order; attribute, first | - | | | single: attribute; first (rsc_order) | - | | | single: first; rsc_order attribute | - | | | | - | | | Name of the resource that the ``then`` resource | - | | | depends on | - +--------------+----------------------------+-------------------------------------------------------------------+ - | then | | .. index:: | - | | | single: rsc_order; attribute, then | - | | | single: attribute; then (rsc_order) | - | | | single: then; rsc_order attribute | - | | | | - | | | Name of the dependent resource | - +--------------+----------------------------+-------------------------------------------------------------------+ - | first-action | start | .. index:: | - | | | single: rsc_order; attribute, first-action | - | | | single: attribute; first-action (rsc_order) | - | | | single: first-action; rsc_order attribute | - | | | | - | | | The action that the ``first`` resource must complete | - | | | before ``then-action`` can be initiated for the ``then`` | - | | | resource. Allowed values: ``start``, ``stop``, | - | | | ``promote``, ``demote``. | - +--------------+----------------------------+-------------------------------------------------------------------+ - | then-action | value of ``first-action`` | .. index:: | - | | | single: rsc_order; attribute, then-action | - | | | single: attribute; then-action (rsc_order) | - | | | single: first-action; rsc_order attribute | - | | | | - | | | The action that the ``then`` resource can execute only | - | | | after the ``first-action`` on the ``first`` resource has | - | | | completed. Allowed values: ``start``, ``stop``, | - | | | ``promote``, ``demote``. 
| - +--------------+----------------------------+-------------------------------------------------------------------+ - | kind | Mandatory | .. index:: | - | | | single: rsc_order; attribute, kind | - | | | single: attribute; kind (rsc_order) | - | | | single: kind; rsc_order attribute | - | | | | - | | | How to enforce the constraint. Allowed values: | - | | | | - | | | * ``Mandatory:`` ``then-action`` will never be initiated | - | | | for the ``then`` resource unless and until ``first-action`` | - | | | successfully completes for the ``first`` resource. | - | | | | - | | | * ``Optional:`` The constraint applies only if both specified | - | | | resource actions are scheduled in the same transition | - | | | (that is, in response to the same cluster state). This | - | | | means that ``then-action`` is allowed on the ``then`` | - | | | resource regardless of the state of the ``first`` resource, | - | | | but if both actions happen to be scheduled at the same time, | - | | | they will be ordered. | - | | | | - | | | * ``Serialize:`` Ensure that the specified actions are never | - | | | performed concurrently for the specified resources. | - | | | ``First-action`` and ``then-action`` can be executed in either | - | | | order, but one must complete before the other can be initiated. | - | | | An example use case is when resource start-up puts a high load | - | | | on the host. | - +--------------+----------------------------+-------------------------------------------------------------------+ - | symmetrical | TRUE for ``Mandatory`` and | .. index:: | - | | ``Optional`` kinds. FALSE | single: rsc_order; attribute, symmetrical | - | | for ``Serialize`` kind. | single: attribute; symmetrical (rsc)order) | - | | | single: symmetrical; rsc_order attribute | - | | | | - | | | If true, the reverse of the constraint applies for the | - | | | opposite action (for example, if B starts after A starts, | - | | | then B stops before A stops). ``Serialize`` orders cannot | - | | | be symmetrical. | - +--------------+----------------------------+-------------------------------------------------------------------+ + :widths: 15 30 55 + :header-rows: 1 + + * - Field + - Default + - Description + * - id + - + - .. index:: + single: rsc_order; attribute, id + single: attribute; id (rsc_order) + single: id; rsc_order attribute + + A unique name for the constraint + * - first + - + - .. index:: + single: rsc_order; attribute, first + single: attribute; first (rsc_order) + single: first; rsc_order attribute + + Name of the resource that the ``then`` resource depends on + * - then + - + - .. index:: + single: rsc_order; attribute, then + single: attribute; then (rsc_order) + single: then; rsc_order attribute + + Name of the dependent resource + * - first-action + - start + - .. index:: + single: rsc_order; attribute, first-action + single: attribute; first-action (rsc_order) + single: first-action; rsc_order attribute + + The action that the ``first`` resource must complete before + ``then-action`` can be initiated for the ``then`` resource. Allowed + values: ``start``, ``stop``, ``promote``, ``demote``. + * - then-action + - value of ``first-action`` + - .. index:: + single: rsc_order; attribute, then-action + single: attribute; then-action (rsc_order) + single: first-action; rsc_order attribute + + The action that the ``then`` resource can execute only after the + ``first-action`` on the ``first`` resource has completed. Allowed + values: ``start``, ``stop``, ``promote``, ``demote``. 
+ * - kind + - Mandatory + - .. index:: + single: rsc_order; attribute, kind + single: attribute; kind (rsc_order) + single: kind; rsc_order attribute + + How to enforce the constraint. Allowed values: + + * ``Mandatory:`` ``then-action`` will never be initiated for the + ``then`` resource unless and until ``first-action`` successfully + completes for the ``first`` resource. + + * ``Optional:`` The constraint applies only if both specified resource + actions are scheduled in the same transition (that is, in response to + the same cluster state). This means that ``then-action`` is allowed + on the ``then`` resource regardless of the state of the ``first`` + resource, but if both actions happen to be scheduled at the same time, + they will be ordered. + + * ``Serialize:`` Ensure that the specified actions are never performed + concurrently for the specified resources. ``First-action`` and + ``then-action`` can be executed in either order, but one must complete + before the other can be initiated. An example use case is when resource + start-up puts a high load on the host. + * - symmetrical + - TRUE for ``Mandatory`` and ``Optional`` kinds. FALSE for ``Serialize`` + kind. + - .. index:: + single: rsc_order; attribute, symmetrical + single: attribute; symmetrical (rsc)order) + single: symmetrical; rsc_order attribute + + If true, the reverse of the constraint applies for the opposite action (for + example, if B starts after A starts, then B stops before A stops). + ``Serialize`` orders cannot be symmetrical. ``Promote`` and ``demote`` apply to :ref:`promotable ` clone resources. @@ -477,99 +480,99 @@ consider whether you should colocate A with B, or B with A. Colocation Properties _____________________ -.. table:: **Attributes of a rsc_colocation Constraint** +.. list-table:: **Attributes of a rsc_colocation Constraint** :class: longtable - :widths: 2 2 5 - - +----------------+----------------+--------------------------------------------------------+ - | Field | Default | Description | - +================+================+========================================================+ - | id | | .. index:: | - | | | single: rsc_colocation; attribute, id | - | | | single: attribute; id (rsc_colocation) | - | | | single: id; rsc_colocation attribute | - | | | | - | | | A unique name for the constraint (required). | - +----------------+----------------+--------------------------------------------------------+ - | rsc | | .. index:: | - | | | single: rsc_colocation; attribute, rsc | - | | | single: attribute; rsc (rsc_colocation) | - | | | single: rsc; rsc_colocation attribute | - | | | | - | | | The name of a resource that should be located | - | | | relative to ``with-rsc``. A colocation constraint must | - | | | either contain at least one | - | | | :ref:`resource set `, or specify both | - | | | ``rsc`` and ``with-rsc``. | - +----------------+----------------+--------------------------------------------------------+ - | with-rsc | | .. index:: | - | | | single: rsc_colocation; attribute, with-rsc | - | | | single: attribute; with-rsc (rsc_colocation) | - | | | single: with-rsc; rsc_colocation attribute | - | | | | - | | | The name of the resource used as the colocation | - | | | target. The cluster will decide where to put this | - | | | resource first and then decide where to put ``rsc``. | - | | | A colocation constraint must either contain at least | - | | | one :ref:`resource set `, or specify | - | | | both ``rsc`` and ``with-rsc``. 
| - +----------------+----------------+--------------------------------------------------------+ - | node-attribute | #uname | .. index:: | - | | | single: rsc_colocation; attribute, node-attribute | - | | | single: attribute; node-attribute (rsc_colocation) | - | | | single: node-attribute; rsc_colocation attribute | - | | | | - | | | If ``rsc`` and ``with-rsc`` are specified, this node | - | | | attribute must be the same on the node running ``rsc`` | - | | | and the node running ``with-rsc`` for the constraint | - | | | to be satisfied. (For details, see | - | | | :ref:`s-coloc-attribute`.) | - +----------------+----------------+--------------------------------------------------------+ - | score | 0 | .. index:: | - | | | single: rsc_colocation; attribute, score | - | | | single: attribute; score (rsc_colocation) | - | | | single: score; rsc_colocation attribute | - | | | | - | | | Positive values indicate the resources should run on | - | | | the same node. Negative values indicate the resources | - | | | should run on different nodes. Values of | - | | | +/- ``INFINITY`` change "should" to "must". | - +----------------+----------------+--------------------------------------------------------+ - | rsc-role | Started | .. index:: | - | | | single: clone; ordering constraint, rsc-role | - | | | single: ordering constraint; rsc-role (clone) | - | | | single: rsc-role; clone ordering constraint | - | | | | - | | | If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` | - | | | is a :ref:`promotable clone `, | - | | | the constraint applies only to ``rsc`` instances in | - | | | this role. Allowed values: ``Started``, ``Stopped``, | - | | | ``Promoted``, ``Unpromoted``. For details, see | - | | | :ref:`promotable-clone-constraints`. | - +----------------+----------------+--------------------------------------------------------+ - | with-rsc-role | Started | .. index:: | - | | | single: clone; ordering constraint, with-rsc-role | - | | | single: ordering constraint; with-rsc-role (clone) | - | | | single: with-rsc-role; clone ordering constraint | - | | | | - | | | If ``rsc`` and ``with-rsc`` are specified, and | - | | | ``with-rsc`` is a | - | | | :ref:`promotable clone `, the | - | | | constraint applies only to ``with-rsc`` instances in | - | | | this role. Allowed values: ``Started``, ``Stopped``, | - | | | ``Promoted``, ``Unpromoted``. For details, see | - | | | :ref:`promotable-clone-constraints`. | - +----------------+----------------+--------------------------------------------------------+ - | influence | value of | .. index:: | - | | ``critical`` | single: rsc_colocation; attribute, influence | - | | meta-attribute | single: attribute; influence (rsc_colocation) | - | | for ``rsc`` | single: influence; rsc_colocation attribute | - | | | | - | | | Whether to consider the location preferences of | - | | | ``rsc`` when ``with-rsc`` is already active. Allowed | - | | | values: ``true``, ``false``. For details, see | - | | | :ref:`s-coloc-influence`. *(since 2.1.0)* | - +----------------+----------------+--------------------------------------------------------+ + :widths: 15 30 55 + :header-rows: 1 + + * - Field + - Default + - Description + * - id + - + - .. index:: + single: rsc_colocation; attribute, id + single: attribute; id (rsc_colocation) + single: id; rsc_colocation attribute + + A unique name for the constraint (required). + * - rsc + - + - .. 
index:: + single: rsc_colocation; attribute, rsc + single: attribute; rsc (rsc_colocation) + single: rsc; rsc_colocation attribute + + The name of a resource that should be located relative to ``with-rsc``. + A colocation constraint must either contain at least one :ref:`resource + set `, or specify both ``rsc`` and ``with-rsc``. + * - with-rsc + - + - .. index:: + single: rsc_colocation; attribute, with-rsc + single: attribute; with-rsc (rsc_colocation) + single: with-rsc; rsc_colocation attribute + + The name of the resource used as the colocation target. The cluster will + decide where to put this resource first and then decide where to put + ``rsc``. A colocation constraint must either contain at least one + :ref:`resource set `, or specify both ``rsc`` and + ``with-rsc``. + * - node-attribute + - #uname + - .. index:: + single: rsc_colocation; attribute, node-attribute + single: attribute; node-attribute (rsc_colocation) + single: node-attribute; rsc_colocation attribute + + If ``rsc`` and ``with-rsc`` are specified, this node attribute must be + the same on the node running ``rsc`` and the node running ``with-rsc`` + for the constraint to be satisfied. (For details, see + :ref:`s-coloc-attribute`.) + * - score + - 0 + - .. index:: + single: rsc_colocation; attribute, score + single: attribute; score (rsc_colocation) + single: score; rsc_colocation attribute + + Positive values indicate the resources should run on the same node. + Negative values indicate the resources should run on different nodes. + Values of +/- ``INFINITY`` change "should" to "must". + * - rsc-role + - Started + - .. index:: + single: clone; ordering constraint, rsc-role + single: ordering constraint; rsc-role (clone) + single: rsc-role; clone ordering constraint + + If ``rsc`` and ``with-rsc`` are specified, and ``rsc`` is a + :ref:`promotable clone `, the constraint applies + only to ``rsc`` instances in this role. Allowed values: ``Started``, + ``Stopped``, ``Promoted``, ``Unpromoted``. For details, see + :ref:`promotable-clone-constraints`. + * - with-rsc-role + - Started + - .. index:: + single: clone; ordering constraint, with-rsc-role + single: ordering constraint; with-rsc-role (clone) + single: with-rsc-role; clone ordering constraint + + If ``rsc`` and ``with-rsc`` are specified, and ``with-rsc`` is a + :ref:`promotable clone `, the constraint applies + only to ``with-rsc`` instances in this role. Allowed values: ``Started``, + ``Stopped``, ``Promoted``, ``Unpromoted``. For details, see + :ref:`promotable-clone-constraints`. + * - influence + - value of ``critical`` meta-attribute for ``rsc`` + - .. index:: + single: rsc_colocation; attribute, influence + single: attribute; influence (rsc_colocation) + single: influence; rsc_colocation attribute + + Whether to consider the location preferences of ``rsc`` when ``with-rsc`` + is already active. Allowed values: ``true``, ``false``. For details, + see :ref:`s-coloc-influence`. *(since 2.1.0)* Mandatory Placement ___________________ @@ -710,82 +713,84 @@ have an effect in all contexts. .. index:: pair: XML element; resource_set -.. table:: **Attributes of a resource_set Element** +.. list-table:: **Attributes of a resource_set Element** :class: longtable - :widths: 2 2 5 - - +-------------+------------------+--------------------------------------------------------+ - | Field | Default | Description | - +=============+==================+========================================================+ - | id | | .. 
index:: | - | | | single: resource_set; attribute, id | - | | | single: attribute; id (resource_set) | - | | | single: id; resource_set attribute | - | | | | - | | | A unique name for the set (required) | - +-------------+------------------+--------------------------------------------------------+ - | sequential | true | .. index:: | - | | | single: resource_set; attribute, sequential | - | | | single: attribute; sequential (resource_set) | - | | | single: sequential; resource_set attribute | - | | | | - | | | Whether the members of the set must be acted on in | - | | | order. Meaningful within ``rsc_order`` and | - | | | ``rsc_colocation``. | - +-------------+------------------+--------------------------------------------------------+ - | require-all | true | .. index:: | - | | | single: resource_set; attribute, require-all | - | | | single: attribute; require-all (resource_set) | - | | | single: require-all; resource_set attribute | - | | | | - | | | Whether all members of the set must be active before | - | | | continuing. With the current implementation, the | - | | | cluster may continue even if only one member of the | - | | | set is started, but if more than one member of the set | - | | | is starting at the same time, the cluster will still | - | | | wait until all of those have started before continuing | - | | | (this may change in future versions). Meaningful | - | | | within ``rsc_order``. | - +-------------+------------------+--------------------------------------------------------+ - | role | | .. index:: | - | | | single: resource_set; attribute, role | - | | | single: attribute; role (resource_set) | - | | | single: role; resource_set attribute | - | | | | - | | | The constraint applies only to resource set members | - | | | that are :ref:`s-resource-promotable` in this | - | | | role. Meaningful within ``rsc_location``, | - | | | ``rsc_colocation`` and ``rsc_ticket``. | - | | | Allowed values: ``Started``, ``Promoted``, | - | | | ``Unpromoted``. For details, see | - | | | :ref:`promotable-clone-constraints`. | - +-------------+------------------+--------------------------------------------------------+ - | action | start | .. index:: | - | | | single: resource_set; attribute, action | - | | | single: attribute; action (resource_set) | - | | | single: action; resource_set attribute | - | | | | - | | | The action that applies to *all members* of the set. | - | | | Meaningful within ``rsc_order``. Allowed values: | - | | | ``start``, ``stop``, ``promote``, ``demote``. | - +-------------+------------------+--------------------------------------------------------+ - | score | | .. index:: | - | | | single: resource_set; attribute, score | - | | | single: attribute; score (resource_set) | - | | | single: score; resource_set attribute | - | | | | - | | | *Advanced use only.* Use a specific score for this | - | | | set. Meaningful within ``rsc_location`` or | - | | | ``rsc_colocation``. | - +-------------+------------------+--------------------------------------------------------+ - | kind | | .. index:: | - | | | single: resource_set; attribute, kind | - | | | single: attribute; kind (resource_set) | - | | | single: kind; resource_set attribute | - | | | | - | | | *Advanced use only.* Use a specific kind for this | - | | | set. Meaningful within ``rsc_order``. | - +-------------+------------------+--------------------------------------------------------+ + :widths: 15 15 70 + :header-rows: 1 + + * - Field + - Default + - Description + * - id + - + - .. 
index:: + single: resource_set; attribute, id + single: attribute; id (resource_set) + single: id; resource_set attribute + + A unique name for the set (required) + * - sequential + - true + - .. index:: + single: resource_set; attribute, sequential + single: attribute; sequential (resource_set) + single: sequential; resource_set attribute + + Whether the members of the set must be acted on in order. Meaningful + within ``rsc_order`` and ``rsc_colocation``. + * - require-all + - true + - .. index:: + single: resource_set; attribute, require-all + single: attribute; require-all (resource_set) + single: require-all; resource_set attribute + + Whether all members of the set must be active before continuing. With + the current implementation, the cluster may continue even if only one + member of the set is started, but if more than one member of the set is + starting at the same time, the cluster will still wait until all of + those have started before continuing (this may change in future + versions). Meaningful within ``rsc_order``. + * - role + - + - .. index:: + single: resource_set; attribute, role + single: attribute; role (resource_set) + single: role; resource_set attribute + + The constraint applies only to resource set members that are + :ref:`s-resource-promotable` in this role. Meaningful within + ``rsc_location``, ``rsc_colocation`` and ``rsc_ticket``. Allowed + values: ``Started``, ``Promoted``, ``Unpromoted``. For details, see + :ref:`promotable-clone-constraints`. + * - action + - start + - .. index:: + single: resource_set; attribute, action + single: attribute; action (resource_set) + single: action; resource_set attribute + + The action that applies to *all members* of the set. Meaningful within + ``rsc_order``. Allowed values: ``start``, ``stop``, ``promote``, + ``demote``. + * - score + - + - .. index:: + single: resource_set; attribute, score + single: attribute; score (resource_set) + single: score; resource_set attribute + + *Advanced use only.* Use a specific score for this set. Meaningful + within ``rsc_location`` or ``rsc_colocation``. + * - kind + - + - .. index:: + single: resource_set; attribute, kind + single: attribute; kind (resource_set) + single: kind; resource_set attribute + + *Advanced use only.* Use a specific kind for this set. Meaningful within + ``rsc_order``. Anti-colocation Chains ______________________ diff --git a/doc/sphinx/Pacemaker_Explained/fencing.rst b/doc/sphinx/Pacemaker_Explained/fencing.rst index dce479e3c61..6ae836c2585 100644 --- a/doc/sphinx/Pacemaker_Explained/fencing.rst +++ b/doc/sphinx/Pacemaker_Explained/fencing.rst @@ -153,21 +153,22 @@ Special Meta-Attributes for Fencing Resources The table below lists special resource meta-attributes that may be set for any fencing resource. -.. table:: **Additional Properties of Fencing Resources** - :widths: 2 1 2 4 - - - +----------------------+---------+--------------------+----------------------------------------+ - | Field | Type | Default | Description | - +======================+=========+====================+========================================+ - | provides | string | | .. index:: | - | | | | single: provides | - | | | | | - | | | | Any special capability provided by the | - | | | | fence device. Currently, only one such | - | | | | capability is meaningful: | - | | | | :ref:`unfencing `. | - +----------------------+---------+--------------------+----------------------------------------+ +.. 
list-table:: **Additional Properties of Fencing Resources** + :widths: 10 10 10 70 + :header-rows: 1 + + * - Field + - Type + - Default + - Description + * - provides + - string + - + - .. index:: + single: provides + + Any special capability provided by the fence device. Currently, only one + such capability is meaningful: :ref:`unfencing `. .. _fencing-attributes: @@ -181,7 +182,7 @@ for ``pacemaker-fenced``. .. list-table:: **Additional Properties of Fencing Resources** :class: longtable - :widths: 2 1 2 4 + :widths: 22 10 20 48 :header-rows: 1 * - Name @@ -195,7 +196,7 @@ for ``pacemaker-fenced``. stonith-timeout - :ref:`timeout ` - - + - - This is not used by Pacemaker (see the ``pcmk_reboot_timeout``, ``pcmk_off_timeout``, etc., properties instead), but it may be used by Linux-HA fence agents. @@ -206,7 +207,7 @@ for ``pacemaker-fenced``. pcmk_host_map - :ref:`text ` - - + - - A mapping of node names to ports for devices that do not understand the node names. For example, ``node1:1;node2:2,3`` tells the cluster to use port 1 for ``node1`` and ports 2 and 3 for ``node2``. If @@ -220,7 +221,7 @@ for ``pacemaker-fenced``. pcmk_host_list - :ref:`text ` - - + - - Comma-separated list of nodes that can be targeted by this device (for example, ``node1,node2,node3``). If pcmk_host_check is ``static-list``, either this or ``pcmk_host_map`` must be set. @@ -235,7 +236,8 @@ for ``pacemaker-fenced``. - The method Pacemaker should use to determine which nodes can be targeted by this device. Allowed values: - * ``static-list:`` targets are listed in the ``pcmk_host_list`` or ``pcmk_host_map`` attribute + * ``static-list:`` targets are listed in the ``pcmk_host_list`` or + ``pcmk_host_map`` attribute * ``dynamic-list:`` query the device via the agent's ``list`` action * ``status:`` query the device via the agent's ``status`` action * ``none:`` assume the device can fence any node @@ -1075,53 +1077,53 @@ Some possible uses of topologies include: * Wait up to a certain time for a kernel dump to complete, then cut power to the node -.. table:: **Attributes of a fencing-level Element** +.. list-table:: **Attributes of a fencing-level Element** :class: longtable - :widths: 1 4 - - +------------------+-----------------------------------------------------------------------------------------+ - | Attribute | Description | - +==================+=========================================================================================+ - | id | .. index:: | - | | pair: fencing-level; id | - | | | - | | A unique name for this element (required) | - +------------------+-----------------------------------------------------------------------------------------+ - | target | .. index:: | - | | pair: fencing-level; target | - | | | - | | The name of a single node to which this level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | target-pattern | .. index:: | - | | pair: fencing-level; target-pattern | - | | | - | | An extended regular expression (as defined in `POSIX | - | | `_) | - | | matching the names of nodes to which this level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | target-attribute | .. 
index:: | - | | pair: fencing-level; target-attribute | - | | | - | | The name of a node attribute that is set (to ``target-value``) for nodes to which this | - | | level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | target-value | .. index:: | - | | pair: fencing-level; target-value | - | | | - | | The node attribute value (of ``target-attribute``) that is set for nodes to which this | - | | level applies | - +------------------+-----------------------------------------------------------------------------------------+ - | index | .. index:: | - | | pair: fencing-level; index | - | | | - | | The order in which to attempt the levels. Levels are attempted in ascending order | - | | *until one succeeds*. Valid values are 1 through 9. | - +------------------+-----------------------------------------------------------------------------------------+ - | devices | .. index:: | - | | pair: fencing-level; devices | - | | | - | | A comma-separated list of devices that must all be tried for this level | - +------------------+-----------------------------------------------------------------------------------------+ + :widths: 25 75 + :header-rows: 1 + + * - Attribute + - Description + * - id + - .. index:: + pair: fencing-level; id + + A unique name for this element (required) + * - target + - .. index:: + pair: fencing-level; target + + The name of a single node to which this level applies + * - target-pattern + - .. index:: + pair: fencing-level; target-pattern + + An extended regular expression (as defined in `POSIX + `_) + matching the names of nodes to which this level applies + * - target-attribute + - .. index:: + pair: fencing-level; target-attribute + + The name of a node attribute that is set (to ``target-value``) for nodes to which this + level applies + * - target-value + - .. index:: + pair: fencing-level; target-value + + The node attribute value (of ``target-attribute``) that is set for nodes to which this + level applies + * - index + - .. index:: + pair: fencing-level; index + + The order in which to attempt the levels. Levels are attempted in ascending order + *until one succeeds*. Valid values are 1 through 9. + * - devices + - .. index:: + pair: fencing-level; devices + + A comma-separated list of devices that must all be tried for this level .. note:: **Fencing topology with different devices for different nodes** @@ -1134,7 +1136,7 @@ Some possible uses of topologies include: - + diff --git a/doc/sphinx/Pacemaker_Explained/index.rst b/doc/sphinx/Pacemaker_Explained/index.rst index 68139809c04..2361737b47e 100644 --- a/doc/sphinx/Pacemaker_Explained/index.rst +++ b/doc/sphinx/Pacemaker_Explained/index.rst @@ -4,15 +4,10 @@ Pacemaker Explained *Configuring Pacemaker Clusters* -Abstract --------- This document definitively explains Pacemaker's features and capabilities, particularly the XML syntax used in Pacemaker's Cluster Information Base (CIB). -Table of Contents ------------------ - .. 
toctree:: :maxdepth: 3 :numbered: @@ -34,9 +29,5 @@ Table of Contents status multi-site-clusters ap-samples - -Index ------ - -* :ref:`genindex` -* :ref:`search` + :ref:`genindex` + :ref:`search` diff --git a/doc/sphinx/Pacemaker_Explained/intro.rst b/doc/sphinx/Pacemaker_Explained/intro.rst index a1240c308c8..7ba18fde57c 100644 --- a/doc/sphinx/Pacemaker_Explained/intro.rst +++ b/doc/sphinx/Pacemaker_Explained/intro.rst @@ -12,7 +12,7 @@ For those that are allergic to XML, multiple higher-level front-ends (both command-line and GUI) are available. These tools will not be covered in this document, though the concepts explained here should make the functionality of these tools more easily understood. - + Users may be interested in other parts of the `Pacemaker documentation set `_, such as *Clusters from Scratch*, a step-by-step guide to setting up an diff --git a/doc/sphinx/Pacemaker_Explained/local-options.rst b/doc/sphinx/Pacemaker_Explained/local-options.rst index 64e45d0a026..ea754be539b 100644 --- a/doc/sphinx/Pacemaker_Explained/local-options.rst +++ b/doc/sphinx/Pacemaker_Explained/local-options.rst @@ -16,7 +16,7 @@ of the following types: .. list-table:: **Configuration Value Types** :class: longtable - :widths: 1 3 + :widths: 25 75 :header-rows: 1 * - Type @@ -203,7 +203,7 @@ whose location varies by OS (most commonly ``/etc/sysconfig/pacemaker`` or .. list-table:: **Local Options** :class: longtable - :widths: 2 2 2 5 + :widths: 25 15 10 50 :header-rows: 1 * - Name @@ -223,10 +223,10 @@ whose location varies by OS (most commonly ``/etc/sysconfig/pacemaker`` or ``pam_start``). * - .. _pcmk_logfacility: - + .. index:: pair: node option; PCMK_logfacility - + PCMK_logfacility - :ref:`enumeration ` - daemon diff --git a/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst b/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst index 59d3f9345c7..e5fd0de3888 100644 --- a/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst +++ b/doc/sphinx/Pacemaker_Explained/multi-site-clusters.rst @@ -24,7 +24,7 @@ That leads to significant challenges: - How do we manage failover between sites? - How do we deal with high latency in case of resources that need to be - stopped? + stopped? In the following sections, learn how to meet these challenges. @@ -63,7 +63,7 @@ the automated CTR mechanism described below. A ticket can only be owned by one site at a time. Initially, none of the sites has a ticket. Each ticket must be granted once by the cluster -administrator. +administrator. The presence or absence of tickets for a site is stored in the CIB as a cluster status. With regards to a certain ticket, there are only two states @@ -129,7 +129,7 @@ Configuring Ticket Dependencies The **rsc_ticket** constraint lets you specify the resources depending on a certain ticket. Together with the constraint, you can set a **loss-policy** that defines -what should happen to the respective resources if the ticket is revoked. +what should happen to the respective resources if the ticket is revoked. The attribute **loss-policy** can have the following values: @@ -188,7 +188,7 @@ resources within a resource set. Each of the resources just depends on ``ticketA``. Referencing resource templates in ``rsc_ticket`` constraints, and even -referencing them within resource sets, is also supported. +referencing them within resource sets, is also supported. If you want other resources to depend on further tickets, create as many constraints as necessary with ``rsc_ticket``. 
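+
+For instance, making another resource depend on a second ticket is just one
+more constraint; a minimal sketch (the resource and ticket names are purely
+illustrative) might be:
+
+.. code-block:: xml
+
+   <rsc_ticket id="rsc2-req-ticketB" rsc="rsc2" ticket="ticketB"
+               loss-policy="stop"/>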
@@ -204,7 +204,7 @@ If you want to re-distribute a ticket, you should wait for the dependent resources to stop cleanly at the previous site before you grant the ticket to the new site. -Use the **crm_ticket** command line tool to grant and revoke tickets. +Use the **crm_ticket** command line tool to grant and revoke tickets. To grant a ticket to this site: @@ -222,7 +222,7 @@ To revoke a ticket from this site: If you are managing tickets manually, use the **crm_ticket** command with great care, because it cannot check whether the same ticket is already - granted elsewhere. + granted elsewhere. Granting and Revoking Tickets via a Cluster Ticket Registry ___________________________________________________________ @@ -262,19 +262,19 @@ implicitly by being disconnected from the voting body) need to relinquish the ticket after a time-out. Thus, it is made sure that a ticket will only be re-distributed after it has been relinquished by the previous site. The resources that depend on that ticket will fail over -to the new site holding the ticket. The nodes that have run the +to the new site holding the ticket. The nodes that have run the resources before will be treated according to the **loss-policy** you set within the **rsc_ticket** constraint. Before the booth can manage a certain ticket within the multi-site cluster, you initially need to grant it to a site manually via the **booth** command-line tool. After you have initially granted a ticket to a site, **boothd** -will take over and manage the ticket automatically. +will take over and manage the ticket automatically. .. important:: The **booth** command-line tool can be used to grant, list, or - revoke tickets and can be run on any machine where **boothd** is running. + revoke tickets and can be run on any machine where **boothd** is running. If you are managing tickets via Booth, use only **booth** for manual intervention, not **crm_ticket**. That ensures the same ticket will only be owned by one cluster site at a time. @@ -286,7 +286,7 @@ Booth Requirements Pacemaker. * Booth must be installed on all cluster nodes and on all arbitrators that will - be part of the multi-site cluster. + be part of the multi-site cluster. * Nodes belonging to the same cluster site should be synchronized via NTP. However, time synchronization is not required between the individual cluster sites. diff --git a/doc/sphinx/Pacemaker_Explained/nodes.rst b/doc/sphinx/Pacemaker_Explained/nodes.rst index e88e63a10e0..576ce7d391d 100644 --- a/doc/sphinx/Pacemaker_Explained/nodes.rst +++ b/doc/sphinx/Pacemaker_Explained/nodes.rst @@ -293,120 +293,99 @@ For true/false values, the cluster considers a value of "1", "y", "yes", "on", or "true" (case-insensitively) to be true, "0", "n", "no", "off", "false", or unset to be false, and anything else to be an error. -.. table:: **Node attributes with special significance** +.. list-table:: **Node Attributes With Special Significance** :class: longtable - :widths: 1 2 - - +----------------------------+-----------------------------------------------------+ - | Name | Description | - +============================+=====================================================+ - | fail-count-* | .. index:: | - | | pair: node attribute; fail-count | - | | | - | | Attributes whose names start with | - | | ``fail-count-`` are managed by the cluster | - | | to track how many times particular resource | - | | operations have failed on this node. 
These | - | | should be queried and cleared via the | - | | ``crm_failcount`` or | - | | ``crm_resource --cleanup`` commands rather | - | | than directly. | - +----------------------------+-----------------------------------------------------+ - | last-failure-* | .. index:: | - | | pair: node attribute; last-failure | - | | | - | | Attributes whose names start with | - | | ``last-failure-`` are managed by the cluster | - | | to track when particular resource operations | - | | have most recently failed on this node. | - | | These should be cleared via the | - | | ``crm_failcount`` or | - | | ``crm_resource --cleanup`` commands rather | - | | than directly. | - +----------------------------+-----------------------------------------------------+ - | maintenance | .. _node_maintenance: | - | | | - | | .. index:: | - | | pair: node attribute; maintenance | - | | | - | | If true, the cluster will not start or stop any | - | | resources on this node. Any resources active on the | - | | node become unmanaged, and any recurring operations | - | | for those resources (except those specifying | - | | ``role`` as ``Stopped``) will be paused. The | - | | :ref:`maintenance-mode ` cluster | - | | option, if true, overrides this. If this attribute | - | | is true, it overrides the | - | | :ref:`is-managed ` and | - | | :ref:`maintenance ` | - | | meta-attributes of affected resources and | - | | :ref:`enabled ` meta-attribute for | - | | affected recurring actions. Pacemaker should not be | - | | restarted on a node that is in single-node | - | | maintenance mode. | - +----------------------------+-----------------------------------------------------+ - | probe_complete | .. index:: | - | | pair: node attribute; probe_complete | - | | | - | | This is managed by the cluster to detect | - | | when nodes need to be reprobed, and should | - | | never be used directly. | - +----------------------------+-----------------------------------------------------+ - | resource-discovery-enabled | .. index:: | - | | pair: node attribute; resource-discovery-enabled | - | | | - | | If the node is a remote node, fencing is enabled, | - | | and this attribute is explicitly set to false | - | | (unset means true in this case), resource discovery | - | | (probes) will not be done on this node. This is | - | | highly discouraged; the ``resource-discovery`` | - | | location constraint property is preferred for this | - | | purpose. | - +----------------------------+-----------------------------------------------------+ - | shutdown | .. index:: | - | | pair: node attribute; shutdown | - | | | - | | This is managed by the cluster to orchestrate the | - | | shutdown of a node, and should never be used | - | | directly. | - +----------------------------+-----------------------------------------------------+ - | site-name | .. index:: | - | | pair: node attribute; site-name | - | | | - | | If set, this will be used as the value of the | - | | ``#site-name`` node attribute used in rules. (If | - | | not set, the value of the ``cluster-name`` cluster | - | | option will be used as ``#site-name`` instead.) | - +----------------------------+-----------------------------------------------------+ - | standby | .. index:: | - | | pair: node attribute; standby | - | | | - | | If true, the node is in standby mode. This is | - | | typically set and queried via the ``crm_standby`` | - | | command rather than directly. | - +----------------------------+-----------------------------------------------------+ - | terminate | .. 
index:: | - | | pair: node attribute; terminate | - | | | - | | If the value is true or begins with any nonzero | - | | number, the node will be fenced. This is typically | - | | set by tools rather than directly. | - +----------------------------+-----------------------------------------------------+ - | #digests-* | .. index:: | - | | pair: node attribute; #digests | - | | | - | | Attributes whose names start with ``#digests-`` are | - | | managed by the cluster to detect when | - | | :ref:`unfencing` needs to be redone, and should | - | | never be used directly. | - +----------------------------+-----------------------------------------------------+ - | #node-unfenced | .. index:: | - | | pair: node attribute; #node-unfenced | - | | | - | | When the node was last unfenced (as seconds since | - | | the epoch). This is managed by the cluster and | - | | should never be used directly. | - +----------------------------+-----------------------------------------------------+ + :widths: 30 70 + :header-rows: 1 + + * - Name + - Description + * - fail-count-* + - .. index:: + pair: node attribute; fail-count + + Attributes whose names start with ``fail-count-`` are managed by the + cluster to track how many times particular resource operations have + failed on this node. These should be queried and cleared via the + ``crm_failcount`` or ``crm_resource --cleanup`` commands rather than + directly. + * - last-failure-* + - .. index:: + pair: node attribute; last-failure + + Attributes whose names start with ``last-failure-`` are managed by the + cluster to track when particular resource operations have most recently + failed on this node. These should be cleared via the ``crm_failcount`` + or ``crm_resource --cleanup`` commands rather than directly. + * - maintenance + - .. _node_maintenance: + + .. index:: + pair: node attribute; maintenance + + If true, the cluster will not start or stop any resources on this node. + Any resources active on the node become unmanaged, and any recurring + operations for those resources (except those specifying ``role`` as + ``Stopped``) will be paused. The :ref:`maintenance-mode ` + cluster option, if true, overrides this. If this attribute is true, it + overrides the :ref:`is-managed ` and + :ref:`maintenance ` meta-attributes of affected resources + and :ref:`enabled ` meta-attribute for affected recurring + actions. Pacemaker should not be restarted on a node that is in + single-node maintenance mode. + * - probe_complete + - .. index:: + pair: node attribute; probe_complete + + This is managed by the cluster to detect when nodes need to be reprobed, + and should never be used directly. + * - resource-discovery-enabled + - .. index:: + pair: node attribute; resource-discovery-enabled + + If the node is a remote node, fencing is enabled, and this attribute is + explicitly set to false (unset means true in this case), resource + discovery (probes) will not be done on this node. This is highly + discouraged; the ``resource-discovery`` location constraint property is + preferred for this purpose. + * - shutdown + - .. index:: + pair: node attribute; shutdown + + This is managed by the cluster to orchestrate the shutdown of a node, and + should never be used directly. + * - site-name + - .. index:: + pair: node attribute; site-name + + If set, this will be used as the value of the ``#site-name`` node + attribute used in rules. (If not set, the value of the ``cluster-name`` + cluster option will be used as ``#site-name`` instead.) + * - standby + - .. 
index:: + pair: node attribute; standby + + If true, the node is in standby mode. This is typically set and queried + via the ``crm_standby`` command rather than directly. + * - terminate + - .. index:: + pair: node attribute; terminate + + If the value is true or begins with any nonzero number, the node will be + fenced. This is typically set by tools rather than directly. + * - #digests-* + - .. index:: + pair: node attribute; #digests + + Attributes whose names start with ``#digests-`` are managed by the cluster + to detect when :ref:`unfencing` needs to be redone, and should never be + used directly. + * - #node-unfenced + - .. index:: + pair: node attribute; #node-unfenced + + When the node was last unfenced (as seconds since the epoch). This is + managed by the cluster and should never be used directly. .. index:: single: node; health @@ -433,38 +412,37 @@ Pacemaker will treat any node attribute whose name starts with ``#health`` as an indicator of node health. Node health attributes may have one of the following values: -.. table:: **Allowed Values for Node Health Attributes** - :widths: 1 4 - - +------------+--------------------------------------------------------------+ - | Value | Intended significance | - +============+==============================================================+ - | ``red`` | .. index:: | - | | single: red; node health attribute value | - | | single: node attribute; health (red) | - | | | - | | This indicator is unhealthy | - +------------+--------------------------------------------------------------+ - | ``yellow`` | .. index:: | - | | single: yellow; node health attribute value | - | | single: node attribute; health (yellow) | - | | | - | | This indicator is close to unhealthy (whether worsening or | - | | recovering) | - +------------+--------------------------------------------------------------+ - | ``green`` | .. index:: | - | | single: green; node health attribute value | - | | single: node attribute; health (green) | - | | | - | | This indicator is healthy | - +------------+--------------------------------------------------------------+ - | *integer* | .. index:: | - | | single: score; node health attribute value | - | | single: node attribute; health (score) | - | | | - | | A numeric score to apply to all resources on this node (0 or | - | | positive is healthy, negative is unhealthy) | - +------------+--------------------------------------------------------------+ +.. list-table:: **Allowed Values for Node Health Attributes** + :widths: 25 75 + :header-rows: 1 + + * - Value + - Intended significance + * - ``red`` + - .. index:: + single: red; node health attribute value + single: node attribute; health (red) + + This indicator is unhealthy + * - ``yellow`` + - .. index:: + single: yellow; node health attribute value + single: node attribute; health (yellow) + + This indicator is close to unhealthy (whether worsening or recovering) + * - ``green`` + - .. index:: + single: green; node health attribute value + single: node attribute; health (green) + + This indicator is healthy + * - *integer* + - .. index:: + single: score; node health attribute value + single: node attribute; health (score) + + A numeric score to apply to all resources on this node (0 or positive is + healthy, negative is unhealthy) .. note:: @@ -493,58 +471,54 @@ and ``green`` to scores. Allowed values are: -.. 
table:: **Node Health Strategies** - :widths: 1 4 - - +----------------+----------------------------------------------------------+ - | Value | Effect | - +================+==========================================================+ - | none | .. index:: | - | | single: node-health-strategy; none | - | | single: none; node-health-strategy value | - | | | - | | Do not track node health attributes at all. | - +----------------+----------------------------------------------------------+ - | migrate-on-red | .. index:: | - | | single: node-health-strategy; migrate-on-red | - | | single: migrate-on-red; node-health-strategy value | - | | | - | | Assign the value of ``-INFINITY`` to ``red``, and 0 to | - | | ``yellow`` and ``green``. This will cause all resources | - | | to move off the node if any attribute is ``red``. | - +----------------+----------------------------------------------------------+ - | only-green | .. index:: | - | | single: node-health-strategy; only-green | - | | single: only-green; node-health-strategy value | - | | | - | | Assign the value of ``-INFINITY`` to ``red`` and | - | | ``yellow``, and 0 to ``green``. This will cause all | - | | resources to move off the node if any attribute is | - | | ``red`` or ``yellow``. | - +----------------+----------------------------------------------------------+ - | progressive | .. index:: | - | | single: node-health-strategy; progressive | - | | single: progressive; node-health-strategy value | - | | | - | | Assign the value of the ``node-health-red`` cluster | - | | option to ``red``, the value of ``node-health-yellow`` | - | | to ``yellow``, and the value of ``node-health-green`` to | - | | ``green``. Each node is additionally assigned a score of | - | | ``node-health-base`` (this allows resources to start | - | | even if some attributes are ``yellow``). This strategy | - | | gives the administrator finer control over how important | - | | each value is. | - +----------------+----------------------------------------------------------+ - | custom | .. index:: | - | | single: node-health-strategy; custom | - | | single: custom; node-health-strategy value | - | | | - | | Track node health attributes using the same values as | - | | ``progressive`` for ``red``, ``yellow``, and ``green``, | - | | but do not take them into account. The administrator is | - | | expected to implement a policy by defining :ref:`rules` | - | | referencing node health attributes. | - +----------------+----------------------------------------------------------+ +.. list-table:: **Node Health Strategies** + :widths: 25 75 + :header-rows: 1 + + * - Value + - Effect + * - none + - .. index:: + single: node-health-strategy; none + single: none; node-health-strategy value + + Do not track node health attributes at all. + * - migrate-on-red + - .. index:: + single: node-health-strategy; migrate-on-red + single: migrate-on-red; node-health-strategy value + + Assign the value of ``-INFINITY`` to ``red``, and 0 to ``yellow`` and + ``green``. This will cause all resources to move off the node if any + attribute is ``red``. + * - only-green + - .. index:: + single: node-health-strategy; only-green + single: only-green; node-health-strategy value + + Assign the value of ``-INFINITY`` to ``red`` and ``yellow``, and 0 to + ``green``. This will cause all resources to move off the node if any + attribute is ``red`` or ``yellow``. + * - progressive + - .. 
index:: + single: node-health-strategy; progressive + single: progressive; node-health-strategy value + + Assign the value of the ``node-health-red`` cluster option to ``red``, + the value of ``node-health-yellow`` to ``yellow``, and the value of + ``node-health-green`` to ``green``. Each node is additionally assigned a + score of ``node-health-base`` (this allows resources to start even if + some attributes are ``yellow``). This strategy gives the administrator + finer control over how important each value is. + * - custom + - .. index:: + single: node-health-strategy; custom + single: custom; node-health-strategy value + + Track node health attributes using the same values as ``progressive`` for + ``red``, ``yellow``, and ``green``, but do not take them into account. + The administrator is expected to implement a policy by defining :ref:`rules` + referencing node health attributes. Exempting a Resource from Health Restrictions diff --git a/doc/sphinx/Pacemaker_Explained/operations.rst b/doc/sphinx/Pacemaker_Explained/operations.rst index b5488268e82..8a3c1a1955f 100644 --- a/doc/sphinx/Pacemaker_Explained/operations.rst +++ b/doc/sphinx/Pacemaker_Explained/operations.rst @@ -53,7 +53,7 @@ If not specified, the default from the table below is used. .. list-table:: **Operation Properties** :class: longtable - :widths: 2 2 3 4 + :widths: 17 13 30 40 :header-rows: 1 * - Name @@ -61,34 +61,34 @@ If not specified, the default from the table below is used. - Default - Description * - .. _op_id: - + .. index:: pair: op; id single: id; action property single: action; property, id - + id - :ref:`id ` - - + - - A unique identifier for the XML element *(required)* * - .. _op_name: - + .. index:: pair: op; name single: name; action property single: action; property, name - + name - :ref:`text ` - - + - - An action name supported by the resource agent *(required)* * - .. _op_interval: - + .. index:: pair: op; interval single: interval; action property single: action; property, interval - + interval - :ref:`duration ` - 0 @@ -99,38 +99,38 @@ If not specified, the default from the table below is used. operation to instances that are scheduled as needed during normal cluster operation. *(required)* * - .. _op_description: - + .. index:: pair: op; description single: description; action property single: action; property, description - + description - :ref:`text ` - - + - - Arbitrary text for user's use (ignored by Pacemaker) * - .. _op_role: - + .. index:: pair: op; role single: role; action property single: action; property, role - + role - :ref:`enumeration ` - - + - - If this is set, the operation configuration applies only on nodes where the cluster expects the resource to be in the specified role. This makes sense only for recurring monitors. Allowed values: ``Started``, ``Stopped``, and in the case of :ref:`promotable clone resources `, ``Unpromoted`` and ``Promoted``. * - .. _op_timeout: - + .. index:: pair: op; timeout single: timeout; action property single: action; property, timeout - + timeout - :ref:`timeout ` - 20s @@ -138,12 +138,12 @@ If not specified, the default from the table below is used. time, the action will be considered failed. **Note:** timeouts for fencing agents are handled specially (see the :ref:`fencing` chapter). * - .. _op_on_fail: - + .. 
index:: pair: op; on-fail single: on-fail; action property single: action; property, on-fail - + on-fail - :ref:`enumeration ` - * If ``name`` is ``stop``: ``fence`` if @@ -154,7 +154,7 @@ If not specified, the default from the table below is used. * Otherwise: ``restart`` - How the cluster should respond to a failure of this action. Allowed values: - + * ``ignore:`` Pretend the resource did not fail * ``block:`` Do not perform any further operations on the resource * ``stop:`` Stop the resource and leave it stopped @@ -169,12 +169,12 @@ If not specified, the default from the table below is used. * ``standby:`` Put the node on which the resource failed in standby mode (forcing *all* resources away) * - .. _op_enabled: - + .. index:: pair: op; enabled single: enabled; action property single: action; property, enabled - + enabled - :ref:`boolean ` - true @@ -185,12 +185,12 @@ If not specified, the default from the table below is used. recurring operations. Maintenance mode, which does stop configured monitors, overrides this setting. * - .. _op_interval_origin: - + .. index:: pair: op; interval-origin single: interval-origin; action property single: action; property, interval-origin - + interval-origin - :ref:`ISO 8601 ` - @@ -202,12 +202,12 @@ If not specified, the default from the table below is used. ``interval`` to ``24h``. At most one of ``interval-origin`` and ``start-delay`` may be set. * - .. _op_start_delay: - + .. index:: pair: op; start-delay single: start-delay; action property single: action; property, start-delay - + start-delay - :ref:`duration ` - @@ -223,12 +223,12 @@ If not specified, the default from the table below is used. actions needed, then act on the result when it actually runs. At most one of ``interval-origin`` and ``start-delay`` may be set. * - .. _op_record_pending: - + .. index:: pair: op; record-pending single: record-pending; action property single: action; property, record-pending - + record-pending - :ref:`boolean ` - true @@ -342,7 +342,7 @@ Setting Global Defaults for Operations ###################################### You can change the global default values for operation properties -in a given cluster. These are defined in an ``op_defaults`` section +in a given cluster. These are defined in an ``op_defaults`` section of the CIB's ``configuration`` section, and can be set with ``crm_attribute``. For example, diff --git a/doc/sphinx/Pacemaker_Explained/resources.rst b/doc/sphinx/Pacemaker_Explained/resources.rst index 16d437f71a1..6d812fe6a69 100644 --- a/doc/sphinx/Pacemaker_Explained/resources.rst +++ b/doc/sphinx/Pacemaker_Explained/resources.rst @@ -26,7 +26,7 @@ being managed. .. index:: single: resource; standard - + Resource Standards ################## @@ -167,46 +167,46 @@ Resource Properties These values tell the cluster which resource agent to use for the resource, where to find that resource agent and what standards it conforms to. -.. table:: **Properties of a Primitive Resource** - :widths: 1 4 - - +-------------+------------------------------------------------------------------+ - | Field | Description | - +=============+==================================================================+ - | id | .. index:: | - | | single: id; resource | - | | single: resource; property, id | - | | | - | | Your name for the resource | - +-------------+------------------------------------------------------------------+ - | class | .. 
index:: | - | | single: class; resource | - | | single: resource; property, class | - | | | - | | The standard the resource agent conforms to. Allowed values: | - | | ``lsb``, ``ocf``, ``service``, ``stonith``, and ``systemd`` | - +-------------+------------------------------------------------------------------+ - | description | .. index:: | - | | single: description; resource | - | | single: resource; property, description | - | | | - | | Arbitrary text for user's use (ignored by Pacemaker) | - +-------------+------------------------------------------------------------------+ - | type | .. index:: | - | | single: type; resource | - | | single: resource; property, type | - | | | - | | The name of the Resource Agent you wish to use. E.g. | - | | ``IPaddr`` or ``Filesystem`` | - +-------------+------------------------------------------------------------------+ - | provider | .. index:: | - | | single: provider; resource | - | | single: resource; property, provider | - | | | - | | The OCF spec allows multiple vendors to supply the same resource | - | | agent. To use the OCF resource agents supplied by the Heartbeat | - | | project, you would specify ``heartbeat`` here. | - +-------------+------------------------------------------------------------------+ +.. list-table:: **Properties of a Primitive Resource** + :widths: 25 75 + :header-rows: 1 + + * - Field + - Description + * - id + - .. index:: + single: id; resource + single: resource; property, id + + Your name for the resource + * - class + - .. index:: + single: class; resource + single: resource; property, class + + The standard the resource agent conforms to. Allowed values: ``lsb``, + ``ocf``, ``service``, ``stonith``, and ``systemd`` + * - description + - .. index:: + single: description; resource + single: resource; property, description + + Arbitrary text for user's use (ignored by Pacemaker) + * - type + - .. index:: + single: type; resource + single: resource; property, type + + The name of the Resource Agent you wish to use. E.g. ``IPaddr`` or + ``Filesystem`` + * - provider + - .. index:: + single: provider; resource + single: resource; property, provider + + The OCF spec allows multiple vendors to supply the same resource agent. + To use the OCF resource agents supplied by the Heartbeat project, you + would specify ``heartbeat`` here. The XML definition of a resource can be queried with the **crm_resource** tool. For example: @@ -254,9 +254,9 @@ Meta-attributes are used by the cluster to decide how a resource should behave and can be easily set using the ``--meta`` option of the **crm_resource** command. -.. list-table:: **Meta-attributes of a Primitive Resource** +.. list-table:: **Meta-Attributes of a Primitive Resource** :class: longtable - :widths: 2 2 3 5 + :widths: 20 15 20 45 :header-rows: 1 * - Name @@ -265,7 +265,7 @@ behave and can be easily set using the ``--meta`` option of the - Description * - .. _meta_priority: - + .. index:: single: priority; resource option single: resource; option, priority @@ -277,7 +277,7 @@ behave and can be easily set using the ``--meta`` option of the resources in order to keep higher-priority ones active. * - .. _meta_critical: - + .. index:: single: critical; resource option single: resource; option, critical @@ -292,7 +292,7 @@ behave and can be easily set using the ``--meta`` option of the :ref:`s-coloc-influence`. *(since 2.1.0)* * - .. _meta_target_role: - + .. 
index:: single: target-role; resource option single: resource; option, target-role @@ -314,7 +314,7 @@ behave and can be easily set using the ``--meta`` option of the * - .. _meta_is_managed: .. _is_managed: - + .. index:: single: is-managed; resource option single: resource; option, is-managed @@ -328,7 +328,7 @@ behave and can be easily set using the ``--meta`` option of the * - .. _meta_maintenance: .. _rsc_maintenance: - + .. index:: single: maintenance; resource option single: resource; option, maintenance @@ -344,7 +344,7 @@ behave and can be easily set using the ``--meta`` option of the * - .. _meta_resource_stickiness: .. _resource-stickiness: - + .. index:: single: resource-stickiness; resource option single: resource; option, resource-stickiness @@ -359,7 +359,7 @@ behave and can be easily set using the ``--meta`` option of the * - .. _meta_requires: .. _requires: - + .. index:: single: requires; resource option single: resource; option, requires @@ -383,7 +383,7 @@ behave and can be easily set using the ``--meta`` option of the :ref:`unfenced `. * - .. _meta_migration_threshold: - + .. index:: single: migration-threshold; resource option single: resource; option, migration-threshold @@ -401,7 +401,7 @@ behave and can be easily set using the ``--meta`` option of the ``start-failure-is-fatal`` is ``false``. * - .. _meta_failure_timeout: - + .. index:: single: failure-timeout; resource option single: resource; option, failure-timeout @@ -422,7 +422,7 @@ behave and can be easily set using the ``--meta`` option of the or days is reasonable. * - .. _meta_multiple_active: - + .. index:: single: multiple-active; resource option single: resource; option, multiple-active @@ -445,7 +445,7 @@ behave and can be easily set using the ``--meta`` option of the ordered after this will still need to be restarted) *(since 2.1.3)* * - .. _meta_allow_migrate: - + .. index:: single: allow-migrate; resource option single: resource; option, allow-migrate @@ -457,7 +457,7 @@ behave and can be easily set using the ``--meta`` option of the needs to be moved (see :ref:`live-migration`) * - .. _meta_allow_unhealthy_nodes: - + .. index:: single: allow-unhealthy-nodes; resource option single: resource; option, allow-unhealthy-nodes @@ -470,7 +470,7 @@ behave and can be easily set using the ``--meta`` option of the 2.1.3)* * - .. _meta_container_attribute_target: - + .. index:: single: container-attribute-target; resource option single: resource; option, container-attribute-target @@ -669,7 +669,7 @@ built-in **ocf:pacemaker:remote** resource agent. .. list-table:: **ocf:pacemaker:remote Instance Attributes** :class: longtable - :widths: 2 2 3 5 + :widths: 25 10 15 50 :header-rows: 1 * - Name @@ -678,7 +678,7 @@ built-in **ocf:pacemaker:remote** resource agent. - Description * - .. _remote_server: - + .. index:: pair: remote node; server @@ -690,7 +690,7 @@ built-in **ocf:pacemaker:remote** resource agent. this address. * - .. _remote_port: - + .. index:: pair: remote node; port @@ -702,7 +702,7 @@ built-in **ocf:pacemaker:remote** resource agent. this port. * - .. _remote_reconnect_interval: - + .. index:: pair: remote node; reconnect_interval @@ -738,9 +738,9 @@ supported by **VirtualDomain** may be used to create guest nodes; if the guest can survive the hypervisor being fenced, it is unsuitable for use as a guest node. -.. list-table:: **Guest node meta-attributes** +.. 
list-table:: **Guest Node Meta-Attributes** :class: longtable - :widths: 2 2 3 5 + :widths: 25 10 20 45 :header-rows: 1 * - Name @@ -749,7 +749,7 @@ node. - Description * - .. _meta_remote_node: - + .. index:: single: remote-node; resource option single: resource; option, remote-node @@ -762,7 +762,7 @@ node. started. This value *must not* be the same as any resource or node ID. * - .. _meta_remote_addr: - + .. index:: single: remote-addr; resource option single: resource; option, remote-addr @@ -775,7 +775,7 @@ node. configured to accept connections on this address. * - .. _meta_remote_port: - + .. index:: single: remote-port; resource option single: resource; option, remote-port @@ -788,7 +788,7 @@ node. configured to listen on this port. * - .. _meta_remote_connect_timeout: - + .. index:: single: remote-connect-timeout; resource option single: resource; option, remote-connect-timeout diff --git a/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst b/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst index 01c7a974ae4..3a60b3b3b2a 100644 --- a/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst +++ b/doc/sphinx/Pacemaker_Explained/reusing-configuration.rst @@ -254,11 +254,11 @@ Then instead of duplicating the rule for all your other resources, you can inste .. topic:: **Referencing rules from other constraints** .. code-block:: xml - + - + .. important:: The cluster will insist that the ``rule`` exists somewhere. Attempting diff --git a/doc/sphinx/Pacemaker_Explained/rules.rst b/doc/sphinx/Pacemaker_Explained/rules.rst index 13134daafad..35587d30668 100644 --- a/doc/sphinx/Pacemaker_Explained/rules.rst +++ b/doc/sphinx/Pacemaker_Explained/rules.rst @@ -32,34 +32,34 @@ Each context that supports rules may contain a single ``rule`` element. .. list-table:: **Attributes of a rule Element** :class: longtable - :widths: 2 2 2 5 + :widths: 15 15 10 60 :header-rows: 1 - + * - Name - Type - Default - Description - + * - .. _rule_id: - + .. index:: pair: rule; id - + id - :ref:`id ` - - A unique name for this element (required) * - .. _boolean_op: - + .. index:: pair: rule; boolean-op - + boolean-op - :ref:`enumeration ` - ``and`` - How to combine conditions if this rule contains more than one. Allowed values: - + * ``and``: the rule is satisfied only if all conditions are satisfied * ``or``: the rule is satisfied if any condition is satisfied @@ -118,7 +118,7 @@ It may contain a ``date_spec`` or ``duration`` element depending on the .. list-table:: **Attributes of a date_expression Element** :class: longtable - :widths: 1 1 1 4 + :widths: 15 15 20 50 :header-rows: 1 * - Name @@ -132,7 +132,7 @@ It may contain a ``date_spec`` or ``duration`` element depending on the id - :ref:`id ` - - + - - A unique name for this element (required) * - .. _date_expression_start: @@ -141,7 +141,7 @@ It may contain a ``date_spec`` or ``duration`` element depending on the start - :ref:`ISO 8601 ` - - + - - The beginning of the desired time range. Meaningful with an ``operation`` of ``in_range`` or ``gt``. * - .. _date_expression_end: @@ -151,7 +151,7 @@ It may contain a ``date_spec`` or ``duration`` element depending on the end - :ref:`ISO 8601 ` - - + - - The end of the desired time range. Meaningful with an ``operation`` of ``in_range`` or ``lt``. * - .. _date_expression_operation: @@ -194,7 +194,7 @@ combination of dates and times that satisfy the expression. .. 
list-table:: **Attributes of a date_spec Element** :class: longtable - :widths: 1 1 1 4 + :widths: 15 15 10 60 :header-rows: 1 * - Name @@ -208,7 +208,7 @@ combination of dates and times that satisfy the expression. id - :ref:`id ` - - + - - A unique name for this element (required) * - .. _date_spec_seconds: @@ -217,7 +217,7 @@ combination of dates and times that satisfy the expression. seconds - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current time's second is within this range. Allowed integers: 0 to 59. * - .. _date_spec_minutes: @@ -227,7 +227,7 @@ combination of dates and times that satisfy the expression. minutes - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current time's minute is within this range. Allowed integers: 0 to 59. * - .. _date_spec_hours: @@ -237,7 +237,7 @@ combination of dates and times that satisfy the expression. hours - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current time's hour is within this range. Allowed integers: 0 to 23 where 0 is midnight and 23 is 11 p.m. @@ -248,7 +248,7 @@ combination of dates and times that satisfy the expression. monthdays - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's day of the month is in this range. Allowed integers: 1 to 31. * - .. _date_spec_weekdays: @@ -258,7 +258,7 @@ combination of dates and times that satisfy the expression. weekdays - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's ordinal day of the week is in this range. Allowed integers: 1-7 (where 1 is Monday and 7 is Sunday). @@ -269,7 +269,7 @@ combination of dates and times that satisfy the expression. yeardays - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's ordinal day of the year is in this range. Allowed integers: 1-366. * - .. _date_spec_months: @@ -279,7 +279,7 @@ combination of dates and times that satisfy the expression. months - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's month is in this range. Allowed integers: 1-12 where 1 is January and 12 is December. @@ -290,7 +290,7 @@ combination of dates and times that satisfy the expression. weeks - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's ordinal week of the year is in this range. Allowed integers: 1-53. * - .. _date_spec_years: @@ -300,7 +300,7 @@ combination of dates and times that satisfy the expression. years - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's year according to the Gregorian calendar is in this range. * - .. _date_spec_weekyears: @@ -310,7 +310,7 @@ combination of dates and times that satisfy the expression. weekyears - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's year in which the week started (according to the ISO 8601 standard) is in this range. @@ -321,7 +321,7 @@ combination of dates and times that satisfy the expression. moon - :ref:`range ` - - + - - If this is set, the expression is satisfied only if the current date's phase of the moon is in this range. Allowed values are 0 to 7 where 0 is the new moon and 4 is the full moon. *(deprecated since 2.1.6)* @@ -357,7 +357,7 @@ ending value for ``in_range`` operations when ``end`` is not supplied. .. 
list-table:: **Attributes of a duration Element** :class: longtable - :widths: 1 1 1 4 + :widths: 15 15 10 60 :header-rows: 1 * - Name @@ -371,7 +371,7 @@ ending value for ``in_range`` operations when ``end`` is not supplied. id - :ref:`id ` - - + - - A unique name for this element (required) * - .. _duration_seconds: @@ -542,42 +542,42 @@ node attribute. It is allowed in rules in location constraints and in .. list-table:: **Attributes of an expression Element** :class: longtable - :widths: 1 1 3 5 + :widths: 15 15 30 40 :header-rows: 1 - + * - Name - Type - Default - Description - + * - .. _expression_id: - + .. index:: pair: expression; id - + id - :ref:`id ` - - A unique name for this element (required) * - .. _expression_attribute: - + .. index:: pair: expression; attribute - + attribute - :ref:`text ` - - Name of the node attribute to test (required) * - .. _expression_operation: - + .. index:: pair: expression; operation - + operation - :ref:`enumeration ` - - + - - The comparison to perform (required). Allowed values: - + * ``defined:`` The expression is satisfied if the node has the named attribute * ``not_defined:`` The expression is satisfied if the node does not have @@ -595,10 +595,10 @@ node attribute. It is allowed in rules in location constraints and in * ``ne:`` The expression is satisfied if the node attribute value is not equal to the reference value * - .. _expression_type: - + .. index:: pair: expression; type - + type - :ref:`enumeration ` - The default type for ``lt``, ``gt``, ``lte``, and ``gte`` operations is @@ -612,25 +612,25 @@ node attribute. It is allowed in rules in location constraints and in comparison. ``number`` performs a double-precision floating-point comparison *(32-bit integer before 2.0.5)*. * - .. _expression_value: - + .. index:: pair: expression; value - + value - :ref:`text ` - - Reference value to compare node attribute against (used only with, and required for, operations other than ``defined`` and ``not_defined``) * - .. _expression_value_source: - + .. index:: pair: expression; value-source - + value-source - :ref:`enumeration ` - ``literal`` - How the reference value is obtained. Allowed values: - + * ``literal``: ``value`` contains the literal reference value to compare * ``param``: ``value`` contains the name of a resource parameter to compare (valid only in the context of a location constraint) @@ -645,7 +645,7 @@ in rule expressions. .. list-table:: **Built-in Node Attributes** :class: longtable - :widths: 1 4 + :widths: 25 75 :header-rows: 1 * - Name @@ -685,7 +685,7 @@ element. .. list-table:: **Attributes of a rsc_expression Element** :class: longtable - :widths: 1 1 1 4 + :widths: 15 15 10 60 :header-rows: 1 * - Name @@ -699,7 +699,7 @@ element. id - :ref:`id ` - - + - - A unique name for this element (required) * - .. _rsc_expression_class: @@ -708,7 +708,7 @@ element. class - :ref:`text ` - - + - - If this is set, the expression is satisfied only if the resource's agent standard matches this value * - .. _rsc_expression_provider: @@ -718,7 +718,7 @@ element. provider - :ref:`text ` - - + - - If this is set, the expression is satisfied only if the resource's agent provider matches this value * - .. _rsc_expression_type: @@ -728,7 +728,7 @@ element. type - :ref:`text ` - - + - - If this is set, the expression is satisfied only if the resource's agent type matches this value @@ -769,7 +769,7 @@ on a resource operation name and interval. It is allowed in rules in a .. 
list-table:: **Attributes of an op_expression Element** :class: longtable - :widths: 1 1 1 4 + :widths: 15 15 10 60 :header-rows: 1 * - Name @@ -783,7 +783,7 @@ on a resource operation name and interval. It is allowed in rules in a id - :ref:`id ` - - + - - A unique name for this element (required) * - .. _op_expression_name: @@ -792,7 +792,7 @@ on a resource operation name and interval. It is allowed in rules in a name - :ref:`text ` - - + - - The expression is satisfied only if the operation's name matches this value (required) * - .. _op_expression_interval: @@ -802,7 +802,7 @@ on a resource operation name and interval. It is allowed in rules in a interval - :ref:`duration ` - - + - - If this is set, the expression is satisfied only if the operation's interval matches this value @@ -846,19 +846,19 @@ attributes. These have an effect only when set for the constraint's top-level .. list-table:: **Extra Attributes of a rule Element in a Location Constraint** :class: longtable - :widths: 2 2 1 5 + :widths: 20 15 10 55 :header-rows: 1 - + * - Name - Type - Default - Description - + * - .. _rule_role: - + .. index:: pair: rule; role - + role - :ref:`enumeration ` - ``Started`` @@ -866,25 +866,25 @@ attributes. These have an effect only when set for the constraint's top-level as if ``role`` were set to this in the ``rsc_location`` element. * - .. _rule_score: - + .. index:: pair: rule; score - + score - :ref:`score ` - - + - - If this is set in the constraint's top-level rule, the constraint acts as if ``score`` were set to this in the ``rsc_location`` element. Only one of ``score`` and ``score-attribute`` may be set. * - .. _rule_score_attribute: - + .. index:: pair: rule; score-attribute - + score-attribute - :ref:`text ` - - + - - If this is set in the constraint's top-level rule, the constraint acts as if ``score`` were set to the value of this node attribute on each node where the rule is satisfied. Only one of ``score`` and diff --git a/doc/sphinx/Pacemaker_Explained/status.rst b/doc/sphinx/Pacemaker_Explained/status.rst index 2cdf20c7e88..82d846b00c6 100644 --- a/doc/sphinx/Pacemaker_Explained/status.rst +++ b/doc/sphinx/Pacemaker_Explained/status.rst @@ -39,7 +39,7 @@ allow the cluster to determine whether the node is healthy. .. list-table:: **Attributes of a node_state Element** :class: longtable - :widths: 1 1 3 + :widths: 20 20 60 :header-rows: 1 * - Name @@ -183,9 +183,9 @@ Each ``lrm_resource`` element contains an ``lrm_rsc_op`` element for each recorded action performed for that resource on that node. (Not all actions are recorded, just enough to determine the resource's state.) -.. list-table:: **Attributes of an lrm_rsc_op element** +.. list-table:: **Attributes of an lrm_rsc_op Element** :class: longtable - :widths: 1 1 3 + :widths: 20 20 60 :header-rows: 1 * - Name @@ -370,7 +370,7 @@ recorded, just enough to determine the resource's state.) Simple Operation History Example ________________________________ - + .. topic:: A monitor operation (determines current state of the ``apcstonith`` resource) .. code-block:: xml @@ -391,7 +391,7 @@ operation) for the ``apcstonith`` resource. The cluster schedules probes for every configured resource on a node when the node first starts, in order to determine the resource's current state before it takes any further action. - + From the ``transition-key``, we can see that this was the 22nd action of the 2nd graph produced by this instance of the controller (2668bbeb-06d5-40f9-936d-24cb7f87006a). 
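As a rough aid to reading such keys (assuming the usual ``action:graph:expected-rc:controller-UUID`` layout of the value shown above), the colon-separated fields can be split apart with ordinary shell tools:

.. code-block:: console

   # echo "22:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a" | \
       awk -F: '{ print "action=" $1, "graph=" $2, "expected-rc=" $3, "controller=" $4 }'
   action=22 graph=2 expected-rc=7 controller=2668bbeb-06d5-40f9-936d-24cb7f87006a

An expected return code of 7 corresponds to the OCF "not running" status, which matches the expectation that the probe finds the resource inactive.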
@@ -402,10 +402,10 @@ that the cluster expects to find the resource inactive. By looking at the As that is the only action recorded for this node, we can conclude that the cluster started the resource elsewhere. - + Complex Operation History Example _________________________________ - + .. topic:: Resource history of a ``pingd`` clone with multiple entries .. code-block:: xml @@ -432,7 +432,7 @@ _________________________________ transition-key="23:2:7:2668bbeb-06d5-40f9-936d-24cb7f87006a" last-rc-change="1239008085" exec-time="20" queue-time="0"/> - + When more than one history entry exists, it is important to first sort them by ``call-id`` before interpreting them.
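One quick, admittedly crude way to list the recorded call-ids in order is to pull them out of the live CIB. This sketch assumes ``cibadmin`` is run on a cluster node; it collects every ``call-id`` in the CIB, so the output may still need to be narrowed down to the resource and node of interest:

.. code-block:: console

   # cibadmin --query | grep -o 'call-id="[0-9]*"' | sort -t '"' -k 2 -n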