diff --git a/docs/guides/configuration-guide/ceph.md b/docs/guides/configuration-guide/ceph.md
index c07edd032d..1ecef3b3e2 100644
--- a/docs/guides/configuration-guide/ceph.md
+++ b/docs/guides/configuration-guide/ceph.md
@@ -3,6 +3,9 @@ sidebar_label: Ceph
 sidebar_position: 30
 ---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
 # Ceph

 The official Ceph documentation is located on https://docs.ceph.com/en/latest/rados/configuration/
@@ -13,6 +16,9 @@ It is **strongly advised** to use the documentation for the version being used.
 * Quincy - https://docs.ceph.com/en/quincy/rados/configuration/
 * Reef - https://docs.ceph.com/en/reef/rados/configuration/

+It is a good idea to review all options in the following list.
+
+
 ## Unique Identifier

 The File System ID is a unique identifier for the cluster.
@@ -23,11 +29,17 @@ and must be unique. It can be generated with `uuidgen`.
 fsid: c2120a4a-669c-4769-a32c-b7e9d7b848f4
 ```

+## Configure the mon address on the mon nodes
+
+Set the variable `monitor_address` in the inventory files of the mon hosts
+(`inventory/host_vars/.yml`) to tell ceph-ansible which IP address should be
+used to reach the monitor instances.
+
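+A minimal sketch of such a host vars file (the node name `ceph-1` and the IP address
+are placeholders for illustration, assuming the monitors listen on the storage
+frontend network):
+
+```yaml title="inventory/host_vars/ceph-1.yml"
+# address ceph-ansible uses to reach the monitor instance on this node
+monitor_address: 192.168.40.11
+```
+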
 ## Client

 The `client.admin` keyring is placed in the file `environments/infrastructure/files/ceph/ceph.client.admin.keyring`.

-## Swappiness
+## Sysctl Parameters and Swappiness

 The swappiness is set via the `os_tuning_params` dictionary.
 The dictionary can only be completely overwritten via an entry in the file `environments/ceph/configuration.yml`.
@@ -125,7 +137,7 @@ vm.min_free_kbytes=4194303
 ceph-control
 ```

-## Extra pools
+## Configuration of custom Ceph pools

 Extra pools can be defined via the `openstack_pools_extra` parameter.
@@ -156,134 +168,178 @@ pools are to be created is `ceph.rbd`, then the parameters would be stored in

 ## OSD devices

-1. For each Ceph storage node edit the file `inventory/host_vars/.yml`
-   add a configuration like the following to it. Ensure that no `devices` parameter
-   is present in the file.
-
-   1. Parameters
-
-      * With the optional parmaeter `ceph_osd_db_wal_devices_buffer_space_percent` it is possible to
-        set the percentage of VGs to leave free. The parameter is not set by default. Can be helpful
-        for SSD performance of some older SSD models or to extend lifetime of SSDs in general.
+For more advanced OSD layout requirements, leave out the `devices` key
+and use `lvm_volumes` instead. Details for this can be found in the
+[OSD Scenario](https://docs.ceph.com/projects/ceph-ansible/en/latest/osds/scenarios.html) documentation.
+
+To aid in creating the `lvm_volumes` configuration entries and in provisioning the LVM devices for them,
+OSISM provides the two playbooks `ceph-configure-lvm-volumes` and `ceph-create-lvm-devices`.
+TODO: add a reference to https://docs.ceph.com/en/latest/rados/operations/pgcalc/ and the PG autoscaler dry run
+
+### Configure the device layout
+
+For each Ceph storage node, edit the file `inventory/host_vars/.yml` and
+add a configuration like the following to it. Ensure that no `devices` parameter
+is present in the file.
+
+**General information about the parameters**
+
+* With the optional parameter `ceph_osd_db_wal_devices_buffer_space_percent` it is possible to
+  set the percentage of the VGs to leave free. The parameter is not set by default. This can be
+  helpful for the SSD performance of some older SSD models or to extend the lifetime of SSDs in general.
+
+  ```yaml
+  ceph_osd_db_wal_devices_buffer_space_percent: 10
+  ```
+* It is possible to configure the devices to be used with the parameters `ceph_osd_devices`,
+  `ceph_db_devices`, `ceph_wal_devices`, and `ceph_db_wal_devices`. This is described below.
+* It is always possible to use device names such as `sda` or device IDs such as
+  `disk/by-id/wwn-` or `disk/by-id/nvme-eui.`.
+  The top-level directory `/dev/` is not prefixed and is added automatically.
+* The `db_size` parameter is optional and defaults to `(VG size - buffer space (if enabled)) / num_osds`.
+* The `wal_size` parameter is optional and defaults to `2 GB`.
+* The `num_osds` parameter specifies the maximum number of OSDs that can be assigned to a WAL device or DB device.
+* The optional parameter `wal_pv` can be used to set the device that is to be used as the WAL device.
+* The optional parameter `db_pv` can be used to set the device that is to be used as the DB device.
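+
+As a rough sizing sanity check (an illustrative calculation, not a value taken from the
+playbooks): with `num_osds: 6` and an explicit `db_size` of 30 GB, the backing device has
+to provide at least 6 x 30 GB = 180 GB, plus the buffer space if
+`ceph_osd_db_wal_devices_buffer_space_percent` is set.
+
+```yaml
+# hypothetical sizing example: 6 DB volumes of 30 GB each need >= 180 GB on the VG,
+# plus the 10 % that is left free when the optional buffer space parameter is enabled
+ceph_osd_db_wal_devices_buffer_space_percent: 10
+ceph_db_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+```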
+
+**Layout variants**
+
+OSISM basically utilizes LVM volumes for all OSD setup variants.
+
+
+
+
+This variant does not use a dedicated WAL or DB device.
+It is the simplest variant and can be used, for example, for an all-flash setup with NVMes.
+
+The `sda` device will be used as an OSD device without a WAL and DB volume device.
+
+```yaml
+ceph_osd_devices:
+  sda:
+```
+
+
+
+The `nvme0n1` device will be used as a source for DB device volumes.
+With the configured values the provisioning mechanism creates 6 logical volumes of 30 GB each
+on the NVMe, which can be used for 6 OSD instances.
+
+```yaml
+ceph_db_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+```
+
+The devices `sda` up to `sdf` will use the previously defined DB volumes from `nvme0n1` for the listed OSD instances.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+  ...
+  sdf:
+    db_pv: nvme0n1
+```
+
+
+
+The `nvme0n1` device will be used as a source for WAL device volumes.
+With the configured values the provisioning mechanism creates 6 logical volumes of 2 GB each
+on `nvme0n1`, which can be used for 6 OSD instances.
+
+```yaml
+ceph_wal_devices:
+  nvme0n1:
+    num_osds: 6
+    wal_size: 2 GB
+```
+
+The devices `sda` up to `sdf` will use the previously defined WAL volumes from `nvme0n1` for the listed OSD instances.

-     ```yaml
-     ceph_osd_db_wal_devices_buffer_space_percent: 10
-     ```
-   * It is possible to configure the devices to be used with the parameters `ceph_osd_devices`,
-     `ceph_db_devices`, `ceph_wal_devices`, and `ceph_db_wal_devices`. This is described below.
-   * It is always possible to use device names such as `sda` or device IDs such as
-     `disk/by-id/wwn-` or `disk/by-id/nvme-eui.`. `/dev/` is not
-     prefixed and is added automatically.
-   * The `db_size` parameter is optional and defaults to `(VG size - buffer space (if enabled)) / num_osds`.
-   * The `wal_size` parameter is optional and defaults to `2 GB`.
-   * The `num_osds` parameter specifies the maximum number of OSDs that can be assigned to a WAL device or DB device.
-   * The optional parameter `wal_pv` can be used to set the device that is to be used as the WAL device.
-   * The optional parameter `db_pv` can be used to set the device that is to be used as the DB device.
-
-   2. OSD only
-
-      The `sda` device will be used as an OSD device without WAL and DB device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-      ```
-
-   3. OSD + DB device
-
-      The `nvme0n1` device will be used as an DB device. It is possible to use this DB device for up to 6 OSDs. Each
-      OSD is provided with 30 GB.
-
-      ```yaml
-      ceph_db_devices:
-        nvme0n1:
-          num_osds: 6
-          db_size: 30 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as DB device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          db_pv: nvme0n1
-      ```
-
-   4. OSD + WAL device
-
-      The `nvme0n1` device will be used as an WAL device. It is possible to use this WAL device for up to 6 OSDs. Each
-      OSD is provided with 2 GB.
-
-      ```yaml
-      ceph_wal_devices:
-        nvme0n1:
-          num_osds: 6
-          wal_size: 2 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as WAL device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          wal_pv: nvme0n1
-      ```
-
-   5. OSD + DB device + WAL device (same device for DB + WAL)
-
-      The `nvme0n1` device will be used as an DB device and a WAL device. It is possible to use those devices for up
-      to 6 OSDs.
-
-      ```yaml
-      ceph_db_wal_devices:
-        nvme0n1:
-          num_osds: 6
-          db_size: 30 GB
-          wal_size: 2 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme0n1` as WAL device.
-
-      ```yaml
-      ceph_osd_devices:
-        sda:
-          db_pv: nvme0n1
-          wal_pv: nvme0n1
-      ```
-
-   6. OSD + DB device + WAL device (different device for DB + WAL)
-
-      The `nvme0n1` device will be used as an DB device. It is possible to use this DB device for up to 6 OSDs. Each
-      OSD is provided with 30 GB.
-
-      ```yaml
-      ceph_db_devices:
-        nvme0n1:
-          num_osds: 6
-          db_size: 30 GB
-      ```
-
-      The `nvme1n1` device will be used as an WAL device. It is possible to use this WAL device for up to 6 OSDs. Each
-      OSD is provided with 2 GB.
-
-      ```yaml
-      ceph_wal_devices:
-        nvme1n1:
-          num_osds: 6
-          wal_size: 2 GB
-      ```
-
-      The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme1n1` as WAL device.
+```yaml
+ceph_osd_devices:
+  sda:
+    wal_pv: nvme0n1
+```
+
+
+
+The `nvme0n1` device will be used as a source for DB and WAL device volumes.
+With the configured values the provisioning mechanism creates 6 logical DB volumes of 30 GB and
+6 logical WAL volumes of 2 GB each on `nvme0n1`, which can be used for 6 OSD instances.
+
+```yaml
+ceph_db_wal_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+    wal_size: 2 GB
+```
+
+The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme0n1` as WAL device.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+    wal_pv: nvme0n1
+```
+
+In the example shown here, both the RocksDB data structures and the write-ahead log are placed on the faster NVMe device.
+(This is described in the [Ceph documentation](https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/) as "...whenever a DB device is specified but an explicit WAL device is not, the WAL will be implicitly colocated with the DB on the faster device...").
+

-      ```yaml
-      ceph_osd_devices:
-        sda:
-          db_pv: nvme0n1
-          wal_pv: nvme1n1
-      ```

-2. Push the configuration to your configuration repository and after that do the following
+The `nvme0n1` device will be used as a source for DB device volumes and `nvme1n1` as a source for WAL device volumes.
+With the configured values the provisioning mechanism creates 6 logical DB volumes of 30 GB and
+6 logical WAL volumes of 2 GB each.
+
+```yaml
+ceph_db_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+```
+
+The `nvme1n1` device will be used as a WAL device. It is possible to use this WAL device for up to 6 OSDs. Each
+OSD is provided with 2 GB.
+
+```yaml
+ceph_wal_devices:
+  nvme1n1:
+    num_osds: 6
+    wal_size: 2 GB
+```
+
+The `sda` device will be used as an OSD device with `nvme0n1` as DB device and `nvme1n1` as WAL device.
+
+```yaml
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+    wal_pv: nvme1n1
+```
+
+
+
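+As a consolidated illustration (a hypothetical node with six data disks and a single NVMe
+shared for DB and WAL; the device names are placeholders), the parameters shown above
+combine into a host vars entry like this:
+
+```yaml
+ceph_db_wal_devices:
+  nvme0n1:
+    num_osds: 6
+    db_size: 30 GB
+    wal_size: 2 GB
+
+ceph_osd_devices:
+  sda:
+    db_pv: nvme0n1
+    wal_pv: nvme0n1
+  sdb:
+    db_pv: nvme0n1
+    wal_pv: nvme0n1
+  # sdc to sdf are configured in the same way
+```
+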
+### Provision the configured layout
+
+1. Commit and push the configuration to your configuration repository.
+2. Establish the changed configuration.
+   Make sure that you do not have any open changes on the manager node either, as these will be discarded during this step.

   ```
   $ osism apply configuration
   $ osism reconciler sync
   ```
@@ -331,14 +387,15 @@ pools are to be created is `ceph.rbd`, then the parameters would be stored in
   This content from the file in the `/tmp` directory is added in the host vars file.
   The previous `ceph_osd_devices` is replaced with the new content.

-5. Push the updated configuration **again** to your configuration repository and re-run:
-
+5. Commit and push the configuration to your configuration repository **again**.
+6. Establish the changed configuration.
+   Make sure that you do not have any open changes on the manager node either, as these will be discarded during this step.

   ```
   $ osism apply configuration
   $ osism reconciler sync
   ```

-6. Finally create the LVM devices.
+7. Finally create the LVM devices.

   ```
   $ osism apply ceph-create-lvm-devices
diff --git a/docs/guides/deploy-guide/services/ceph.mdx b/docs/guides/deploy-guide/services/ceph.mdx
index 6a33607ab9..c4c19e7451 100644
--- a/docs/guides/deploy-guide/services/ceph.mdx
+++ b/docs/guides/deploy-guide/services/ceph.mdx
@@ -11,6 +11,7 @@ import TabItem from '@theme/TabItem';
 In OSISM it is also possible to integrate and use existing Ceph clusters. It is not
 necessary to deploy Ceph with OSISM. If Ceph is deployed with OSISM, it should be
 noted that OSISM does not claim to provide all possible features of Ceph.
+
 Ceph provided with OSISM is intended to provide the storage for Glance, Nova,
 Cinder and Manila. In a specific way that has been implemented by OSISM for years.
 It should be checked in advance whether the way in OSISM the Ceph deployment and the
@@ -22,80 +23,100 @@ open source projects, please refer to

 :::warning

-Before starting the Ceph deployment, the configuration and preparation of the
-OSD devices must be completed. The steps that are required for this can be found in the
-[Ceph Configuration Guide](../../configuration-guide/ceph.md#osd-devices).
+Before starting the Ceph deployment, it is recommended to complete the general Ceph configuration.
+All the preparatory steps are listed in the [Ceph Configuration Guide](../../configuration-guide/ceph).
+
+At least the [preparation](../../configuration-guide/ceph.md#osd-devices) of the necessary LVM2 volumes for the OSD devices must be completed.

 :::

-1. Deploy services.
-   * Deploy [ceph-mon](https://docs.ceph.com/en/quincy/man/8/ceph-mon/) services
+## Deploy Ceph services

-     ```
-     osism apply ceph-mons
-     ```
+* Deploy [ceph-mon](https://docs.ceph.com/en/quincy/man/8/ceph-mon/) services

-   * Deploy ceph-mgr services
+  ```
+  osism apply ceph-mons
+  ```

-     ```
-     osism apply ceph-mgrs
-     ```
+* Deploy ceph-mgr services

-   * Deploy [ceph-osd](https://docs.ceph.com/en/quincy/man/8/ceph-osd/) services
+  ```
+  osism apply ceph-mgrs
+  ```

-     ```
-     osism apply ceph-osds
-     ```
+* Prepare the OSD devices [as described](../../configuration-guide/ceph#osd-devices) in the configuration guide

-   * Generate pools and keys. This step is only necessary for OSISM >= 7.0.0.
+* Deploy [ceph-osd](https://docs.ceph.com/en/quincy/man/8/ceph-osd/) services

-     ```
-     osism apply ceph-pools
-     ```
+  ```
+  osism apply ceph-osds
+  ```

-   * Deploy ceph-crash services
+* Configure custom pools [as described](../../configuration-guide/ceph#configuration-of-custom-ceph-pools) in the configuration guide

-     ```
-     osism apply ceph-crash
-     ```
+* Generate pools and the related keys. This step is only necessary for OSISM >= 7.0.0.

-   :::info
+  ```
+  osism apply ceph-pools
+  ```

-   It's all done step by step here. It is also possible to do this in a single step.
-   This speeds up the entire process and avoids unnecessary restarts of individual
-   services.
+* Deploy ceph-crash services

-
-
-   ```
-   osism apply ceph
-   ```
+  ```
+  osism apply ceph-crash
+  ```
+
+:::info
+
+Everything is done step by step here. It is also possible to do this in a single step.
+This speeds up the entire process and avoids unnecessary restarts of individual
+services.
+
+
+
+```
+osism apply ceph
+```
+
+Generate pools and keys.
+
+```
+osism apply ceph-pools
+```
+
+
+```
+osism apply ceph-base
+```
+
+
-   Generate pools and keys.
+:::
+
+## Install Ceph Clients
+
+1. Get the Ceph keys. This places the necessary keys in `/opt/configuration`.

   ```
-   osism apply ceph-pools
+   osism apply copy-ceph-keys
   ```
-
-
+
+2. Encrypt the fetched keys.
+   It is highly recommended to store the Ceph keys encrypted in the Git repository.

   ```
-   osism apply ceph-base
+   cd /opt/configuration
+   make ansible_vault_encrypt_ceph_keys
   ```
-
-
-   :::
-
-2. Get ceph keys. This places the necessary keys in `/opt/configuration`.
+3. Add the keys permanently to the repository.

   ```
-   osism apply copy-ceph-keys
+   git add **/ceph.*.keyring
+   git commit -m "Add the downloaded Ceph keys to the repository"
   ```
-   After run, these keys must be permanently added to the configuration repository
-   via Git.
-
+
+   Here is an overview of the individual keys:

   ```
   environments/infrastructure/files/ceph/ceph.client.admin.keyring
   environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring
@@ -108,6 +129,8 @@ OSD devices must be completed. The steps that are required for this can be found
   environments/kolla/files/overlays/glance/ceph.client.glance.keyring
   ```

+   :::info
+
   If the `osism apply copy-ceph-keys` fails because the keys are not found in the `/share`
   directory, this can be ignored. The keys of the predefined keys (e.g. for Manila) were
   then not created as they are not used. If you only use Ceph and do not need the predefined
@@ -117,19 +140,22 @@ OSD devices must be completed. The steps that are required for this can be found
   ```yaml title="environments/ceph/configuration.yml"
   ceph_kolla_keys: []
   ```
+
+   :::

-3. After the Ceph keys have been persisted in the configuration repository, the Ceph
+4. After the Ceph keys have been persisted in the configuration repository, the Ceph
   client can be deployed.

   ```
   osism apply cephclient
   ```

-4. Enable and prepare the use of the Ceph dashboard.
+## Enable Ceph Dashboard

-   ```
-   osism apply ceph-bootstrap-dashboard
-   ```
+Enable and prepare the use of the Ceph dashboard.
+
+```
+osism apply ceph-bootstrap-dashboard
+```

 ## RGW service
diff --git a/docs/guides/deploy-guide/services/index.md b/docs/guides/deploy-guide/services/index.md
index a7c4cbc7f7..82cdc441c4 100644
--- a/docs/guides/deploy-guide/services/index.md
+++ b/docs/guides/deploy-guide/services/index.md
@@ -15,14 +15,17 @@ the nodes. How to bootstrap the nodes is documented in the

 When setting up a new cluster, the services are deployed in a specific order.

-1. [Infrastructure](./infrastructure)
-2. [Network](./network)
-3. [Logging & Monitoring](./logging-monitoring)
-4. [Ceph](./ceph)
-5. [OpenStack](./openstack)
+
+1. [Infrastructure](./infrastructure.md)
+2. [Kubernetes](./kubernetes.md)
+3. [Network](./network.md)
+4. [Logging & Monitoring](./logging-monitoring.md)
+5. [Ceph](./ceph.mdx)
+6. [OpenStack](./openstack.md)

 In the examples, the pull of images (if supported by a role) is always run first. While
 this is optional, it is recommended to speed up the execution of the deploy action in the
 second step. This significantly reduces the times required for the deployment time of new
 services.
+
diff --git a/docs/guides/deploy-guide/services/openstack.md b/docs/guides/deploy-guide/services/openstack.md
index beb858d9e2..3e1ce24618 100644
--- a/docs/guides/deploy-guide/services/openstack.md
+++ b/docs/guides/deploy-guide/services/openstack.md
@@ -98,6 +98,7 @@ Not all of the services listed there are supported by OSISM.
 For the command to be usable, a cloud profile for octavia must currently be added
 in the clouds.yml file of the OpenStack environment. The `auth_url` is changed
 accordingly.
+
 ```yaml title="environments/openstack/clouds.yml"
 clouds:
   [...]
@@ -105,6 +106,8 @@ Not all of the services listed there are supported by OSISM.
       auth:
         username: octavia
         project_name: service
+        # Use this URL when kolla_enable_tls_external is set to "no"
+        #auth_url: http://api.testbed.osism.xyz:5000/v3
         auth_url: https://api.testbed.osism.xyz:5000/v3
         project_domain_name: default
         user_domain_name: default
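+        # Hypothetical sanity check once this profile is in place (run from a shell,
+        # not inside this file): `openstack --os-cloud octavia token issue`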