Migrating the {productname} Server to a Containerized Environment

To migrate a legacy {productname} Server (RPM installation) to a container, a new machine is required.

Warning

It is not possible to perform an in-place migration.

Important

Self-trusted GPG keys are not migrated. GPG keys that are trusted only in the RPM database are not migrated, so synchronizing channels with spacewalk-repo-sync can fail.

The administrator must migrate these keys manually from the 4.3 installation to the container host after the actual server migration.

  1. Copy the keys from the 4.3 server to the container host of the new server.

  2. Later, add each key to the migrated server with the command mgradm gpg add <PATH_TO_KEY_FILE>.
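A minimal sketch of these two steps, assuming a hypothetical key file /root/keys/build-key.gpg on the 4.3 server and a working SSH connection from the container host to the legacy server:

    # On the container host: copy the key file from the legacy 4.3 server
    mkdir -p /root/keys
    scp root@<oldserver.fqdn>:/root/keys/build-key.gpg /root/keys/build-key.gpg

    # Add the copied key to the migrated server
    mgradm gpg add /root/keys/build-key.gpg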

The migration procedure currently does not include any hostname renaming functionality. The fully qualified domain name (FQDN) on the new server will remain identical to that on the old server. Therefore, following migration, it will be necessary to manually adjust the DNS records to point to the new server.

Initial Preparation on the Legacy Server

Procedure: Initial preparation on the legacy server
  1. Stop the {productname} services:

    spacewalk-service stop
  2. Stop the PostgreSQL service:

    systemctl stop postgresql

Prepare the SSH Connection

Procedure: Preparing the SSH connection
  1. Ensure that an SSH key exists for root on the new {productnumber} server. If a key does not exist yet, create one with:

    ssh-keygen -t rsa
  2. The SSH configuration and agent should be ready on the new server host for a passwordless connection to the legacy server.

    Note

    To establish a passwordless connection, the migration script relies on an SSH agent running on the new server. If the agent is not active yet, start it by running eval $(ssh-agent). Then add the SSH key to the running agent with ssh-add followed by the path to the private key. You will be prompted to enter the passphrase for the private key during this process. See the sketch after this procedure for an example.

  3. Copy the public SSH key to the legacy {productname} Server (<oldserver.fqdn>) with ssh-copy-id. Replace <oldserver.fqdn> with the FQDN of the legacy server:

    ssh-copy-id <oldserver.fqdn>
    The SSH key will be copied into the legacy server's ~/.ssh/authorized_keys file.
    For more information, see the ssh-copy-id manpage.
  4. Establish an SSH connection from the new server to the legacy {productname} Server to verify that no password is required. There must also not be any problem with the host fingerprint. In case of trouble, remove old fingerprints from the ~/.ssh/known_hosts file and try again. The fingerprint will then be stored in the local ~/.ssh/known_hosts file.
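The following is a minimal sketch of the agent setup and connection test described in the procedure above, assuming the default key path ~/.ssh/id_rsa created by ssh-keygen:

    # Start an SSH agent for the current shell and load the private key
    eval $(ssh-agent)
    ssh-add ~/.ssh/id_rsa

    # Copy the public key to the legacy server, then verify passwordless login
    ssh-copy-id <oldserver.fqdn>
    ssh <oldserver.fqdn> exit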

Perform the Migration

Important

When planning your migration from a legacy {productname} to a containerized {productname}, ensure that your target instance meets or exceeds the specifications of the old setup. This includes, but is not limited to, Memory (RAM), CPU Cores, Storage, and Network Bandwidth.

Procedure: Performing the Migration
  1. This step is optional. If custom persistent storage is required for your infrastructure, use the mgr-storage-server tool.

    • For more information, see mgr-storage-server --help. This tool simplifies creating the container storage and database volumes.

    • Use the command in the following manner:

      mgr-storage-server <storage-disk-device> [<database-disk-device>]

      For example:

      mgr-storage-server /dev/nvme1n1 /dev/nvme2n1
      Note

      This command will create the persistent storage volumes at /var/lib/containers/storage/volumes.

      For more information, see installation-and-upgrade:container-management/persistent-container-volumes.adoc.

  2. Execute the following command to install a new {productname} server. Replace <oldserver.fqdn> with the FQDN of the legacy server:

    mgradm migrate podman <oldserver.fqdn>
  3. Migrate trusted SSL CA certificates.

Important

Trusted SSL CA certificates that were installed as part of an RPM and stored on a legacy {productname} in the /usr/share/pki/trust/anchors/ directory will not be migrated. Because {suse} does not install RPM packages in the container, the administrator must migrate these certificate files manually from the legacy installation after migration:

  1. Copy the file from the legacy server to the new server. For example, as /local/ca.file.

  2. Copy the file into the container with:

    mgradm cp /local/ca.file server:/etc/pki/trust/anchors/
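As a minimal sketch of both steps, assuming a hypothetical certificate file ca.file stored in /usr/share/pki/trust/anchors/ on the legacy server:

    # Copy the certificate file from the legacy server to the container host
    scp root@<oldserver.fqdn>:/usr/share/pki/trust/anchors/ca.file /local/ca.file

    # Copy the file into the container
    mgradm cp /local/ca.file server:/etc/pki/trust/anchors/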
Important

After successfully running the mgradm migrate command, the {salt} setup on all clients will still point to the old legacy server.

To redirect them to the {productnumber} server, it is required to rename the new server at the infrastructure level (DHCP and DNS) to use the same fully qualified domain name and IP address as the legacy server.

Prepare for Kubernetes

Before executing the migration with the mgradm migrate command, it is essential to predefine Persistent Volumes, especially considering that the migration job initiates the container from scratch. For comprehensive guidance on preparing these volumes, see installation-and-upgrade:container-management/persistent-container-volumes.adoc.
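As a quick sanity check before starting the migration job, you can list the predefined Persistent Volumes and any claims to confirm they are available. This is only a sketch; the volume names and namespaces depend on your own cluster setup and are not prescribed here:

    # List the predefined Persistent Volumes and their status
    kubectl get pv

    # List PersistentVolumeClaims in all namespaces and check that they are bound
    kubectl get pvc --all-namespaces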

Migrating

Execute the following command to install a new {productname} server. Replace <oldserver.fqdn> with the FQDN of the old server:

mgradm migrate podman <oldserver.fqdn>

or

mgradm migrate kubernetes <oldserver.fqdn>
Important

After successfully running the mgradm migrate command, the {salt} setup on all clients will still point to the old server. To redirect them to the new server, it is required to rename the new server at the infrastructure level (DHCP and DNS) to use the same FQDN and IP address as the old server.
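After adjusting DHCP and DNS, one way to verify the change is to confirm that the shared FQDN now resolves to the new server's IP address. This is only a hedged verification sketch; <server.fqdn> stands for the FQDN used by both the old and the new server:

    # Check which IP address the FQDN resolves to
    dig +short <server.fqdn>

    # Alternatively, use a simple lookup
    host <server.fqdn>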