Table of Contents

Upgrade Guide

1. oVirt Upgrade Overview

This guide explains how to upgrade the following environments to oVirt 4.3 or 4.4:

  • Self-hosted engine, local database: Both the Data Warehouse database and the Engine database are installed on the Engine.

  • Standalone manager, local database: Both the Data Warehouse database and the Engine database are installed on the Engine.

  • Standalone manager, remote database: Either the Data Warehouse database or the Engine database, or both, are on a separate machine.

Plan any necessary downtime in advance. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended VMs as soon as possible to apply the configuration changes.

Select the appropriate instructions for your environment from the following table. If your Engine and host versions differ (if you have previously upgraded the Engine but not the hosts), follow the instructions that match the Engine’s version.

Table 1. Supported Upgrade Paths

Current Engine version: 4.3
Target Engine version: 4.4
Relevant sections:

  • Self-hosted engine, local database environment: Upgrading a Self-Hosted Engine from oVirt 4.3 to 4.4

  • Local database environment: Upgrading from oVirt 4.3 to 4.4

Current Engine version: 4.2
Target Engine version: 4.3
Relevant sections:

  • Self-hosted engine, local database environment: Upgrading a Self-Hosted Engine from oVirt 4.2 to 4.3

  • Local database environment: Upgrading from oVirt 4.2 to 4.3

2. Upgrading a standalone Engine local database environment

3. Upgrading from oVirt 4.4 to 4.5

Upgrading your environment from 4.4 to 4.5 involves the following steps:

3.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.5.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

You can now update the Engine to the latest version of 4.4.

3.2. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.
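As a quick sanity check after the update completes (not part of the documented steps), you can confirm the installed Engine package version:

    # rpm -q ovirt-engine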

You can now upgrade the Engine to 4.5.

3.3. Upgrading the oVirt Engine from 4.4 to 4.5

oVirt Engine 4.5 is only supported on Enterprise Linux 8.6 or later.

Prerequisites
  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.3 or higher.

  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3 or higher.

Connected hosts and virtual machines can continue to work while the Engine is being upgraded.

If you are installing on RHEL or derivatives, follow Installing on RHEL or derivatives first.

Procedure
  1. Enable the oVirt 4.5 repositories:

    # dnf install -y centos-release-ovirt45

    As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

  2. Enable version 2.3 of the mod_auth_openidc module:

    # dnf module -y enable mod_auth_openidc:2.3

  3. Follow the procedure for updates between minor releases.
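Optionally, before following the minor-release update procedure, you can confirm that the 4.5 repositories are now enabled (a quick check, not part of the documented steps):

    # dnf repolist enabled | grep -i ovirt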

You can now update the hosts.

3.4. Migrating hosts from oVirt 4.4 to 4.5

Prerequisites
  • Hosts for oVirt 4.5 require Enterprise Linux 8.6 or later.

  • oVirt Engine 4.5 is installed and running.

  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.3 or higher.

  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.3 or higher before you start the procedure.

If you are installing on RHEL or derivatives, follow Installing on RHEL or derivatives first.
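Before enabling the new repositories, you can confirm that a host is already on a supported operating system level (a minimal check on an EL-based host):

    # grep -E '^(NAME|VERSION_ID)=' /etc/os-release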

Procedure for Enterprise Linux hosts
  1. Enable the oVirt 4.5 repositories:

    # dnf install -y centos-release-ovirt45

Procedure for oVirt Nodes
  1. Enable the oVirt 4.5 repositories:

    # dnf install centos-release-ovirt45 --enablerepo=extras

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

Common procedure
  1. Follow the procedure for updates between minor releases.

  2. Follow the procedure for updating the cluster compatibility version.

If you are using GlusterFS storage, note that oVirt 4.5 updates Gluster to version 10. Refer to Upgrade procedure to Gluster 10, from Gluster 9.x, 8.x and 7.x for more details.
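After the hosts are upgraded, you can confirm the installed Gluster version before proceeding (a quick check, assuming the gluster CLI is installed on the host):

    # gluster --version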

You can now update the cluster compatibility version.

3.5. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute → Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.
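If you prefer to script this change rather than use the Administration Portal, the same update can also be made through the REST API. A minimal sketch using curl; the FQDN, credentials, cluster ID, and target minor version are placeholders:

    # curl -k -u admin@internal:password -X PUT \
        -H "Content-Type: application/xml" -H "Accept: application/xml" \
        -d '<cluster><version><major>4</major><minor>7</minor></version></cluster>' \
        https://engine.example.com/ovirt-engine/api/clusters/<cluster-id>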

You can now update the cluster compatibility version for virtual machines in the cluster.

3.6. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.
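The REST API route mentioned above can be scripted. A minimal sketch using curl that lists the virtual machines with pending changes and reboots one of them; the FQDN, credentials, and VM ID are placeholders:

    # curl -k -u admin@internal:password -H "Accept: application/xml" \
        "https://engine.example.com/ovirt-engine/api/vms?search=next_run_config_exists%3Dtrue"
    # curl -k -u admin@internal:password -X POST \
        -H "Content-Type: application/xml" -d '<action/>' \
        https://engine.example.com/ovirt-engine/api/vms/<vm-id>/reboot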

Procedure
  1. In the Administration Portal, click Compute → Virtual Machines.

  2. Check which virtual machines require a reboot. In the VMs: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

You can now update the data center compatibility version.

3.7. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute → Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

4. Upgrading from oVirt 4.3 to 4.4

Upgrading your environment from 4.3 to 4.4 involves the following steps:

4.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.4.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

You can now update the Engine to the latest version of 4.3.

4.2. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

You can now upgrade the Engine to 4.4.

4.3. Upgrading the oVirt Engine from 4.3 to 4.4

oVirt Engine 4.4 is only supported on Enterprise Linux versions 8.2 to 8.6. You need to do a clean installation of Enterprise Linux 8.6 and oVirt Engine 4.4, even if you are using the same physical machine that you use to run oVirt Engine 4.3.

The upgrade process requires restoring oVirt Engine 4.3 backup files onto the oVirt Engine 4.4 machine.

Prerequisites
  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3.

  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3.

  • If you use an external CA to sign HTTPS certificates, follow the steps in Replacing the oVirt Engine CA Certificate in the Administration Guide. The backup and restore include the 3rd-party certificate, so you should be able to log in to the Administration portal after the upgrade. Ensure the CA certificate is added to system-wide trust stores of all clients to ensure the foreign menu of virt-viewer works. See BZ#1313379 for more information.

Connected hosts and virtual machines can continue to work while the Engine is being upgraded.

Procedure
  1. Log in to the Engine machine.

  2. Back up the oVirt Engine 4.3 environment.

    # engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log
  3. Copy the backup file to a storage device outside of the oVirt environment.

  4. Install Enterprise Linux 8.6. See Performing a standard RHEL installation for more information.

  5. Complete the steps to install oVirt Engine 4.4, including running the command dnf install ovirt-engine, but do not run engine-setup. See one of the Installing oVirt guides for more information.

  6. Copy the backup file to the oVirt Engine 4.4 machine and restore it.

    # engine-backup --mode=restore --file=backup.bck --provision-all-databases

    If the backup contained grants for extra database users, this command creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See https://access.redhat.com/articles/2686731.

  7. Install optional extension packages if they were installed on the oVirt Engine 4.3 machine.

    # yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc

    The ovirt-engine-extension-aaa-ldap extension is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.

    The configuration for these package extensions must be manually reapplied because they are not migrated as part of the backup and restore process.

  8. Configure the Engine by running the engine-setup command:

    # engine-setup
  9. Decommission the oVirt Engine 4.3 machine if a different machine is used for oVirt Engine 4.4. Two different Engines must not manage the same hosts or storage.

The oVirt Engine 4.4 is now installed, with the cluster compatibility version set to 4.2 or 4.3, whichever was the preexisting cluster compatibility version. Now you need to upgrade the hosts in your environment to oVirt 4.4, after which you can change the cluster compatibility version to 4.4.

You can now update the hosts.

4.4. Migrating hosts and virtual machines from oVirt 4.3 to 4.4

You can migrate hosts and virtual machines from oVirt 4.3 to 4.4 such that you minimize the downtime of virtual machines in your environment.

This process requires migrating all virtual machines from one host so as to make that host available to upgrade to oVirt 4.4. After the upgrade, you can reattach the host to the Engine.

When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

CPU-passthrough virtual machines might not migrate properly from oVirt 4.3 to oVirt 4.4.

oVirt 4.3 and oVirt 4.4 are based on EL 7 and EL 8, respectively, which have different kernel versions with different CPU flags and microcodes. This can cause problems in migrating CPU-passthrough virtual machines.
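One way to gauge the difference before migrating such virtual machines is to compare the kernel version and CPU flags reported on a 4.3 (EL 7) host and a 4.4 (EL 8) host; an informal check, run on each host:

    # uname -r
    # grep -m1 '^flags' /proc/cpuinfo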

Prerequisites
  • Hosts for oVirt 4.4 require Enterprise Linux versions 8.2 to 8.6. A clean installation of Enterprise Linux 8.6, or oVirt Node 4.4 is required, even if you are using the same physical machine that you use to run hosts for oVirt 4.3.

  • oVirt Engine 4.4 is installed and running.

  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.2 or 4.3. All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3 before you start the procedure.

Procedure
  1. Pick a host to upgrade and migrate that host’s virtual machines to another host in the same cluster. You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide.

  2. Put the host into maintenance mode and remove the host from the Engine. For more information, see Removing a Host in the Administration Guide.

  3. Install Enterprise Linux 8.6, or oVirt Node 4.4. For more information, see Installing Hosts for oVirt in one of the Installing oVirt guides.

  4. Install the appropriate packages to enable the host for oVirt 4.4. For more information, see Installing Hosts for oVirt in one of the Installing oVirt guides.

  5. Add this host to the Engine, assigning it to the same cluster. You can now migrate virtual machines onto this host. For more information, see Adding Standard Hosts to the Engine in one of the Installing oVirt guides.

Repeat these steps to migrate virtual machines and upgrade hosts for the rest of the hosts in the same cluster, one by one, until all are running oVirt 4.4.

4.5. Upgrading oVirt Node while preserving local storage

Environments with local storage cannot migrate virtual machines to a host in another cluster because the local storage is not shared with other storage domains. To upgrade oVirt Node 4.3 hosts that have a local storage domain, reinstall the host while preserving the local storage, create a new local storage domain in the 4.4 environment, and import the previous local storage into the new domain.

Prerequisites
  • oVirt Engine 4.4 is installed and running.

  • The compatibility level of the data center and cluster to which the host belongs is set to 4.2 or 4.3.

Procedure
  1. Ensure that the oVirt Node 4.3 host’s local storage is in maintenance mode before starting this process. Complete these steps:

    1. Open the Data Centers tab.

    2. Click the Storage tab in the Details pane and select the storage domain in the results list.

    3. Click Maintenance.

  2. Reinstall the oVirt Node, as described in Installing oVirt Node in the Installation Guide.

    When selecting the device on which to install oVirt Node from the Installation Destination screen, do not select the device(s) storing the virtual machines. Only select the device where the operating system should be installed.

    If you are using Kickstart to install the host, ensure that you preserve the devices containing the virtual machines by adding the following to the Kickstart file, replacing `device` with the relevant device.

    # clearpart --all --drives=device

    For more information on using Kickstart, see Kickstart references in Red Hat Enterprise Linux 8 Performing an advanced RHEL installation.

  3. On the reinstalled host, create a directory, for example /data, in which to recover the previous environment.

    # mkdir /data
  4. Mount the previous local storage in the new directory. In our example, /dev/sdX1 is the local storage:

    # mount /dev/sdX1 /data
  5. Set the following permissions for the new directory.

    # chown -R 36:36 /data
    # chmod -R 0755 /data
  6. oVirt recommends that you also automatically mount the local storage via /etc/fstab in case the server requires a reboot:

    # blkid | grep -i sdX1
    /dev/sdX1: UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" TYPE="ext4"
    # vi /etc/fstab
    UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" /data    ext4    defaults     0       0
  7. In the Administration Portal, create a data center and select Local in the Storage Type drop-down menu.

  8. Configure a cluster on the new data center. See Creating a New Cluster in the Administration Guide for more information.

  9. Add the host to the Engine. See Adding Standard Hosts to the oVirt Manager in one of the Installing oVirt guides for more information.

  10. On the host, create a new directory that will be used to create the initial local storage domain. For example:

    # mkdir -p /localfs
    # chown 36:36 /localfs
    # chmod -R 0755 /localfs
  11. In the Administration Portal, open the Storage tab and click New Domain to create a new local storage domain.

  12. Set the name to localfs and set the path to /localfs.

  13. Once the local storage is active, click Import Domain and set the domain’s details. For example, define Data as the name, Local on Host as the storage type and /data as the path.

  14. Click OK to confirm the message that appears informing you that storage domains are already attached to the data center.

  15. Activate the new storage domain:

    1. Open the Data Centers tab.

    2. Click the Storage tab in the details pane and select the new data storage domain in the results list.

    3. Click Activate.

  16. Once the new storage domain is active, import the virtual machines and their disks:

    1. In the Storage tab, select data.

    2. Select the VM Import tab in the details pane, select the virtual machines and click Import. See Importing Virtual Machines from a Data Domain in the Virtual Machine Management Guide for more details.

  17. Once you have ensured that all virtual machines have been successfully imported and are functioning properly, you can move localfs to maintenance mode.

  18. Click the Storage tab and select localfs from the results list.

    1. Click the Data Center tab in the details pane.

    2. Click Maintenance, then click OK to move the storage domain to maintenance mode.

    3. Click Detach. The Detach Storage confirmation window opens.

    4. Click OK.

You have now upgraded the host to version 4.4, created a new local storage domain, and imported the 4.3 storage domain and its virtual machines.

4.6. Upgrading oVirt Node while preserving Gluster storage

In environments that use Gluster storage, you can back up the Gluster configuration and restore it after the oVirt Node upgrade. Try to keep workloads on all virtual machines using Gluster storage as light as possible to shorten the time required to upgrade. If there are highly write-intensive workloads, expect more time to restore.

Prerequisites
  • If there are geo-replication schedules on the storage domains, remove those schedules to avoid upgrade conflicts.

  • Ensure that no geo-replication syncs are currently running (see the check after this list).

  • Additional disk space of 100 GB is required on 3 hosts for creating a new volume for the new oVirt Node 4.4 Engine deployment.

  • All data centers and clusters in the environment must have a cluster compatibility level of 4.3 before you start the procedure.
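For the geo-replication prerequisites above, you can confirm that no sessions or syncs are active from one of the hosts (a quick check using the gluster CLI):

    # gluster volume geo-replication status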

Restriction
  • Network-Bound Disk Encryption (NBDE) is supported only with new deployments with oVirt 4.4. This feature cannot be enabled during the upgrade.

Procedure
  1. Create a new Gluster volume for oVirt Node 4.4 Engine deployment.

    1. Create a new brick on each host for the new oVirt Node 4.4 self-hosted engine virtual machine (VM).

    2. If you have a spare disk in the setup, follow the document Create Volume from the web console.

    3. If there is enough space for a new 100 GB Engine brick in the existing Volume Group (VG), it can be used as a new Engine Logical Volume (LV).

      Run the following commands on all the hosts, unless specified otherwise explicitly:

    4. Check the free size of the Volume Group (VG).

      # vgdisplay <VG_NAME> | grep -i free
    5. Create one more Logical Volume in this VG.

      # lvcreate -n gluster_lv_newengine -L 100G <EXISTING_VG>
    6. Format the new Logical Volume (LV) as XFS.

      # mkfs.xfs  <LV_NAME>
    7. Create the mount point for the new brick.

      # mkdir /gluster_bricks/newengine
    8. Create an entry corresponding to the newly created filesystem in /etc/fstab and mount the filesystem (a sample entry is shown at the end of this procedure).

    9. Set the SELinux Labels on the brick mount points.

      # semanage fcontext -a -t glusterd_brick_t /gluster_bricks/newengine
       restorecon -Rv /gluster_bricks/newengine
    10. Create a new gluster volume by executing the gluster command on one of the hosts in the cluster:

      # gluster volume create newengine replica 3 host1:/gluster_bricks/newengine/newengine host2:/gluster_bricks/newengine/newengine host3:/gluster_bricks/newengine/newengine
    11. Set the required volume options on the newly created volume. Run the following commands on one of the hosts in the cluster:

      # gluster volume set newengine group virt
       gluster volume set newengine network.ping-timeout 30
       gluster volume set newengine cluster.granular-entry-heal enable
       gluster volume set newengine network.remote-dio off
       gluster volume set newengine performance.strict-o-direct on
       gluster volume set newengine storage.owner-uid 36
       gluster volume set newengine storage.owner-gid 36
    12. Start the newly created Gluster volume. Run the following command on one of the hosts in the cluster.

      # gluster volume start newengine
  2. Back up the Gluster configuration on all oVirt Node 4.3 nodes using the backup playbook.

    1. The backup playbook is available with the latest version of oVirt Node 4.3. If this playbook is not available, create a playbook and inventory file:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/archive_config.yml

      Example:

       all:
        hosts:
          host1:
          host2:
          host3:
        vars:
          backup_dir: /archive
          nbde_setup: false
          upgrade: true
    2. Edit the backup inventory file with correct details.

        Common variables
        backup_dir -> Absolute path to the directory that contains the extracted contents of the backup archive
        nbde_setup -> Set to false, as the oVirt 4.3 setup does not support NBDE
        upgrade -> Default value is true. This value has no effect during backup
    3. Switch to the directory and execute the playbook.

      ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags backupfiles
    4. The backup configuration tar file is generated under /root with the name oVirt Node-<HOSTNAME>-backup.tar.gz. On all the hosts, copy the backup configuration tar file to the backup host.

  3. Using the Manager Administration Portal, migrate the VMs running on the first host to other hosts in the cluster.

  4. Back up the Engine configuration:

    # engine-backup --mode=backup --scope=all --file=<backup-file.tar.gz> --log=<logfile>

    Before creating a backup, do the following:

    • Enable Global Maintenance for the self-hosted engine (SHE).

    • Log in to the Engine VM using SSH and stop the ovirt-engine service.

    • Copy the backup file from the self-hosted engine VM to the remote host.

    • Shut down the Engine.

  5. Check for any pending self-heal tasks on all the replica 3 volumes. Wait for the heal to be completed.

  6. Run the following command on one of the hosts:

    # gluster volume heal <volume> info summary
  7. Stop the glusterfs brick process and unmount all the bricks on the first host to maintain file system consistency. Run the following on the first host:

    # pkill glusterfsd; pkill glusterfs
    # systemctl stop glusterd
    # umount /gluster_bricks/*
  8. Reinstall the host with the oVirt Node 4.4 ISO, formatting only the OS disk.

    Make sure that the installation does not format the other disks, as bricks are created on top of those disks.

  9. Once the node is up following the oVirt Node 4.4 installation reboot, subscribe to oVirt Node 4.4 repos as outlined in the Installation Guide, or install the downloaded oVirt Node 4.4 appliance.

    # yum install <appliance>
  10. Disable the devices used for Gluster bricks.

    1. Create the new SSH private and public key pairs.

    2. Establish SSH public key authentication (passwordless SSH) to the same host, using frontend and backend network FQDN.

    3. Create the inventory file:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/blacklist_inventory.yml

      Example:

       hc_nodes:
        hosts:
          host1-backend-FQDN.example.com:
            blacklist_mpath_devices:
               - sda
               - sdb
    4. Run the playbook

      ansible-playbook -i blacklist_inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml --tags blacklistdevices*
  11. Using scp, copy the Engine backup and host configuration tar files from the backup host to the newly installed host, and untar the content.

  12. Restore the Gluster configuration files.

    1. Extract the contents of the Gluster configuration files

       # mkdir /archive
       # tar -xvf /root/ovirt-host-host1.example.com.tar.gz -C /archive/
    2. Edit the inventory file to perform restoration of the configuration files. The Inventory file is available at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/archive_config_inventory.yml

      Example playbook content:

       all:
         hosts:
           host1.example.com:
         vars:
           backup_dir: /archive
           nbde_setup: false
           upgrade: true

      Use only one host under the 'hosts' section of the restoration playbook.
    3. Execute the playbook to restore configuration files

      ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags restorefiles
  13. Perform the Engine deployment with the option --restore-from-file pointing to the backed-up archive from the Engine. This Engine deployment can be done interactively using the hosted-engine --deploy command, provided the storage corresponds to the newly created Engine volume. It can also be done using ovirt-ansible-hosted-engine-setup in an automated procedure. The following procedure is an automated method for deploying a HostedEngine VM using the backup:

    1. Create a playbook for HostedEngine deployment in the newly installed host:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he.yml

      - name: Deploy oVirt hosted engine
        hosts: localhost
        roles:
          - role: ovirt.hosted_engine_setup
    2. Update the HostedEngine related information using the template file:

      /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json

      Example:

      # cat /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/he_gluster_vars.json
      
      {
        "he_appliance_password": "<password>",
        "he_admin_password": "<password>",
        "he_domain_type": "glusterfs",
        "he_fqdn": "<hostedengine.example.com>",
        "he_vm_mac_addr": "<00:18:15:20:59:01>",
        "he_default_gateway": "<19.70.12.254>",
        "he_mgmt_network": "ovirtmgmt",
        "he_storage_domain_name": "HostedEngine",
        "he_storage_domain_path": "</newengine>",
        "he_storage_domain_addr": "<host1.example.com>",
        "he_mount_options": "backup-volfile-servers=<host2.example.com>:<host3.example.com>",
        "he_bridge_if": "<eth0>",
        "he_enable_hc_gluster_service": true,
        "he_mem_size_MB": "16384",
        "he_cluster": "Default",
        "he_restore_from_file": "/root/engine-backup.tar.gz",
        "he_vcpus": 4
      }
      • In the above he_gluster_vars.json, there are two important values: “he_restore_from_file” and “he_storage_domain_path”. The first option, “he_restore_from_file”, should point to the absolute file name of the Engine backup archive copied to the local machine. The second option, “he_storage_domain_path”, should refer to the newly created Gluster volume.

      • Also note that the previous Engine VM, which ran the older oVirt version, is down and will be discarded. The MAC address and FQDN of the older Engine VM can be reused for the new Engine.

    3. For static Engine network configuration, add more options as listed below:

        “he_vm_ip_addr”:  “<engine VM ip address>”
        “he_vm_ip_prefix”:  “<engine VM ip prefix>”
        “he_dns_addr”:  “<engine VM DNS server>”
        “he_default_gateway”:  “<engine VM default gateway>”

      If no specific DNS is available, include two more options: “he_vm_etc_hosts”: true and “he_network_test”: “ping”.

    4. Run the playbook to deploy the HostedEngine:

      # cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment
      # ansible-playbook he.yml --extra-vars "@he_gluster_vars.json"
    5. Wait for the self-hosted engine deployment to complete.

      If there are any failures during the self-hosted engine deployment, examine the log messages under /var/log/ovirt-hosted-engine-setup to find and fix the problem. Then clean up the failed deployment using the ovirt-hosted-engine-cleanup command and rerun the deployment.

  14. Log in to the oVirt 4.4 Administration Portal on the newly installed Engine. Make sure all the hosts are in the 'Up' state, and wait for the self-heal on the Gluster volumes to complete.

  15. Upgrade the next host

    1. Move the next host (ideally, the next one in order) to Maintenance mode from the Administration Portal. Stop the Gluster service while moving this host to Maintenance mode.

    2. From the command line of the host, unmount Gluster bricks

      # umount /gluster_bricks/*
    3. Reinstall this host with oVirt Node 4.4.

      Make sure that the installation does not format the other disks, as bricks are created on top of those disks.

    4. If multipath configuration is not available on the newly installed host, disable the Gluster devices. The inventory file is already created in the first host as part of the step Disable the devices used for Gluster bricks.

      1. Set up SSH public key authentication from the first host to the newly installed host.

      2. Update the inventory with the new host name.

      3. Execute the playbook.

    5. Copy the Gluster configuration tar files from the backup host to the newly installed host and untar the content.

    6. Restore the Gluster configuration on the newly installed host by executing the playbook as described in the step Restore the Gluster configuration files.

      Edit the playbook on the newly installed host and execute it as described in the step Perform the Engine deployment with the option --restore-from-file. Do not change the hostname, and execute it on the same host.

    7. Reinstall the host from the Administration Portal. First, copy the authorized key from the first deployed oVirt Node 4.4 host:

      # scp root@host1.example.com:/root/.ssh/authorized_keys /root/.ssh/
      1. In the Administration Portal, the host will be in 'Maintenance' mode. Go to Compute → Hosts → Installation → Reinstall.

      2. In the New Host dialog box, open the HostedEngine tab and select the Deploy self-hosted engine deployment action.

      3. Wait for the host to reach Up status.

    8. Make sure that there are no errors in the volumes related to GFID mismatch. If there are any errors, resolve them.

      # grep -i "gfid mismatch" /var/log/glusterfs/*
  16. Repeat the step Upgrade the next host for all the oVirt Nodes in the cluster.

  17. (optional) If a separate Gluster logical network exists in the cluster, attach the Gluster logical network to the required interface on each host.

  18. Remove the old Engine storage domain. Identify the old Engine storage domain by the name hosted_storage with no gold star next to it, listed under Storage → Domains.

    1. Go to the Storage → Domains → hosted_storage → Data Center tab, and select Maintenance.

    2. Wait for the storage domain to move into Maintenance mode.

    3. Once the storage domain moves into Maintenance mode, click Detach. The storage domain moves to the unattached state.

    4. Select the unattached storage domain, click Remove, and confirm OK.

  19. Stop and remove the old Engine volume.

    1. Go to Storage → Volumes, and select the old Engine volume. Click Stop, and confirm OK.

    2. Select the same volume, click Remove, and confirm OK.

  20. Update the cluster compatibility version.

    1. Go to Compute → Clusters and select the Default cluster, click Edit, update the Compatibility Version to 4.4, and click OK.

      There will be a warning for changing the compatibility version, which requires VMs on the cluster to be restarted. Click OK to confirm.

  21. New Gluster volume options are available with oVirt Node 4.4; apply them to all the volumes. Execute the following on one of the nodes in the cluster:

    # for vol in $(gluster volume list); do gluster volume set $vol group virt; done
  22. Remove the archives and the extracted contents of the backup configuration files on all nodes.
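For the earlier sub-step that creates the /etc/fstab entry for the new engine brick and mounts it (step 1 of this procedure), a minimal sketch is shown here; the volume group name and mount options are illustrative and must match your environment:

    # echo '/dev/<EXISTING_VG>/gluster_lv_newengine /gluster_bricks/newengine xfs inode64,noatime 0 0' >> /etc/fstab
    # mount /gluster_bricks/newengine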

Creating an additional Gluster volume using the Web Console
  1. Log in to the Engine web console.

  2. Go to Virtualization → Hosted Engine and click Manage Gluster.

  3. Click Create Volume. In the Create Volume window, do the following:

    1. In the Hosts tab, select three different oVirt Nodes with unused disks and click Next.

    2. In the Volumes tab, specify the details of the volume you want to create and click Next.

    3. In the Bricks tab, specify the details of the disks to be used to create the volume and click Next.

    4. In the Review tab, check the generated configuration file for any incorrect information. When you are satisfied, click Deploy.

You can now update the cluster compatibility version.

4.7. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute → Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

You can now update the cluster compatibility version for virtual machines in the cluster.

4.8. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute → Virtual Machines.

  2. Check which virtual machines require a reboot. In the VMs: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

You can now update the data center compatibility version.

4.9. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute → Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

5. Upgrading from oVirt 4.2 to 4.3

Upgrading your environment from 4.2 to 4.3 involves the following steps:

5.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.3.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

5.2. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

5.3. Upgrading the oVirt Engine from 4.2 to 4.3

You need to be logged into the machine that you are upgrading.

If the upgrade fails, the engine-setup command attempts to restore your oVirt Engine installation to its previous state. For this reason, do not remove the previous version’s repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation.

Procedure
  1. Enable the oVirt 4.3 repositories:

    All other repositories remain the same across oVirt releases.

  2. Update the setup packages:

    # yum update ovirt\*setup\*
  3. Run engine-setup and follow the prompts to upgrade the oVirt Engine:

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully
  4. Update the base operating system:

    # yum update

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the upgrade.

The Engine is now upgraded to version 4.3.

You can now update the hosts.

5.4. Updating All Hosts in a Cluster

You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of oVirt. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.

Update one cluster at a time.

Limitations
  • On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.

  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.

  • In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.

  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.

  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.

Procedure
  1. In the Administration Portal, click Compute → Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.

  2. Click Upgrade.

  3. Select the hosts to update, then click Next.

  4. Configure the options:

    • Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.

    • Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.

    • Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Engine to check for host updates less frequently than the default.

    • Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.

    • Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.

  5. Click Next.

  6. Review the summary of the hosts and virtual machines that are affected.

  7. Click Upgrade.

  8. A cluster upgrade status screen displays with a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.

You can track the progress of host updates:

  • in the Compute → Clusters view, the Upgrade Status column displays a progress bar that shows the percentage of completion.

  • in the Compute → Hosts view

  • in the Events section of the Notification Drawer.

You can track the progress of individual virtual machine migrations in the Status column of the Compute  Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines.

5.5. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute → Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

5.6. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute → Virtual Machines.

  2. Check which virtual machines require a reboot. In the VMs: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

5.7. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute → Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now.

5.8. Replacing SHA-1 Certificates with SHA-256 Certificates

oVirt 4.5 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. Newly installed systems do not require any special steps to enable oVirt’s public key infrastructure (PKI) to use SHA-256 signatures.

Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error-prone and time-consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide.

Preventing Warning Messages from Appearing in the Browser

  1. Log in to the Engine machine as the root user.

  2. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:

    # cat /etc/pki/ovirt-engine/openssl.conf

    If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:

    # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
    # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
  3. Define the certificate that should be re-signed:

    # names="apache"
  4. On the Engine, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:

    # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
    # for name in $names; do
        subject="$(
            openssl \
                x509 \
                -in /etc/pki/ovirt-engine/certs/"${name}".cer \
                -noout \
                -subject \
                -nameopt compat \
            | sed \
                's;subject=\(.*\);\1;' \
        )"
       /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
            --name="${name}" \
            --password=mypass \ <1>
            --subject="${subject}" \
            --san=DNS:"${ENGINE_FQDN}" \
            --keep-key
    done
    1 Do not change the password value.
  5. Restart the httpd service:

    # systemctl restart httpd
  6. Connect to the Administration Portal to confirm that the warning no longer appears.

  7. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).

Replacing All Signed Certificates with SHA-256

  1. Log in to the Engine machine as the root user.

  2. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:

    # cat /etc/pki/ovirt-engine/openssl.conf

    If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:

    # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
    # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
  3. Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new:

    # cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."$(date +"%Y%m%d%H%M%S")"
    # openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256
  4. Replace the existing certificate with the new certificate:

    # mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem
  5. Define the certificates that should be re-signed:

    # names="engine apache websocket-proxy jboss imageio-proxy"

    If you replaced the oVirt Engine SSL Certificate after the upgrade, run the following instead:

    # names="engine websocket-proxy jboss imageio-proxy"

    For more details see Replacing the oVirt Engine CA Certificate in the Administration Guide.

  6. On the Engine, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:

    # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
    # for name in $names; do
        subject="$(
            openssl \
                x509 \
                -in /etc/pki/ovirt-engine/certs/"${name}".cer \
                -noout \
                -subject \
                -nameopt compat \
            | sed \
                's;subject=\(.*\);\1;' \
        )"
       /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
            --name="${name}" \
            --password=mypass \ <1>
            --subject="${subject}" \
            --san=DNS:"${ENGINE_FQDN}" \
            --keep-key
    done
    1 Do not change the password value.
  7. Restart the following services:

    # systemctl restart httpd
    # systemctl restart ovirt-engine
    # systemctl restart ovirt-websocket-proxy
    # systemctl restart ovirt-imageio
  8. Connect to the Administration Portal to confirm that the warning no longer appears.

  9. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).

  10. Enroll the certificates on the hosts. Repeat the following procedure for each host.

    1. In the Administration Portal, click Compute  Hosts.

    2. Select the host and click Management  Maintenance and OK.

    3. Once the host is in maintenance mode, click Installation  Enroll Certificate.

    4. Click Management  Activate.

6. Upgrading a standalone Engine remote database environment

7. Upgrading a Remote Database Environment from oVirt 4.4 to 4.5

Upgrading your environment from 4.4 to 4.5 involves the following steps:

7.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.5.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

You can now update the Engine to the latest version of 4.4.

7.2. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.
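    If you are unsure whether a reboot is required, the needs-restarting utility from the yum-utils package (install the package first if it is not already present) reports whether the running kernel or core libraries are outdated:

    # needs-restarting -r

    The command prints a message and returns a non-zero exit status when a reboot is required.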

You can now upgrade the Engine to 4.5.

7.3. Upgrading the oVirt Engine from 4.4 to 4.5

oVirt Engine 4.5 is only supported on Enterprise Linux 8.6 or later.

Prerequisites
  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.3 or higher.

  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3 or higher.

Connected hosts and virtual machines can continue to work while the Engine is being upgraded.

If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.

Procedure
  1. Enable the oVirt 4.5 repositories:

    # dnf install -y centos-release-ovirt45

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

  2. Enable version 2.3 of the mod_auth_openidc module:

    # dnf module -y enable mod_auth_openidc:2.3

Then follow the procedure for updates between minor releases.

You can now update the hosts.

7.4. Migrating hosts from oVirt 4.4 to 4.5

Prerequisites
  • Hosts for oVirt 4.5 require Enterprise Linux 8.6 or later.

  • oVirt Engine 4.5 is installed and running.

  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.3 or higher.

  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.3 or higher before you start the procedure.

If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.

Procedure for Enterprise Linux hosts
  1. Enable the oVirt 4.5 repositories:

    # dnf install -y centos-release-ovirt45

Procedure for oVirt Nodes
  1. Enable the oVirt 4.5 repositories:

    # dnf install centos-release-ovirt45 --enablerepo=extras

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

Common procedure
  1. Follow the procedure for updates between minor releases.

  2. Follow the procedure for updating the cluster compatibility version.

If you are using GlusterFS Storage, note that oVirt 4.5 updates Gluster to version 10. For more details, refer to Upgrade procedure to Gluster 10, from Gluster 9.x, 8.x and 7.x.
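If you want to confirm which Gluster version a host is running after the update, you can check it directly on the host, for example:

# gluster --version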

You can now update the cluster compatibility version.

7.5. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute  Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

You can now update the cluster compatibility version for virtual machines in the cluster.

7.6. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon (pendingchanges).

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute  Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

You can now update the data center compatibility version.

7.7. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute  Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

8. Upgrading a Remote Database Environment from oVirt 4.3 to 4.4

Upgrading your environment from 4.3 to 4.4 involves the following steps:

8.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.4.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

You can now update the Engine to the latest version of 4.3.

8.2. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

You can now upgrade the Engine to 4.4.

8.3. Upgrading the oVirt Engine from 4.3 to 4.4

oVirt Engine 4.4 is only supported on Enterprise Linux versions 8.2 to 8.6. You need to do a clean installation of Enterprise Linux 8.6 and oVirt Engine 4.4, even if you are using the same physical machine that you use to run oVirt Engine 4.3.

The upgrade process requires restoring oVirt Engine 4.3 backup files onto the oVirt Engine 4.4 machine.

Prerequisites
  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3.

  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3.

  • If you use an external CA to sign HTTPS certificates, follow the steps in Replacing the oVirt Engine CA Certificate in the Administration Guide. The backup and restore include the 3rd-party certificate, so you should be able to log in to the Administration portal after the upgrade. Ensure the CA certificate is added to system-wide trust stores of all clients to ensure the foreign menu of virt-viewer works. See BZ#1313379 for more information.

Connected hosts and virtual machines can continue to work while the Engine is being upgraded.

Procedure
  1. Log in to the Engine machine.

  2. Back up the oVirt Engine 4.3 environment.

    # engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log
  3. Copy the backup file to a storage device outside of the oVirt environment.

  4. Install Enterprise Linux 8.6. See Performing a standard RHEL installation for more information.

  5. Complete the steps to install oVirt Engine 4.4, including running the command dnf install ovirt-engine, but do not run engine-setup. See one of the Installing oVirt guides for more information.

  6. Copy the backup file to the oVirt Engine 4.4 machine and restore it.

    # engine-backup --mode=restore --file=backup.bck --provision-all-databases

    If the backup contained grants for extra database users, this command creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system (an example sketch follows this procedure). See https://access.redhat.com/articles/2686731.

  7. Install optional extension packages if they were installed on the oVirt Engine 4.3 machine.

    # yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc

    The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.

    The configuration for these package extensions must be manually reapplied because they are not migrated as part of the backup and restore process.

  8. Configure the Engine by running the engine-setup command:

    # engine-setup
  9. Decommission the oVirt Engine 4.3 machine if a different machine is used for oVirt Engine 4.4. Two different Engines must not manage the same hosts or storage.

The oVirt Engine 4.4 is now installed, with the cluster compatibility version set to 4.2 or 4.3, whichever was the preexisting cluster compatibility version.
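If the restore created extra database users with random passwords, as noted in step 6 above, the following hedged sketch shows one way to set a known password for such a user. The user name example_user, the password, and the database name engine are placeholders; adjust them for your environment:

# su - postgres -c "psql -d engine -c \"ALTER ROLE example_user WITH PASSWORD 'new_password';\""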

Now you need to upgrade the remote databases in your environment.

engine-setup also stops the Data Warehouse service on the remote Data Warehouse machine.

If you intend to postpone the next parts of this procedure, log in to the Data Warehouse machine and start the Data Warehouse service:

# systemctl start ovirt-engine-dwhd.service

8.4. Upgrading the remote Data Warehouse service and database

Run this procedure on the remote machine with the Data Warehouse service and database.

Note that part of this procedure requires you to install Enterprise Linux 8.6 or oVirt Node 4.4.

Prerequisites
  • You are logged in to the Data Warehouse machine.

  • You have a storage device outside the oVirt environment on which to store the backup file.

Procedure
  1. Back up the Data Warehouse machine.

    Grafana is not supported in oVirt 4.3, but in oVirt 4.4 this command also includes the Grafana service and the Grafana database in the backup.

    # engine-backup --file=<backupfile>
  2. Copy the backup file to a storage device.

  3. Stop and disable the Data Warehouse service:

    # systemctl stop ovirt-engine-dwhd
    # systemctl disable ovirt-engine-dwhd
  4. Reinstall the Data Warehouse machine with Enterprise Linux 8.6, or oVirt Node 4.4.

  5. Prepare a PostgreSQL database. For information, see Preparing a Remote PostgreSQL Database in Installing oVirt as a standalone Engine with remote databases.

  6. Enable the correct repositories on the server and install the Data Warehouse service. For detailed instructions, see Installing and Configuring Data Warehouse on a Separate Machine for oVirt 4.4. Complete the steps in that procedure up to and including the dnf install ovirt-engine-dwh-setup command. Then continue to the next step in this procedure.

  7. Copy the backup file from the storage device to the Data Warehouse machine.

  8. Restore the backup file:

    # engine-backup --mode=restore --file=backup.bck --provision-all-databases
  9. On the Data Warehouse machine, run the engine-setup command:

    # engine-setup
  10. On the Engine machine, restart the Engine to connect it to the Data Warehouse database:

    # systemctl restart ovirt-engine
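As an optional check, not part of the original procedure, you can confirm on the Data Warehouse machine that the ovirt-engine-dwhd service started successfully:

# systemctl status ovirt-engine-dwhd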

You can now update the hosts.

8.5. Migrating hosts and virtual machines from oVirt 4.3 to 4.4

You can migrate hosts and virtual machines from oVirt 4.3 to 4.4 while minimizing virtual machine downtime in your environment.

This process requires migrating all virtual machines off one host so that the host is available to upgrade to oVirt 4.4. After the upgrade, you can reattach the host to the Engine.

When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host, to avoid accidental initialization of these disks and the potential data loss that would result.

CPU-passthrough virtual machines might not migrate properly from oVirt 4.3 to oVirt 4.4.

oVirt 4.3 and oVirt 4.4 are based on EL 7 and EL 8, respectively, which have different kernel versions with different CPU flags and microcodes. This can cause problems in migrating CPU-passthrough virtual machines.

Prerequisites
  • Hosts for oVirt 4.4 require Enterprise Linux versions 8.2 to 8.6. A clean installation of Enterprise Linux 8.6, or oVirt Node 4.4 is required, even if you are using the same physical machine that you use to run hosts for oVirt 4.3.

  • oVirt Engine 4.4 is installed and running.

  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.2 or 4.3. All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3 before you start the procedure.

Procedure
  1. Pick a host to upgrade and migrate that host’s virtual machines to another host in the same cluster. You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide.

  2. Put the host into maintenance mode and remove the host from the Engine. For more information, see Removing a Host in the Administration Guide.

  3. Install Enterprise Linux 8.6, or oVirt Node 4.4. For more information, see Installing Hosts for oVirt in one of the Installing oVirt guides.

  4. Install the appropriate packages to enable the host for oVirt 4.4. For more information, see Installing Hosts for oVirt in one of the Installing oVirt guides.

  5. Add this host to the Engine, assigning it to the same cluster. You can now migrate virtual machines onto this host. For more information, see Adding Standard Hosts to the Engine in one of the Installing oVirt guides.

Repeat these steps to migrate virtual machines and upgrade hosts for the rest of the hosts in the same cluster, one by one, until all are running oVirt 4.4.

8.6. Upgrading oVirt Node while preserving local storage

Environments with local storage cannot migrate virtual machines to a host in another cluster because the local storage is not shared with other storage domains. To upgrade oVirt Node 4.3 hosts that have a local storage domain, reinstall the host while preserving the local storage, create a new local storage domain in the 4.4 environment, and import the previous local storage into the new domain.

Prerequisites
  • oVirt Engine 4.4 is installed and running.

  • The compatibility level of the data center and cluster to which the host belongs is set to 4.2 or 4.3.

Procedure
  1. Ensure that the local storage domain on the oVirt Node 4.3 host is in maintenance mode before starting this process. Complete these steps:

    1. Open the Data Centers tab.

    2. Click the Storage tab in the Details pane and select the storage domain in the results list.

    3. Click Maintenance.

  2. Reinstall the oVirt Node, as described in Installing oVirt Node in the Installation Guide.

    When selecting the device on which to install oVirt Node from the Installation Destination screen, do not select the device(s) storing the virtual machines. Only select the device where the operating system should be installed.

    If you are using Kickstart to install the host, ensure that you preserve the devices containing the virtual machines by adding the following to the Kickstart file, replacing `device` with the relevant device.

    # clearpart --all --drives=device

    For more information on using Kickstart, see Kickstart references in Red Hat Enterprise Linux 8 Performing an advanced RHEL installation.

  3. On the reinstalled host, create a directory, for example /data, in which to recover the previous environment.

    # mkdir /data
  4. Mount the previous local storage in the new directory. In our example, /dev/sdX1 is the local storage:

    # mount /dev/sdX1 /data
  5. Set the following permissions for the new directory.

    # chown -R 36:36 /data
    # chmod -R 0755 /data
  6. oVirt recommends that you also automatically mount the local storage via /etc/fstab in case the server requires a reboot:

    # blkid | grep -i sdX1
    /dev/sdX1: UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" TYPE="ext4"
    # vi /etc/fstab
    UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" /data    ext4    defaults     0       0
  7. In the Administration Portal, create a data center and select Local in the Storage Type drop-down menu.

  8. Configure a cluster on the new data center. See Creating a New Cluster in the Administration Guide for more information.

  9. Add the host to the Engine. See Adding Standard Hosts to the oVirt Manager in one of the Installing oVirt guides for more information.

  10. On the host, create a new directory that will be used to create the initial local storage domain. For example:

    # mkdir -p /localfs
    # chown 36:36 /localfs
    # chmod -R 0755 /localfs
  11. In the Administration Portal, open the Storage tab and click New Domain to create a new local storage domain.

  12. Set the name to localfs and set the path to /localfs.

  13. Once the local storage is active, click Import Domain and set the domain’s details. For example, define Data as the name, Local on Host as the storage type and /data as the path.

  14. Click OK to confirm the message that appears informing you that storage domains are already attached to the data center.

  15. Activate the new storage domain:

    1. Open the Data Centers tab.

    2. Click the Storage tab in the details pane and select the new data storage domain in the results list.

    3. Click Activate.

  16. Once the new storage domain is active, import the virtual machines and their disks:

    1. In the Storage tab, select data.

    2. Select the VM Import tab in the details pane, select the virtual machines and click Import. See Importing Virtual Machines from a Data Domain in the Virtual Machine Management Guide for more details.

  17. Once you have ensured that all virtual machines have been successfully imported and are functioning properly, you can move localfs to maintenance mode.

  18. Click the Storage tab and select localfs from the results list.

    1. Click the Data Center tab in the details pane.

    2. Click Maintenance, then click OK to move the storage domain to maintenance mode.

    3. Click Detach. The Detach Storage confirmation window opens.

    4. Click OK.

You have now upgraded the host to version 4.4, created a new local storage domain, and imported the 4.3 storage domain and its virtual machines.

You can now update the cluster compatibility version.

8.7. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute  Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

You can now update the cluster compatibility version for virtual machines in the cluster.

8.8. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon (pendingchanges).

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute  Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

You can now update the data center compatibility version.

8.9. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute  Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

9. Upgrading a Remote Database Environment from oVirt 4.2 to 4.3

Upgrading your environment from 4.2 to 4.3 involves the following steps:

9.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.3.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

You can now update the Engine to the latest version of 4.2.

9.2. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

9.3. Upgrading remote databases from PostgreSQL 9.5 to 10

oVirt 4.3 uses PostgreSQL 10 instead of PostgreSQL 9.5. If your databases are installed locally, the upgrade script automatically upgrades them from version 9.5 to 10. However, if either of your databases (Engine or Data Warehouse) is installed on a separate machine, you must perform the following procedure on each remote database before upgrading the Engine.

  1. Stop the service running on the machine:

    • When upgrading the Engine database, stop the ovirt-engine service on the Engine machine:

      # systemctl stop ovirt-engine
    • When upgrading the Data Warehouse database, stop the ovirt-engine-dwhd service on the Data Warehouse machine:

      # systemctl stop ovirt-engine-dwhd
  2. Enable the required repository to receive the PostgreSQL 10 package:

  3. Install the PostgreSQL 10 packages:

    # yum install rh-postgresql10 rh-postgresql10-postgresql-contrib
  4. Stop and disable the PostgreSQL 9.5 service:

    # systemctl stop rh-postgresql95-postgresql
    # systemctl disable rh-postgresql95-postgresql
  5. Upgrade the PostgreSQL 9.5 database to PostgreSQL 10:

    # scl enable rh-postgresql10 -- postgresql-setup --upgrade-from=rh-postgresql95-postgresql --upgrade
  6. Start and enable the rh-postgresql10-postgresql.service and check that it is running:

    # systemctl start rh-postgresql10-postgresql.service
    # systemctl enable rh-postgresql10-postgresql.service
    # systemctl status rh-postgresql10-postgresql.service

    Ensure that you see output similar to the following:

    rh-postgresql10-postgresql.service - PostgreSQL database server
       Loaded: loaded (/usr/lib/systemd/system/rh-postgresql10-postgresql.service;
    enabled; vendor preset: disabled)
       Active: active (running) since ...
  7. Copy the pg_hba.conf client configuration file from the PostgreSQL 9.5 environment to the PostgreSQL 10 environment:

    # cp -p /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf  /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf
  8. Update the following parameters in /var/opt/rh/rh-postgresql10/lib/pgsql/data/postgresql.conf:

    listen_addresses='*'
    autovacuum_vacuum_scale_factor=0.01
    autovacuum_analyze_scale_factor=0.075
    autovacuum_max_workers=6
    maintenance_work_mem=65536
    max_connections=150
    work_mem = 8192
  9. Restart the PostgreSQL 10 service to apply the configuration changes:

    # systemctl restart rh-postgresql10-postgresql.service
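As an optional verification, not part of the original procedure, you can confirm that the database now runs on PostgreSQL 10 by querying the server version as the postgres user. The default database name is engine for the Engine database and ovirt_engine_history for the Data Warehouse database:

# su - postgres -c "scl enable rh-postgresql10 -- psql -d engine -c 'SELECT version();'"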

You can now upgrade the Engine to 4.3.

9.4. Upgrading the oVirt Engine from 4.2 to 4.3

Follow these same steps when upgrading any of the following:

  • the oVirt Engine

  • a remote machine with the Data Warehouse service

You need to be logged into the machine that you are upgrading.

If the upgrade fails, the engine-setup command attempts to restore your oVirt Engine installation to its previous state. For this reason, do not remove the previous version’s repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation.

Procedure
  1. Enable the oVirt 4.3 repositories:

    All other repositories remain the same across oVirt releases.

  2. Update the setup packages:

    # yum update ovirt\*setup\*
  3. Run engine-setup and follow the prompts to upgrade the oVirt Engine, the remote database or remote service:

    # engine-setup

    During the upgrade process for the Engine, the engine-setup script might prompt you to disconnect the remote Data Warehouse database. You must disconnect it to continue the setup.

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully
  4. Update the base operating system:

    # yum update

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the upgrade.

The Engine is now upgraded to version 4.3.

9.4.1. Completing the remote Data Warehouse database upgrade

Complete these additional steps when upgrading a remote Data Warehouse database from PostgreSQL 9.5 to 10.

Procedure
  1. The ovirt-engine-dwhd service is now running on the Engine machine. If the ovirt-engine-dwhd service is on a remote machine, stop and disable the ovirt-engine-dwhd service on the Engine machine, and remove the configuration files that engine-setup created:

    # systemctl stop ovirt-engine-dwhd
    # systemctl disable ovirt-engine-dwhd
    # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*
  2. Repeat the steps in Upgrading the Engine to 4.3 on the machine hosting the ovirt-engine-dwhd service.

You can now update the hosts.

9.5. Updating All Hosts in a Cluster

You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of oVirt. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.
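In recent releases the upgrade role is shipped in the ovirt.ovirt Ansible collection (an assumption; check the oVirt Cluster Upgrade documentation for your version). If you plan to drive the upgrade from the command line rather than the Administration Portal, you can install the collection with ansible-galaxy, for example:

# ansible-galaxy collection install ovirt.ovirt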

Update one cluster at a time.

Limitations
  • On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.

  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.

  • In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.

  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.

  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.

Procedure
  1. In the Administration Portal, click Compute  Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.

  2. Click Upgrade.

  3. Select the hosts to update, then click Next.

  4. Configure the options:

    • Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.

    • Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.

    • Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Engine to check for host updates less frequently than the default.

    • Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.

    • Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.

  5. Click Next.

  6. Review the summary of the hosts and virtual machines that are affected.

  7. Click Upgrade.

  8. The cluster upgrade status screen is displayed, with a progress bar showing the percentage of completion and a list of completed steps in the upgrade process. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.

You can track the progress of host updates:

  • in the Compute  Clusters view, the Upgrade status column displays a progress bar that shows the percentage of completion.

  • in the Compute  Hosts view

  • in the Events section of the Notification Drawer (EventsIcon).

You can track the progress of individual virtual machine migrations in the Status column of the Compute  Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines.

9.6. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute  Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

9.7. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon (pendingchanges).

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute  Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

9.8. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute  Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now.

9.9. Replacing SHA-1 Certificates with SHA-256 Certificates

oVirt 4.5 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. Newly installed systems do not require any special steps to enable oVirt’s public key infrastructure (PKI) to use SHA-256 signatures.

Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error prone and time consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide.
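Because expired certificates make the environment unusable, you may want to check the expiry date of the CA certificate before and after re-signing it, for example:

# openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -enddate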

Preventing Warning Messages from Appearing in the Browser

  1. Log in to the Engine machine as the root user.

  2. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:

    # cat /etc/pki/ovirt-engine/openssl.conf

    If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:

    # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
    # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
  3. Define the certificate that should be re-signed:

    # names="apache"
  4. On the Engine, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:

    # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
    # for name in $names; do
        subject="$(
            openssl \
                x509 \
                -in /etc/pki/ovirt-engine/certs/"${name}".cer \
                -noout \
                -subject \
                -nameopt compat \
            | sed \
                's;subject=\(.*\);\1;' \
        )"
       /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
            --name="${name}" \
            --password=mypass \ <1>
            --subject="${subject}" \
            --san=DNS:"${ENGINE_FQDN}" \
            --keep-key
    done
    1 Do not change the password value.
  5. Restart the httpd service:

    # systemctl restart httpd
  6. Connect to the Administration Portal to confirm that the warning no longer appears.

  7. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).

Replacing All Signed Certificates with SHA-256

  1. Log in to the Engine machine as the root user.

  2. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:

    # cat /etc/pki/ovirt-engine/openssl.conf

    If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:

    # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
    # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
  3. Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new:

    # cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."$(date +"%Y%m%d%H%M%S")"
    # openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256
  4. Replace the existing certificate with the new certificate:

    # mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem
  5. Define the certificates that should be re-signed:

    # names="engine apache websocket-proxy jboss imageio-proxy"

    If you replaced the oVirt Engine SSL Certificate after the upgrade, run the following instead:

    # names="engine websocket-proxy jboss imageio-proxy"

    For more details see Replacing the oVirt Engine CA Certificate in the Administration Guide.

  6. On the Engine, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:

    # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
    # for name in $names; do
        subject="$(
            openssl \
                x509 \
                -in /etc/pki/ovirt-engine/certs/"${name}".cer \
                -noout \
                -subject \
                -nameopt compat \
            | sed \
                's;subject=\(.*\);\1;' \
        )"
       /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
            --name="${name}" \
            --password=mypass \ <1>
            --subject="${subject}" \
            --san=DNS:"${ENGINE_FQDN}" \
            --keep-key
    done
    1 Do not change the password value.
  7. Restart the following services:

    # systemctl restart httpd
    # systemctl restart ovirt-engine
    # systemctl restart ovirt-websocket-proxy
    # systemctl restart ovirt-imageio
  8. Connect to the Administration Portal to confirm that the warning no longer appears.

  9. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).

  10. Enroll the certificates on the hosts. Repeat the following procedure for each host.

    1. In the Administration Portal, click Compute  Hosts.

    2. Select the host and click Management  Maintenance and OK.

    3. Once the host is in maintenance mode, click Installation  Enroll Certificate.

    4. Click Management  Activate.

10. Upgrading a self-hosted engine environment

11. Upgrading a Self-Hosted Engine from oVirt 4.4 to 4.5

Upgrading a self-hosted engine environment from version 4.4 to 4.5 involves the following steps:

11.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.5.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

11.2. Migrating virtual machines from the self-hosted engine host

Only the Engine virtual machine should remain on the host until after you have finished upgrading the host. Migrate any virtual machines other than the Engine virtual machine to another host in the same cluster.

You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide.

11.3. Enabling global maintenance mode

You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Engine virtual machine.

Procedure
  1. Log in to one of the self-hosted engine nodes and enable global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Confirm that the environment is in global maintenance mode before proceeding:

    # hosted-engine --vm-status

    You should see a message indicating that the cluster is in global maintenance mode.

You can now update the Engine to the latest version of 4.4.

11.4. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

You can now upgrade the Engine to 4.5.

11.5. Upgrading the oVirt Engine from 4.4 to 4.5

Prerequisites
  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.3 or higher.

  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3 or higher.

Connected hosts and virtual machines can continue to work while the Engine is being upgraded.

If you are installing on RHEL or derivatives, follow Installing on RHEL or derivatives first.

Procedure
  1. Enable the oVirt 4.5 repositories:

    # dnf install -y centos-release-ovirt45

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

  2. Enable version 2.3 of the mod_auth_openidc module:

    # dnf module -y enable mod_auth_openidc:2.3

Then follow the procedure for updates between minor releases.

You can now update the self-hosted engine nodes, and then any standard hosts. The procedure is the same for both host types.

11.6. Migrating hosts from oVirt 4.4 to 4.5

Prerequisites
  • Hosts for oVirt 4.5 require Enterprise Linux 8.6 or later.

  • oVirt Engine 4.5 is installed and running.

  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.3 or higher.

  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.3 or higher before you start the procedure.

If you are installing on RHEL or derivatives, follow Installing on RHEL or derivatives first.

Procedure for Enterprise Linux hosts
  1. Enable the oVirt 4.5 repositories:

    # dnf install -y centos-release-ovirt45

Procedure for oVirt Nodes
  1. Enable the oVirt 4.5 repositories:

    # dnf install centos-release-ovirt45 --enablerepo=extras

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

Common procedure
  1. Follow the procedure for updates between minor releases.

  2. Follow the procedure for updating the cluster compatibility version.

If you are using GlusterFS storage, note that oVirt 4.5 updates Gluster to version 10. Refer to Upgrade procedure to Gluster 10, from Gluster 9.x, 8.x and 7.x for more details.

11.7. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.
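
    If you need to apply this setting to many virtual machines, the custom emulated machine can also be set through the REST API rather than the Edit Virtual Machine dialog. The following is a minimal sketch only; the credentials, FQDN, and VM_ID are placeholders, and custom_emulated_machine is assumed to be the relevant VM property:

    # curl -k -u admin@internal:password \
        -X PUT -H 'Content-Type: application/xml' \
        -d '<vm><custom_emulated_machine>pc-q35-rhel8.3.0</custom_emulated_machine></vm>' \
        'https://your-manager-fqdn/ovirt-engine/api/vms/VM_ID'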

Procedure
  1. In the Administration Portal, click Compute  Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

11.8. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon (pendingchanges).

The Engine virtual machine does not need to be rebooted.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute  Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.
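
If you prefer to script this check, the same query can be issued through the REST API. The following is a minimal sketch; the credentials and FQDN are placeholders, and the query string mirrors the Administration Portal search above:

    # curl -k -u admin@internal:password \
        'https://your-manager-fqdn/ovirt-engine/api/vms?search=next_run_config_exists%3Dtrue'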

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

11.9. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute  Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

12. Upgrading a self-Hosted engine from oVirt 4.3 to 4.4

Upgrading a self-hosted engine environment from version 4.3 to 4.4 involves the following steps:

12.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.4.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

12.2. Migrating virtual machines from the self-hosted engine host

Only the Engine virtual machine should remain on the host until after you have finished upgrading the host. Migrate any virtual machines other than the Engine virtual machine to another host in the same cluster.

You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide.

12.3. Enabling global maintenance mode

You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Engine virtual machine.

Procedure
  1. Log in to one of the self-hosted engine nodes and enable global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Confirm that the environment is in global maintenance mode before proceeding:

    # hosted-engine --vm-status

    You should see a message indicating that the cluster is in global maintenance mode.

You can now update the Engine to the latest version of 4.3.

12.4. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

You can now upgrade the Engine to 4.4.

12.5. Upgrading the oVirt Engine from 4.3 to 4.4

The oVirt Engine 4.4 is only supported on Enterprise Linux versions 8.2 to 8.6. You need to do a clean installation of Enterprise Linux 8.6, or oVirt Node on the self-hosted engine host, even if you are using the same physical machine that you use to run the oVirt 4.3 self-hosted engine.

The upgrade process requires restoring oVirt Engine 4.3 backup files onto the oVirt Engine 4.4 virtual machine.

Prerequisites
  • All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3.

  • All virtual machines in the environment must have the cluster compatibility level set to version 4.3.

  • Make note of the MAC address of the self-hosted engine if you are using DHCP and want to use the same IP address. The deploy script prompts you for this information.

  • During the deployment you need to provide a new storage domain for the Engine machine. The deployment script renames the 4.3 storage domain and retains its data to enable disaster recovery.

  • Set the cluster scheduling policy to cluster_maintenance in order to prevent automatic virtual machine migration during the upgrade.

    In an environment with multiple highly available self-hosted engine nodes, you need to detach the storage domain hosting the version 4.3 Engine after upgrading the Engine to 4.4. Use a dedicated storage domain for the 4.4 self-hosted engine deployment.

  • If you use an external CA to sign HTTPS certificates, follow the steps in Replacing the oVirt Engine CA Certificate in the Administration Guide. The backup and restore include the 3rd-party certificate, so you should be able to log in to the Administration portal after the upgrade. Ensure the CA certificate is added to system-wide trust stores of all clients to ensure the foreign menu of virt-viewer works. See BZ#1313379 for more information.

Connected hosts and virtual machines can continue to work while the Engine is being upgraded.

Procedure
  1. Log in to the Engine virtual machine and shut down the engine service.

    # systemctl stop ovirt-engine
  2. Back up the oVirt Engine 4.3 environment.

    # engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log
  3. Copy the backup file to a storage device outside of the oVirt environment.
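
    For example, to copy the backup to an external backup server over SSH (a minimal sketch; the destination host and path are placeholders):

    # scp backup.bck backup-storage.example.com:/backup/engine/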

  4. Shut down the self-hosted engine.

    # shutdown

    If you want to reuse the self-hosted engine virtual machine to deploy the oVirt Engine 4.4, note the MAC address of the self-hosted engine network interface before you shut it down.
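
    For example, you can list the interfaces and their MAC addresses from inside the Engine virtual machine before shutting it down (a minimal sketch using standard iproute2 tooling):

    # ip -brief link show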

  5. Make sure that the self-hosted engine is shut down.

    # hosted-engine --vm-status | grep -E 'Engine status|Hostname'

    If any of the hosts report the detail field as Up, log in to that specific host and shut it down with the hosted-engine --vm-shutdown command.
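
    For example, on the host that still reports the Engine virtual machine as Up:

    # hosted-engine --vm-shutdown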

  6. Install oVirt Node 4.4 or Enterprise Linux 8.6 on the existing node currently running the Engine virtual machine to use it as the self-hosted engine deployment host. See Installing the Self-hosted Engine Deployment Host for more information.

    It is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

  7. Install the self-hosted engine deployment tool.

    # yum install ovirt-hosted-engine-setup
  8. Copy the backup file to the host.

  9. Log in to the Engine host and deploy the self-hosted engine with the backup file:

    # hosted-engine --deploy --restore-from-file=/path/backup.bck

    tmux enables the deployment script to continue if the connection to the server is interrupted, so you can reconnect and attach to the deployment and continue. Otherwise, if the connection is interrupted during deployment, the deployment fails.

    To run the deployment script using tmux, enter the tmux command before you run the deployment script:

    # tmux
    # hosted-engine --deploy --restore-from-file=backup.bck

    The deployment script automatically disables global maintenance mode and calls the HA agent to start the self-hosted engine virtual machine. The upgraded host with the 4.4 self-hosted engine reports that HA mode is active, but the other hosts report that global maintenance mode is still enabled as they are still connected to the old self-hosted engine storage.

  10. Detach the storage domain that hosts the Engine 4.3 machine. For details, see Detaching a Storage Domain from a Data Center in the Administration Guide.

  11. Log in to the Engine virtual machine and shut down the engine service.

    # systemctl stop ovirt-engine
  12. Install optional extension packages if they were installed on the oVirt Engine 4.3 machine.

    # yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc

    The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.

    The configuration for these package extensions must be manually reapplied because they are not migrated as part of the backup and restore process.

  13. Configure the Engine by running the engine-setup command:

    # engine-setup

The oVirt Engine 4.4 is now installed, with the cluster compatibility version set to 4.2 or 4.3, whichever was the preexisting cluster compatibility version.

You can now update the self-hosted engine nodes, and then any standard hosts. The procedure is the same for both host types.

12.6. Migrating hosts and virtual machines from oVirt 4.3 to 4.4

You can migrate hosts and virtual machines from oVirt 4.3 to 4.4 such that you minimize the downtime of virtual machines in your environment.

This process requires migrating all virtual machines from one host so as to make that host available to upgrade to oVirt 4.4. After the upgrade, you can reattach the host to the Engine.

When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.

CPU-passthrough virtual machines might not migrate properly from oVirt 4.3 to oVirt 4.4.

oVirt 4.3 and oVirt 4.4 are based on EL 7 and EL 8, respectively, which have different kernel versions with different CPU flags and microcodes. This can cause problems in migrating CPU-passthrough virtual machines.

Prerequisites
  • Hosts for oVirt 4.4 require Enterprise Linux versions 8.2 to 8.6. A clean installation of Enterprise Linux 8.6, or oVirt Node 4.4 is required, even if you are using the same physical machine that you use to run hosts for oVirt 4.3.

  • oVirt Engine 4.4 is installed and running.

  • The compatibility level of the data center and cluster to which the hosts belong is set to 4.2 or 4.3. All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3 before you start the procedure.

Procedure
  1. Pick a host to upgrade and migrate that host’s virtual machines to another host in the same cluster. You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide.

  2. Put the host into maintenance mode and remove the host from the Engine. For more information, see Removing a Host in the Administration Guide.

  3. Install Enterprise Linux 8.6, or oVirt Node 4.4. For more information, see Installing Hosts for oVirt in one of the Installing oVirt guides.

  4. Install the appropriate packages to enable the host for oVirt 4.4. For more information, see Installing Hosts for oVirt in one of the Installing oVirt guides.

  5. Add this host to the Engine, assigning it to the same cluster. You can now migrate virtual machines onto this host. For more information, see Adding Standard Hosts to the Engine in one of the Installing oVirt guides.

Repeat these steps to migrate virtual machines and upgrade hosts for the rest of the hosts in the same cluster, one by one, until all are running oVirt 4.4.

12.7. Upgrading oVirt Node while preserving local storage

Environments with local storage cannot migrate virtual machines to a host in another cluster because the local storage is not shared with other storage domains. To upgrade oVirt Node 4.3 hosts that have a local storage domain, reinstall the host while preserving the local storage, create a new local storage domain in the 4.4 environment, and import the previous local storage into the new domain.

Prerequisites
  • oVirt Engine 4.4 is installed and running.

  • The compatibility level of the data center and cluster to which the host belongs is set to 4.2 or 4.3.

Procedure
  1. Ensure that the oVirt Node 4.3 host’s local storage domain is in maintenance mode before starting this process. Complete these steps:

    1. Open the Data Centers tab.

    2. Click the Storage tab in the Details pane and select the storage domain in the results list.

    3. Click Maintenance.

  2. Reinstall the oVirt Node, as described in Installing oVirt Node in the Installation Guide.

    When selecting the device on which to install oVirt Node from the Installation Destination screen, do not select the device(s) storing the virtual machines. Only select the device where the operating system should be installed.

    If you are using Kickstart to install the host, ensure that you preserve the devices containing the virtual machines by adding the following to the Kickstart file, replacing `device` with the relevant device.

    # clearpart --all --drives=device

    For more information on using Kickstart, see Kickstart references in Red Hat Enterprise Linux 8 Performing an advanced RHEL installation.

  3. On the reinstalled host, create a directory, for example /data, in which to recover the previous environment.

    # mkdir /data
  4. Mount the previous local storage in the new directory. In our example, /dev/sdX1 is the local storage:

    # mount /dev/sdX1 /data
  5. Set the following permissions for the new directory.

    # chown -R 36:36 /data
    # chmod -R 0755 /data
  6. oVirt recommends that you also automatically mount the local storage via /etc/fstab in case the server requires a reboot:

    # blkid | grep -i sdX1
    /dev/sdX1: UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" TYPE="ext4"
    # vi /etc/fstab
    UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" /data    ext4    defaults     0       0
  7. In the Administration Portal, create a data center and select Local in the Storage Type drop-down menu.

  8. Configure a cluster on the new data center. See Creating a New Cluster in the Administration Guide for more information.

  9. Add the host to the Engine. See Adding Standard Hosts to the oVirt Manager in one of the Installing oVirt guides for more information.

  10. On the host, create a new directory that will be used to create the initial local storage domain. For example:

    # mkdir -p /localfs
    # chown 36:36 /localfs
    # chmod -R 0755 /localfs
  11. In the Administration Portal, open the Storage tab and click New Domain to create a new local storage domain.

  12. Set the name to localfs and set the path to /localfs.

  13. Once the local storage is active, click Import Domain and set the domain’s details. For example, define Data as the name, Local on Host as the storage type and /data as the path.

  14. Click OK to confirm the message that appears informing you that storage domains are already attached to the data center.

  15. Activate the new storage domain:

    1. Open the Data Centers tab.

    2. Click the Storage tab in the details pane and select the new data storage domain in the results list.

    3. Click Activate.

  16. Once the new storage domain is active, import the virtual machines and their disks:

    1. In the Storage tab, select data.

    2. Select the VM Import tab in the details pane, select the virtual machines and click Import. See Importing Virtual Machines from a Data Domain in the Virtual Machine Management Guide for more details.

  17. Once you have ensured that all virtual machines have been successfully imported and are functioning properly, you can move localfs to maintenance mode.

  18. Click the Storage tab and select localfs from the results list.

    1. Click the Data Center tab in the details pane.

    2. Click Maintenance, then click OK to move the storage domain to maintenance mode.

    3. Click Detach. The Detach Storage confirmation window opens.

    4. Click OK.

You have now upgraded the host to version 4.4, created a new local storage domain, and imported the 4.3 storage domain and its virtual machines.

12.8. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute  Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

12.9. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon (pendingchanges).

The Engine virtual machine does not need to be rebooted.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute  Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

12.10. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute  Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

13. Upgrading a Self-Hosted Engine from oVirt 4.2 to 4.3

Upgrading a self-hosted engine environment from version 4.2 to 4.3 involves the following steps:

13.1. Prerequisites

  • Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes.

  • Ensure your environment meets the requirements for oVirt 4.3.

  • When upgrading oVirt Engine, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure.

13.2. Enabling global maintenance mode

You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Engine virtual machine.

Procedure
  1. Log in to one of the self-hosted engine nodes and enable global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Confirm that the environment is in global maintenance mode before proceeding:

    # hosted-engine --vm-status

    You should see a message indicating that the cluster is in global maintenance mode.

13.3. Updating the oVirt Engine

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

13.4. Upgrading the oVirt Engine from 4.2 to 4.3

You need to be logged into the machine that you are upgrading.

If the upgrade fails, the engine-setup command attempts to restore your oVirt Engine installation to its previous state. For this reason, do not remove the previous version’s repositories until after the upgrade is complete. If the upgrade fails, the engine-setup script explains how to restore your installation.

Procedure
  1. Enable the oVirt 4.3 repositories:

    All other repositories remain the same across oVirt releases.

  2. Update the setup packages:

    # yum update ovirt\*setup\*
  3. Run engine-setup and follow the prompts to upgrade the oVirt Engine:

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully
  4. Update the base operating system:

    # yum update

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the upgrade.

The Engine is now upgraded to version 4.3.

13.5. Disabling global maintenance mode

Procedure
  1. Log in to the Engine virtual machine and shut it down.

  2. Log in to one of the self-hosted engine nodes and disable global maintenance mode:

    # hosted-engine --set-maintenance --mode=none

    When you exit global maintenance mode, ovirt-ha-agent starts the Engine virtual machine, and then the Engine automatically starts. It can take up to ten minutes for the Engine to start.

  3. Confirm that the environment is running:

    # hosted-engine --vm-status

    The listed information includes Engine Status. The value for Engine status should be:

    {"health": "good", "vm": "up", "detail": "Up"}

    When the virtual machine is still booting and the Engine hasn’t started yet, the Engine status is:

    {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}

    If this happens, wait a few minutes and try again.

You can now update the self-hosted engine nodes, and then any standard hosts. The procedure is the same for both host types.

13.6. Updating All Hosts in a Cluster

You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of oVirt. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.

Update one cluster at a time.

Limitations
  • On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.

  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.

  • In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.

  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.

  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.

Procedure
  1. In the Administration Portal, click Compute  Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.

  2. Click Upgrade.

  3. Select the hosts to update, then click Next.

  4. Configure the options:

    • Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.

    • Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.

    • Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Engine to check for host updates less frequently than the default.

    • Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.

    • Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.

  5. Click Next.

  6. Review the summary of the hosts and virtual machines that are affected.

  7. Click Upgrade.

  8. A cluster upgrade status screen displays with a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.

You can track the progress of host updates:

  • in the Compute  Clusters view, the Upgrade Status column displays a progress bar showing the percentage of completion.

  • in the Compute  Hosts view

  • in the Events section of the Notification Drawer (EventsIcon).

You can track the progress of individual virtual machine migrations in the Status column of the Compute  Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines.

13.7. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute  Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.

13.8. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon (pendingchanges).

The Engine virtual machine does not need to be rebooted.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute  Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary you can reboot a virtual machine from within the virtual machine itself.

When the virtual machine starts, the new compatibility version is automatically applied.

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

13.9. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute  Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

If you previously upgraded to 4.2 without replacing SHA-1 certificates with SHA-256 certificates, you must do so now.

13.10. Replacing SHA-1 Certificates with SHA-256 Certificates

oVirt 4.5 uses SHA-256 signatures, which provide a more secure way to sign SSL certificates than SHA-1. Newly installed systems do not require any special steps to enable oVirt’s public key infrastructure (PKI) to use SHA-256 signatures.

Do NOT let certificates expire. If they expire, the environment becomes non-responsive and recovery is an error-prone and time-consuming process. For information on renewing certificates, see Renewing certificates before they expire in the Administration Guide.
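
To see which signature algorithm the Engine CA certificate currently uses before making any changes, a quick check with standard openssl (the path matches the one used in the steps below):

    # openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -text | grep 'Signature Algorithm'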

Preventing Warning Messages from Appearing in the Browser

  1. Log in to the Engine machine as the root user.

  2. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:

    # cat /etc/pki/ovirt-engine/openssl.conf

    If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:

    # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
    # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
  3. Define the certificate that should be re-signed:

    # names="apache"
  4. Log in to one of the self-hosted engine nodes and enable global maintenance:

    # hosted-engine --set-maintenance --mode=global
  5. On the Engine, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:

    # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
    # for name in $names; do
        subject="$(
            openssl \
                x509 \
                -in /etc/pki/ovirt-engine/certs/"${name}".cer \
                -noout \
                -subject \
                -nameopt compat \
            | sed \
                's;subject=\(.*\);\1;' \
        )"
       /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
            --name="${name}" \
            --password=mypass \ <1>
            --subject="${subject}" \
            --san=DNS:"${ENGINE_FQDN}" \
            --keep-key
    done
    1 Do not change the password value.
  6. Restart the httpd service:

    # systemctl restart httpd
  7. Log in to one of the self-hosted engine nodes and disable global maintenance:

    # hosted-engine --set-maintenance --mode=none
  8. Connect to the Administration Portal to confirm that the warning no longer appears.

  9. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).

Replacing All Signed Certificates with SHA-256

  1. Log in to the Engine machine as the root user.

  2. Check whether /etc/pki/ovirt-engine/openssl.conf includes the line default_md = sha256:

    # cat /etc/pki/ovirt-engine/openssl.conf

    If it still includes default_md = sha1, back up the existing configuration and change the default to sha256:

    # cp -p /etc/pki/ovirt-engine/openssl.conf /etc/pki/ovirt-engine/openssl.conf."$(date +"%Y%m%d%H%M%S")"
    # sed -i 's/^default_md = sha1/default_md = sha256/' /etc/pki/ovirt-engine/openssl.conf
  3. Re-sign the CA certificate by backing it up and creating a new certificate in ca.pem.new:

    # cp -p /etc/pki/ovirt-engine/private/ca.pem /etc/pki/ovirt-engine/private/ca.pem."$(date +"%Y%m%d%H%M%S")"
    # openssl x509 -signkey /etc/pki/ovirt-engine/private/ca.pem -in /etc/pki/ovirt-engine/ca.pem -out /etc/pki/ovirt-engine/ca.pem.new -days 3650 -sha256
  4. Replace the existing certificate with the new certificate:

    # mv /etc/pki/ovirt-engine/ca.pem.new /etc/pki/ovirt-engine/ca.pem
  5. Define the certificates that should be re-signed:

    # names="engine apache websocket-proxy jboss imageio-proxy"

    If you replaced the oVirt Engine SSL Certificate after the upgrade, run the following instead:

    # names="engine websocket-proxy jboss imageio-proxy"

    For more details see Replacing the oVirt Engine CA Certificate in the Administration Guide.

  6. Log in to one of the self-hosted engine nodes and enable global maintenance:

    # hosted-engine --set-maintenance --mode=global
  7. On the Engine, save a backup of the /etc/ovirt-engine/engine.conf.d and /etc/pki/ovirt-engine directories, and re-sign the certificates:

    # . /etc/ovirt-engine/engine.conf.d/10-setup-protocols.conf
    # for name in $names; do
        subject="$(
            openssl \
                x509 \
                -in /etc/pki/ovirt-engine/certs/"${name}".cer \
                -noout \
                -subject \
                -nameopt compat \
            | sed \
                's;subject=\(.*\);\1;' \
        )"
       /usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh \
            --name="${name}" \
            --password=mypass \ <1>
            --subject="${subject}" \
            --san=DNS:"${ENGINE_FQDN}" \
            --keep-key
    done
    1 Do not change the password value.
  8. Restart the following services:

    # systemctl restart httpd
    # systemctl restart ovirt-engine
    # systemctl restart ovirt-websocket-proxy
    # systemctl restart ovirt-imageio
  9. Log in to one of the self-hosted engine nodes and disable global maintenance:

    # hosted-engine --set-maintenance --mode=none
  10. Connect to the Administration Portal to confirm that the warning no longer appears.

  11. If you previously imported a CA or https certificate into the browser, find the certificate(s), remove them from the browser, and reimport the new CA certificate. Install the certificate authority according to the instructions provided by your browser. To get the certificate authority’s certificate, navigate to http://your-manager-fqdn/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA, replacing your-manager-fqdn with the fully qualified domain name (FQDN).

  12. Enroll the certificates on the hosts. Repeat the following procedure for each host.

    1. In the Administration Portal, click Compute  Hosts.

    2. Select the host and click Management  Maintenance and OK.

    3. Once the host is in maintenance mode, click Installation  Enroll Certificate.

    4. Click Management  Activate.

14. Updates between minor releases

15. Updating oVirt between minor releases

To update from your current version of 4.5 to the latest version of 4.5, update the Engine, update the hosts, and then change the compatibility version for the cluster, virtual machines, and data center.

If upgrading from version 4.4.9 to a later version fails on oVirt Node, run the dnf reinstall ovirt-node-ng-image-update command to fix the issue.
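
For reference, the command as it would be run on the affected oVirt Node:

    # dnf reinstall ovirt-node-ng-image-update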

To update a standalone Engine, follow the standard procedure for minor updates:

15.1. Updating the oVirt Engine

Prerequisites
  • The centos-release-ovirt45 RPM package is installed and updated to the latest version. This package includes the necessary repositories.
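
    To verify this prerequisite, you can check the installed version and update the package if necessary (a minimal sketch, assuming dnf is used on the Engine machine):

    # rpm -q centos-release-ovirt45
    # dnf update -y centos-release-ovirt45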

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated, reboot the machine to complete the update.

15.2. Updating a Self-Hosted Engine

To update a self-hosted engine from your current version to the latest version, you must place the environment in global maintenance mode and then follow the standard procedure for updating between minor versions.

Ensure the Engine has the correct repositories enabled. For the list of required repositories, see the section Updating the oVirt Engine.
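
A quick way to review which repositories are currently enabled on the Engine machine is to list them with dnf and compare the output against the repositories listed in Updating the oVirt Engine:

    # dnf repolist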

Enabling global maintenance mode

You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Engine virtual machine.

Procedure
  1. Log in to one of the self-hosted engine nodes and enable global maintenance mode:

    # hosted-engine --set-maintenance --mode=global
  2. Confirm that the environment is in global maintenance mode before proceeding:

    # hosted-engine --vm-status

    You should see a message indicating that the cluster is in global maintenance mode.

Updating the oVirt Engine

Prerequisites
  • The centos-release-ovirt45 RPM package is installed and updated to the latest version. This package includes the necessary repositories.

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

Procedure
  1. On the Engine machine, check if updated packages are available:

    # engine-upgrade-check
  2. Update the setup packages:

    # dnf update ovirt\*setup\*
  3. Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.

    # engine-setup

    When the script completes successfully, the following message appears:

    Execution of setup completed successfully

    The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.

    The update process might take some time. Do not stop the process before it completes.

  4. Update the base operating system and any optional packages installed on the Engine:

    # yum update --nobest

    If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).

    If any kernel packages were updated:

    1. Disable global maintenance mode

    2. Reboot the machine to complete the update.

Related Information

Disabling global maintenance mode

Disabling global maintenance mode

Procedure
  1. Log in to the Engine virtual machine and shut it down.

  2. Log in to one of the self-hosted engine nodes and disable global maintenance mode:

    # hosted-engine --set-maintenance --mode=none

    When you exit global maintenance mode, ovirt-ha-agent starts the Engine virtual machine, and then the Engine automatically starts. It can take up to ten minutes for the Engine to start.

  3. Confirm that the environment is running:

    # hosted-engine --vm-status

    The listed information includes Engine status. The value for Engine status should be:

    {"health": "good", "vm": "up", "detail": "Up"}

    When the virtual machine is still booting and the Engine hasn’t started yet, the Engine status is:

    {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}

    If this happens, wait a few minutes and try again.
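    Rather than re-running the command by hand, you can poll the status until the Engine reports a good health state; a minimal sketch:

    # while ! hosted-engine --vm-status | grep -q '"health": "good"'; do sleep 60; done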

15.3. Updating All Hosts in a Cluster

You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of oVirt. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.

Update one cluster at a time.
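If you prefer to drive the update from the command line, the same workflow can be automated with the cluster_upgrade role from the ovirt.ovirt Ansible collection mentioned above. The following is only a rough sketch; the variable names shown are assumptions and should be checked against the role's README before use:

    # ansible-galaxy collection install ovirt.ovirt
    # cat > cluster-upgrade.yml <<'EOF'
    ---
    - hosts: localhost
      connection: local
      gather_facts: false
      vars:
        # Assumed variable names; confirm them in the cluster_upgrade role documentation.
        engine_fqdn: engine.example.com
        engine_user: admin@internal
        engine_password: "changeme"
        cluster_name: Default
      roles:
        - ovirt.ovirt.cluster_upgrade
    EOF
    # ansible-playbook cluster-upgrade.yml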

Limitations
  • On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.

  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.

  • In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.

  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.

  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.

Procedure
  1. In the Administration Portal, click Compute → Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.

  2. Click Upgrade.

  3. Select the hosts to update, then click Next.

  4. Configure the options:

    • Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.

    • Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.

    • Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Engine to check for host updates less frequently than the default.

    • Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.

    • Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.

  5. Click Next.

  6. Review the summary of the hosts and virtual machines that are affected.

  7. Click Upgrade.

  8. A cluster upgrade status screen displays a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.

You can track the progress of host updates:

  • in the Compute → Clusters view, where the Upgrade Status column displays a progress bar indicating the percentage of completion.

  • in the Compute → Hosts view.

  • in the Events section of the Notification Drawer.

You can track the progress of individual virtual machine migrations in the Status column of the Compute → Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines.
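For example, to narrow the view to the virtual machines in a single cluster, you can enter a search such as the following in the Vms: search bar (the cluster name is a placeholder):

    cluster = Default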

You can now update the cluster compatibility version.

15.4. Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility version is set according to the version of the least capable host operating system in the cluster.

Prerequisites
  • To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.

Limitations
  • Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.

    If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.

Procedure
  1. In the Administration Portal, click Compute → Clusters.

  2. Select the cluster to change and click Edit.

  3. On the General tab, change the Compatibility Version to the desired value.

  4. Click OK. The Change Cluster Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version.
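If you manage many clusters, the same change can also be scripted against the REST API rather than made in the Administration Portal. The following is a minimal sketch only; the engine address, credentials, CA file path, cluster ID, and target version are placeholders for your own values:

    # curl --cacert /etc/pki/ovirt-engine/ca.pem \
          --user admin@internal:password \
          --request PUT \
          --header "Content-Type: application/xml" \
          --data "<cluster><version><major>4</major><minor>7</minor></version></cluster>" \
          https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID

You can retrieve the cluster IDs by listing https://engine.example.com/ovirt-engine/api/clusters with the same credentials.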

You can now update the cluster compatibility version for virtual machines in the cluster.

15.5. Changing Virtual Machine Cluster Compatibility

After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon.

Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes.

Procedure
  1. In the Administration Portal, click Compute → Virtual Machines.

  2. Check which virtual machines require a reboot. In the Vms: search bar, enter the following query:

    next_run_config_exists=True

    The search results show all virtual machines with pending changes.

  3. Select each virtual machine and click Restart. Alternatively, if necessary, you can reboot a virtual machine from within its guest operating system.

When the virtual machine starts, the new compatibility version is automatically applied.
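As noted above, the query and the reboot can also be driven through the REST API. A minimal sketch that lists the virtual machines with pending changes and reboots one of them; the engine address, credentials, CA file path, and virtual machine ID are placeholders:

    # curl --cacert /etc/pki/ovirt-engine/ca.pem \
          --user admin@internal:password \
          "https://engine.example.com/ovirt-engine/api/vms?search=next_run_config_exists%3Dtrue"
    # curl --cacert /etc/pki/ovirt-engine/ca.pem \
          --user admin@internal:password \
          --request POST \
          --header "Content-Type: application/xml" \
          --data "<action/>" \
          https://engine.example.com/ovirt-engine/api/vms/VM_ID/reboot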

You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.

You can now update the data center compatibility version.

15.6. Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.

Prerequisites
  • To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.

Procedure
  1. In the Administration Portal, click Compute → Data Centers.

  2. Select the data center to change and click Edit.

  3. Change the Compatibility Version to the desired value.

  4. Click OK. The Change Data Center Compatibility Version confirmation dialog opens.

  5. Click OK to confirm.

You can also update hosts individually:

15.7. Updating Individual Hosts

Use the host upgrade manager to update individual hosts directly from the Administration Portal.

The upgrade manager only checks hosts with a status of Up or Non-operational, but not Maintenance.

Limitations
  • On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.

  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low.

  • In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.

  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.

  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host.

Procedure
  1. Ensure that the correct repositories are enabled. To view a list of currently enabled repositories, run dnf repolist.

    • For oVirt Nodes, the centos-release-ovirt45 RPM package, which enables the correct repositories, is already installed.

    • For Enterprise Linux hosts, ensure the centos-release-ovirt45 RPM package is updated to the latest version. If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.

      # dnf update -y centos-release-ovirt45

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

  2. In the Administration Portal, click Compute → Hosts and select the host to be updated.

  3. Click Installation → Check for Upgrade and click OK.

    Open the Notification Drawer and expand the Events section to see the result.

  4. If an update is available, click Installation → Upgrade.

  5. Click OK to update the host. Running virtual machines are migrated according to their migration policy. If migration is disabled for any virtual machines, you are prompted to shut them down.

    The details of the host are updated in Compute → Hosts and the status transitions through these stages:

    Maintenance > Installing > Reboot > Up

    If the update fails, the host’s status changes to Install Failed. From Install Failed, you can click Installation → Upgrade again.

Repeat this procedure for each host in the oVirt environment.

You should update the hosts from the Administration Portal. However, you can update the hosts using dnf upgrade instead.

15.8. Manually Updating Hosts

This information is provided for advanced system administrators who need to update hosts manually, but oVirt does not support this method. The procedure described in this topic omits important steps, such as certificate renewal, and assumes advanced knowledge of them. oVirt supports updating hosts using the Administration Portal. For details, see Updating individual hosts or Updating all hosts in a cluster in the Administration Guide.

You can use the dnf command to update your hosts. Update your systems regularly to ensure timely application of security and bug fixes.

Limitations
  • On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.

  • If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low.

  • In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.

  • The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.

  • You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host.

Procedure
  1. Ensure the correct repositories are enabled. You can check which repositories are currently enabled by running dnf repolist.

Upgrading from an older 4.5 release to the latest 4.5:

  • For oVirt Nodes, the centos-release-ovirt45 RPM package, which enables the correct repositories, is already installed.

  • For Enterprise Linux hosts, ensure the centos-release-ovirt45 RPM package is updated to the latest version. If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.

    # dnf update -y centos-release-ovirt45

As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.

Upgrading from an older 4.4 release to the latest 4.4:

  • For oVirt Nodes, the ovirt-release44 RPM package, which enables the correct repositories, is already installed.

  • For Enterprise Linux hosts, ensure the ovirt-release44 RPM package is updated to the latest version:

    # dnf update -y ovirt-release44

Common procedure valid for both 4.4 and 4.5:

  1. In the Administration Portal, click Compute → Hosts and select the host to be updated.

  2. Click Management → Maintenance, then click OK.

  3. For Enterprise Linux hosts:

    1. Identify the current version of Enterprise Linux:

      # cat /etc/redhat-release
    2. Check which version of the redhat-release package is available:

      # dnf --refresh info --available redhat-release

      This command shows any available updates. For example, when upgrading from Enterprise Linux 8.2.z to 8.3, compare the version of the package with the currently installed version:

      Available Packages
      Name         : redhat-release
      Version      : 8.3
      Release      : 1.0.el8
      …​

      The Enterprise Linux Advanced Virtualization module is usually released later than the Enterprise Linux y-stream. If no new Advanced Virtualization module is available yet, or if there is an error enabling it, stop here and cancel the upgrade. Otherwise you risk corrupting the host.

    3. If the Advanced Virtualization stream is available for Enterprise Linux 8.3 or later, reset the virt module:

      # dnf module reset virt

      If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact.

      You can see the value of the stream by entering:

      # dnf module list virt
    4. Enable the virt module in the Advanced Virtualization stream with the following command:

      • For oVirt 4.4.2:

        # dnf module enable virt:8.2
      • For oVirt 4.4.3 to 4.4.5:

        # dnf module enable virt:8.3
      • For oVirt 4.4.6 to 4.4.10:

        # dnf module enable virt:av
      • For oVirt 4.5 and later:

        # dnf module enable virt:rhel

        Starting with EL 8.6, the Advanced Virtualization packages use the standard virt:rhel module. For EL 8.4 and 8.5, only one Advanced Virtualization stream is used, virt:av.

  4. Enable version 14 of the nodejs module:

    # dnf module -y enable nodejs:14
  5. Update the host:

    # dnf upgrade --nobest
  6. Reboot the host to ensure all updates are correctly applied.

    Check the imgbased logs to see if any additional package updates have failed for an oVirt Node. If some packages were not successfully reinstalled after the update, check that the packages are listed in /var/imgbased/persisted-rpms. Add any missing packages, then run rpm -Uvh /var/imgbased/persisted-rpms/*.
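    For example, after the reboot you can confirm which packages were persisted and reinstall them in one pass:

    # ls /var/imgbased/persisted-rpms
    # rpm -Uvh /var/imgbased/persisted-rpms/*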

Repeat this process for each host in the oVirt environment.

Appendix A: Updating the Local Repository for an Offline oVirt Engine Installation

If your oVirt Engine is hosted on a machine that receives packages via FTP from a local repository, you must regularly synchronize the repository to download package updates from the Content Delivery Network, then update or upgrade that machine. Updated packages address security issues, fix bugs, and add enhancements.

  1. On the system hosting the repository, synchronize the repository to download the most recent version of each available package:

    # reposync --newest-only --download-path=/var/ftp/pub/ovirtrepo

    This command might download a large number of packages, and take a long time to complete.

  2. Ensure that the repository is available on the Engine machine, and then update or upgrade the machine.
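    For example, you can confirm on the Engine machine that the local repository is visible and check whether updates are available before running the update procedure:

    # dnf repolist
    # engine-upgrade-check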

Certain portions of this text first appeared in Red Hat Virtualization 4.4 Upgrade Guide. Copyright © 2022 Red Hat, Inc. Licensed under a Creative Commons Attribution-ShareAlike 4.0 Unported License.