- 1. Administering and Maintaining the oVirt Environment
- 1.1. Global Configuration
- 1.2. Dashboard
- 1.3. Searches
- 1.3.1. Performing Searches in oVirt
- 1.3.2. Search Syntax and Examples
- 1.3.3. Search Auto-Completion
- 1.3.4. Search Result Type Options
- 1.3.5. Search Criteria
- 1.3.6. Search: Multiple Criteria and Wildcards
- 1.3.7. Search: Determining Search Order
- 1.3.8. Searching for Data Centers
- 1.3.9. Searching for Clusters
- 1.3.10. Searching for Hosts
- 1.3.11. Searching for Networks
- 1.3.12. Searching for Storage
- 1.3.13. Searching for Disks
- 1.3.14. Searching for Volumes
- 1.3.15. Searching for Virtual Machines
- 1.3.16. Searching for Pools
- 1.3.17. Searching for Templates
- 1.3.18. Searching for Users
- 1.3.19. Searching for Events
- 1.4. Bookmarks
- 1.5. Tags
- 2. Administering the Resources
- 2.1. Quality of Service
- 2.2. Data Centers
- 2.3. Clusters
- 2.4. Logical Networks
- 2.5. Hosts
- 2.6. Storage
- 2.6.1. About oVirt storage
- 2.6.2. Understanding Storage Domains
- 2.6.3. Preparing and Adding NFS Storage
- 2.6.4. Preparing and adding local storage
- 2.6.5. Preparing and Adding POSIX-compliant File System Storage
- 2.6.6. Preparing and Adding Block Storage
- 2.6.7. Preparing and Adding Gluster Storage
- 2.6.8. Importing Existing Storage Domains
- 2.6.9. Storage Tasks
- 2.7. Pools
- 2.8. Virtual Disks
- 2.9. External Providers
- 3. Administering the Environment
- 3.1. Administering the Self-Hosted Engine
- 3.1.1. Maintaining the Self-hosted engine
- 3.1.2. Administering the Engine Virtual Machine
- 3.1.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts
- 3.1.4. Adding Self-Hosted Engine Nodes to the oVirt Engine
- 3.1.5. Reinstalling an Existing Host as a Self-Hosted Engine Node
- 3.1.6. Booting the Engine Virtual Machine in Rescue Mode
- 3.1.7. Removing a Host from a Self-Hosted Engine Environment
- 3.1.8. Updating a Self-Hosted Engine
- 3.1.9. Changing the FQDN of the Engine in a Self-Hosted Engine
- 3.2. Backups and Migration
- 3.2.1. Backing Up and Restoring the oVirt Engine
- 3.2.2. Migrating the Data Warehouse to a Separate Machine
- 3.2.3. Backing Up and Restoring Virtual Machines Using a Backup Storage Domain
- 3.2.4. Backing Up and Restoring Virtual Machines Using the Backup and Restore API
- 3.2.5. Backing Up and Restoring Virtual Machines Using the Incremental Backup and Restore API
- 3.3. Setting up errata viewing with Red Hat Satellite
- 3.4. Renewing certificates before they expire
- 3.5. Automating Configuration Tasks using Ansible
- 3.6. Users and Roles
- 3.6.1. Introduction to Users
- 3.6.2. Introduction to Directory Servers
- 3.6.3. Configuring an External LDAP Provider
- 3.6.4. Configuring LDAP and Kerberos for Single Sign-on
- 3.6.5. Installing and Configuring Red Hat Single Sign-On
- 3.6.6. User Authorization
- 3.6.7. Administering User Tasks From the Administration Portal
- 3.6.8. Administering User Tasks From the Command Line
- 3.6.9. Configuring Additional Local Domains
- 3.7. Quotas and Service Level Agreement Policy
- 3.7.1. Introduction to Quota
- 3.7.2. Shared Quota and Individually Defined Quota
- 3.7.3. Quota Accounting
- 3.7.4. Enabling and Changing a Quota Mode in a Data Center
- 3.7.5. Creating a New Quota Policy
- 3.7.6. Explanation of Quota Threshold Settings
- 3.7.7. Assigning a Quota to an Object
- 3.7.8. Using Quota to Limit Resources by User
- 3.7.9. Editing Quotas
- 3.7.10. Removing Quotas
- 3.7.11. Service Level Agreement Policy Enforcement
- 3.8. Event Notifications
- 3.9. Utilities
- 4. Gathering Information About the Environment
- 4.1. Monitoring and observability
- 4.2. Log Files
- 4.2.1. Engine Installation Log Files
- 4.2.2. oVirt Engine Log Files
- 4.2.3. SPICE Log Files
- 4.2.4. Host Log Files
- 4.2.5. Setting debug-level logging for oVirt services
- 4.2.6. Main configuration files for oVirt services
- 4.2.7. Setting Up a Host Logging Server
- 4.2.8. Enabling SyslogHandler to pass oVirt Engine logs to a remote syslog server
- Appendix A: VDSM Service and Hooks
- Installing a VDSM hook
- Supported VDSM Events
- The VDSM Hook Environment
- The VDSM Hook Domain XML Object
- Defining Custom Properties
- Setting Virtual Machine Custom Properties
- Evaluating Virtual Machine Custom Properties in a VDSM Hook
- Using the VDSM Hooking Module
- VDSM Hook Execution
- VDSM Hook Return Codes
- VDSM Hook Examples
- Appendix B: Custom Network Properties
- Appendix C: oVirt User Interface Plugins
- Appendix D: oVirt and encrypted communication
- Appendix E: Branding
- Appendix F: System Accounts
- Appendix G: Legal notice
Administration Guide
1. Administering and Maintaining the oVirt Environment
The oVirt environment requires an administrator to keep it running. As an administrator, your tasks include:
- Managing physical and virtual resources such as hosts and virtual machines. This includes upgrading and adding hosts, importing domains, converting virtual machines created on foreign hypervisors, and managing virtual machine pools.
- Monitoring the overall system resources for potential problems such as extreme load on one of the hosts, insufficient memory or disk space, and taking any necessary actions (such as migrating virtual machines to other hosts to lessen the load or freeing resources by shutting down machines).
- Responding to the new requirements of virtual machines (for example, upgrading the operating system or allocating more memory).
- Managing customized object properties using tags.
- Managing searches saved as public bookmarks.
- Managing user setup and setting permission levels.
- Troubleshooting for specific users or virtual machines, and for overall system functionality.
- Generating general and specific reports.
1.1. Global Configuration
The Configure window, accessed by clicking Administration → Configure in the header bar, allows you to configure a number of global resources for your oVirt environment, such as users, roles, system permissions, scheduling policies, instance types, and MAC address pools. This window allows you to customize the way in which users interact with resources in the environment, and provides a central location for configuring options that can be applied to multiple clusters.

1.1.1. Roles
Roles are predefined sets of privileges that can be configured from oVirt Engine. Roles provide access and management permissions to different levels of resources in the data center, and to specific physical and virtual resources.
With multilevel administration, any permissions which apply to a container object also apply to all individual objects within that container. For example, when a host administrator role is assigned to a user on a specific host, the user gains permissions to perform any of the available host operations, but only on the assigned host. However, if the host administrator role is assigned to a user on a data center, the user gains permissions to perform host operations on all hosts within the data center.
Creating a New Role
If the role you require is not on oVirt’s default list of roles, you can create a new role and customize it to suit your purposes.
- Click Administration → Configure. This opens the Configure window. The Roles tab is selected by default, showing a list of default User and Administrator roles, and any custom roles.
- Click New.
- Enter the Name and Description of the new role.
- Select either Admin or User as the Account Type.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects in the Check Boxes to Allow Action list. You can also expand or collapse the options for each object.
- For each of the objects, select or clear the actions you want to permit or deny for the role you are setting up.
- Click OK to apply the changes. The new role displays on the list of roles.
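Roles can also be managed from automation rather than the Configure window. The following is an illustrative sketch only, using the ovirt_role module from the ovirt.ovirt Ansible collection; the role name and permit list are hypothetical examples, and the play assumes an SSO token has already been obtained with ovirt_auth:

- name: Create a custom role (illustrative example)
  ovirt.ovirt.ovirt_role:
    auth: "{{ ovirt_auth }}"      # token from a previous ovirt_auth task
    name: MyCustomRole            # hypothetical role name
    administrative: false         # false = User account type, true = Admin
    permits:                      # example permits; valid names are listed by the Engine API
      - login
      - create_vm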
Editing or Copying a Role
You can change the settings for roles you have created, but you cannot change default roles. To change default roles, clone and modify them to suit your requirements.
- Click Administration → Configure. This opens the Configure window, which shows a list of default User and Administrator roles, as well as any custom roles.
- Select the role you wish to change.
- Click Edit or Copy. This opens the Edit Role or Copy Role window.
- If necessary, edit the Name and Description of the role.
- Use the Expand All or Collapse All buttons to view more or fewer of the permissions for the listed objects. You can also expand or collapse the options for each object.
- For each of the objects, select or clear the actions you wish to permit or deny for the role you are editing.
- Click OK to apply the changes you have made.
User Role and Authorization Examples
The following examples illustrate how to apply authorization controls for various scenarios, using the different features of the authorization system described in this chapter.
Sarah is the system administrator for the accounts department of a company. All the virtual resources for her department are organized under an oVirt cluster called Accounts. She is assigned the ClusterAdmin role on the Accounts cluster. This enables her to manage all virtual machines in the cluster, since the virtual machines are child objects of the cluster. Managing the virtual machines includes editing, adding, or removing virtual resources such as disks, and taking snapshots. It does not allow her to manage any resources outside this cluster. Because ClusterAdmin is an administrator role, it allows her to use the Administration Portal or the VM Portal to manage these resources.
John is a software developer in the accounts department. He uses virtual machines to build and test his software. Sarah has created a virtual desktop called johndesktop for him. John is assigned the UserVmManager role on the johndesktop virtual machine. This allows him to access this single virtual machine using the VM Portal. Because he has UserVmManager permissions, he can modify the virtual machine. Because UserVmManager is a user role, it does not allow him to use the Administration Portal.
Penelope is an office manager. In addition to her own responsibilities, she occasionally helps the HR manager with recruitment tasks, such as scheduling interviews and following up on reference checks. As per corporate policy, Penelope needs to use a particular application for recruitment tasks.
While Penelope has her own machine for office management tasks, she wants to create a separate virtual machine to run the recruitment application. She is assigned PowerUserRole permissions for the data center in which her new virtual machine will reside. This is because to create a new virtual machine, she needs to make changes to several components within the data center, including creating the virtual disk in the storage domain.
Note that this is not the same as assigning DataCenterAdmin privileges to Penelope. As a PowerUser for a data center, Penelope can log in to the VM Portal and perform virtual machine-specific actions on virtual machines within the data center. She cannot perform data center-level operations such as attaching hosts or storage to a data center.
Chris works as the network administrator in the IT department. Her day-to-day responsibilities include creating, manipulating, and removing networks in the department’s oVirt environment. For her role, she requires administrative privileges on the resources and on the networks of each resource. For example, if Chris has NetworkAdmin privileges on the IT department’s data center, she can add and remove networks in the data center, and attach and detach networks for all virtual machines belonging to the data center.
Rachel works in the IT department, and is responsible for managing user accounts in oVirt. She needs permission to add user accounts and assign them the appropriate roles and permissions. She does not use any virtual machines herself, and should not have access to administration of hosts, virtual machines, clusters or data centers. There is no built-in role which provides her with this specific set of permissions. A custom role must be created to define the set of permissions appropriate to Rachel’s position.

The UserManager custom role shown above allows manipulation of users, permissions and roles. These actions are organized under System - the top level object of the hierarchy shown in Object Hierarchy. This means they apply to all other objects in the system. The role is set to have an Account Type of Admin. This means that when she is assigned this role, Rachel can use both the Administration Portal and the VM Portal.
1.1.2. System Permissions
Permissions enable users to perform actions on objects, where objects are either individual objects or container objects. Any permissions that apply to a container object also apply to all members of that container.


User Properties
Roles and permissions are the properties of the user. Roles are predefined sets of privileges that permit access to different levels of physical and virtual resources. Multilevel administration provides a finely grained hierarchy of permissions. For example, a data center administrator has permissions to manage all objects in the data center, while a host administrator has system administrator permissions to a single physical host. A user can have permissions to use a single virtual machine but not make any changes to the virtual machine configurations, while another user can be assigned system permissions to a virtual machine.
User and Administrator Roles
oVirt provides a range of pre-configured roles, from an administrator with system-wide permissions to an end user with access to a single virtual machine. While you cannot change or remove the default roles, you can clone and customize them, or create new roles according to your requirements. There are two types of roles:
- Administrator Role: Allows access to the Administration Portal for managing physical and virtual resources. An administrator role confers permissions for actions to be performed in the VM Portal; however, it has no bearing on what a user can see in the VM Portal.
- User Role: Allows access to the VM Portal for managing and accessing virtual machines and templates. A user role determines what a user can see in the VM Portal. Permissions granted to a user with an administrator role are reflected in the actions available to that user in the VM Portal.
User Roles Explained
The table below describes basic user roles which confer permissions to access and configure virtual machines in the VM Portal.
Role | Privileges | Notes |
---|---|---|
UserRole | Can access and use virtual machines and pools. | Can log in to the VM Portal, use assigned virtual machines and pools, view virtual machine state and details. |
PowerUserRole | Can create and manage virtual machines and templates. | Apply this role to a user for the whole environment with the Configure window, or for specific data centers or clusters. For example, if a PowerUserRole is applied on a data center level, the PowerUser can create virtual machines and templates in the data center. |
UserVmManager | System administrator of a virtual machine. | Can manage virtual machines and create and use snapshots. A user who creates a virtual machine in the VM Portal is automatically assigned the UserVmManager role on the machine. |
The table below describes advanced user roles which allow you to do more fine tuning of permissions for resources in the VM Portal.
Role | Privileges | Notes |
---|---|---|
UserTemplateBasedVm | Limited privileges to only use Templates. | Can use templates to create virtual machines. |
DiskOperator | Virtual disk user. | Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. |
VmCreator | Can create virtual machines in the VM Portal. | This role is not applied to a specific virtual machine; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or clusters. When applying this role to a cluster, you must also apply the DiskCreator role on an entire data center, or on specific storage domains. |
TemplateCreator | Can create, edit, manage and remove virtual machine templates within assigned resources. | This role is not applied to a specific template; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. |
DiskCreator | Can create, edit, manage and remove virtual disks within assigned clusters or data centers. | This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers or storage domains. |
TemplateOwner | Can edit and delete the template, assign and manage user permissions for the template. | This role is automatically assigned to the user who creates a template. Other users who do not have TemplateOwner permissions on a template cannot view or use the template. |
VnicProfileUser | Logical network and network interface user for virtual machine and template. | Can attach or detach network interfaces from specific logical networks. |
Administrator Roles Explained
The table below describes basic administrator roles which confer permissions to access and configure resources in the Administration Portal.
Role | Privileges | Notes |
---|---|---|
SuperUser | System Administrator of the oVirt environment. | Has full permissions across all objects and levels, and can manage all objects across all data centers. |
ClusterAdmin | Cluster Administrator. | Possesses administrative permissions for all objects underneath a specific cluster. |
DataCenterAdmin | Data Center Administrator. | Possesses administrative permissions for all objects underneath a specific data center except for storage. |

Do not use the administrative user for the directory server as the oVirt administrative user. Create a user in the directory server specifically for use as the oVirt administrative user.
The table below describes advanced administrator roles which allow you to do more fine tuning of permissions for resources in the Administration Portal.
Role | Privileges | Notes |
---|---|---|
TemplateAdmin | Administrator of a virtual machine template. | Can create, delete, and configure the storage domains and network details of templates, and move templates between domains. |
StorageAdmin | Storage Administrator. | Can create, delete, configure, and manage an assigned storage domain. |
HostAdmin | Host Administrator. | Can attach, remove, configure, and manage a specific host. |
NetworkAdmin | Network Administrator. | Can configure and manage the network of a particular data center or cluster. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. |
VmPoolAdmin | System Administrator of a virtual pool. | Can create, delete, and configure a virtual pool; assign and remove virtual pool users; and perform basic operations on a virtual machine in the pool. |
GlusterAdmin | Gluster Storage Administrator. | Can create, delete, configure, and manage Gluster storage volumes. |
VmImporterExporter | Import and export Administrator of a virtual machine. | Can import and export virtual machines. Able to view all virtual machines and templates exported by other users. |
Assigning an Administrator or User Role to a Resource
Assign administrator or user roles to resources to allow users to access or manage that resource.
- Find and click the resource’s name. This opens the details view.
- Click the Permissions tab to list the assigned users, each user’s role, and the inherited permissions for the selected resource.
- Click Add.
- Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
- Select a role from the Role to Assign drop-down list.
- Click OK.
The user now has the inherited permissions of that role enabled for that resource.
Avoid assigning global permissions to regular users on resources such as clusters, because permissions are automatically inherited by resources that are lower in a system’s hierarchy: a role assigned on a container object is inherited by every object within that container, including objects belonging to other users. Therefore, it is strongly recommended to set role permissions on the most specific resources required, rather than on container objects.
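If you automate permission management, the same assignment can be expressed with the ovirt_permission module from the ovirt.ovirt Ansible collection. This is a minimal sketch under assumed names (user1, the internal-authz domain, and the Default cluster are examples), with an SSO token already obtained by ovirt_auth:

- name: Assign UserVmManager to user1 on the Default cluster (illustrative example)
  ovirt.ovirt.ovirt_permission:
    auth: "{{ ovirt_auth }}"      # token from a previous ovirt_auth task
    state: present
    user_name: user1              # example user name
    authz_name: internal-authz    # example authorization provider
    object_type: cluster
    object_name: Default          # example cluster name
    role: UserVmManager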
Removing an Administrator or User Role from a Resource
Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.
- Find and click the resource’s name. This opens the details view.
- Click the Permissions tab to list the assigned users, the user’s role, and the inherited permissions for the selected resource.
- Select the user to remove from the resource.
- Click Remove.
- Click OK.
Managing System Permissions for a Data Center
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment.
The data center administrator role permits the following actions:
- Create and remove clusters associated with the data center.
- Add and remove hosts, virtual machines, and pools associated with the data center.
- Edit user permissions for virtual machines associated with the data center.

You can only assign roles and permissions to existing users.
You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator.
Data Center Administrator Roles Explained
Data Center Permission Roles
The table below describes the administrator roles and privileges applicable to data center administration.
Role | Privileges | Notes |
---|---|---|
DataCenterAdmin | Data Center Administrator | Can use, create, delete, manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates and virtual machines. |
NetworkAdmin | Network Administrator | Can configure and manage the network of a particular data center. A network administrator of a data center inherits network permissions for virtual machines within the data center as well. |
Managing System Permissions for a Cluster
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is a hierarchical model: a user assigned the cluster administrator role for a cluster can manage all objects in the cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment.
The cluster administrator role permits the following actions:
- Create and remove associated clusters.
- Add and remove hosts, virtual machines, and pools associated with the cluster.
- Edit user permissions for virtual machines associated with the cluster.

You can only assign roles and permissions to existing users.
You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator.
Cluster Administrator Roles Explained
Cluster Permission Roles
The table below describes the administrator roles and privileges applicable to cluster administration.
Role | Privileges | Notes |
---|---|---|
ClusterAdmin | Cluster Administrator | Can use, create, delete, manage all physical and virtual resources in a specific cluster, including hosts, templates and virtual machines. Can configure network properties within the cluster such as designating display networks, or marking a network as required or non-required. However, a ClusterAdmin does not have permissions to attach or detach networks from a cluster; to do so, NetworkAdmin permissions are required. |
NetworkAdmin | Network Administrator | Can configure and manage the network of a particular cluster. A network administrator of a cluster inherits network permissions for virtual machines within the cluster as well. |
Managing System Permissions for a Network
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A network administrator is a system administration role that can be applied for a specific network, or for all networks on a data center, cluster, host, virtual machine, or template. A network user can perform limited administration roles, such as viewing and attaching networks on a specific virtual machine or template. You can use the Configure button in the header bar to assign a network administrator for all networks in the environment.
The network administrator role permits the following actions:
- Create, edit and remove networks.
- Edit the configuration of the network, including configuring port mirroring.
- Attach and detach networks from resources including clusters and virtual machines.
The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. You can also change the administrator of a network by removing the existing administrator and adding the new administrator.
Network Administrator and User Roles Explained
Network Permission Roles
The table below describes the administrator and user roles and privileges applicable to network administration.
Role | Privileges | Notes |
---|---|---|
NetworkAdmin | Network Administrator for data center, cluster, host, virtual machine, or template. The user who creates a network is automatically assigned NetworkAdmin permissions on the created network. | Can configure and manage the network of a particular data center, cluster, host, virtual machine, or template. A network administrator of a data center or cluster inherits network permissions for virtual pools within the cluster. To configure port mirroring on a virtual machine network, apply the NetworkAdmin role on the network and the UserVmManager role on the virtual machine. |
VnicProfileUser | Logical network and network interface user for virtual machine and template. | Can attach or detach network interfaces from specific logical networks. |
Managing System Permissions for a Host
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A host administrator is a system administration role for a specific host only. This is useful in clusters with multiple hosts, where each host requires a system administrator. You can use the Configure button in the header bar to assign a host administrator for all hosts in the environment.
The host administrator role permits the following actions:
- Edit the configuration of the host.
- Set up the logical networks.
- Remove the host.
You can also change the system administrator of a host by removing the existing system administrator and adding the new system administrator.
Host Administrator Roles Explained
Host Permission Roles
The table below describes the administrator roles and privileges applicable to host administration.
Role | Privileges | Notes |
---|---|---|
HostAdmin | Host Administrator | Can configure, manage, and remove a specific host. Can also perform network-related operations on a specific host. |
Managing System Permissions for a Storage Domain
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A storage administrator is a system administration role for a specific storage domain only. This is useful in data centers with multiple storage domains, where each storage domain requires a system administrator. Use the Configure button in the header bar to assign a storage administrator for all storage domains in the environment.
The storage domain administrator role permits the following actions:
- Edit the configuration of the storage domain.
- Move the storage domain into maintenance mode.
- Remove the storage domain.

You can only assign roles and permissions to existing users.
You can also change the system administrator of a storage domain by removing the existing system administrator and adding the new system administrator.
Storage Administrator Roles Explained
Storage Domain Permission Roles
The table below describes the administrator roles and privileges applicable to storage domain administration.
Role | Privileges | Notes |
---|---|---|
StorageAdmin | Storage Administrator | Can create, delete, configure and manage a specific storage domain. |
GlusterAdmin | Gluster Storage Administrator | Can create, delete, configure and manage Gluster storage volumes. |
Managing System Permissions for a Virtual Machine Pool
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources.
The virtual machine pool administrator role permits the following actions:
- Create, edit, and remove pools.
- Add and detach virtual machines from the pool.

You can only assign roles and permissions to existing users.
Virtual Machine Pool Administrator Roles Explained
Pool Permission Roles
The table below describes the administrator roles and privileges applicable to pool administration.
Role | Privileges | Notes |
---|---|---|
VmPoolAdmin | System Administrator role of a virtual pool. | Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine. |
ClusterAdmin | Cluster Administrator | Can use, create, delete, manage all virtual machine pools in a specific cluster. |
Managing System Permissions for a Virtual Disk
As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.
oVirt Engine provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the VM Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources.
The virtual disk creator role permits the following actions:
- Create, edit, and remove virtual disks associated with a virtual machine or other resources.
- Edit user permissions for virtual disks.

You can only assign roles and permissions to existing users.
Virtual Disk User Roles Explained
Virtual Disk User Permission Roles
The table below describes the user roles and privileges applicable to using and administrating virtual disks in the VM Portal.
Role | Privileges | Notes |
---|---|---|
DiskOperator | Virtual disk user. | Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached. |
DiskCreator | Can create, edit, manage and remove virtual disks within assigned clusters or data centers. | This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains. |
Setting a Legacy SPICE Cipher
SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is:
kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL
This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine.
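To check which ciphers a given string actually enables, you can expand it on a host with the standard openssl ciphers command. This is a quick verification aid, not part of the documented procedure below:

# openssl ciphers -v 'kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL'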
You can change the cipher string by using an Ansible playbook.
Changing the cipher string
- On the Engine machine, create a file in the directory /usr/share/ovirt-engine/playbooks. For example:

  # vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
- Enter the following in the file and save it:

  - name: oVirt - setup weaker SPICE encryption for old clients
    hosts: hostname
    vars:
      host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES'
    roles:
      - ovirt-host-deploy-spice-encryption
- Run the file you just created:

  # ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy, using the --extra-vars option with the variable host_deploy_spice_cipher_string:

# ansible-playbook -l hostname \
  --extra-vars host_deploy_spice_cipher_string="DEFAULT:-RC4:-3DES:-DES" \
  /usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml
1.1.3. Scheduling Policies
A scheduling policy is a set of rules that defines the logic by which virtual machines are distributed among the hosts in the cluster to which that scheduling policy is applied. Scheduling policies determine this logic through a combination of filters, weightings, and a load balancing policy. The filter modules apply hard enforcement and filter out hosts that do not meet the conditions specified by that filter. The weights modules apply soft enforcement, and are used to control the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run.
The oVirt Engine provides five default scheduling policies: Evenly_Distributed, Cluster_Maintenance, None, Power_Saving, and VM_Evenly_Distributed. You can also define new scheduling policies that provide fine-grained control over the distribution of virtual machines. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host’s CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information about the properties of each scheduling policy.

The Evenly_Distributed scheduling policy distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes, HighUtilization, VCpuToPhysicalCpuRatio, or MaxFreeMemoryForOverUtilized.
The VM_Evenly_Distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold.

The Power_Saving scheduling policy distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts whose CPU load has been below the low utilization value for longer than the defined time interval migrate all their virtual machines to other hosts so that they can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.
Set the None policy to have no load or power sharing between hosts for running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes, HighUtilization, or MaxFreeMemoryForOverUtilized.
The Cluster_Maintenance scheduling policy limits activity in a cluster during maintenance tasks. When the Cluster_Maintenance policy is set, no new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate.
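A scheduling policy takes effect when it is assigned to a cluster. If you manage clusters with Ansible, the assignment can be sketched with the ovirt_cluster module from the ovirt.ovirt collection; the cluster and data center names here are examples, and the play assumes an SSO token from ovirt_auth:

- name: Apply the Power_Saving policy to a cluster (illustrative example)
  ovirt.ovirt.ovirt_cluster:
    auth: "{{ ovirt_auth }}"     # token from a previous ovirt_auth task
    name: production             # example cluster name
    data_center: Default         # example data center
    scheduling_policy: power_saving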
Creating a Scheduling Policy
You can create new scheduling policies to control the logic by which virtual machines are distributed amongst a given cluster in your oVirt environment.
- Click Administration → Configure.
- Click the Scheduling Policies tab.
- Click New.
- Enter a Name and Description for the scheduling policy.
- Configure filter modules:
  - In the Filter Modules section, drag and drop the preferred filter modules to apply to the scheduling policy from the Disabled Filters section into the Enabled Filters section.
  - Specific filter modules can also be set as the First, to be given highest priority, or Last, to be given lowest priority, for basic optimization. To set the priority, right-click any filter module, hover the cursor over Position and select First or Last.
- Configure weight modules:
  - In the Weights Modules section, drag and drop the preferred weights modules to apply to the scheduling policy from the Disabled Weights section into the Enabled Weights & Factors section.
  - Use the + and - buttons to the left of the enabled weight modules to increase or decrease the weight of those modules.
- Specify a load balancing policy:
  - From the drop-down menu in the Load Balancer section, select the load balancing policy to apply to the scheduling policy.
  - From the drop-down menu in the Properties section, select a load balancing property to apply to the scheduling policy and use the text field to the right of that property to specify a value.
  - Use the + and - buttons to add or remove additional properties.
- Click OK.
Explanation of Settings in the New Scheduling Policy and Edit Scheduling Policy Window
The following table details the options available in the New Scheduling Policy and Edit Scheduling Policy windows.
Field Name | Description |
---|---|
Name | The name of the scheduling policy. This is the name used to refer to the scheduling policy in the oVirt Engine. |
Description | A description of the scheduling policy. This field is recommended but not mandatory. |
Filter Modules | A set of filters for controlling the hosts on which a virtual machine in a cluster can run. Enabling a filter will filter out hosts that do not meet the conditions specified by that filter. |
Weights Modules | A set of weightings for controlling the relative priority of factors considered when determining the hosts in a cluster on which a virtual machine can run. |
Load Balancer | This drop-down menu allows you to select a load balancing module to apply. Load balancing modules determine the logic used to migrate virtual machines from hosts experiencing high usage to hosts experiencing lower usage. |
Properties | This drop-down menu allows you to add or remove properties for load balancing modules, and is only available when you have selected a load balancing module for the scheduling policy. No properties are defined by default, and the properties that are available are specific to the load balancing module that is selected. Use the + and - buttons to add or remove additional properties to or from the load balancing module. |
1.1.4. Instance Types
Instance types can be used to define the hardware configuration of a virtual machine. Selecting an instance type when creating or editing a virtual machine will automatically fill in the hardware configuration fields. This allows users to create multiple virtual machines with the same hardware configuration without having to manually fill in every field.
Support for instance types is now deprecated, and will be removed in a future release.
A set of predefined instance types are available by default, as outlined in the following table:
Name | Memory | vCPUs |
---|---|---|
Tiny | 512 MB | 1 |
Small | 2 GB | 1 |
Medium | 4 GB | 2 |
Large | 8 GB | 2 |
XLarge | 16 GB | 4 |
Administrators can also create, edit, and remove instance types from the Instance Types tab of the Configure window.
Fields in the New Virtual Machine and Edit Virtual Machine windows that are bound to an instance type are marked with a chain link icon. If the value of one of these fields is changed, the virtual machine is detached from the instance type, the instance type changes to Custom, and the chain icon appears broken. However, if the value is changed back, the chain relinks and the instance type reverts to the selected one.
Creating Instance Types
Administrators can create new instance types, which can then be selected by users when creating or editing virtual machines.
- Click Administration → Configure.
- Click the Instance Types tab.
- Click New.
- Enter a Name and Description for the instance type.
- Click Show Advanced Options and configure the instance type’s settings as required. The settings that appear in the New Instance Type window are identical to those in the New Virtual Machine window, but with the relevant fields only. See Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows in the Virtual Machine Management Guide.
- Click OK.
The new instance type will appear in the Instance Types tab in the Configure window, and can be selected from the Instance Type drop-down list when creating or editing a virtual machine.
Editing Instance Types
Administrators can edit existing instance types from the Configure window.
- Click Administration → Configure.
- Click the Instance Types tab.
- Select the instance type to be edited.
- Click Edit.
- Change the settings as required.
- Click OK.
The configuration of the instance type is updated. When a new virtual machine based on this instance type is created, or when an existing virtual machine based on this instance type is updated, the new configuration is applied.
For existing virtual machines based on this instance type, the fields marked with a chain icon will be updated. If the existing virtual machines were running when the instance type was changed, the orange Pending Changes icon will appear beside them, and the fields with the chain icon will be updated at the next restart.
Removing Instance Types
- Click Administration → Configure.
- Click the Instance Types tab.
- Select the instance type to be removed.
- Click Remove.
- If any virtual machines are based on the instance type to be removed, a warning window listing the attached virtual machines will appear. To continue removing the instance type, select the Approve Operation check box. Otherwise click Cancel.
- Click OK.
The instance type is removed from the Instance Types list and can no longer be used when creating a new virtual machine. Any virtual machines that were attached to the removed instance type will now be attached to Custom (no instance type).
1.1.5. MAC Address Pools
MAC address pools define the range(s) of MAC addresses allocated for each cluster. A MAC address pool is specified for each cluster. By using MAC address pools, oVirt can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a cluster are within the range for the assigned MAC address pool.
The same MAC address pool can be shared by multiple clusters, but each cluster has a single MAC address pool assigned. A default MAC address pool is created by oVirt and is used if another MAC address pool is not assigned. For more information about assigning MAC address pools to clusters see Creating a New Cluster.
If more than one oVirt cluster shares a network, do not rely solely on the default MAC address pool because the virtual machines of each cluster will try to use the same range of MAC addresses, leading to conflicts. To avoid MAC address conflicts, check the MAC address pool ranges to ensure that each cluster is assigned a unique MAC address range.
The MAC address pool assigns the next available MAC address following the last address that was returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If there are multiple MAC address ranges with available MAC addresses defined in a single MAC address pool, the ranges take turns in serving incoming requests in the same way available MAC addresses are selected.
Creating MAC Address Pools
You can create new MAC address pools.
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Click Add.
- Enter the Name and Description of the new MAC address pool.
- Select the Allow Duplicates check box to allow a MAC address to be used multiple times in a pool. The MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address.

  If one MAC address pool has duplicates disabled, and another has duplicates enabled, each MAC address can be used once in the pool with duplicates disabled but can be used multiple times in the pool with duplicates enabled.

- Enter the required MAC Address Ranges. To enter multiple ranges click the plus button next to the From and To fields.
- Click OK.
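If you prefer automation, a MAC address pool can also be created with the ovirt_mac_pool module from the ovirt.ovirt Ansible collection. A minimal sketch, where the pool name and range are examples and an SSO token is assumed from ovirt_auth:

- name: Create a MAC address pool (illustrative example)
  ovirt.ovirt.ovirt_mac_pool:
    auth: "{{ ovirt_auth }}"     # token from a previous ovirt_auth task
    name: cluster-a-pool         # example pool name
    allow_duplicates: false
    ranges:
      - 00:1a:4a:16:01:51,00:1a:4a:16:01:61   # "from,to" pair; example range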
Editing MAC Address Pools
You can edit MAC address pools to change the details, including the range of MAC addresses available in the pool and whether duplicates are allowed.
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Select the MAC address pool to be edited.
- Click Edit.
- Change the Name, Description, Allow Duplicates, and MAC Address Ranges fields as required.

  When a MAC address range is updated, the MAC addresses of existing NICs are not reassigned. MAC addresses that were already assigned, but are outside of the new MAC address range, are added as user-specified MAC addresses and are still tracked by that MAC address pool.

- Click OK.
Editing MAC Address Pool Permissions
After a MAC address pool has been created, you can edit its user permissions. The user permissions control which data centers can use the MAC address pool. See Roles for more information on adding new user permissions.
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Select the required MAC address pool.
- Edit the user permissions for the MAC address pool:
  - To add user permissions to a MAC address pool:
    - Click Add in the user permissions pane at the bottom of the Configure window.
    - Search for and select the required users.
    - Select the required role from the Role to Assign drop-down list.
    - Click OK to add the user permissions.
  - To remove user permissions from a MAC address pool:
    - Select the user permission to be removed in the user permissions pane at the bottom of the Configure window.
    - Click Remove to remove the user permissions.
Removing MAC Address Pools
You can remove a created MAC address pool if the pool is not associated with a cluster, but the default MAC address pool cannot be removed.
- Click Administration → Configure.
- Click the MAC Address Pools tab.
- Select the MAC address pool to be removed.
- Click Remove.
- Click OK.
1.2. Dashboard
The Dashboard provides an overview of the oVirt system status by displaying a summary of oVirt’s resources and utilization. This summary can alert you to a problem and allows you to analyze the problem area.
The information in the dashboard is updated every 15 minutes by default from Data Warehouse, and every 15 seconds by default by the Engine API, or whenever the Dashboard is refreshed. The Dashboard does not refresh automatically; it is refreshed when the user navigates back to it from another page or refreshes it manually. The inventory card information is supplied by the Engine API, and the utilization information is supplied by Data Warehouse. The Dashboard is implemented as a UI plugin component, which is automatically installed and upgraded alongside the Engine.

1.2.1. Prerequisites
The Dashboard requires that Data Warehouse is installed and configured. See Installing and Configuring Data Warehouse in the Data Warehouse Guide.
1.2.2. Global Inventory
The top section of the Dashboard provides a global inventory of the oVirt resources and includes items for data centers, clusters, hosts, storage domains, virtual machines, and events. Icons show the status of each resource, and numbers show the quantity of each resource with that status.

Each item’s title shows the number of that type of resource, and the resource statuses are displayed below the title. Clicking a resource title navigates to the related page in the oVirt Engine. The status for Clusters is always displayed as N/A.
Icon | Status |
---|---|
(no status icon) | None of that resource is added to oVirt. |
(warning icon) | Shows the number of a resource with a warning status. Clicking on the icon navigates to the appropriate page with the search limited to that resource with a warning status. The search is limited differently for each resource. |
(up icon) | Shows the number of a resource with an up status. Clicking on the icon navigates to the appropriate page with the search limited to resources that are up. |
(down icon) | Shows the number of a resource with a down status. Clicking on the icon navigates to the appropriate page with the search limited to resources with a down status. The search is limited differently for each resource. |
 | Shows the number of events with an alert status. Clicking on the icon navigates to Events with the search limited to events with the severity of alert. |
 | Shows the number of events with an error status. Clicking on the icon navigates to Events with the search limited to events with the severity of error. |
1.2.3. Global Utilization
The Global Utilization section shows the system utilization of the CPU, Memory and Storage.

- The top section shows the percentage of the available CPU, memory or storage and the over commit ratio. For example, the over commit ratio for the CPU is calculated by dividing the number of virtual cores by the number of physical cores that are available for the running virtual machines based on the latest data in Data Warehouse.
- The donut displays the usage in percentage for the CPU, memory or storage and shows the average usage for all hosts based on the average usage in the last 5 minutes. Hovering over a section of the donut will display the value of the selected section.
- The line graph at the bottom displays the trend in the last 24 hours. Each data point shows the average usage for a specific hour. Hovering over a point on the graph displays the time and the percentage used for the CPU graph and the amount of usage for the memory and storage graphs.
Top Utilized Resources

Clicking the donut in the global utilization section of the Dashboard will display a list of the top utilized resources for the CPU, memory or storage. For CPU and memory the pop-up shows a list of the ten hosts and virtual machines with the highest usage. For storage the pop-up shows a list of the top ten utilized storage domains and virtual machines. The arrow to the right of the usage bar shows the trend of usage for that resource in the last minute.
1.2.4. Cluster Utilization
The Cluster Utilization section shows the cluster utilization for the CPU and memory in a heatmap.

CPU
The heatmap of the CPU utilization for a specific cluster shows the average utilization of the CPU for the last 24 hours. Hovering over the heatmap displays the cluster name. Clicking on the heatmap navigates to Compute → Hosts and displays the results of a search on a specific cluster sorted by CPU utilization. The formula used to calculate the usage of the CPU by the cluster is the average host CPU utilization in the cluster. This is calculated by using the average host CPU utilization for each host over the last 24 hours to find the total average usage of the CPU by the cluster.

Memory

The heatmap of the memory utilization for a specific cluster shows the average utilization of the memory for the last 24 hours. Hovering over the heatmap displays the cluster name. Clicking on the heatmap navigates to Compute → Hosts and displays the results of a search on a specific cluster sorted by memory usage. The formula used to calculate the memory usage by the cluster is the total utilization of the memory in the cluster in GB. This is calculated by using the average host memory utilization for each host over the last 24 hours to find the total average usage of memory by the cluster.

1.2.5. Storage Utilization
The Storage Utilization section shows the storage utilization in a heatmap.

The heatmap shows the average utilization of the storage for the last 24 hours. The formula used to calculate the storage usage by the cluster is the total utilization of the storage in the cluster. This is calculated by using the average storage utilization for each host over the last 24 hours to find the total average usage of the storage by the cluster. Hovering over the heatmap displays the storage domain name. Clicking on the heatmap navigates to Storage → Domains with the storage domains sorted by utilization.

1.3. Searches
1.3.1. Performing Searches in oVirt
The Administration Portal allows you to manage thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) into the search bar, available on the main page for each resource. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are required. Searches are not case sensitive.
1.3.2. Search Syntax and Examples
The syntax of the search queries for oVirt resources is as follows:
result type: {criteria} [sortby sort_spec]
Syntax Examples
The following examples describe how the search query is used and help you to understand how oVirt assists with building search queries.
Example | Result |
---|---|
Hosts: Vms.status = up page 2 | Displays page 2 of a list of all hosts running virtual machines that are up. |
Vms: domain = qa.company.com | Displays a list of all virtual machines running on the specified domain. |
Vms: users.name = Mary | Displays a list of all virtual machines belonging to users with the user name Mary. |
Events: severity > normal sortby time | Displays the list of all events whose severity is higher than Normal, sorted by time. |
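These queries can also be run programmatically: the REST API and the SDKs accept the same search syntax. The following is a minimal sketch using the oVirt Python SDK (ovirtsdk4); the Engine URL, credentials, and CA file are placeholder assumptions, not values from this guide.
import ovirtsdk4 as sdk

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# The 'search' parameter accepts the same query syntax as the search bar.
vms_service = connection.system_service().vms_service()
for vm in vms_service.list(search='status = up sortby name'):
    print(vm.name)

connection.close()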
1.3.3. Search Auto-Completion
The Administration Portal provides auto-completion to help you create valid and powerful search queries. As you type each part of a search query, a drop-down list of choices for the next part of the search opens below the Search Bar. You can either select from the list and then continue typing/selecting the next part of the search, or ignore the options and continue entering your query manually.
The following table specifies by example how the Administration Portal auto-completion assists in constructing a query:
Hosts: Vms.status = down
Input | List Items Displayed | Action |
---|---|---|
h | Hosts | Select Hosts |
Hosts: | All host properties | Type v |
Hosts: v | Host properties starting with v | Select Vms |
Hosts: Vms | All virtual machine properties | Type s |
Hosts: Vms.s | All virtual machine properties beginning with s | Select status |
Hosts: Vms.status | = or != | Select or type = |
Hosts: Vms.status = | All status values | Select or type down |
1.3.4. Search Result Type Options
The result type allows you to search for resources of any of the following types:
-
Vms for a list of virtual machines
-
Host for a list of hosts
-
Pools for a list of pools
-
Template for a list of templates
-
Events for a list of events
-
Users for a list of users
-
Cluster for a list of clusters
-
DataCenter for a list of data centers
-
Storage for a list of storage domains
As each type of resource has a unique set of properties and a set of other resource types that it is associated with, each search type has a set of valid syntax combinations. You can also use the auto-complete feature to create valid queries easily.
1.3.5. Search Criteria
You can specify the search criteria after the colon in the query. The syntax of {criteria} is as follows:
<prop><operator><value>
or
<obj-type><prop><operator><value>
The following table describes the parts of the syntax:
Part | Description | Values | Example | Note |
---|---|---|---|---|
prop | The property of the searched-for resource. Can also be the property of a resource type (see obj-type). | Limit your search to objects with a certain property. For example, search for objects with a status property. | Status | N/A |
obj-type | A resource type that can be associated with the searched-for resource. | These are system objects, like data centers and virtual machines. | Users | N/A |
operator | Comparison operators. | = != (not equal) > < >= <= | N/A | Value options depend on property. |
Value | What the expression is being compared to. | String, Integer, Ranking, Date (formatted according to Regional Settings) | Jones, 256, normal | Wildcards can be used within strings. "" (two sets of quotation marks with no space between them) can be used to represent an uninitialized (empty) string. Double quotes should be used around a string or date containing spaces. |
1.3.6. Search: Multiple Criteria and Wildcards
Wildcards can be used in the <value> part of the syntax for strings. For example, to find all users beginning with m, enter m*.
You can perform a search having two criteria by using the Boolean operators AND and OR. For example:
Vms: users.name = m* AND status = Up
This query returns all running virtual machines for users whose names begin with "m".
Vms: users.name = m* AND tag = "paris-loc"
This query returns all virtual machines tagged with "paris-loc" for users whose names begin with "m".
When two criteria are specified without AND or OR, AND is implied. AND precedes OR, and OR precedes implied AND.
1.3.7. Search: Determining Search Order
You can determine the sort order of the returned information by using sortby. Sort direction (asc for ascending, desc for descending) can be included.
For example:
events: severity > normal sortby time desc
This query returns all Events whose severity is higher than Normal, sorted by time (descending order).
1.3.8. Searching for Data Centers
The following table describes all search options for Data Centers.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Clusters.clusters-prop | Depends on property type | The property of the clusters associated with the data center. |
name | String | The name of the data center. |
description | String | A description of the data center. |
type | String | The type of data center. |
status | List | The availability of the data center. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Datacenter: type = nfs and status != up
This example returns a list of data centers with a storage type of NFS and status other than up.
1.3.9. Searching for Clusters
The following table describes all search options for clusters.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Datacenter.datacenter-prop | Depends on property type | The property of the data center associated with the cluster. |
Datacenter | String | The data center to which the cluster belongs. |
name | String | The unique name that identifies the clusters on the network. |
description | String | The description of the cluster. |
initialized | String | True or False indicating the status of the cluster. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Clusters: initialized = true or name = Default
This example returns a list of clusters which are initialized or named Default.
1.3.10. Searching for Hosts
The following table describes all search options for hosts.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Vms.Vms-prop | Depends on property type | The property of the virtual machines associated with the host. |
Templates.templates-prop | Depends on property type | The property of the templates associated with the host. |
Events.events-prop | Depends on property type | The property of the events associated with the host. |
Users.users-prop | Depends on property type | The property of the users associated with the host. |
name | String | The name of the host. |
status | List | The availability of the host. |
external_status | String | The health status of the host as reported by external systems and plug-ins. |
cluster | String | The cluster to which the host belongs. |
address | String | The unique name that identifies the host on the network. |
cpu_usage | Integer | The percent of processing power used. |
mem_usage | Integer | The percentage of memory used. |
network_usage | Integer | The percentage of network usage. |
load | Integer | Jobs waiting to be executed in the run-queue per processor, in a given time slice. |
version | Integer | The version number of the operating system. |
cpus | Integer | The number of CPUs on the host. |
memory | Integer | The amount of memory available. |
cpu_speed | Integer | The processing speed of the CPU. |
cpu_model | String | The type of CPU. |
active_vms | Integer | The number of virtual machines currently running. |
migrating_vms | Integer | The number of virtual machines currently being migrated. |
committed_mem | Integer | The percentage of committed memory. |
tag | String | The tag assigned to the host. |
type | String | The type of host. |
datacenter | String | The data center to which the host belongs. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Hosts: cluster = Default and Vms.os = rhel6
This example returns a list of hosts which are part of the Default cluster and host virtual machines running the Enterprise Linux 6 operating system.
1.3.11. Searching for Networks
The following table describes all search options for networks.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Cluster_network.clusternetwork-prop | Depends on property type | The property of the cluster associated with the network. |
Host_Network.hostnetwork-prop | Depends on property type | The property of the host associated with the network. |
name | String | The human readable name that identifies the network. |
description | String | Keywords or text describing the network, optionally used when creating the network. |
vlanid | Integer | The VLAN ID of the network. |
stp | String | Whether Spanning Tree Protocol (STP) is enabled or disabled for the network. |
mtu | Integer | The maximum transmission unit for the logical network. |
vmnetwork | String | Whether the network is only used for virtual machine traffic. |
datacenter | String | The data center to which the network is attached. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Network: mtu > 1500 and vmnetwork = true
This example returns a list of networks with a maximum transmission unit greater than 1500 bytes, and which are set up for use by only virtual machines.
1.3.12. Searching for Storage
The following table describes all search options for storage.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Hosts.hosts-prop | Depends on property type | The property of the hosts associated with the storage. |
Clusters.clusters-prop | Depends on property type | The property of the clusters associated with the storage. |
name | String | The unique name that identifies the storage on the network. |
status | String | The status of the storage domain. |
external_status | String | The health status of the storage domain as reported by external systems and plug-ins. |
datacenter | String | The data center to which the storage belongs. |
type | String | The type of the storage. |
free_size | Integer | The size (GB) of the free storage. |
used_size | Integer | The amount (GB) of the storage that is used. |
total_size | Integer | The total amount (GB) of the storage that is available. |
committed_size | Integer | The amount (GB) of the storage that is committed. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Storage: free_size > 6 GB and total_size < 20 GB
This example returns a list of storage domains with free storage space greater than 6 GB and total storage space less than 20 GB.
1.3.13. Searching for Disks
The following table describes all search options for disks.
You can use the Disk Type and Content Type filtering options to reduce the number of displayed virtual disks.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Datacenters.datacenters-prop | Depends on property type | The property of the data centers associated with the disk. |
Storages.storages-prop | Depends on property type | The property of the storage associated with the disk. |
alias | String | The human readable name that identifies the disk on the network. |
description | String | Keywords or text describing the disk, optionally used when creating the disk. |
provisioned_size | Integer | The virtual size of the disk. |
size | Integer | The size of the disk. |
actual_size | Integer | The actual size allocated to the disk. |
creation_date | Integer | The date the disk was created. |
bootable | String | Whether the disk can or cannot be booted. Valid values are one of 0, 1, true, or false. |
shareable | String | Whether the disk can or cannot be attached to more than one virtual machine at a time. Valid values are one of 0, 1, true, or false. |
format | String | The format of the disk. Can be one of unused, unassigned, cow, or raw. |
status | String | The status of the disk. Can be one of unassigned, ok, locked, invalid, or illegal. |
disk_type | String | The type of the disk. Can be one of image or lun. |
number_of_vms | Integer | The number of virtual machine(s) to which the disk is attached. |
vm_names | String | The name(s) of the virtual machine(s) to which the disk is attached. |
quota | String | The name of the quota enforced on the virtual disk. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Disks: format = cow and provisioned_size > 8
This example returns a list of virtual disks with QCOW format and a provisioned (virtual) disk size greater than 8 GB.
1.3.14. Searching for Volumes
The following table describes all search options for volumes.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Cluster | String | The name of the cluster associated with the volume. |
Cluster.cluster-prop | Depends on property type (examples: name, description, comment, architecture) | The property of the clusters associated with the volume. |
name | String | The human readable name that identifies the volume. |
type | String | Can be one of distribute, replicate, distributed_replicate, stripe, or distributed_stripe. |
transport_type | Integer | Can be one of TCP or RDMA. |
replica_count | Integer | Number of replicas. |
stripe_count | Integer | Number of stripes. |
status | String | The status of the volume. Can be one of Up or Down. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Volume: transport_type = rdma and stripe_count >= 2
This example returns a list of volumes with transport type set to RDMA, and with 2 or more stripes.
1.3.15. Searching for Virtual Machines
The following table describes all search options for virtual machines.
Currently, the Network Label, Custom Emulated Machine, and Custom CPU Type properties are not supported search parameters.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Hosts.hosts-prop | Depends on property type | The property of the hosts associated with the virtual machine. |
Templates.templates-prop | Depends on property type | The property of the templates associated with the virtual machine. |
Events.events-prop | Depends on property type | The property of the events associated with the virtual machine. |
Users.users-prop | Depends on property type | The property of the users associated with the virtual machine. |
Storage.storage-prop | Depends on the property type | The property of storage devices associated with the virtual machine. |
Vnic.vnic-prop | Depends on the property type | The property of the vNIC associated with the virtual machine. |
name | String | The name of the virtual machine. |
status | List | The availability of the virtual machine. |
ip | Integer | The IP address of the virtual machine. |
uptime | Integer | The number of minutes that the virtual machine has been running. |
domain | String | The domain (usually Active Directory domain) that groups these machines. |
os | String | The operating system selected when the virtual machine was created. |
creationdate | Date | The date on which the virtual machine was created. |
address | String | The unique name that identifies the virtual machine on the network. |
cpu_usage | Integer | The percent of processing power used. |
mem_usage | Integer | The percentage of memory used. |
network_usage | Integer | The percentage of network used. |
memory | Integer | The maximum memory defined. |
apps | String | The applications currently installed on the virtual machine. |
cluster | List | The cluster to which the virtual machine belongs. |
pool | List | The virtual machine pool to which the virtual machine belongs. |
loggedinuser | String | The name of the user currently logged in to the virtual machine. |
tag | List | The tags to which the virtual machine belongs. |
datacenter | String | The data center to which the virtual machine belongs. |
type | List | The virtual machine type (server or desktop). |
quota | String | The name of the quota associated with the virtual machine. |
description | String | Keywords or text describing the virtual machine, optionally used when creating the virtual machine. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
next_run_configuration_exists | Boolean | The virtual machine has pending configuration changes. |
Example
Vms: template.name = Win* and user.name = ""
This example returns a list of virtual machines whose base template name begins with Win and are assigned to any user.
Example
Vms: cluster = Default and os = windows7
This example returns a list of virtual machines that belong to the Default cluster and are running Windows 7.
1.3.16. Searching for Pools
The following table describes all search options for Pools.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
name | String | The name of the pool. |
description | String | The description of the pool. |
type | List | The type of pool. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Pools: type = automatic
This example returns a list of pools with a type of automatic.
1.3.17. Searching for Templates
The following table describes all search options for templates.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Vms.Vms-prop | String | The property of the virtual machines associated with the template. |
Hosts.hosts-prop | String | The property of the hosts associated with the template. |
Events.events-prop | String | The property of the events associated with the template. |
Users.users-prop | String | The property of the users associated with the template. |
name | String | The name of the template. |
domain | String | The domain of the template. |
os | String | The type of operating system. |
creationdate | Integer | The date on which the template was created. Date format is mm/dd/yy. |
childcount | Integer | The number of virtual machines created from the template. |
mem | Integer | Defined memory. |
description | String | The description of the template. |
status | String | The status of the template. |
cluster | String | The cluster associated with the template. |
datacenter | String | The data center associated with the template. |
quota | String | The quota associated with the template. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Template: Events.severity >= normal and Vms.uptime > 0
This example returns a list of templates where events of normal or greater severity have occurred on virtual machines derived from the template, and the virtual machines are still running.
1.3.18. Searching for Users
The following table describes all search options for users.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Vms.Vms-prop | Depends on property type | The property of the virtual machines associated with the user. |
Hosts.hosts-prop | Depends on property type | The property of the hosts associated with the user. |
Templates.templates-prop | Depends on property type | The property of the templates associated with the user. |
Events.events-prop | Depends on property type | The property of the events associated with the user. |
name | String | The name of the user. |
lastname | String | The last name of the user. |
usrname | String | The unique name of the user. |
department | String | The department to which the user belongs. |
group | String | The group to which the user belongs. |
title | String | The title of the user. |
status | String | The status of the user. |
role | String | The role of the user. |
tag | String | The tag to which the user belongs. |
pool | String | The pool to which the user belongs. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Users: Events.severity > normal and Vms.status = up or Vms.status = pause
This example returns a list of users where events of greater than normal severity have occurred on their virtual machines AND the virtual machines are still running; or the users' virtual machines are paused.
1.3.19. Searching for Events
The following table describes all search options you can use to search for events. Auto-completion is offered for many options as appropriate.
Property (of resource or resource-type) | Type | Description (Reference) |
---|---|---|
Vms.Vms-prop | Depends on property type | The property of the virtual machines associated with the event. |
Hosts.hosts-prop | Depends on property type | The property of the hosts associated with the event. |
Templates.templates-prop | Depends on property type | The property of the templates associated with the event. |
Users.users-prop | Depends on property type | The property of the users associated with the event. |
Clusters.clusters-prop | Depends on property type | The property of the clusters associated with the event. |
Volumes.Volumes-prop | Depends on property type | The property of the volumes associated with the event. |
type | List | Type of the event. |
severity | List | The severity of the event: Warning/Error/Normal. |
message | String | Description of the event type. |
time | List | Day the event occurred. |
usrname | String | The user name associated with the event. |
event_host | String | The host associated with the event. |
event_vm | String | The virtual machine associated with the event. |
event_template | String | The template associated with the event. |
event_storage | String | The storage associated with the event. |
event_datacenter | String | The data center associated with the event. |
event_volume | String | The volume associated with the event. |
correlation_id | Integer | The identification number of the event. |
sortby | List | Sorts the returned results by one of the resource properties. |
page | Integer | The page number of results to display. |
Example
Events: Vms.name = testdesktop and Hosts.name = gonzo.example.com
This example returns a list of events, where the event occurred on the virtual machine named testdesktop while it was running on the host gonzo.example.com.
1.4. Bookmarks
1.4.1. Saving a Query String as a Bookmark
A bookmark can be used to remember a search query, and can be shared with other users.
-
Enter the desired search query in the search bar and perform the search.
-
Click the star-shaped Bookmark button to the right of the search bar. This opens the New Bookmark window.
-
Enter the Name of the bookmark.
-
Edit the Search string field, if required.
-
Click OK.
Click the Bookmarks icon () in the header bar to find and select the bookmark.
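Bookmarks can also be managed programmatically. The following is a minimal sketch using the oVirt Python SDK (ovirtsdk4); the connection details and the bookmark name and search string are illustrative assumptions.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# A bookmark pairs a name with a saved search string.
bookmarks_service = connection.system_service().bookmarks_service()
bookmarks_service.add(types.Bookmark(name='running-vms', value='Vms: status = up'))

connection.close()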
1.4.2. Editing a Bookmark
You can modify the name and search string of a bookmark.
-
Click the Bookmarks icon () in the header bar.
-
Select a bookmark and click Edit.
-
Change the Name and Search string fields as necessary.
-
Click OK.
1.4.3. Deleting a Bookmark
When a bookmark is no longer needed, remove it.
-
Click the Bookmarks icon () in the header bar.
-
Select a bookmark and click Remove.
-
Click OK.
1.5. Tags
1.5.1. Using Tags to Customize Interactions with oVirt
After your oVirt platform is set up and configured to your requirements, you can customize the way you work with it using tags. Tags allow system resources to be arranged into groups or categories. This is useful when many objects exist in the virtualization environment and the administrator wants to concentrate on a specific set of them.
This section describes how to create and edit tags, assign them to hosts or virtual machines and search using the tags as criteria. Tags can be arranged in a hierarchy that matches a structure, to fit the needs of the enterprise.
To create, modify, and remove Administration Portal tags, click the Tags icon () in the header bar.
1.5.2. Creating a Tag
Create tags so that you can use them to filter search results.
-
Click the Tags icon () in the header bar.
-
Click Add to create a new tag, or select a tag and click New to create a descendant tag.
-
Enter the Name and Description of the new tag.
-
Click OK.
1.5.3. Modifying a Tag
You can edit the name and description of a tag.
-
Click the Tags icon () in the header bar.
-
Select the tag you want to modify and click Edit.
-
Change the Name and Description fields as necessary.
-
Click OK.
1.5.4. Deleting a Tag
When a tag is no longer needed, remove it.
-
Click the Tags icon () in the header bar.
-
Select the tag you want to delete and click Remove. A message warns you that removing the tag will also remove all descendants of the tag.
-
Click OK.
You have removed the tag and all its descendants. The tag is also removed from all the objects that it was attached to.
1.5.5. Adding and Removing Tags to and from Objects
You can assign tags to and remove tags from hosts, virtual machines, and users.
-
Select the object(s) you want to tag or untag.
-
Click More Actions (), then click Assign Tags.
-
Select the check box to assign a tag to the object, or clear the check box to detach the tag from the object.
-
Click OK.
The specified tag is now added or removed as a custom property of the selected object(s).
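Tag assignment can also be scripted. The following is a minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a virtual machine named myvm and an existing tag named paris-loc; all names and connection details are illustrative.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Look up the virtual machine, then attach the existing tag to it.
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vms_service.vm_service(vm.id).tags_service().add(types.Tag(name='paris-loc'))

connection.close()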
1.5.6. Searching for Objects Using Tags
Enter a search query using tag as the property and the desired value or set of values as criteria for the search.
The objects tagged with the specified criteria are listed in the results list.
If you search for objects using |
1.5.7. Customizing Hosts with Tags
You can use tags to store information about your hosts. You can then search for hosts based on tags. For more information on searches, see Searches.
-
Click Compute → Hosts and select a host.
-
Click More Actions (), then click Assign Tags.
-
Select the check boxes of applicable tags.
-
Click OK.
You have added extra, searchable information about your host as tags.
2. Administering the Resources
2.1. Quality of Service
oVirt allows you to define quality of service entries that provide fine-grained control over the level of input and output, processing, and networking capabilities that resources in your environment can access. Quality of service entries are defined at the data center level and are assigned to profiles created under clusters and storage domains. These profiles are then assigned to individual resources in the clusters and storage domains where the profiles were created.
2.1.1. Storage Quality of Service
Storage quality of service defines the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Assigning storage quality of service to a virtual disk allows you to fine tune the performance of storage domains and prevent the storage operations associated with one virtual disk from affecting the storage capabilities available to other virtual disks hosted in the same storage domain.
Creating a Storage Quality of Service Entry
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under Storage, click New.
-
Enter a QoS Name and a Description for the quality of service entry.
-
Specify the Throughput quality of service by clicking one of the radio buttons:
-
None
-
Total - Enter the maximum permitted total throughput in the MB/s field.
-
Read/Write - Enter the maximum permitted throughput for read operations in the left MB/s field, and the maximum permitted throughput for write operations in the right MB/s field.
-
-
Specify the input and output (IOps) quality of service by clicking one of the radio buttons:
-
None
-
Total - Enter the maximum permitted number of input and output operations per second in the IOps field.
-
Read/Write - Enter the maximum permitted number of input operations per second in the left IOps field, and the maximum permitted number of output operations per second in the right IOps field.
-
-
Click OK.
You have created a storage quality of service entry, and can create disk profiles based on that entry in data storage domains that belong to the data center.
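If you automate this step, the following is a minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a data center named mydc; the entry name and limits are illustrative.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Find the data center, then add a storage QoS entry under it.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
dcs_service.data_center_service(dc.id).qoss_service().add(types.Qos(
    name='storage_qos',
    description='Example storage QoS entry',
    type=types.QosType.STORAGE,
    max_throughput=100,  # total throughput limit, MB/s
    max_iops=1000,       # total I/O operations per second
))

connection.close()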
Removing a Storage Quality of Service Entry
Remove an existing storage quality of service entry.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under Storage, select a storage quality of service entry and click Remove.
-
Click OK.
If any disk profiles were based on that entry, the storage quality of service entry for those profiles is automatically set to [unlimited].
2.1.2. Virtual Machine Network Quality of Service
Virtual machine network quality of service is a feature that allows you to create profiles for limiting both the inbound and outbound traffic of individual virtual network interface controllers. With this feature, you can limit bandwidth in a number of layers, controlling the consumption of network resources.
Creating a Virtual Machine Network Quality of Service Entry
Create a virtual machine network quality of service entry to regulate network traffic when applied to a virtual network interface controller (vNIC) profile, also known as a virtual machine network interface profile.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under VM Network, click New.
-
Enter a Name for the virtual machine network quality of service entry.
-
Enter the limits for the Inbound and Outbound network traffic.
-
Click OK.
You have created a virtual machine network quality of service entry that can be used in a virtual network interface controller.
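The same entry can be created programmatically. A minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a data center named mydc; the limit values (average and peak in Mbps, burst in MB) are illustrative.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Add a VM network QoS entry under the data center.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
dcs_service.data_center_service(dc.id).qoss_service().add(types.Qos(
    name='vm_network_qos',
    type=types.QosType.NETWORK,
    inbound_average=10, inbound_peak=10, inbound_burst=100,
    outbound_average=10, outbound_peak=10, outbound_burst=100,
))

connection.close()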
Settings in the New Virtual Machine Network QoS and Edit Virtual Machine Network QoS Windows Explained
Virtual machine network quality of service settings allow you to configure bandwidth limits for both inbound and outbound traffic on three distinct levels.
Field Name | Description |
---|---|
Data Center |
The data center to which the virtual machine network QoS policy is to be added. This field is configured automatically according to the selected data center. |
Name |
A name to represent the virtual machine network QoS policy within the Engine. |
Inbound |
The settings to be applied to inbound traffic. Select or clear the Inbound check box to enable or disable these settings.
|
Outbound |
The settings to be applied to outbound traffic. Select or clear the Outbound check box to enable or disable these settings.
|
To change the maximum value allowed by the Average, Peak, or Burst fields, use the engine-config command to change the value of the MaxAverageNetworkQoSValue, MaxPeakNetworkQoSValue, or MaxBurstNetworkQoSValue configuration keys. You must restart the ovirt-engine service for any changes to take effect. For example:
# engine-config -s MaxAverageNetworkQoSValue=2048
# systemctl restart ovirt-engine
Removing a Virtual Machine Network Quality of Service Entry
Remove an existing virtual machine network quality of service entry.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under VM Network, select a virtual machine network quality of service entry and click Remove.
-
Click OK.
2.1.3. Host Network Quality of Service
Host network quality of service configures the networks on a host to enable the control of network traffic through the physical interfaces. Host network quality of service allows for the fine tuning of network performance by controlling the consumption of network resources on the same physical network interface controller. This helps to prevent situations where one network causes other networks attached to the same physical network interface controller to no longer function due to heavy traffic. By configuring host network quality of service, these networks can now function on the same physical network interface controller without congestion issues.
Creating a Host Network Quality of Service Entry
Create a host network quality of service entry.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under Host Network, click New.
-
Enter a QoS Name and a Description for the quality of service entry.
-
Enter the desired values for Weighted Share, Rate Limit [Mbps], and Committed Rate [Mbps].
-
Click OK.
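This entry can also be created programmatically. A minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a data center named mydc; the out_average_* attributes are assumed to map to the Weighted Share, Rate Limit [Mbps], and Committed Rate [Mbps] fields.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Add a host network QoS entry under the data center.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
dcs_service.data_center_service(dc.id).qoss_service().add(types.Qos(
    name='host_network_qos',
    type=types.QosType.HOSTNETWORK,
    out_average_linkshare=50,    # Weighted Share
    out_average_upperlimit=100,  # Rate Limit [Mbps]
    out_average_realtime=50,     # Committed Rate [Mbps]
))

connection.close()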
Settings in the New Host Network Quality of Service and Edit Host Network Quality of Service Windows Explained
Host network quality of service settings allow you to configure bandwidth limits for outbound traffic.
Field Name | Description |
---|---|
Data Center |
The data center to which the host network QoS policy is to be added. This field is configured automatically according to the selected data center. |
QoS Name |
A name to represent the host network QoS policy within the Engine. |
Description |
A description of the host network QoS policy. |
Outbound |
The settings to be applied to outbound traffic.
|
To change the maximum value allowed by the Rate Limit [Mbps] or Committed Rate [Mbps] fields, use the engine-config command to change the value of the MaxAverageNetworkQoSValue configuration key. You must restart the ovirt-engine service for the change to take effect. For example:
# engine-config -s MaxAverageNetworkQoSValue=2048
# systemctl restart ovirt-engine
Removing a Host Network Quality of Service Entry
Remove an existing network quality of service entry.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under Host Network, select a host network quality of service entry and click Remove.
-
Click OK when prompted.
2.1.4. CPU Quality of Service
CPU quality of service defines the maximum amount of processing capability a virtual machine can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. Assigning CPU quality of service to a virtual machine allows you to prevent the workload on one virtual machine in a cluster from affecting the processing resources available to other virtual machines in that cluster.
Creating a CPU Quality of Service Entry
Create a CPU quality of service entry.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under CPU, click New.
-
Enter a QoS Name and a Description for the quality of service entry.
-
Enter the maximum processing capability the quality of service entry permits in the Limit (%) field. Do not include the % symbol.
-
Click OK.
You have created a CPU quality of service entry, and can create CPU profiles based on that entry in clusters that belong to the data center.
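As with the other QoS types, this step can be scripted. A minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a data center named mydc; the entry name and limit are illustrative.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Add a CPU QoS entry under the data center.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
dcs_service.data_center_service(dc.id).qoss_service().add(types.Qos(
    name='cpu_qos',
    description='Example CPU QoS entry',
    type=types.QosType.CPU,
    cpu_limit=50,  # percent of the host's total processing capability
))

connection.close()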
Removing a CPU Quality of Service Entry
Remove an existing CPU quality of service entry.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the QoS tab.
-
Under CPU, select a CPU quality of service entry and click Remove.
-
Click OK.
If any CPU profiles were based on that entry, the CPU quality of service entry for those profiles is automatically set to [unlimited].
2.2. Data Centers
2.2.1. Introduction to Data Centers
A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it is composed of logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.
A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated to it; and it can support multiple virtual machines on each of its hosts. An oVirt environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.
All data centers are managed from the single Administration Portal.

oVirt creates a default data center during installation. You can configure the default data center, or set up new appropriately named data centers.
2.2.2. The Storage Pool Manager
The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; the oVirt Engine grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.
The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.
The oVirt Engine ensures that the SPM is always available. The Engine moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it ensures that it is the only host granted the role; therefore it will acquire a storage-centric lease. This process can take some time.
2.2.3. SPM Priority
The SPM role uses some of a host’s available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.
You can change a host’s SPM priority in the SPM tab in the Edit Host window.
2.2.4. Data Center Tasks
Creating a New Data Center
This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.
After you set the Compatibility Version, you cannot lower the version number. Version regression is not supported. You can specify a MAC pool range for a cluster. Setting a MAC pool range for a data center is no longer supported.
-
Click Compute → Data Centers.
-
Click New.
-
Enter the Name and Description of the data center.
-
Select the Storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
-
Click OK to create the data center and open the Data Center - Guide Me window.
-
The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the data center and clicking More Actions (), then clicking Guide Me.
The new data center will remain Uninitialized until a cluster, host, and storage domain are configured for it; use Guide Me to configure these entities.
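Data center creation can also be automated. A minimal sketch using the oVirt Python SDK (ovirtsdk4); the name, description, and compatibility version are illustrative assumptions.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Create the data center; it remains Uninitialized until a cluster,
# host, and storage domain are configured for it.
dcs_service = connection.system_service().data_centers_service()
dcs_service.add(types.DataCenter(
    name='mydc',
    description='Example data center',
    local=False,  # False corresponds to the Shared storage type
    version=types.Version(major=4, minor=7),  # illustrative compatibility version
))

connection.close()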
Explanation of Settings in the New Data Center and Edit Data Center Windows
The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.
Field | Description/Action |
---|---|
Name |
The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. |
Description |
The description of the data center. This field is recommended but not mandatory. |
Storage Type |
Choose Shared or Local storage type. Different types of storage domains (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center. Local and shared domains, however, cannot be mixed. You can change the storage type after the data center is initialized. See Changing the Data Center Storage Type. |
Compatibility Version |
The version of oVirt. After upgrading the oVirt Engine, the hosts, clusters and data centers may still be in the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Level of the data center. |
Quota Mode |
Quota is a resource limitation tool provided with oVirt. Choose one of:
|
Comment |
Optionally add a plain text comment about the data center. |
Re-Initializing a Data Center: Recovery Procedure
This recovery procedure replaces the master data domain of your data center with a new master data domain. You must re-initialize your master data domain if its data is corrupted. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.
You can import any backup or exported virtual machines or templates into your new master data domain.
-
Click Compute → Data Centers and select the data center.
-
Ensure that any storage domains attached to the data center are in maintenance mode.
-
Click More Actions (), then click Re-Initialize Data Center.
-
The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.
-
Select the Approve operation check box.
-
Click OK.
The storage domain is attached to the data center as the master data domain and activated. You can now import any backup or exported virtual machines or templates into your new master data domain.
Removing a Data Center
An active host is required to remove a data center. Removing a data center will not remove the associated resources.
-
Ensure the storage domains attached to the data center are in maintenance mode.
-
Click Compute → Data Centers and select the data center to remove.
-
Click Remove.
-
Click OK.
Force Removing a Data Center
A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.
Force Remove does not require an active host. It also permanently removes the attached storage domain.
It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.
-
Click Compute → Data Centers and select the data center to remove.
-
Click More Actions (
), then click Force Remove.
-
Select the Approve operation check box.
-
Click OK.
The data center and attached storage domain are permanently removed from the oVirt environment.
Changing the Data Center Storage Type
You can change the storage type of the data center after it has been initialized. This is useful for data domains that are used to move virtual machines or templates around.
Limitations
-
Shared to Local - Only for a data center that does not contain more than one host and more than one cluster, because a local data center does not support multiple hosts or clusters.
-
Local to Shared - For a data center that does not contain a local storage domain.
-
Click Compute → Data Centers and select the data center to change.
-
Click Edit.
-
Change the Storage Type to the desired value.
-
Click OK.
Changing the Data Center Compatibility Version
oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level.
-
To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center.
-
In the Administration Portal, click Compute → Data Centers.
-
Select the data center to change and click Edit.
-
Change the Compatibility Version to the desired value.
-
Click OK. The Change Data Center Compatibility Version confirmation dialog opens.
-
Click OK to confirm.
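The compatibility version can also be raised programmatically. A minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a data center named mydc and an illustrative target version of 4.7.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Update only the compatibility version of the data center.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
dcs_service.data_center_service(dc.id).update(
    types.DataCenter(version=types.Version(major=4, minor=7))
)

connection.close()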
2.2.5. Data Centers and Storage Domains
Attaching an Existing Data Domain to a Data Center
Data domains that are Unattached can be attached to a data center. Shared storage domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the Storage tab to list the storage domains already attached to the data center.
-
Click Attach Data.
-
Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.
-
Click OK.
The data domain is attached to the data center and is automatically activated.
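Attaching a data domain can be scripted as well. A minimal sketch using the oVirt Python SDK (ovirtsdk4), assuming a data center named mydc and an unattached data domain named mydata.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Attach the unattached data domain to the data center by name.
dcs_service = connection.system_service().data_centers_service()
dc = dcs_service.list(search='name=mydc')[0]
attached_sds_service = dcs_service.data_center_service(dc.id).storage_domains_service()
attached_sd = attached_sds_service.add(types.StorageDomain(name='mydata'))

# Wait until the domain is automatically activated.
attached_sd_service = attached_sds_service.storage_domain_service(attached_sd.id)
while attached_sd_service.get().status != types.StorageDomainStatus.ACTIVE:
    time.sleep(2)

connection.close()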
Attaching an Existing ISO domain to a Data Center
An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center.
Only one ISO domain can be attached to a data center.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the Storage tab to list the storage domains already attached to the data center.
-
Click Attach ISO.
-
Click the radio button for the appropriate ISO domain.
-
Click OK.
The ISO domain is attached to the data center and is automatically activated.
Attaching an Existing Export Domain to a Data Center
The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See Importing Existing Storage Domains for information on importing storage domains.
An export domain that is Unattached can be attached to a data center. Only one export domain can be attached to a data center.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the Storage tab to list the storage domains already attached to the data center.
-
Click Attach Export.
-
Click the radio button for the appropriate export domain.
-
Click OK.
The export domain is attached to the data center and is automatically activated.
Detaching a Storage Domain from a Data Center
Detaching a storage domain from a data center stops the data center from associating with that storage domain. The storage domain is not removed from the oVirt environment; it can be attached to another data center.
Data, such as virtual machines and templates, remains attached to the storage domain.
Although it is possible to detach the last master storage domain, this is not recommended. If the master storage domain is detached, it must be reinitialized. If the storage domain is reinitialized, all your data will be lost, and the storage domain might not find your disks again.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the Storage tab to list the storage domains attached to the data center.
-
Select the storage domain to detach. If the storage domain is Active, click Maintenance.
-
Click OK to initiate maintenance mode.
-
Click Detach.
-
Click OK.
It can take up to several minutes for the storage domain to disappear from the details view.
2.3. Clusters
2.3.1. Introduction to Clusters
A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.
Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the cluster and settings on the virtual machines. The cluster is the highest level at which power and load-sharing policies can be defined.
The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count, respectively.
Clusters run virtual machines or Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.
oVirt creates a default cluster in the default data center during installation.

2.3.2. Cluster Tasks
Some cluster options do not apply to Gluster clusters. For more information about using Gluster Storage with oVirt, see Configuring oVirt with Gluster Storage. |
Creating a New Cluster
A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must have the same CPU architecture. To optimize your CPU types, create your hosts before you create your cluster. After creating the cluster, you can configure the hosts using the Guide Me button.
-
Click Compute → Clusters.
-
Click New.
-
Select the Data Center the cluster will belong to from the drop-down list.
-
Enter the Name and Description of the cluster.
-
Select a network from the Management Network drop-down list to assign the management network role.
-
Select the CPU Architecture.
-
For CPU Type, select the oldest CPU processor family among the hosts that will be part of this cluster. The CPU types are listed in order from the oldest to newest.
A host whose CPU processor family is older than the one you specify with CPU Type cannot be part of this cluster. For details, see Which CPU family should a RHEV3 or RHV4 cluster be set to?.
-
Select the FIPS Mode of the cluster from the drop-down list.
-
Select the Compatibility Version of the cluster from the drop-down list.
-
Select the Switch Type from the drop-down list.
-
Select the Firewall Type for hosts in the cluster, either Firewalld (default) or iptables.
iptables is only supported on Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Enterprise Linux 8 hosts to clusters with firewall type firewalld.
-
Select either the Enable Virt Service or Enable Gluster Service check box to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes.
-
Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Engine, allowing the administrator to provide an explanation for the maintenance.
-
Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Engine, allowing the administrator to provide an explanation for the maintenance.
-
Optionally select the /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use. The /dev/urandom source (Linux-provided device) is enabled by default.
-
Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
-
Click the Migration Policy tab to define the virtual machine migration policy for the cluster.
-
Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and select a serial number policy.
-
Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster.
-
Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options.
-
Click the MAC Address Pool tab to specify a MAC address pool other than the default pool for the cluster. For more options on creating, editing, or removing MAC address pools, see MAC Address Pools.
-
Click OK to create the cluster and open the Cluster - Guide Me window.
-
The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the cluster and clicking More Actions (), then clicking Guide Me.
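Cluster creation can also be automated. A minimal sketch using the oVirt Python SDK (ovirtsdk4); the cluster name, data center, and CPU type label are illustrative assumptions and must match your environment.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder Engine endpoint and credentials; replace with real values.
connection = sdk.Connection(url='https://engine.example.com/ovirt-engine/api',
                            username='admin@internal', password='password',
                            ca_file='ca.pem')

# Create the cluster in an existing data center.
clusters_service = connection.system_service().clusters_service()
clusters_service.add(types.Cluster(
    name='mycluster',
    description='Example cluster',
    data_center=types.DataCenter(name='mydc'),
    cpu=types.Cpu(architecture=types.Architecture.X86_64,
                  type='Intel Skylake Server Family'),  # illustrative CPU type label
))

connection.close()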
General Cluster Settings Explained
The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.
Field | Description/Action |
---|---|
Data Center |
The data center that will contain the cluster. The data center must be created before adding a cluster. |
Name |
The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. |
Description / Comment |
The description of the cluster or additional notes. These fields are recommended but not mandatory. |
Management Network |
The logical network that will be assigned the management network role. The default is ovirtmgmt. This network will also be used for migrating virtual machines if the migration network is not properly attached to the source or the destination hosts. On existing clusters, the management network can only be changed using the Manage Networks button in the Logical Networks tab in the details view. |
CPU Architecture |
The CPU architecture of the cluster. All hosts in a cluster must run the architecture you specify. Different CPU types are available depending on which CPU architecture is selected.
|
CPU Type |
The oldest CPU family in the cluster. For a list of CPU types, see CPU Requirements in the Planning and Prerequisites Guide. You cannot change this after creating the cluster without significant disruption. Set CPU type to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. |
Chipset/Firmware Type |
This setting is only available if the CPU Architecture of the cluster is set to x86_64. This setting specifies the chipset and firmware type. Options are:
For more information, see UEFI and the Q35 chipset in the Administration Guide. |
Change Existing VMs/Templates from I440FX to Q35 Chipset with Bios |
Select this check box to change existing workloads when the cluster’s chipset changes from I440FX to Q35. |
FIPS Mode |
The FIPS mode used by the cluster. All hosts in the cluster must run the FIPS mode you specify or they will become non-operational.
|
Compatibility Version |
The version of oVirt. You will not be able to select a version earlier than the version specified for the data center. |
Switch Type |
The type of switch used by the cluster. Linux Bridge is the standard oVirt switch. OVS provides support for Open vSwitch networking features. |
Firewall Type |
Specifies the firewall type for hosts in the cluster, either firewalld (default) or iptables. iptables is only supported on Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Enterprise Linux 8 hosts to clusters with firewall type firewalld. If you change an existing cluster’s firewall type, you must reinstall all hosts in the cluster to apply the change. |
Default Network Provider |
Specifies the default external network provider that the cluster will use. If you select Open Virtual Network (OVN), the hosts added to the cluster are automatically configured to communicate with the OVN provider. If you change the default network provider, you must reinstall all hosts in the cluster to apply the change. |
Maximum Log Memory Threshold |
Specifies the logging threshold for maximum memory consumption as a percentage or as an absolute value in MB. A message is logged if a host’s memory usage exceeds the percentage value or if a host’s available memory falls below the absolute value in MB. The default is |
Enable Virt Service |
If this check box is selected, hosts in this cluster will be used to run virtual machines. |
Enable Gluster Service |
If this check box is selected, hosts in this cluster will be used as Gluster Storage Server nodes, and not for running virtual machines. |
Import existing gluster configuration |
This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to oVirt Engine. The following options are required for each host in the cluster that is being imported:
|
Additional Random Number Generator source |
If the check box is selected, all hosts in the cluster have the additional random number generator device available. This enables passthrough of entropy from the random number generator device to virtual machines. |
Gluster Tuned Profile |
This check box is only available if the Enable Gluster Service check box is selected. This option specifies the virtual-host tuning profile to enable more aggressive writeback of dirty memory pages, which benefits the host performance. |
Optimization Settings Explained
Memory Considerations
Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your oVirt environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.
CPU Considerations
-
For non-CPU-intensive workloads, you can run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). The following benefits can be achieved:
-
You can run a greater number of virtual machines, which reduces hardware requirements.
-
You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads.
-
-
For best performance, and especially for CPU-intensive workloads, you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. When the host has hyperthreading enabled, QEMU treats the host’s hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core.
The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.
Field | Description/Action |
---|---|
Memory Optimization |
|
CPU Threads |
Selecting the Count Threads As Cores check box enables hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host (the number of processor cores for a single virtual machine must not exceed the number of cores in the host). When this check box is selected, the exposed host threads are treated as cores that virtual machines can use. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms that calculate host CPU load compare the load against twice as many potential utilized cores. |
Memory Balloon |
Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the Memory Overcommit Manager (MoM) starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine includes a balloon device unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up. It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution. |
KSM control |
Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. |
Migration Policy Settings Explained
A migration policy defines the conditions for live migrating virtual machines in the event of host failure. These conditions include the downtime of the virtual machine during migration, network bandwidth, and how the virtual machines are prioritized.
Policy | Description | ||
---|---|---|---|
Cluster default (Minimal downtime) |
Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. |
||
Minimal downtime |
A policy that lets virtual machines migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. |
||
Post-copy migration |
When used, post-copy migration pauses the migrating virtual machine vCPUs on the source host, transfers only a minimum of memory pages, activates the virtual machine vCPUs on the destination host, and transfers the remaining memory pages while the virtual machine is running on the destination. The post-copy policy first tries pre-copy to verify whether convergence can occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. This significantly reduces the downtime of the migrated virtual machine, and also guarantees that the migration finishes regardless of how rapidly the memory pages of the source virtual machine change. It is optimal for migrating virtual machines in heavy continuous use, which would not be possible to migrate with standard pre-copy migration. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts.
|
||
Suspend workload if needed |
A policy that lets virtual machines migrate in most situations, including virtual machines running heavy workloads. Because of this, virtual machines may experience a more significant downtime than with some of the other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. |
The bandwidth settings define the maximum bandwidth of both outgoing and incoming migrations per host.
Policy | Description |
---|---|
Auto |
Bandwidth is copied from the Rate Limit [Mbps] setting in the data center Host Network QoS. If the rate limit has not been defined, it is computed as the minimum of the link speeds of the sending and receiving network interfaces. If the rate limit has not been set and link speeds are not available, it is determined by the local VDSM setting on the sending host. |
Hypervisor default |
Bandwidth is controlled by the local VDSM setting on the sending host. |
Custom |
Defined by user (in Mbps). This value is divided by the number of concurrent migrations (default is 2, to account for incoming and outgoing migrations). Therefore, the user-defined bandwidth must be large enough to accommodate all concurrent migrations. For example, if the |
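As a worked illustration with hypothetical values: a Custom bandwidth of 600 Mbps with the default of 2 concurrent migrations allows each individual migration a maximum of 300 Mbps.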
The resilience policy defines how the virtual machines are prioritized in the migration.
Field | Description/Action |
---|---|
Migrate Virtual Machines |
Migrates all virtual machines in order of their defined priority. |
Migrate only Highly Available Virtual Machines |
Migrates only highly available virtual machines to prevent overloading other hosts. |
Do Not Migrate Virtual Machines |
Prevents virtual machines from being migrated. |
Field | Description/Action |
---|---|
Enable Migration Encryption |
Allows the virtual machine to be encrypted during migration.
|
Parallel Migrations |
Allows you to specify whether and how many parallel migration connections to use.
|
Number of VM Migration Connections |
This setting is only available when Custom is selected. The preferred number of custom parallel migrations, between 2 and 255. |
Scheduling Policy Settings Explained
Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host’s CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Scheduling Policies in the Administration Guide for more information.
Field | Description/Action |
---|---|
Select Policy |
Select a policy from the drop-down list.
|
Properties |
The following properties appear depending on the selected policy. Edit them if necessary:
|
Scheduler Optimization |
Optimize scheduling for host weighing/ordering.
|
Enable Trusted Service |
Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to set up the OpenAttestation server’s details. |
Enable HA Reservation |
Enable the Engine to monitor cluster capacity for highly available virtual machines. The Engine ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly. |
Serial Number Policy |
Configure the policy for assigning serial numbers to each new virtual machine in the cluster:
|
Custom Serial Number |
Specify the custom serial number to apply to new virtual machines in the cluster. |
When a host’s free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized cluster scheduling policy properties
The scheduler has a background process that migrates virtual machines according to the current cluster scheduling policy and its parameters. Based on the various criteria and their relative weights in a policy, the scheduler continuously categorizes hosts as source hosts or destination hosts and migrates individual virtual machines from the former to the latter.
The following description explains how the evenly_distributed and power_saving cluster scheduling policies interact with the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties. Although both policies consider CPU and memory load, CPU load is not relevant for the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties.
If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the evenly_distributed policy:
-
Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts.
-
Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become destination hosts.
-
If MaxFreeMemoryForOverUtilized is not defined, the scheduler does not migrate virtual machines based on the memory load. (It continues migrating virtual machines based on the policy’s other criteria, such as CPU load.)
-
If MinFreeMemoryForUnderUtilized is not defined, the scheduler considers all hosts eligible to become destination hosts.
If you define the MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized properties as part of the power_saving policy:
-
Hosts that have less free memory than MaxFreeMemoryForOverUtilized are overutilized and become source hosts.
-
Hosts that have more free memory than MinFreeMemoryForUnderUtilized are underutilized and become source hosts.
-
Hosts that have more free memory than MaxFreeMemoryForOverUtilized are not overutilized and become destination hosts.
-
Hosts that have less free memory than MinFreeMemoryForUnderUtilized are not underutilized and become destination hosts.
-
The scheduler prefers migrating virtual machines to hosts that are neither overutilized nor underutilized. If there are not enough of these hosts, the scheduler can migrate virtual machines to underutilized hosts. If the underutilized hosts are not needed for this purpose, the scheduler can power them down.
-
If MaxFreeMemoryForOverUtilized is not defined, no hosts are overutilized. Therefore, only underutilized hosts are source hosts, and destination hosts include all hosts in the cluster.
-
If MinFreeMemoryForUnderUtilized is not defined, only overutilized hosts are source hosts, and hosts that are not overutilized are destination hosts.
-
To prevent the host from overutilization of all the physical CPUs, define the virtual CPU to physical CPU ratio - VCpuToPhysicalCpuRatio with a value between 0.1 and 2.9. When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine.
If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered.
In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio.
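As a worked illustration with hypothetical values: with a VCpuToPhysicalCpuRatio of 2.0, a host with 16 physical CPUs can hold at most 32 vCPUs, so scheduling an 8-vCPU virtual machine onto a host already running 26 vCPUs would exceed the limit and trigger the combined ratio and CPU utilization evaluation described above.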
Cluster Console Settings Explained
The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows.
Field | Description/Action |
---|---|
Define SPICE Proxy for Cluster |
Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the VM Portal) is outside of the network where the hypervisors reside. |
Overridden SPICE proxy address |
The proxy by which the SPICE client connects to virtual machines. The address must be in the following format: protocol://[host]:[port] |
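For example, a proxy listening on port 3128 of the hypothetical host proxy.example.com would be entered as http://proxy.example.com:3128.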
Fencing Policy Settings Explained
The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows.
Field | Description/Action |
---|---|
Enable fencing |
Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere. |
Skip fencing if host has live lease on storage |
If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced. |
Skip fencing on cluster connectivity issues |
If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold. The Threshold value is selected from the drop-down list; available values are 25, 50, 75, and 100. |
Skip fencing if gluster bricks are up |
This option is only available when Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and can be reached from other peers. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. |
Skip fencing if gluster quorum not met |
This option is only available when Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and shutting down the host will cause loss of quorum. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. |
Setting Load and Power Management Policies for Hosts in a Cluster
The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Cluster Scheduling Policy Settings.
-
Click Compute → Clusters and select a cluster.
-
Click Edit.
-
Click the Scheduling Policy tab.
-
Select one of the following policies:
-
none
-
vm_evenly_distributed
-
Set the minimum number of virtual machines that must be running on at least one host to enable load balancing in the HighVmCount field.
-
Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field.
-
Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field.
-
Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Engine virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information.
-
-
evenly_distributed
-
Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
-
Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
-
Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Engine virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information.
-
Optionally, to prevent the host from overutilization of all the physical CPUs, define the virtual CPU to physical CPU ratio - VCpuToPhysicalCpuRatio with a value between 0.1 and 2.9. When this parameter is set, hosts with a lower CPU utilization are preferred when scheduling a virtual machine.
If adding a virtual machine causes the ratio to exceed the limit, both the VCpuToPhysicalCpuRatio and the CPU utilization are considered.
In a running environment, if the host VCpuToPhysicalCpuRatio exceeds 2.5, some virtual machines might be load balanced and moved to hosts with a lower VCpuToPhysicalCpuRatio.
-
-
power_saving
-
Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
-
Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field.
-
Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
-
Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Engine virtual machine if it migrates or shuts down. See Configuring Memory Slots Reserved for the self-hosted engine for more information.
-
-
-
Choose one of the following as the Scheduler Optimization for the cluster:
-
Select Optimize for Utilization to include weight modules in scheduling to allow best selection.
-
Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests.
-
-
If you are using an OpenAttestation server to verify your hosts, and have set up the server’s details using the
engine-config
tool, select the Enable Trusted Service check box.
OpenAttestation and Intel Trusted Execution Technology (Intel TXT) are no longer available.
-
Optionally select the Enable HA Reservation check box to enable the Engine to monitor cluster capacity for highly available virtual machines.
-
Optionally select a Serial Number Policy for the virtual machines in the cluster:
-
System Default: Use the system-wide defaults, which are configured in the Engine database using the engine configuration tool and the
DefaultSerialNumberPolicy
andDefaultCustomSerialNumber
key names. The default value forDefaultSerialNumberPolicy
is to use the Host ID. See Scheduling Policies in the Administration Guide for more information. -
Host ID: Set each virtual machine’s serial number to the UUID of the host.
-
Vm ID: Set each virtual machine’s serial number to the UUID of the virtual machine.
-
Custom serial number: Set each virtual machine’s serial number to the value you specify in the following Custom Serial Number parameter.
-
-
Click OK.
Updating the MoM Policy on Hosts in a Cluster
The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions for a cluster pass to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode. However, if necessary you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up. The following procedure must be performed on each host individually.
-
Click Compute → Clusters.
-
Click the cluster’s name. This opens the details view.
-
Click the Hosts tab and select the host that requires an updated MoM policy.
-
Click Sync MoM Policy.
The MoM policy on the host is updated without having to move the host to maintenance mode and back Up.
Creating a CPU Profile
CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect.
This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs.
-
Click Compute → Clusters.
-
Click the cluster’s name. This opens the details view.
-
Click the CPU Profiles tab.
-
Click New.
-
Enter a Name and a Description for the CPU profile.
-
Select the quality of service to apply to the CPU profile from the QoS list.
-
Click OK.
Removing a CPU Profile
Remove an existing CPU profile from your oVirt environment.
-
Click Compute → Clusters.
-
Click the cluster’s name. This opens the details view.
-
Click the CPU Profiles tab and select the CPU profile to remove.
-
Click Remove.
-
Click OK.
If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default
CPU profile.
Importing an Existing Gluster Storage Cluster
You can import a Gluster Storage cluster and all hosts belonging to the cluster into oVirt Engine.
When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status
command is executed on that host through SSH and returns a list of hosts that are part of the cluster. You must manually verify the fingerprint of each host and provide passwords for them. You cannot import the cluster if one of the hosts in the cluster is down or unreachable. Because the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them.
-
Click Compute → Clusters.
-
Click New.
-
Select the Data Center the cluster will belong to.
-
Enter the Name and Description of the cluster.
-
Select the Enable Gluster Service check box and the Import existing gluster configuration check box.
The Import existing gluster configuration field is only displayed if the Enable Gluster Service check box is selected.
-
In the Hostname field, enter the host name or IP address of any server in the cluster.
The host SSH Fingerprint displays to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, the message Error in fetching fingerprint displays in the Fingerprint field.
-
Enter the Password for the server, and click OK.
-
The Add Hosts window opens, and a list of hosts that are a part of the cluster displays.
-
For each host, enter the Name and the Root Password.
-
If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field.
Click Apply to set the entered password for all hosts.
Verify that the fingerprints are valid and submit your changes by clicking OK.
The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Gluster Storage cluster into oVirt Engine.
Explanation of Settings in the Add Hosts Window
The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service check box in the New Cluster window and provided the necessary host details.
Field | Description |
---|---|
Use a common password |
Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts. |
Name |
Enter the name of the host. |
Hostname/IP |
This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window. |
Root Password |
Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster. |
Fingerprint |
The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window. |
Removing a Cluster
Move all hosts out of a cluster before removing it.
You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center. |
-
Click Compute → Clusters and select a cluster.
-
Ensure there are no hosts in the cluster.
-
Click Remove.
-
Click OK.
Memory Optimization
To increase the number of virtual machines on a host, you can use memory overcommitment, in which the memory you assign to virtual machines exceeds the host’s physical RAM, with the excess relying on swap space.
However, there are potential problems with memory overcommitment:
-
Swapping performance - Swap space is slower and consumes more CPU resources than RAM, impacting virtual machine performance. Excessive swapping can lead to CPU thrashing.
-
Out-of-memory (OOM) killer - If the host runs out of swap space, new processes cannot start, and the kernel’s OOM killer begins shutting down active processes such as virtual machine guests.
To help overcome these shortcomings, you can do the following:
-
Limit memory overcommitment using the Memory Optimization setting and the Memory Overcommit Manager (MoM).
-
Make the swap space large enough to accommodate the maximum potential demand for virtual memory and have a safety margin remaining.
-
Reduce virtual memory size by enabling memory ballooning and Kernel Same-page Merging (KSM).
Memory Optimization and Memory Overcommitment
You can limit the amount of memory overcommitment by selecting one of the Memory Optimization settings: None (0%), 150%, or 200%.
Each setting represents a percentage of RAM. For example, with a host that has 64 GB RAM, selecting 150% means you can overcommit memory by an additional 32 GB, for a total of 96 GB in virtual memory. If the host uses 4 GB of that total, the remaining 92 GB are available. You can assign most of that to the virtual machines (Memory Size on the System tab), but consider leaving some of it unassigned as a safety margin.
Sudden spikes in demand for virtual memory can impact performance before the MoM, memory ballooning, and KSM have time to re-optimize virtual memory. To reduce that impact, select a limit that is appropriate for the kinds of applications and workloads you are running:
-
For workloads that produce more incremental growth in demand for memory, select a higher percentage, such as 200% or 150%.
-
For more critical applications or workloads that produce more sudden increases in demand for memory, select a lower percentage, such as 150% or None (0%). Selecting None helps prevent memory overcommitment but allows the MoM, memory balloon devices, and KSM to continue optimizing virtual memory.
Always test your Memory Optimization settings by stress testing under a wide range of conditions before deploying the configuration to production. |
To configure the Memory Optimization setting, click the Optimization tab in the New Cluster or Edit Cluster windows. See Cluster Optimization Settings Explained.
Additional comments:
-
The Host Statistics views display useful historical information for sizing the overcommitment ratio.
-
The actual memory available cannot be determined in real time because the amount of memory optimization achieved by KSM and memory ballooning changes continuously.
-
When virtual machines reach the virtual memory limit, new applications cannot start.
-
When you plan the number of virtual machines to run on a host, use the maximum virtual memory (physical memory size and the Memory Optimization setting) as a starting point. Do not factor in the smaller virtual memory achieved by memory optimizations such as memory ballooning and KSM.
Swap Space and Memory Overcommitment
Red Hat provides these recommendations for configuring swap space.
When applying these recommendations, follow the guidance to size the swap space as "last effort memory" for a worst-case scenario. Use the physical memory size and Memory Optimization setting as a basis for estimating the total virtual memory size. Exclude any reduction of the virtual memory size from optimization by the MoM, memory ballooning, and KSM.
To help prevent an OOM condition, make the swap space large enough to handle a worst-case scenario and still have a safety margin available. Always stress-test your configuration under a wide range of conditions before deploying it to production. |
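As a worked illustration with hypothetical values: a host with 64 GB RAM and a 150% Memory Optimization setting has a total virtual memory size of 96 GB. A worst-case estimate assumes all 96 GB are in demand with no savings from ballooning or KSM, so the swap space must cover the 32 GB of overcommitment plus a safety margin.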
The Memory Overcommit Manager (MoM)
The Memory Overcommit Manager (MoM) does two things:
-
It limits memory overcommitment by applying the Memory Optimization setting to the hosts in a cluster, as described in the preceding section.
-
It optimizes memory by managing memory ballooning and KSM, as described in the following sections.
You do not need to enable or disable MoM.
When a host’s free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580
are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
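To watch this activity on a host, you can follow the MoM log directly. A minimal illustration using standard tools (run on the host, not the Engine):
# Show the most recent balloon adjustments recorded by MoM
$ grep Ballooning /var/log/vdsm/mom.log | tail
Each matching line records one balloon adjustment for a guest, in the format shown above.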
Memory Ballooning
Virtual machines start with the full amount of virtual memory you have assigned to them. As virtual memory usage exceeds RAM, the host relies more on swap space. If enabled, memory ballooning lets virtual machines give up the unused portion of that memory. The freed memory can be reused by other processes and virtual machines on the host. The reduced memory footprint makes swapping less likely and improves performance.
The virtio-balloon package that provides the memory balloon device and drivers ships as a loadable kernel module (LKM). By default, it is configured to load automatically. Adding the module to the denylist or unloading it disables ballooning.
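As a sketch of how to confirm this: the module loads in the guest operating system, and the denylist entry shown below is illustrative, so add it only if you intend to disable ballooning for that guest.
# Inside the guest: verify that the balloon driver is loaded
$ lsmod | grep virtio_balloon
# Illustrative denylist entry (disables ballooning for this guest), placed in a
# modprobe configuration file such as /etc/modprobe.d/virtio_balloon.conf:
blacklist virtio_balloon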
The memory balloon devices do not coordinate directly with each other; they rely on the host’s Memory Overcommit Manager (MoM) process to continuously monitor each virtual machine’s needs and instruct the balloon device to increase or decrease virtual memory.
Performance considerations:
-
Red Hat does not recommend memory ballooning and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools.
-
Use memory ballooning when increasing virtual machine density (economy) is more important than performance.
-
Memory ballooning does not have a significant impact on CPU utilization. (KSM consumes some CPU resources, but consumption remains consistent under pressure.)
To enable memory ballooning, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable Memory Balloon Optimization checkbox. This setting enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the MoM starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. See Cluster Optimization Settings Explained.
Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster.
Kernel Same-page Merging (KSM)
When a virtual machine runs, it often creates duplicate memory pages for items such as common libraries and high-use data. Furthermore, virtual machines that run similar guest operating systems and applications produce duplicate memory pages in virtual memory.
When enabled, Kernel Same-page Merging (KSM) examines the virtual memory on a host, eliminates duplicate memory pages, and shares the remaining memory pages across multiple applications and virtual machines. These shared memory pages are marked copy-on-write; if a virtual machine needs to write changes to the page, it makes a copy first before writing its modifications to that copy.
While KSM is enabled, the MoM manages KSM. You do not need to configure or control KSM manually.
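Although no manual tuning is required, you can observe KSM activity on a host through the kernel’s sysfs interface. An illustrative, read-only check:
# Deduplicated pages currently shared, and the pages mapping to them
$ cat /sys/kernel/mm/ksm/pages_shared
$ cat /sys/kernel/mm/ksm/pages_sharing
A high ratio of pages_sharing to pages_shared indicates effective merging.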
KSM increases virtual memory performance in two ways. Because a shared memory page is used more frequently, the host is more likely to store it in cache or main memory, which improves the memory access speed. Additionally, with memory overcommitment, KSM reduces the virtual memory footprint, reducing the likelihood of swapping and improving performance.
KSM consumes more CPU resources than memory ballooning. The amount of CPU KSM consumes remains consistent under pressure. Running identical virtual machines and applications on a host provides KSM with more opportunities to merge memory pages than running dissimilar ones. If you run mostly dissimilar virtual machines and applications, the CPU cost of using KSM may offset its benefits.
Performance considerations:
-
After the KSM daemon merges large amounts of memory, the kernel memory accounting statistics may eventually contradict each other. If your system has a large amount of free memory, you might improve performance by disabling KSM.
-
Red Hat does not recommend KSM and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools.
-
Use KSM when increasing virtual machine density (economy) is more important than performance.
To enable KSM, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable KSM checkbox. This setting enables MoM to run KSM when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. See Cluster Optimization Settings Explained.
UEFI and the Q35 chipset
The Intel Q35 chipset, the default chipset for new virtual machines, includes support for the Unified Extensible Firmware Interface (UEFI), which replaces legacy BIOS.
Alternatively you can configure a virtual machine or cluster to use the legacy Intel i440fx chipset, which does not support UEFI.
UEFI provides several advantages over legacy BIOS, including the following:
-
A modern boot loader
-
SecureBoot, which authenticates the digital signatures of the boot loader
-
GUID Partition Table (GPT), which enables disks larger than 2 TB
To use UEFI on a virtual machine, you must configure the virtual machine’s cluster for 4.4 compatibility or later. Then you can set UEFI for any existing virtual machine, or to be the default BIOS type for new virtual machines in the cluster. The following options are available:
BIOS Type | Description |
---|---|
Q35 Chipset with Legacy BIOS |
Legacy BIOS without UEFI (Default for clusters with compatibility version 4.4) |
Q35 Chipset with UEFI BIOS |
BIOS with UEFI |
Q35 Chipset with SecureBoot |
UEFI with SecureBoot, which authenticates the digital signatures of the boot loader |
Legacy |
i440fx chipset with legacy BIOS |
You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI is not supported after installing an operating system.
Configuring a cluster to use the Q35 Chipset and UEFI
After upgrading a cluster to oVirt 4.4, all virtual machines in the cluster run the 4.4 version of VDSM. You can configure a cluster’s default BIOS type, which determines the default BIOS type of any new virtual machines you create in that cluster. If necessary, you can override the cluster’s default BIOS type by specifying a different BIOS type when you create a virtual machine.
-
In the VM Portal or the Administration Portal, click Compute → Clusters.
-
Select a cluster and click Edit.
-
Click General.
-
Define the default BIOS type for new virtual machines in the cluster by clicking the BIOS Type dropdown menu, and selecting one of the following:
-
Legacy
-
Q35 Chipset with Legacy BIOS
-
Q35 Chipset with UEFI BIOS
-
Q35 Chipset with SecureBoot
-
-
From the Compatibility Version dropdown menu select 4.4. The Engine checks that all running hosts are compatible with 4.4, and if they are, the Engine uses 4.4 features.
-
If any existing virtual machines in the cluster should use the new BIOS type, configure them to do so. Any new virtual machines in the cluster that are configured to use the BIOS type Cluster default now use the BIOS type you selected. For more information, see Configuring a virtual machine to use the Q35 Chipset and UEFI.
Because you can change the BIOS type only before installing an operating system, for any existing virtual machines that are configured to use the BIOS type Cluster default, change the BIOS type to the previous default cluster BIOS type. Otherwise the virtual machine might not boot. Alternatively, you can reinstall the virtual machine’s operating system. |
Configuring a virtual machine to use the Q35 Chipset and UEFI
You can configure a virtual machine to use the Q35 chipset and UEFI before installing an operating system. Converting a virtual machine from legacy BIOS to UEFI, or from UEFI to legacy BIOS, might prevent the virtual machine from booting. If you change the BIOS type of an existing virtual machine, reinstall the operating system.
If the virtual machine’s BIOS type is set to Cluster default, changing the BIOS type of the cluster changes the BIOS type of the virtual machine. If the virtual machine has an operating system installed, changing the cluster BIOS type can cause booting the virtual machine to fail. |
To configure a virtual machine to use the Q35 chipset and UEFI:
-
In the VM Portal or the Administration Portal click Compute → Virtual Machines.
-
Select a virtual machine and click Edit.
-
On the General tab, click Show Advanced Options.
-
Click System → Advanced Parameters.
-
Select one of the following from the BIOS Type dropdown menu:
-
Cluster default
-
Q35 Chipset with Legacy BIOS
-
Q35 Chipset with UEFI BIOS
-
Q35 Chipset with SecureBoot
-
-
Click OK.
-
From the Virtual Machine portal or the Administration Portal, power off the virtual machine. The next time you start the virtual machine, it will run with the new BIOS type you selected.
Changing the Cluster Compatibility Version
oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.
-
To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available.
-
Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. oVirt recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection.
If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster.
-
In the Administration Portal, click Compute → Clusters.
-
Select the cluster to change and click Edit.
-
On the General tab, change the Compatibility Version to the desired value.
-
Click OK. The Change Cluster Compatibility Version confirmation dialog opens.
-
Click OK to confirm.
An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine’s configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. |
After updating a cluster’s compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, through the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon. You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview.
In a self-hosted engine environment, the Engine virtual machine does not need to be restarted.
Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Virtual machines that have not been updated run with the old configuration, and the new configuration could be overwritten if other changes are made to the virtual machine before the reboot.
Once you have updated the compatibility version of all clusters and virtual machines in a data center, you can then change the compatibility version of the data center itself.
2.4. Logical Networks
2.4.1. Logical Network Tasks
Performing Networking Tasks
The Network → Networks page provides a central location for users to perform logical network-related operations and to search for logical networks based on each network’s properties or associations with other resources. The New, Edit, and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.
Click each network name and use the tabs in the details view to perform functions including:
-
Attaching or detaching the networks to clusters and hosts
-
Removing network interfaces from virtual machines and templates
-
Adding and removing permissions for users to access and manage networks
These functions are also accessible through each individual resource.
Do not change networking in a data center or a cluster if any hosts are running as this risks making the host unreachable. |
If you plan to use oVirt nodes to provide any services, remember that the services will stop if the oVirt environment stops operating. This applies to all services, but you should be especially aware of the hazards of running the following on oVirt:
|
Creating a New Logical Network in a Data Center or Cluster
Create a logical network and define its use in a data center, or in clusters in a data center.
-
Click Compute → Data Centers or Compute → Clusters.
-
Click the data center or cluster name. The Details view opens.
-
Click the Logical Networks tab.
-
Open the New Logical Network window:
-
From a data center details view, click New.
-
From a cluster details view, click Add Network.
-
-
Enter a Name, Description, and Comment for the logical network.
-
Optional: Select the Enable VLAN tagging check box.
-
Optional: Disable VM Network.
-
Optional: Select the Create on external provider checkbox. This disables the network label and the VM network. See External Providers for details.
-
Select the External Provider. The External Provider list does not include external providers that are in read-only mode.
-
To create an internal, isolated network, select ovirt-provider-ovn on the External Provider list and leave Connect to physical network cleared.
-
-
Enter a new label or select an existing label for the logical network in the Network Label text field.
-
For MTU, either select Default (1500) or select Custom and specify a custom value.
After you create a network on an external provider, you cannot change the network’s MTU settings.
If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414.
-
If you selected ovirt-provider-ovn from the External Provider drop-down list, define whether the network should implement Security Groups. See Logical Network General Settings Explained for details.
-
From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
-
If the Create on external provider check box is selected, the Subnet tab is visible. From the Subnet tab, select Create subnet and enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required. For illustrative values, see the example after this procedure.
-
From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
-
Click OK.
If you entered a label for the logical network, it is automatically added to all host network interfaces with that label.
When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied. |
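Illustrative values for the optional Subnet tab in the procedure above: Name: subnet1, CIDR: 192.0.2.0/24, Gateway: 192.0.2.1, IP Version: IPv4. The CIDR shown uses a documentation address range; substitute addressing that matches your environment.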
Editing a Logical Network
A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts on how to synchronize your networks. |
When changing the |
-
Click Compute → Data Centers.
-
Click the data center’s name. This opens the details view.
-
Click the Logical Networks tab and select a logical network.
-
Click Edit.
-
Edit the necessary settings.
You can edit the name of a new or existing network, with the exception of the default network, without having to stop the virtual machines.
-
Click OK.
Multi-host network configuration automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running. |
Removing a Logical Network
You can remove a logical network from Network → Networks or Compute → Data Centers. The following procedure shows you how to remove logical networks associated with a data center. For a working oVirt environment, you must have at least one logical network used as the ovirtmgmt management network.
-
Click Compute → Data Centers.
-
Click a data center’s name. This opens the details view.
-
Click the Logical Networks tab to list the logical networks in the data center.
-
Select a logical network and click Remove.
-
Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Engine and from the external provider if the network is provided by an external provider. The check box is grayed out if the external provider is in read-only mode.
-
Click OK.
The logical network is removed from the Engine and is no longer available.
Configuring a Non-Management Logical Network as the Default Route
The default route used by hosts in a cluster is through the management network (ovirtmgmt
). The following procedure provides instructions to configure a non-management logical network as the default route.
Prerequisite:
-
If you are using the
default_route
custom property, you need to clear the custom property from all attached hosts and then follow this procedure.
Configuring the Default Route Role
-
Click Network → Networks.
-
Click the name of the non-management logical network to configure as the default route to access its details.
-
Click the Clusters tab.
-
Click Manage Network. This opens the Manage Network window.
-
Select the Default Route checkbox for the appropriate cluster(s).
-
Click OK.
When networks are attached to a host, the default route of the host will be set on the network of your choice. It is recommended to configure the default route role before any host is added to your cluster. If your cluster already contains hosts, they may become out-of-sync until you sync your change to them.
-
For IPv6, oVirt supports only static addressing.
-
If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network.
-
If the host and Engine are not on the same subnet, the Engine loses connectivity with the host because the IPv6 gateway has been removed.
-
Moving the default route role to a non-management network removes the IPv6 gateway from the network interface and generates an alert: "On cluster clustername the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network."
Adding a static route on a host
You can use nmstate to add static routes to hosts. This method requires you to configure the hosts directly, without using oVirt Engine.
Static-routes you add are preserved as long as the related routed bridge, interface, or bond exists and has an IP address. Otherwise, the system removes the static route.
Except for adding or removing a static route on a host, always use the oVirt Engine to configure host network settings in your cluster. For details, see Network Manager Stateful Configuration (nmstate). |
The custom static-route is preserved so long as its interface/bond exists and has an IP address. Otherwise, it will be removed. As a result, VM networks behave differently from non-VM networks:
|
This procedure requires nmstate, which is only available if your environment uses:
-
oVirt Engine version 4.4
-
Enterprise Linux hosts and oVirt Nodes that are based on Enterprise Linux 8
-
Connect to the host you want to configure.
-
On the host, create a
static_route.yml
file, with the following example content:routes: config: - destination: 192.168.123.0/24 next-hop-address: 192.168.178.1 next-hop-interface: eth1
-
Replace the example values shown with real values for your network.
-
To route your traffic to a secondary added network, use
next-hop-interface
to specify an interface or network name.-
To use a non-virtual machine network, specify an interface such as
eth1
. -
To use a virtual machine network, specify a network name that is also the bridge name such as
net1
.
-
-
Run this command:
$ nmstatectl set static_route.yml
-
Run the IP route command,
ip route
, with the destination parameter value you set instatic_route.yml
. This should show the desired route. For example, run the following command:
$ ip route | grep 192.168.123.0
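The output should include a route similar to the following, built from the example values in static_route.yml:
192.168.123.0/24 via 192.168.178.1 dev eth1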
Removing a static route on a host
You can use nmstate to remove static routes from hosts. This method requires you to configure the hosts directly, without using oVirt Engine.
Except for adding or removing a static route on a host, always use the oVirt Engine to configure host network settings in your cluster. For details, see Network Manager Stateful Configuration (nmstate). |
The custom static-route is preserved so long as its interface/bond exists and has an IP address. Otherwise, it will be removed. As a result, VM networks behave differently from non-VM networks:
|
This procedure requires nmstate, which is only available if your environment uses:
-
oVirt Engine version 4.4
-
Enterprise Linux hosts and oVirt Nodes that are based on Enterprise Linux 8
-
Connect to the host you want to reconfigure.
-
On the host, edit the
static_route.yml
file. -
Insert a line
state: absent
as shown in the following example. -
Add the value of
next-hop-interface
between the brackets ofinterfaces: []
. The result should look similar to the example shown here.
routes: config: - destination: 192.168.123.0/24 next-hop-address: 192.168.178.1 next-hop-interface: eth1 state: absent interfaces: [{"name": eth1}]
-
Run this command:
$ nmstatectl set static_route.yml
-
Run the IP route command,
ip route
, with the destination parameter value you set instatic_route.yml
. This should no longer show the desired route. For example, run the following command:
$ ip route | grep 192.168.123.0
Viewing or Editing the Gateway for a Logical Network
Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway.
If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host.
oVirt handles multiple gateways automatically whenever an interface goes up or down.
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab to list the network interfaces attached to the host, and their configurations.
-
Click Setup Host Networks.
-
Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.
The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol.
Logical Network General Settings Explained
The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network window.
Field Name | Description |
---|---|
Name |
The name of the logical network. This text field must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Note that while the name of the logical network can be longer than 15 characters and can contain non-ASCII characters, the on-host identifier (vdsm_name) will differ from the name you defined. See Mapping VDSM Names to Logical Network Names for instructions on displaying a mapping of these names. |
Description |
The description of the logical network. This text field has a 40-character limit. |
Comment |
A field for adding plain text, human-readable comments regarding the logical network. |
Create on external provider |
Allows you to create the logical network on an OpenStack Networking instance that has been added to the Engine as an external provider. External Provider - Allows you to select the external provider on which the logical network will be created. |
Enable VLAN tagging |
VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled. |
VM Network |
Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box. |
Port Isolation |
If this is set, virtual machines on the same host are prevented from communicating and seeing each other on this logical network. For this option to work on different hypervisors, the switches need to be configured with PVLAN/Port Isolation on the respective port/VLAN connected to the hypervisors, and not reflect back the frames with any hairpin setting. |
MTU |
Choose either Default, which sets the maximum transmission unit (MTU) to the default value shown in parentheses, or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected. IMPORTANT: If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414. |
Network Label |
Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label. |
Security Groups |
Allows you to assign security groups to the ports on this logical network.
|
Logical Network Cluster Settings Explained
The table below describes the settings for the Cluster tab of the New Logical Network window.
Field Name | Description |
---|---|
Attach/Detach Network to/from Cluster(s) |
Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters. Name - the name of the cluster to which the settings will apply. This value cannot be edited. Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster. Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster. |
Logical Network vNIC Profiles Settings Explained
The table below describes the settings for the vNIC Profiles tab of the New Logical Network window.
Field Name | Description |
---|---|
vNIC Profiles |
Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile. Public - Allows you to specify whether the profile is available to all users. QoS - Allows you to specify a network quality of service (QoS) profile to the vNIC profile. |
Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window
Specify the traffic type for the logical network to optimize the network traffic flow.
-
Click Compute → Clusters.
-
Click the cluster’s name. This opens the details view.
-
Click the Logical Networks tab.
-
Click Manage Networks.
-
Select the appropriate check boxes and radio buttons.
-
Click OK.
Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration. |
Explanation of Settings in the Manage Networks Window
The table below describes the settings for the Manage Networks window.
Field | Description/Action |
---|---|
Assign |
Assigns the logical network to all hosts in the cluster. |
Required |
A network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational. |
VM Network |
A logical network marked "VM Network" carries network traffic relevant to the virtual machine network. |
Display Network |
A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller. |
Migration Network |
A logical network marked "Migration Network" carries virtual machine and storage migration traffic. If an outage occurs on this network, the management network (ovirtmgmt by default) will be used instead. |
Configuring virtual functions on a NIC
This is one in a series of topics that show how to set up and configure SR-IOV on oVirt. For more information, see Setting Up and Configuring SR-IOV |
Single Root I/O Virtualization (SR-IOV) enables you to use each PCIe endpoint as multiple separate devices by using physical functions (PFs) and virtual functions (VFs). A PCIe card can have between one and eight PFs. Each PF can have many VFs. The number of VFs it can have depends on the specific type of PCIe device.
To configure SR-IOV-capable Network Interface Controllers (NICs), you use the oVirt Engine. There, you can configure the number of VFs on each NIC.
You can configure a VF like you would configure a standalone NIC, including:
-
Assigning one or more logical networks to the VF.
-
Creating bonded interfaces with VFs.
-
Assigning vNICs to VFs for direct device passthrough.
By default, all virtual networks have access to the virtual functions. You can disable this default and specify which networks have access to a virtual function.
-
For a vNIC to be attached to a VF, its passthrough property must be enabled. For details, see Enabling Passthrough on a vNIC Profile.
-
Click Compute → Hosts.
-
Click the name of an SR-IOV-capable host. This opens the details view.
-
Click the Network Interfaces tab.
-
Click Setup Host Networks.
-
Select an SR-IOV-capable NIC, marked with an SR-IOV icon, and click the pencil icon.
-
Optional: To change the number of virtual functions, click the Number of VFs setting drop-down button and edit the Number of VFs text field.
Changing the number of VFs deletes all previous VFs on the network interface before creating the new VFs. This includes any VFs that have virtual machines directly attached.
-
Optional: To limit which virtual networks have access to the virtual functions, select Specific networks.
-
Select the networks that should have access to the VF, or use Labels to select networks based on their network labels.
-
-
Click OK.
-
In the Setup Host Networks window, click OK.
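After you complete the procedure above, you can confirm from the host’s shell that the Engine created the virtual functions by reading the standard SR-IOV sysfs attributes. This is a minimal sketch; the interface name enp5s0f0 is an assumption, so substitute your own SR-IOV-capable NIC:
# cat /sys/class/net/enp5s0f0/device/sriov_totalvfs
# cat /sys/class/net/enp5s0f0/device/sriov_numvfs
# ls /sys/class/net/enp5s0f0/device/ | grep virtfn
The first value is the maximum number of VFs the device supports, the second should match the Number of VFs you set, and the virtfn entries are the VF PCI devices themselves.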
2.4.2. Virtual Network Interface Cards (vNICs)
vNIC Profile Overview
A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Engine. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.
Creating or Editing a vNIC Profile
Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.
If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing. |
-
Click Network → Networks.
-
Click the logical network’s name. This opens the details view.
-
Click the vNIC Profiles tab.
-
Click New or Edit.
-
Enter the Name and Description of the profile.
-
Select the relevant Quality of Service policy from the QoS list.
-
Select a Network Filter from the drop-down list to manage the traffic of network packets to and from virtual machines. For more information on network filters, see Applying network filtering in the Enterprise Linux Virtualization Deployment and Administration Guide.
-
Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS, network filtering, and port mirroring as these are not compatible. For more information on passthrough, see Enabling Passthrough on a vNIC Profile.
-
If Passthrough is selected, optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide.
-
Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
-
Select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
-
Click OK.
Apply this profile to users and groups to regulate their network bandwidth. If you edited a vNIC profile, you must either restart the virtual machine, or hot unplug and then hot plug the vNIC if the guest operating system supports vNIC hot plug and hot unplug.
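If you prefer to script the hot unplug and replug rather than use the Administration Portal, the REST API exposes deactivate and activate actions on a virtual machine’s NICs. The following is a hedged sketch; the Engine FQDN, the credentials, and the VM_ID and NIC_ID placeholders are assumptions that you must replace with your own values:
# curl -k -u admin@internal:password -H "Content-Type: application/xml" -d "<action/>" https://engine.example.com/ovirt-engine/api/vms/VM_ID/nics/NIC_ID/deactivate
# curl -k -u admin@internal:password -H "Content-Type: application/xml" -d "<action/>" https://engine.example.com/ovirt-engine/api/vms/VM_ID/nics/NIC_ID/activate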
Explanation of Settings in the VM Interface Profile Window
Field Name | Description | ||
---|---|---|---|
Network |
A drop-down list of the available networks to apply the vNIC profile to. |
||
Name |
The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters. |
||
Description |
The description of the vNIC profile. This field is recommended but not mandatory. |
||
QoS |
A drop-down list of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC. |
||
Network Filter |
A drop-down list of the available network filters to apply to the vNIC profile. Network filters improve network security by filtering the type of packets that can be sent to and from virtual machines. The default filter is vdsm-no-mac-spoofing. |
||
Passthrough |
A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine. QoS, network filters, and port mirroring are disabled in the vNIC profile if passthrough is enabled. |
||
Migratable |
A check box to toggle whether or not vNICs using this profile can be migrated. Migration is enabled by default on regular vNIC profiles; the check box is selected and cannot be changed. When the Passthrough check box is selected, Migratable becomes available and can be deselected, if required, to disable migration of passthrough vNICs. |
||
Failover |
A drop-down menu to select available vNIC profiles that act as a failover device. Available only when the Passthrough and Migratable check boxes are checked. |
||
Port Mirroring |
A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default. For further details, see Port Mirroring in the Technical Reference. |
||
Device Custom Properties |
A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively. |
||
Allow all users to use this Profile |
A check box to toggle the availability of the profile to all users in the environment. It is selected by default. |
Enabling Passthrough on a vNIC Profile
This is one in a series of topics that show how to set up and configure SR-IOV on oVirt. For more information, see Setting Up and Configuring SR-IOV |
The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment.
The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS, network filters, and port mirroring cannot be enabled on the same profile.
For more information on SR-IOV, direct device assignment, and the hardware considerations for implementing these in oVirt, see Hardware Considerations for Implementing SR-IOV.
-
Click Network → Networks.
-
Click the logical network’s name. This opens the details view.
-
Click the vNIC Profiles tab to list all vNIC profiles for that logical network.
-
Click New.
-
Enter the Name and Description of the profile.
-
Select the Passthrough check box.
-
Optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide.
-
If necessary, select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
-
Click OK.
The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new PCI Passthrough vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see Editing Host Network Interfaces and Assigning Logical Networks to Hosts, and Adding a New Network Interface in the Virtual Machine Management Guide.
Enabling a vNIC profile for SR-IOV migration with failover
Failover allows the selection of a profile that acts as a failover device during virtual machine migration when the VF needs to be detached, preserving virtual machine communication with minimal interruption.
Failover is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information see Red Hat Technology Preview Features Support Scope. |
-
The Passthrough and Migratable check boxes of the profile are selected.
-
The failover network is attached to the host.
-
To edit a vNIC profile that is acting as a failover, you must first remove any failover references to it.
-
vNIC profiles that can act as failover are profiles that are not selected as Passthrough and are not connected to an External Network.
-
In the Administration Portal, go to Network → vNIC profiles, select the vNIC profile, click Edit, and select a Failover vNIC profile
from the drop-down list. -
Click OK to save the profile settings.
Attaching two vNIC profiles that reference the same failover vNIC profile to the same virtual machine will fail in libvirt. |
Removing a vNIC Profile
Remove a vNIC profile to delete it from your virtualized environment.
-
Click Network → Networks.
-
Click the logical network’s name. This opens the details view.
-
Click the vNIC Profiles tab to display available vNIC profiles.
-
Select one or more profiles and click Remove.
-
Click OK.
Assigning Security Groups to vNIC Profiles
This feature is only available when Open Virtual Network (OVN) is added as an external network provider. |
You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.
A security group is identified using the ID of that security group as registered in the Open Virtual Network (OVN) External Network Provider. You can find the IDs of security groups for a given tenant using the OpenStack Networking API, see List Security Groups in the OpenStack API Reference. |
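For example, assuming ovirt-provider-ovn serves the OpenStack Networking API on its default port 9696 and that you have already obtained an authentication token (both assumptions), a hedged sketch of listing the security groups with curl:
# curl -k -H "X-Auth-Token: $TOKEN" https://provider.example.com:9696/v2.0/security-groups
Each entry in the JSON response contains an id field; that ID is the value you enter in the vNIC profile in the procedure below.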
-
Click Network → Networks.
-
Click the logical network’s name. This opens the details view.
-
Click the vNIC Profiles tab.
-
Click New, or select an existing vNIC profile and click Edit.
-
From the custom properties drop-down list, select SecurityGroups. Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group.
-
In the text field, enter the ID of the security group to attach to the vNIC profile.
-
Click OK.
You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.
User Permissions for vNIC Profiles
Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.
User Permissions for vNIC Profiles
-
Click Network → vNIC profiles.
-
Click the vNIC profile’s name. This opens the details view.
-
Click the Permissions tab to show the current user permissions for the profile.
-
Click Add or Remove to change user permissions for the vNIC profile.
-
In the Add Permissions to User window, click My Groups to display your user groups. You can use this option to grant permissions to other users in your groups.
You have configured user permissions for a vNIC profile.
2.4.3. External Provider Networks
Importing Networks From External Providers
To use networks from an Open Virtual Network (OVN), register the provider with the Engine. See Adding an External Network Provider for more information. Then, use the following procedure to import the networks provided by that provider into the Engine so the networks can be used by virtual machines.
-
Click Network → Networks.
-
Click Import.
-
From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.
-
Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.
-
You can customize the name of the network that you are importing. To customize the name, click the network’s name in the Name column, and change the text.
-
From the Data Center drop-down list, select the data center into which the networks will be imported.
-
Optional: Clear the Allow All check box to prevent that network from being available to all users.
-
Click Import.
The selected networks are imported into the target data center and can be attached to virtual machines. See Adding a New Network Interface in the Virtual Machine Management Guide for more information.
Limitations to Using External Provider Networks
The following limitations apply to using logical networks imported from an external provider in an oVirt environment.
-
Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.
-
The same logical network can be imported more than once, but only to different data centers.
-
You cannot edit logical networks offered by external providers in the Engine. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the external provider that provides that logical network.
-
Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.
-
If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Engine while the logical network is still in use by the virtual machine.
-
Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.
Configuring Subnets on External Provider Logical Networks
A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the external network provider on which the logical network is hosted is responsible for assigning these IP addresses.
While the oVirt Engine automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Engine.
If you add Open Virtual Network (OVN) (ovirt-provider-ovn) as an external network provider, multiple subnets can be connected to each other by routers. To manage these routers, you can use the OpenStack Networking API v2.0. Please note, however, that ovirt-provider-ovn has a limitation: Source NAT (enable_snat in the OpenStack API) is not implemented.
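As an illustration, the following hedged sketch creates a router and attaches an existing subnet to it through the OpenStack Networking API v2.0; the provider endpoint, the token, and the SUBNET_ID and ROUTER_ID placeholders are assumptions:
# curl -k -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"router": {"name": "inter-subnet-router"}}' https://provider.example.com:9696/v2.0/routers
# curl -k -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"subnet_id": "SUBNET_ID"}' https://provider.example.com:9696/v2.0/routers/ROUTER_ID/add_router_interface
Because enable_snat is not implemented, avoid router configurations that rely on source NAT.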
Adding Subnets to External Provider Logical Networks
Create a subnet on a logical network provided by an external provider.
-
Click Network → Networks.
-
Click the logical network’s name. This opens the details view.
-
Click the Subnets tab.
-
Click New.
-
Enter a Name and CIDR for the new subnet.
-
From the IP Version drop-down list, select either IPv4 or IPv6.
-
Click OK.
For IPv6, oVirt supports only static addressing. |
Removing Subnets from External Provider Logical Networks
Remove a subnet from a logical network provided by an external provider.
-
Click Network → Networks.
-
Click the logical network’s name. This opens the details view.
-
Click the Subnets tab.
-
Select a subnet and click Remove.
-
Click OK.
Assigning Security Groups to Logical Networks and Ports
This feature is only available when Open Virtual Network (OVN) is added as an external network provider (as ovirt-provider-ovn). Security groups cannot be created through the oVirt Engine. You must create security groups through OpenStack Networking API v2.0 or Ansible. |
A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network. You can also use security groups to filter traffic at the port level.
In oVirt 4.2.7, security groups are disabled by default.
-
Click Compute → Clusters.
-
Click the cluster name. This opens the details view.
-
Click the Logical Networks tab.
-
Click Add Network and define the properties, ensuring that you select
ovirt-provider-ovn
from the External Providers
drop-down list. For more information, see Creating a new logical network in a data center or cluster. -
Select
Enabled
from the Security Group
drop-down list. For more details see Logical Network General Settings Explained. -
Click
OK
. -
Create security groups using either OpenStack Networking API v2.0 or Ansible, as shown in the sketch after this procedure.
-
Create security group rules using either OpenStack Networking API v2.0 or Ansible.
-
Update the ports with the security groups that you defined using either OpenStack Networking API v2.0 or Ansible.
-
Optional. Define whether the security feature is enabled at the port level. Currently, this is only possible using the OpenStack Networking API. If the
port_security_enabled
attribute is not set, it will default to the value specified in the network to which it belongs.
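The following hedged sketch shows what the security group, rule, and port update steps above might look like against the Networking API v2.0; the endpoint, the token, and the GROUP_ID and PORT_ID placeholders are assumptions:
# curl -k -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"security_group": {"name": "web-traffic"}}' https://provider.example.com:9696/v2.0/security-groups
# curl -k -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"security_group_rule": {"security_group_id": "GROUP_ID", "direction": "ingress", "ethertype": "IPv4", "protocol": "tcp", "port_range_min": 443, "port_range_max": 443}}' https://provider.example.com:9696/v2.0/security-group-rules
# curl -k -X PUT -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{"port": {"security_groups": ["GROUP_ID"], "port_security_enabled": true}}' https://provider.example.com:9696/v2.0/ports/PORT_ID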
2.4.4. Hosts and Networking
Network Manager Stateful Configuration (nmstate)
Version 4.4 of oVirt uses Network Manager Stateful Configuration (nmstate) to configure networking for oVirt hosts that are based on EL 8. oVirt version 4.3 and earlier use interface configuration (ifcfg) network scripts to manage host networking.
To use nmstate, upgrade the oVirt Engine and hosts as described in the oVirt Upgrade Guide.
As an administrator, you do not need to install or configure nmstate. It is enabled by default and runs in the background.
Always use oVirt Engine to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. |
The change to nmstate is nearly transparent. It only changes how you configure host networking in the following ways:
-
After you add a host to a cluster, always use the oVirt Engine to modify the host network.
-
Modifying the host network without using the Engine can create an unsupported configuration.
-
To fix an unsupported configuration, you replace it with a supported one by using the Engine to synchronize the host network. For details, see Synchronizing Host Networks.
-
The only situation where you modify host networks outside the Engine is to configure a static route on a host. For more details, see Adding a static route on a host.
The change to nmstate improves how oVirt Engine applies configuration changes you make in Cockpit and Anaconda before adding the host to the Engine. This fixes some issues, such as BZ#1680970 Static IPv6 Address is lost on host deploy if NM manages the interface.
If you update the nmstate package with # dnf update nmstate, restart the vdsmd and supervdsmd services afterward: # systemctl restart vdsmd supervdsmd |
If you update the NetworkManager package with # dnf update NetworkManager, restart the NetworkManager service afterward: # systemctl restart NetworkManager |
Refreshing Host Capabilities
When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Engine.
-
Click Compute → Hosts and select a host.
-
Click Management → Refresh Capabilities.
The list of network interface cards in the Network Interfaces tab for the selected host is updated. Any new network interface cards can now be used in the Engine.
Editing Host Network Interfaces and Assigning Logical Networks to Hosts
You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.
The only way to change the IP address of a host in oVirt is to remove the host and then to add it again. To change the VLAN settings of a host, see Editing VLAN Settings. |
You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines. |
If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port’s current configuration. This can help to prevent incorrect configuration. Check the switch port’s configuration prior to assigning logical networks. |
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab.
-
Click Setup Host Networks.
-
Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.
-
Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.
If a NIC is connected to more than one logical network, only one of the networks can be non-VLAN. All the other logical networks must be unique VLANs.
-
Configure the logical network:
-
Hover your cursor over an assigned logical network and click the pencil icon. This opens the Edit Management Network window.
-
From the IPv4 tab, select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.
For IPv6, only static IPv6 addressing is supported. To configure the logical network, select the IPv6 tab and make the following entries:
-
Set Boot Protocol to Static.
-
For Routing Prefix, enter the length of the prefix using a forward slash and decimals. For example:
/48
-
IP: The complete IPv6 address of the host network interface. For example:
2001:db8::1:0:0:6
-
Gateway: The source router’s IPv6 address. For example:
2001:db8::1:0:0:1
If you change the host’s management network IP address, you must reinstall the host for the new IP address to be configured.
Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network’s gateway instead of the default gateway used by the management network.
-
-
Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields:
-
Weighted Share: Signifies how much of the logical link’s capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
-
Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
-
Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
-
-
To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key=value. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Explanation of bridge_opts Parameters.
forward_delay=1500 group_addr=1:80:c2:0:0:0 group_fwd_mask=0x0 hash_max=512 hello_time=200 max_age=2000 multicast_last_member_count=2 multicast_last_member_interval=100 multicast_membership_interval=26000 multicast_querier=0 multicast_querier_interval=25500 multicast_query_interval=13000 multicast_query_response_interval=1000 multicast_query_use_ifaddr=0 multicast_router=1 multicast_snooping=1 multicast_startup_query_count=2 multicast_startup_query_interval=3125
-
To configure ethernet properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example:
--coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half
This field can accept wild cards. For example, to apply the same option to all of this network’s interfaces, use:
--coalesce * rx-usecs 14 sample-interval 3
The ethtool_opts option is not available by default; you need to add it using the engine configuration tool. See How to Set Up Engine to Use Ethtool for more information. For more information on ethtool properties, see the manual page by typing
man ethtool
in the command line. -
To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key=value. At least
enable=yes
is required. You can also add dcb=[yes|no]
and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See How to Set Up Engine to Use FCoE for more information. A separate, dedicated logical network is recommended for use with FCoE.
-
To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network’s default route. See Configuring a Default Route for more information.
-
If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box. For more information about unsynchronized hosts and how to synchronize them, see Synchronizing host networks.
-
-
Select the Verify connectivity between Host and Engine check box to check network connectivity. This action only works if the host is in maintenance mode.
-
Click OK.
If not all network interface cards for the host are displayed, click Management → Refresh Capabilities to update the list of network interface cards available for that host. |
In some cases, making multiple concurrent changes to a host network configuration using the Setup Host Networks window or setupNetwork
command fails with an Operation failed: [Cannot setup Networks]. Another Setup Networks or Host Refresh process in progress on the host. Please try later.]
error in the event log. This error indicates that some of the changes were not configured on the host. This happens because, to preserve the integrity of the configuration state, only a single setup network command can be processed at a time. Other concurrent configuration commands are queued for up to a default timeout of 20 seconds. To help prevent the above failure from happening, use the engine-config
command to increase the timeout period of SetupNetworksWaitTimeoutSeconds
beyond 20 seconds. For example:
# engine-config --set SetupNetworksWaitTimeoutSeconds=40
Synchronizing Host Networks
The Engine defines a network interface as out-of-sync
when the definition of the interface on the host differs from the definitions stored by the Engine.
Out-of-sync networks appear with an Out-of-sync icon in the host’s Network Interfaces tab and in the Setup Host Networks window.
When a host’s network is out of sync, the only activities that you can perform on the unsynchronized network in the Setup Host Networks window are detaching the logical network from the network interface or synchronizing the network.
A host will become out of sync if:
-
You make configuration changes on the host rather than using the Edit Logical Networks window, for example:
-
Changing the VLAN identifier on the physical host.
-
Changing the Custom MTU on the physical host.
-
-
You move a host to a different data center that has a network with the same name but different values/parameters.
-
You change a network’s VM Network property by manually removing the bridge from the host.
If you change the network’s MTU settings, you must propagate this change to the running virtual machines on the network: Hot unplug and replug every virtual machine’s vNIC that should apply the MTU setting, or restart the virtual machines. Otherwise, these interfaces fail when the virtual machine migrates to another host. For more information, see After network MTU change, some VMs and bridges have the old MTU and seeing packet drops and BZ#1766414. |
Following these best practices will prevent your host from becoming unsynchronized:
-
Use the Administration Portal to make changes rather than making changes locally on the host.
-
Edit VLAN settings according to the instructions in Editing VLAN Settings.
Synchronizing a host’s network interface definitions involves using the definitions from the Engine and applying them to the host. If these are not the definitions that you require, after synchronizing your hosts update their definitions from the Administration Portal. You can synchronize a host’s networks on three levels:
-
Per logical network
-
Per host
-
Per cluster
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab.
-
Click Setup Host Networks.
-
Hover your cursor over the unsynchronized network and click the pencil icon. This opens the Edit Network window.
-
Select the Sync network check box.
-
Click OK to save the network change.
-
Click OK to close the Setup Host Networks window.
-
Click the Sync All Networks button in the host’s Network Interfaces tab to synchronize all of the host’s unsynchronized network interfaces.
-
Click the Sync All Networks button in the cluster’s Logical Networks tab to synchronize all unsynchronized logical network definitions for the entire cluster.
You can also synchronize a host’s networks via the REST API. See syncallnetworks in the REST API Guide. |
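For example, a hedged sketch of invoking the action with curl; the Engine FQDN, the credentials, and the HOST_ID placeholder are assumptions:
# curl -k -u admin@internal:password -H "Content-Type: application/xml" -d "<action/>" https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/syncallnetworks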
Editing a Host’s VLAN Settings
To change the VLAN settings of a host, the host must be removed from the Engine, reconfigured, and re-added to the Engine.
To keep networking synchronized, do the following:
-
Put the host in maintenance mode.
-
Manually remove the management network from the host. This will make the host reachable over the new VLAN.
-
Add the host to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely.
The following warning message appears when the VLAN ID of the management network is changed:
Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?
Proceeding causes all of the hosts in the data center to lose connectivity to the Engine and causes the migration of hosts to the new management network to fail. The management network will be reported as "out-of-sync".
If you change the management network’s VLAN ID, you must reinstall the host to apply the new VLAN ID. |
Adding Multiple VLANs to a Single Network Interface Using Logical Networks
Multiple VLANs can be added to a single network interface to separate traffic on one host.
You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows. |
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab.
-
Click Setup Host Networks.
-
Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
-
Edit the logical networks:
-
Hover your cursor over an assigned logical network and click the pencil icon.
-
If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
-
Select a Boot Protocol:
-
None
-
DHCP
-
Static
-
-
Provide the IP and Subnet Mask.
-
Click OK.
-
-
Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
-
Click OK.
Add the logical network to each host in the cluster by editing a NIC on each host in the cluster. After this is done, the network will become operational.
This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.
Copying host networks
To save time, you can copy a source host’s network configuration to a target host in the same cluster.
Copying the network configuration includes:
-
Logical networks attached to the host, except the
ovirtmgmt
management network -
Bonds attached to interfaces
-
Do not copy network configurations that contain static IP addresses. Doing this sets the boot protocol in the target host to
none
. -
Copying a configuration to a target host with the same interface names as the source host but different physical network connections produces a wrong configuration.
-
The target host must have an equal or greater number of interfaces than the source host. Otherwise, the operation fails.
-
Copying
QoS
, DNS
, and custom_properties
is not supported. -
Network interface labels are not copied.
Copying host networks replaces ALL network settings on the target host except its attachment to the ovirtmgmt management network. |
-
The number of NICs on the target host must be equal or greater than those on the source host. Otherwise, the operation fails.
-
The hosts must be in the same cluster.
-
In the Administration Portal, click Compute → Hosts.
-
Select the source host whose configuration you want to copy.
-
Click Copy Host Networks. This opens the Copy Host Networks window.
-
Use Target Host to select the host that should receive the configuration. The list only shows hosts that are in the same cluster.
-
Click Copy Host Networks.
-
Verify the network settings of the target host.
-
Selecting multiple hosts disables the Copy Host Networks button and context menu.
-
Instead of using the Copy Host Networks button, you can right-click a host and select Copy Host Networks from the context menu.
-
The Copy Host Networks button is also available in any host’s details view.
Assigning Additional IPv4 Addresses to a Host Network
A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC’s configuration file is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC.
The vdsm-hook-extra-ipv4-addrs
hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see VDSM and Hooks.
In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses.
-
On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package needs to be installed manually on Enterprise Linux hosts and oVirt Nodes.
# dnf install vdsm-hook-extra-ipv4-addrs
-
On the Engine, run the following command to add the key:
# engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*'
-
Restart the
ovirt-engine
service:# systemctl restart ovirt-engine.service
-
In the Administration Portal, click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab and click Setup Host Networks.
-
Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon.
-
Select ipv4_addrs from the Custom Properties drop-down list and add the additional IP address and prefix (for example 5.5.5.5/24). Multiple IP addresses must be comma-separated.
-
Click OK to close the Edit Network window.
-
Click OK to close the Setup Host Networks window.
The additional IP addresses will not be displayed in the Engine, but you can run the command ip addr show
on the host to confirm that they have been added.
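For example, assuming the address 5.5.5.5/24 from the step above was added to a network on the interface eth1 (both names are placeholders), the extra address appears as an additional inet entry in the output of:
# ip addr show eth1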
Adding Network Labels to Host Network Interfaces
Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces. Setting a label on a role network (for instance, a migration network or a display network) causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP; this method was chosen over static addressing because typing in many static IP addresses does not scale.
There are two methods of adding labels to a host network interface:
-
Manually, in the Administration Portal
-
Automatically, with the LLDP Labeler service
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab.
-
Click Setup Host Networks.
-
Click Labels and right-click [New Label]. Select a physical network interface to label.
-
Enter a name for the network label in the Label text field.
-
Click OK.
You can automate the process of assigning labels to host network interfaces in the configured list of clusters with the LLDP Labeler service.
Configuring the LLDP Labeler
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
-
The interfaces must be connected to a Juniper switch.
-
The Juniper switch must be configured to provide the
Port VLAN
using LLDP.
-
Configure the
username
and password
in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf
:-
username
- the username of the Engine administrator. The default isadmin@internal
. -
password
- the password of the Engine administrator. The default is123456
.
-
-
Configure the LLDP Labeler service by updating the following values in
/etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf
:-
clusters
- a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster*
defines LLDP Labeler to run on all clusters whose names start with the word
. To run the service on all clusters in the data center, type*
. The default isDef*
. -
api_url
- the full URL of the Engine’s API. The default is https://Manager_FQDN/ovirt-engine/api
-
ca_file
- the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty. -
auto_bonding
- enables LLDP Labeler’s bonding capabilities. The default is true
. -
auto_labeling
- enables LLDP Labeler’s labeling capabilities. The default is true
.
-
-
Optionally, you can configure the service to run at a different time interval by changing the value of
OnUnitActiveSec
in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer
. The default is1h
. -
Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
You have added a network label to a host network interface. Newly created logical networks with the same label are automatically assigned to all host network interfaces with that label. Removing a label from a logical network automatically removes that logical network from all host network interfaces with that label.
Changing the FQDN of a Host
Use the following procedure to change the fully qualified domain name of hosts.
-
Place the host into maintenance mode so the virtual machines are live migrated to another host. See Moving a host to maintenance mode for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See Manually Migrating Virtual Machines in the Virtual Machine Management Guide for more information.
-
Click Remove, and click OK to remove the host from the Administration Portal.
-
Use the
hostnamectl
tool to update the host name. For more options, see Configure Host Names in the Enterprise Linux 7 Networking Guide.# hostnamectl set-hostname NEW_FQDN
-
Reboot the host.
-
Re-register the host with the Engine. See Adding standard hosts to the Engine for more information.
IPv6 Networking Support
oVirt supports static IPv6 networking in most contexts.
oVirt requires IPv6 to remain enabled on the computer or virtual machine where you are running the Engine (also called "the Engine machine"). Do not disable IPv6 on the Engine machine, even if your systems do not use it. |
-
Only static IPv6 addressing is supported. Dynamic IPv6 addressing with DHCP or Stateless Address Autoconfiguration is not supported.
-
Dual-stack addressing, IPv4 and IPv6, is not supported.
-
OVN networking can be used with only IPv4 or IPv6.
-
Switching clusters from IPv4 to IPv6 is not supported.
-
Only a single gateway per host can be set for IPv6.
-
If both networks share a single gateway (are on the same subnet), you can move the default route role from the management network (ovirtmgmt) to another logical network. The host and Engine should have the same IPv6 gateway. If the host and Engine are not on the same subnet, the Engine might lose connectivity with the host because the IPv6 gateway was removed.
-
Using a glusterfs storage domain with an IPv6-addressed gluster server is not supported.
Setting Up and Configuring SR-IOV
This topic summarizes the steps for setting up and configuring SR-IOV, with links out to topics that cover each step in detail.
Set up your hardware in accordance with the Hardware Considerations for Implementing SR-IOV.
To set up and configure SR-IOV, complete the following tasks.
-
The number of 'passthrough' vNICs depends on the number of available virtual functions (VFs) on the host. For example, to run a virtual machine (VM) with three SR-IOV cards (vNICs), the host must have three or more VFs enabled.
-
Hotplug and unplug are supported.
-
Live migration is supported.
-
To migrate a VM, the destination host must also have enough available VFs to receive the VM. During the migration, the VM releases a number of VFs on the source host and occupies the same number of VFs on the destination host.
-
On the host, you will see a VF as a device or link like any other interface. That device disappears when it is attached to a VM, and reappears when it is released.
-
Avoid attaching a host device directly to a VM for the SR-IOV feature.
-
To use a VF as a trunk port with several VLANs and configure the VLANs within the Guest, please see Cannot configure VLAN on SR-IOV VF interfaces inside the Virtual Machine.
Here is an example of what the libvirt XML for the interface would look like:
----
<interface type='hostdev'>
<mac address='00:1a:yy:xx:vv:xx'/>
<driver name='vfio'/>
<source>
<address type='pci' domain='0x0000' bus='0x05' slot='0x10' function='0x0'/>
</source>
<alias name='ua-18400536-5688-4477-8471-be720e9efc68'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</interface>
----
The following example shows you how to get diagnostic information about the VFs attached to an interface.
# ip -s link show dev enp5s0f0
1: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT qlen 1000
    link/ether 86:e2:ba:c2:50:f0 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped overrun mcast
    30931671   218401   0       0       0       19165434
    TX: bytes  packets  errors  dropped carrier collsns
    997136     13661    0       0       0       0
    vf 0 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off
    vf 1 MAC 00:1a:4b:16:01:5e, spoof checking on, link-state auto, trust off, query_rss off
    vf 2 MAC 02:00:00:00:00:01, spoof checking on, link-state auto, trust off, query_rss off
2.4.5. Network Bonding
Bonding methods
Network bonding combines multiple NICs into a bond device, with the following advantages:
-
The transmission speed of bonded NICs is greater than that of a single NIC.
-
Network bonding provides fault tolerance, because the bond device will not fail unless all its NICs fail.
Using NICs of the same make and model ensures that they support the same bonding options and modes.
oVirt’s default bonding mode is (Mode 4) Dynamic Link Aggregation(802.3ad). The logical networks of a bond must be compatible. A bond can support only 1 non-VLAN logical network. The rest of the logical networks must have unique VLAN IDs. Bonding must be enabled for the switch ports. Consult the manual provided by your switch vendor for specific instructions. |
You can create a network bond device using one of the following methods:
-
Manually, in the Administration Portal, for a specific host
-
Automatically, using LLDP Labeler, for unbonded NICs of all hosts in a cluster or data center
If your environment uses iSCSI storage and you want to implement redundancy, follow the instructions for configuring iSCSI multipathing.
Creating a Bond Device in the Administration Portal
You can create a bond device on a specific host in the Administration Portal. The bond device can carry both VLAN-tagged and untagged traffic.
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Network Interfaces tab to list the physical network interfaces attached to the host.
-
Click Setup Host Networks.
-
Check the switch configuration. If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, hover your cursor over a physical NIC to view the switch port’s aggregation configuration.
-
Drag and drop a NIC onto another NIC or onto a bond.
Dragging a NIC onto another NIC creates a new bond; dragging a NIC onto an existing bond adds the NIC to that bond.
If the logical networks are incompatible, the bonding operation is blocked.
-
Select the Bond Name and Bonding Mode from the drop-down menus. See Bonding Modes for details.
If you select the Custom bonding mode, you can enter bonding options in the text field, as in the following examples:
-
If your environment does not report link states with
ethtool
, you can set ARP monitoring by enteringmode=1 arp_interval=1 arp_ip_target=192.168.0.2
. -
You can designate a NIC with higher throughput as the primary interface by entering
mode=1 primary=eth0
.For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.
-
-
Click OK.
-
Attach a logical network to the new bond and configure it. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts for instructions.
You cannot attach a logical network directly to an individual NIC in the bond.
-
Optionally, you can select Verify connectivity between Host and Engine if the host is in maintenance mode.
-
Click OK.
Creating a Bond Device with the LLDP Labeler Service
The LLDP Labeler service enables you to create a bond device automatically with all unbonded NICs, for all the hosts in one or more clusters or in the entire data center. The bonding mode is (Mode 4) Dynamic Link Aggregation(802.3ad)
.
NICs with incompatible logical networks cannot be bonded.
Configuring the LLDP Labeler
By default, LLDP Labeler runs as an hourly service. This option is useful if you make hardware changes (for example, NICs, switches, or cables) or change switch configurations.
-
The interfaces must be connected to a Juniper switch.
-
The Juniper switch must be configured for Link Aggregation Control Protocol (LACP) using LLDP.
-
Configure the
username
and password
in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf
:-
username
- the username of the Engine administrator. The default is admin@internal
. -
password
- the password of the Engine administrator. The default is123456
.
-
-
Configure the LLDP Labeler service by updating the following values in
/etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-credentials.conf
:-
clusters
- a comma-separated list of clusters on which the service should run. Wildcards are supported. For example, Cluster*
defines LLDP Labeler to run on all clusters whose names start with the word
. To run the service on all clusters in the data center, type*
. The default isDef*
. -
api_url
- the full URL of the Engine’s API. The default is https://Manager_FQDN/ovirt-engine/api
-
ca_file
- the path to the custom CA certificate file. Leave this value empty if you do not use custom certificates. The default is empty. -
auto_bonding
- enables LLDP Labeler’s bonding capabilities. The default is true
. -
auto_labeling
- enables LLDP Labeler’s labeling capabilities. The default is true
.
-
-
Optionally, you can configure the service to run at a different time interval by changing the value of
OnUnitActiveSec
in /etc/ovirt-lldp-labeler/conf.d/ovirt-lldp-labeler.timer
. The default is1h
. -
Configure the service to start now and at boot by entering the following command:
# systemctl enable --now ovirt-lldp-labeler
To invoke the service manually, enter the following command:
# /usr/bin/python /usr/share/ovirt-lldp-labeler/ovirt_lldp_labeler_cli.py
-
Attach a logical network to the new bond and configure it. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts for instructions.
You cannot attach a logical network directly to an individual NIC in the bond.
Bonding Modes
The packet dispersal algorithm is determined by the bonding mode. (See the Linux Ethernet Bonding Driver HOWTO for details). oVirt’s default bonding mode is (Mode 4) Dynamic Link Aggregation(802.3ad)
.
oVirt supports the following bonding modes, because they can be used in virtual machine (bridged) networks:
(Mode 1) Active-Backup
-
One NIC is active. If the active NIC fails, one of the backup NICs replaces it as the only active NIC in the bond. The MAC address of this bond is visible only on the network adapter port. This prevents MAC address confusion that might occur if the MAC address of the bond were to change, reflecting the MAC address of the new active NIC.
(Mode 2) Load Balance (balance-xor)
-
The NIC that transmits packets is selected by performing an XOR operation on the source and destination MAC addresses, modulo the total number of NICs. This algorithm ensures that the same NIC is selected for each destination MAC address.
-
Packets are transmitted to all NICs.
(Mode 4) Dynamic Link Aggregation(802.3ad)
(Default)-
The NICs are aggregated into groups that share the same speed and duplex settings. All the NICs in the active aggregation group are used.
(Mode 4) Dynamic Link Aggregation(802.3ad)
requires a switch that supports 802.3ad.The bonded NICs must have the same aggregator IDs. Otherwise, the Engine displays a warning exclamation mark icon on the bond in the Network Interfaces tab and the
ad_partner_mac
value of the bond is reported as00:00:00:00:00:00
. You can check the aggregator IDs by entering the following command:# cat /proc/net/bonding/bond0
The following bonding modes are incompatible with virtual machine logical networks and therefore only non-VM logical networks can be attached to bonds using these modes:
(Mode 0) Round-Robin
-
The NICs transmit packets in sequential order. Packets are transmitted in a loop that begins with the first available NIC in the bond and ends with the last available NIC in the bond. Subsequent loops start with the first available NIC.
(Mode 5) Balance-TLB
, also called Transmit Load-Balance-
Outgoing traffic is distributed, based on the load, over all the NICs in the bond. Incoming traffic is received by the active NIC. If the NIC receiving incoming traffic fails, another NIC is assigned.
(Mode 6) Balance-ALB
, also called Adaptive Load-Balance-
(Mode 5) Balance-TLB
is combined with receive load-balancing for IPv4 traffic. ARP negotiation is used for balancing the receive load.
2.5. Hosts
2.5.1. Introduction to Hosts
Hosts, also known as hypervisors, are the physical servers on which virtual machines run. Full virtualization is provided by using a loadable Linux kernel module called Kernel-based Virtual Machine (KVM).
KVM can concurrently host multiple virtual machines running either Windows or Linux operating systems. Virtual machines run as individual Linux processes and threads on the host machine and are managed remotely by the oVirt Engine. An oVirt environment has one or more hosts attached to it.
oVirt supports two methods of installing hosts. You can use the oVirt Node installation media, or install hypervisor packages on a standard Enterprise Linux installation.
You can identify the host type of an individual host in the oVirt Engine by selecting the host’s name. This opens the details view. Then look at the OS Description under Software. |
Hosts use tuned
profiles, which provide virtualization optimizations. For more information on tuned
, see TuneD profiles in the Red Hat Enterprise Linux document Monitoring and managing system status and performance.
The oVirt Node has security features enabled. Security Enhanced Linux (SELinux) and the firewall are fully configured and on by default. The status of SELinux on a selected host is reported under SELinux mode in the General tab of the details view. The Engine can open required ports on Enterprise Linux hosts when it adds them to the environment.
A host is a physical 64-bit server with the Intel VT or AMD-V extensions, running the AMD64/Intel 64 version of Enterprise Linux 7.
A physical host on the oVirt platform:
-
Must belong to only one cluster in the system.
-
Must have CPUs that support the AMD-V or Intel VT hardware virtualization extensions.
-
Must have CPUs that support all functionality exposed by the virtual CPU type selected upon cluster creation.
-
Has a minimum of 2 GB RAM.
-
Can have an assigned system administrator with system permissions.
Administrators can receive the latest security advisories from the oVirt watch list. Subscribe to the oVirt watch list to receive new security advisories for oVirt products by email.
2.5.2. oVirt Node
oVirt Node is installed using a special build of Enterprise Linux with only the packages required to host virtual machines. It uses an Anaconda installation interface based on the one used by Enterprise Linux hosts, and can be updated through the oVirt Engine or via yum. Using the yum command is the only way to install additional packages and have them persist after an upgrade.
oVirt Node features a Cockpit web interface for monitoring the host’s resources and performing administrative tasks. Direct access to oVirt Node via SSH or console is not supported, so the Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the oVirt Engine, such as configuring networking or running terminal commands via the Terminal sub-tab.
Access the Cockpit web interface at https://HostFQDNorIP:9090 in your web browser. Cockpit for oVirt Node includes a custom Virtualization dashboard that displays the host’s health status, SSH Host Key, self-hosted engine status, virtual machines, and virtual machine statistics.
Starting in oVirt version 4.5, oVirt Node uses systemd-coredump to gather, save, and process core dumps. For more information, see the documentation for core dump storage configuration files and the systemd-coredump service.
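For example, you can inspect the core dumps collected on a host with the standard coredumpctl utility (the PID in the second command is a placeholder):
# coredumpctl list
# coredumpctl info <PID>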
In oVirt 4.4 and earlier oVirt Node uses the Automatic Bug Reporting Tool (ABRT) to collect meaningful debug information about application crashes. For more information, see the Enterprise Linux System Administrator’s Guide.
Custom boot kernel arguments can be added to oVirt Node using the Kernel tab of the New Host or Edit Host window. For details, see Kernel Settings Explained.
Do not create untrusted users on oVirt Node, as this can lead to exploitation of local security vulnerabilities.
2.5.3. Enterprise Linux hosts
You can use an Enterprise Linux 7 installation on capable hardware as a host. oVirt supports hosts running the Enterprise Linux 7 Server AMD64/Intel 64 version with Intel VT or AMD-V extensions. To use your Enterprise Linux machine as a host, you must also attach the Enterprise Linux Server and oVirt subscriptions.
Adding a host can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and the creation of a bridge. Use the details view to monitor the process as the host and management system establish a connection.
Optionally, you can install a Cockpit web interface for monitoring the host’s resources and performing administrative tasks. The Cockpit web interface provides a graphical user interface for tasks that are performed before the host is added to the oVirt Engine, such as configuring networking or running terminal commands via the Terminal sub-tab.
Third-party watchdogs should not be installed on Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM.
2.5.4. Satellite Host Provider Hosts
Hosts provided by a Satellite host provider can also be used as virtualization hosts by the oVirt Engine. After a Satellite host provider has been added to the Engine as an external provider, any hosts that it provides can be added to and used in oVirt in the same way as oVirt Nodes and Enterprise Linux hosts.
2.5.5. Host Tasks
Adding Standard Hosts to the oVirt Engine
Always use the oVirt Engine to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate).
Adding a host to your oVirt environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge.
-
From the Administration Portal, click Compute → Hosts.
-
Click New.
-
Use the drop-down list to select the Data Center and Host Cluster for the new host.
-
Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
-
Select an authentication method to use for the Engine to access the host.
-
Enter the root user’s password to use password authentication.
-
Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
-
-
Optionally, click the Advanced Parameters button to change the following advanced host settings:
-
Disable automatic firewall configuration.
-
Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
-
-
Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
-
Click OK.
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the Events section of the Notification Drawer. After a brief delay the host status changes to Up.
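If you add hosts regularly, the same task can be automated with the ovirt.ovirt Ansible collection. The following playbook is a minimal sketch; the Engine URL, credentials, host name, and cluster are placeholders for your own values:
---
- name: Add a host to the oVirt Engine
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Obtain an SSO token for the Engine API
      ovirt.ovirt.ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Add the host and wait for it to come up
      ovirt.ovirt.ovirt_host:
        auth: "{{ ovirt_auth }}"
        name: host1.example.com
        address: host1.example.com
        cluster: Default
        password: "{{ host_root_password }}"

    - name: Revoke the SSO token
      ovirt.ovirt.ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"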
Adding a Satellite Host Provider Host
The process for adding a Satellite host provider host is almost identical to that of adding an Enterprise Linux host, except for the method by which the host is identified in the Engine. The following procedure outlines how to add a host provided by a Satellite host provider.
-
Click Compute → Hosts.
-
Click New.
-
Use the drop-down menu to select the Host Cluster for the new host.
-
Select the Foreman/Satellite check box to display the options for adding a Satellite host provider host and select the provider from which the host is to be added.
-
Select either Discovered Hosts or Provisioned Hosts.
-
Discovered Hosts (default option): Select the host, host group, and compute resources from the drop-down lists.
-
Provisioned Hosts: Select a host from the Providers Hosts drop-down list.
Any details regarding the host that can be retrieved from the external provider are automatically set, and can be edited as desired.
-
-
Enter the Name and SSH Port (Provisioned Hosts only) of the new host.
-
Select an authentication method to use with the host.
-
Enter the root user’s password to use password authentication.
-
Copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication (Provisioned Hosts only).
-
-
You have now completed the mandatory steps to add an Enterprise Linux host. Click the Advanced Parameters drop-down button to show the advanced host settings.
-
Optionally disable automatic firewall configuration.
-
Optionally add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
-
-
You can configure the Power Management, SPM, Console, and Network Provider using the applicable tabs now; however, as these are not fundamental to adding an Enterprise Linux host, they are not covered in this procedure.
-
Click OK to add the host and close the window.
The new host displays in the list of hosts with a status of Installing, and you can view the progress of the installation in the details view. After installation is complete, the status will update to Reboot. The host must be activated for the status to change to Up.
Setting up Satellite errata viewing for a host
In the Administration Portal, you can configure a host to view errata from Red Hat Satellite. After you associate a host with a Red Hat Satellite provider, you can receive updates in the host configuration dashboard about available errata and their importance, and decide when it is practical to apply the updates.
oVirt 4.4 supports viewing errata with Red Hat Satellite 6.6.
-
The Satellite server must be added as an external provider.
-
The Engine and any hosts on which you want to view errata must be registered in the Satellite server by their respective FQDNs. This ensures that external content host IDs do not need to be maintained in oVirt.
Hosts added using an IP address cannot report errata.
-
The Satellite account that manages the host must have Administrator permissions and a default organization set.
-
The host must be registered to the Satellite server.
-
Use Red Hat Satellite remote execution to manage packages on hosts.
The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely.
-
Click Compute → Hosts and select the host.
-
Click Edit.
-
Select the Use Foreman/Satellite check box.
-
Select the required Satellite server from the drop-down list.
-
Click OK.
The host is now configured to show the available errata, and their importance, in the same dashboard used to manage the host’s configuration.
-
Host Management Without Goferd and Katello Agent in the Red Hat Satellite document Managing Hosts
Configuring a Host for PCI Passthrough
This is one in a series of topics that show how to set up and configure SR-IOV on oVirt. For more information, see Setting Up and Configuring SR-IOV.
Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Engine already, ensure you place the host into maintenance mode first.
-
Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information.
-
Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Enterprise Linux Virtualization Deployment and Administration Guide for more information.
-
Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Engine or by editing the grub configuration file manually.
-
To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the oVirt Engine and Kernel Settings Explained.
-
To edit the grub configuration file manually, see Enabling IOMMU Manually.
-
-
For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information.
-
Enable IOMMU by editing the grub configuration file.
If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default.
-
For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.
# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... intel_iommu=on"
...
-
For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file.
# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on"
...
If intel_iommu=on or an AMD IOMMU is detected, you can try adding iommu=pt. The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option does not work for your host.
If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option:
# vi /etc/modprobe.d/vfio.conf
options vfio_iommu_type1 allow_unsafe_interrupts=1
-
-
Refresh the grub.cfg file and reboot the host for these changes to take effect:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
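After the host reboots, you can confirm that the kernel enabled an IOMMU before proceeding; for example, this checks the kernel log for Intel DMAR or AMD-Vi/IOMMU messages:
# dmesg | grep -i -e DMAR -e IOMMU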
Enabling nested virtualization for all virtual machines
Using hooks to enable nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.
Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and the nested (child) virtual machines.
Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to oVirt administrators.
By default, nested virtualization is not enabled in oVirt. To enable nested virtualization, you install a VDSM hook, vdsm-hook-nestedvt
, on all of the hosts in the cluster. Then, all of the virtual machines that run on these hosts can function as parent virtual machines.
You should only run parent virtual machines on hosts that support nested virtualization. If a parent virtual machine migrates to a host that does not support nested virtualization, its child virtual machines fail. To prevent this from happening, configure all of the hosts in the cluster to support nested virtualization. Otherwise, restrict parent virtual machines from migrating to hosts that do not support nested virtualization.
Take precautions to prevent parent virtual machines from migrating to hosts that do not support nested virtualization.
-
In the Administration Portal, click Compute → Hosts.
-
Select a host in the cluster where you want to enable nested virtualization and click Management → Maintenance and OK.
-
Select the host again, click Host Console, and log into the host console.
-
Install the VDSM hook:
# dnf install vdsm-hook-nestedvt
-
Reboot the host.
-
Log into the host console again and verify that nested virtualization is enabled:
$ cat /sys/module/kvm*/parameters/nested
If this command returns Y or 1, the feature is enabled.
-
Repeat this procedure for all of the hosts in the cluster.
Enabling nested virtualization for individual virtual machines
Nested virtualization is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope.
Nested virtualization enables virtual machines to host other virtual machines. For clarity, we will call these the parent virtual machines and the nested (child) virtual machines.
Child virtual machines are only visible to and managed by users who have access to the parent virtual machine. They are not visible to oVirt administrators.
To enable nested virtualization on specific virtual machines, not all virtual machines, you configure a host or hosts to support nested virtualization. Then you configure the virtual machine or virtual machines to run on those specific hosts and enable Pass-Through Host CPU. This option lets the virtual machines use the nested virtualization settings you just configured on the host. This option also restricts which hosts the virtual machines can run on and requires manual migration.
Otherwise, to enable nested virtualization for all of the virtual machines in a cluster, see Enabling nested virtualization for all virtual machines.
Only run parent virtual machines on hosts that support nested virtualization. If you migrate a parent virtual machine to a host that does not support nested virtualization, its child virtual machines will fail.
Do not migrate parent virtual machines to hosts that do not support nested virtualization.
Avoid live migration of parent virtual machines that are running child virtual machines. Even if the source and destination hosts are identical and support nested virtualization, the live migration can cause the child virtual machines to fail. Instead, shut down virtual machines before migration.
Configure the hosts to support nested virtualization:
-
In the Administration Portal, click Compute → Hosts.
-
Select a host in the cluster where you want to enable nested virtualization and click Edit.
-
In the Edit Host window, select the Kernel tab.
-
Under Kernel boot parameters, if the checkboxes are greyed-out, click RESET.
-
Select Nested Virtualization and click OK.
This action displays a kvm-<architecture>.nested=1 parameter in Kernel command line. The following steps add this parameter to the Current kernel CMD line.
-
Click Installation → Reinstall.
-
When the host status returns to Up, click Restart under Power Management or SSH Management.
-
Verify that nested virtualization is enabled. Log into the host console and enter:
$ cat /sys/module/kvm*/parameters/nested
If this command returns Y or 1, the feature is enabled.
-
Repeat this procedure for all of the hosts you need to run parent virtual machines.
Enable nested virtualization in specific virtual machines:
-
In the Administration Portal, click Compute → Virtual Machines.
-
Select a virtual machine and click Edit.
-
In the Edit Virtual Machine window, click Show Advanced Options and select the Host tab.
-
Under Start Running On, click Specific Host and select the host or hosts you configured to support nested virtualization.
-
Under CPU Options, select Pass-Through Host CPU. This action automatically sets the Migration mode to Allow manual migration only.
In RHV version 4.2, you can only enable Pass-Through Host CPU when Do not allow migration is selected.
-
Creating nested virtual machines in the EL documentation.
Moving a Host to Maintenance Mode
Many common maintenance tasks, including network configuration and deployment of software updates, require that hosts be placed into maintenance mode. Hosts should be placed into maintenance mode before any event that might cause VDSM to stop working properly, such as a reboot, or issues with networking or storage.
When a host is placed into maintenance mode the oVirt Engine attempts to migrate all running virtual machines to alternative hosts. The standard prerequisites for live migration apply, in particular there must be at least one active host in the cluster with capacity to run the migrated virtual machines.
Virtual machines that are pinned to the host and cannot be migrated are shut down. You can check which virtual machines are pinned to the host by clicking Pinned to Host in the Virtual Machines tab of the host’s details view.
Placing a Host into Maintenance Mode
-
Click Compute → Hosts and select the desired host.
-
Click Management → Maintenance. This opens the Maintenance Host(s) confirmation window.
-
Optionally, enter a Reason for moving the host into maintenance mode, which will appear in the logs and when the host is activated again. Then, click OK.
The host maintenance Reason field will only appear if it has been enabled in the cluster settings. See Cluster General Settings Explained for more information.
-
Optionally, select the required options for hosts that support Gluster.
Select the Ignore Gluster Quorum and Self-Heal Validations option to avoid the default checks. By default, the Engine checks that the Gluster quorum is not lost when the host is moved to maintenance mode. The Engine also checks that there is no self-heal activity that will be affected by moving the host to maintenance mode. If the Gluster quorum will be lost or if there is self-heal activity that will be affected, the Engine prevents the host from being placed into maintenance mode. Only use this option if there is no other way to place the host in maintenance mode.
Select the Stop Gluster Service option to stop all Gluster services while moving the host to maintenance mode.
These fields will only appear in the host maintenance window when the selected host supports Gluster. See Replacing the Primary Gluster Storage Node in Maintaining Red Hat Hyperconverged Infrastructure for more information.
-
Click OK to initiate maintenance mode.
All running virtual machines are migrated to alternative hosts. If the host is the Storage Pool Manager (SPM), the SPM role is migrated to another host. The Status field of the host changes to Preparing for Maintenance, and finally Maintenance when the operation completes successfully. VDSM does not stop while the host is in maintenance mode.
If migration fails on any virtual machine, click on the host to stop the operation placing it into maintenance mode, then click Cancel Migration on the virtual machine to stop the migration.
Activating a Host from Maintenance Mode
A host that has been placed into maintenance mode, or recently added to the environment, must be activated before it can be used. Activation may fail if the host is not ready; ensure that all tasks are complete before attempting to activate the host.
-
Click Compute → Hosts and select the host.
-
Click Management → Activate.
The host status changes to Unassigned, and finally Up when the operation is complete. Virtual machines can now run on the host. Virtual machines that were migrated off the host when it was placed into maintenance mode are not automatically migrated back to the host when it is activated, but can be migrated manually. If the host was the Storage Pool Manager (SPM) before being placed into maintenance mode, the SPM role does not return automatically when the host is activated.
Configuring Host Firewall Rules
You can configure the host firewall rules so that they are persistent, using Ansible. The cluster must be configured to use firewalld.
-
On the Engine machine, edit ovirt-host-deploy-post-tasks.yml.example to add a custom firewall port:
# vi /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml.example
---
# Any additional tasks required to be executing during host deploy process can
# be added below
#
- name: Enable additional port on firewalld
  firewalld:
    port: "12345/tcp"
    permanent: yes
    immediate: yes
    state: enabled
-
Save the file to another location as ovirt-host-deploy-post-tasks.yml.
New or reinstalled hosts are configured with the updated firewall rules.
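You can verify on such a host that the custom port (12345/tcp in the example above) is now open:
# firewall-cmd --list-ports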
Existing hosts must be reinstalled by clicking Installation → Reinstall and selecting Automatically configure host firewall.
Removing a Host
Removing a host from your oVirt environment is sometimes necessary, such as when you need to reinstall a host.
-
Click Compute → Hosts and select the host.
-
Click Management → Maintenance.
-
Once the host is in maintenance mode, click Remove. The Remove Host(s) confirmation window opens.
-
Select the Force Remove check box if the host is part of a Gluster Storage cluster and has volume bricks on it, or if the host is non-responsive.
-
Click OK.
Updating Hosts Between Minor Releases
You can update all hosts in a cluster, or update individual hosts.
You can update all hosts in a cluster instead of updating hosts individually. This is particularly useful during upgrades to new versions of oVirt. See oVirt Cluster Upgrade for more information about the Ansible role used to automate the updates.
Update one cluster at a time.
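For reference, a minimal sketch of a playbook that applies the ovirt.ovirt.cluster_upgrade role; the Engine URL, credentials, and cluster name are placeholders for your own values:
---
- name: Upgrade all hosts in a cluster
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api
    engine_user: admin@internal
    engine_password: "{{ vault_engine_password }}"
    cluster_name: Default
  roles:
    - ovirt.ovirt.cluster_upgrade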
-
On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster.
-
In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
-
The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
-
You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines are shut down during the update, unless you choose to skip that host instead.
-
In the Administration Portal, click Compute → Clusters and select the cluster. The Upgrade status column shows if an upgrade is available for any hosts in the cluster.
-
Click Upgrade.
-
Select the hosts to update, then click Next.
-
Configure the options:
-
Stop Pinned VMs shuts down any virtual machines that are pinned to hosts in the cluster, and is selected by default. You can clear this check box to skip updating those hosts so that the pinned virtual machines stay running, such as when a pinned virtual machine is running important services or processes and you do not want it to shut down at an unknown time during the update.
-
Upgrade Timeout (Minutes) sets the time to wait for an individual host to be updated before the cluster upgrade fails with a timeout. The default is 60. You can increase it for large clusters where 60 minutes might not be enough, or reduce it for small clusters where the hosts update quickly.
-
Check Upgrade checks each host for available updates before running the upgrade process. It is not selected by default, but you can select it if you need to ensure that recent updates are included, such as when you have configured the Engine to check for host updates less frequently than the default.
-
Reboot After Upgrade reboots each host after it is updated, and is selected by default. You can clear this check box to speed up the process if you are sure that there are no pending updates that require a host reboot.
-
Use Maintenance Policy sets the cluster’s scheduling policy to cluster_maintenance during the update. It is selected by default, so activity is limited and virtual machines cannot start unless they are highly available. You can clear this check box if you have a custom scheduling policy that you want to keep using during the update, but this could have unknown consequences. Ensure your custom policy is compatible with cluster upgrade activity before disabling this option.
-
-
Click Next.
-
Review the summary of the hosts and virtual machines that are affected.
-
Click Upgrade.
-
A cluster upgrade status screen displays with a progress bar showing the percentage of completion, and a list of steps in the upgrade process that have completed. You can click Go to Event Log to open the log entries for the upgrade. Closing this screen does not interrupt the upgrade process.
You can track the progress of host updates:
-
in the Compute → Clusters view, the Upgrade Status column displays a progress bar that displays the percentage of completion.
-
in the Compute → Hosts view
-
in the Events section of the Notification Drawer.
You can track the progress of individual virtual machine migrations in the Status column of the Compute → Virtual Machines view. In large environments, you may need to filter the results to show a particular group of virtual machines.
Use the host upgrade manager to update individual hosts directly from the Administration Portal.
The upgrade manager only checks hosts with a status of Up or Non-operational, but not Maintenance.
-
On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low.
-
In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
-
The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
-
You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host.
-
Ensure that the correct repositories are enabled. To view a list of currently enabled repositories, run dnf repolist.
-
For oVirt Nodes, the centos-release-ovirt45 RPM package enabling the correct repositories is already installed.
For Enterprise Linux hosts:
-
If you are going to install on RHEL or derivatives, please follow Installing on RHEL or derivatives first.
# dnf update -y centos-release-ovirt45
-
As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories to ensure that the latest fixes for platform regressions are promptly available.
-
In the Administration Portal, click Compute → Hosts and select the host to be updated.
-
Click Installation → Check for Upgrade and click OK. Open the Notification Drawer and expand the Events section to see the result.
-
If an update is available, click Installation → Upgrade.
-
Click OK to update the host. Running virtual machines are migrated according to their migration policy. If migration is disabled for any virtual machines, you are prompted to shut them down.
The details of the host are updated in Compute → Hosts and the status transitions through these stages:
Maintenance > Installing > Reboot > Up
If the update fails, the host’s status changes to Install Failed. From Install Failed you can click Installation → Upgrade again.
Repeat this procedure for each host in the oVirt environment.
You should update the hosts from the Administration Portal. However, you can update the hosts using dnf.
This information is provided for advanced system administrators who need to update hosts manually, but oVirt does not support this method. The procedure described in this topic does not include important steps, such as certificate renewal; it assumes advanced knowledge of such information. oVirt supports updating hosts using the Administration Portal. For details, see Updating individual hosts or Updating all hosts in a cluster in the Administration Guide.
You can use the dnf command to update your hosts. Update your systems regularly, to ensure timely application of security and bug fixes.
-
On oVirt Node, the update only preserves modified content in the /etc and /var directories. Modified data in other paths is overwritten during an update.
If the cluster has migration enabled, virtual machines are automatically migrated to another host in the cluster. Update a host when its usage is relatively low.
-
In a self-hosted engine environment, the Engine virtual machine can only migrate between self-hosted engine nodes in the same cluster. It cannot migrate to standard hosts.
-
The cluster must have sufficient memory reserved for its hosts to perform maintenance. Otherwise, virtual machine migrations will hang and fail. You can reduce the memory usage of host updates by shutting down some or all virtual machines before updating hosts.
-
You cannot migrate a pinned virtual machine (such as a virtual machine using a vGPU) to another host. Pinned virtual machines must be shut down before updating the host.
-
Ensure the correct repositories are enabled. You can check which repositories are currently enabled by running dnf repolist.
Upgrading from an older 4.5 to latest 4.5:
-
For oVirt Nodes, the centos-release-ovirt45 RPM package enabling the correct repositories is already installed.
For Enterprise Linux hosts:
-
If you are going to install on RHEL or derivatives, please follow Installing on RHEL or derivatives first.
# dnf update -y centos-release-ovirt45
As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories to ensure that the latest fixes for platform regressions are promptly available.
Upgrading from an older 4.4 to latest 4.4:
-
For oVirt Nodes, the ovirt-release44 RPM package enabling the correct repositories is already installed.
For Enterprise Linux hosts, ensure the ovirt-release44 RPM package is updated to the latest version:
# dnf update -y ovirt-release44
Common procedure valid for both 4.4 and 4.5:
-
In the Administration Portal, click Compute → Hosts and select the host to be updated.
-
Click Management → Maintenance and OK.
-
For Enterprise Linux hosts:
-
Identify the current version of Enterprise Linux:
# cat /etc/redhat-release
-
Check which version of the redhat-release package is available:
# dnf --refresh info --available redhat-release
This command shows any available updates. For example, when upgrading from Enterprise Linux 8.2.z to 8.3, compare the version of the package with the currently installed version:
Available Packages
Name    : redhat-release
Version : 8.3
Release : 1.0.el8
...
The Enterprise Linux Advanced Virtualization module is usually released later than the Enterprise Linux y-stream. If no new Advanced Virtualization module is available yet, or if there is an error enabling it, stop here and cancel the upgrade. Otherwise you risk corrupting the host.
-
If the Advanced Virtualization stream is available for Enterprise Linux 8.3 or later, reset the virt module:
# dnf module reset virt
If this module is already enabled in the Advanced Virtualization stream, this step is not necessary, but it has no negative impact.
You can see the value of the stream by entering:
# dnf module list virt
-
Enable the virt module in the Advanced Virtualization stream with the following command:
-
For oVirt 4.4.2:
# dnf module enable virt:8.2
-
For oVirt 4.4.3 to 4.4.5:
# dnf module enable virt:8.3
-
For oVirt 4.4.6 to 4.4.10:
# dnf module enable virt:av
-
For oVirt 4.5 and later:
# dnf module enable virt:rhel
Starting with EL 8.6, the Advanced Virtualization packages use the standard virt:rhel module. For EL 8.4 and 8.5, only one Advanced Virtualization stream is used, virt:av.
-
-
-
Enable version 14 of the nodejs module:
# dnf module -y enable nodejs:14
-
Update the host:
# dnf upgrade --nobest
-
Reboot the host to ensure all updates are correctly applied.
Check the imgbased logs to see if any additional package updates have failed for an oVirt Node. If some packages were not successfully reinstalled after the update, check that the packages are listed in /var/imgbased/persisted-rpms. Add any missing packages then run rpm -Uvh /var/imgbased/persisted-rpms/*.
Repeat this process for each host in the oVirt environment.
Reinstalling Hosts
Reinstall oVirt Nodes (oVirt Node) and Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.
-
If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low.
-
Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance.
-
Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks.
-
Click Compute → Hosts and select the host.
-
Click Management → Maintenance and OK.
-
Click Installation → Reinstall. This opens the Install Host window.
-
Click OK to reinstall the host.
After a host has been reinstalled and its status returns to Up, you can migrate virtual machines back to the host.
After you register an oVirt Node to the oVirt Engine and reinstall it, the Administration Portal may erroneously display its status as Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.
Viewing Host Errata
Errata for each host can be viewed after the host has been configured to receive errata information from the Red Hat Satellite server. For more information on configuring a host to receive errata information, see Configuring Satellite Errata Management for a Host.
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Errata tab.
Viewing the Health Status of a Host
Hosts have an external health status in addition to their regular Status. The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the host’s Name as one of the following icons:
-
OK: No icon
-
Info:
-
Warning:
-
Error:
-
Failure:
To view further details about the host’s health status, click the host’s name to open the details view, and then click the Events tab.
The host’s health status can also be viewed using the REST API. A GET request on a host will include the external_status element, which contains the health status.
You can set a host’s health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide.
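For example, a sketch of such a GET request with curl; the Engine FQDN, the credentials, and the host ID are placeholders:
# curl -s -k -u admin@internal:password \
    -H "Accept: application/xml" \
    https://engine.example.com/ovirt-engine/api/hosts/<host_id>
The external_status element in the response contains the health status.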
Viewing Host Devices
You can view the host devices for each host in the Host Devices tab in the details view. If the host has been configured for direct device assignment, these devices can be directly attached to virtual machines for improved performance.
For more information on the hardware requirements for direct device assignment, see Additional Hardware Considerations for Using Device Assignment in Hardware Considerations for Implementing SR-IOV.
For more information on configuring the host for direct device assignment, see Configuring a Host for PCI Passthrough.
For more information on attaching host devices to virtual machines, see Host Devices in the Virtual Machine Management Guide.
-
Click Compute → Hosts.
-
Click the host’s name. This opens the details view.
-
Click the Host Devices tab.
This tab lists the details of the host devices, including whether the device is attached to a virtual machine, and currently in use by that virtual machine.
Accessing Cockpit from the Administration Portal
Cockpit is available by default on oVirt Nodes (oVirt Node) and Enterprise Linux hosts. You can access the Cockpit web interface by typing the address into a browser, or through the Administration Portal.
-
In the Administration Portal, click Compute → Hosts and select a host.
-
Click Host Console.
The Cockpit login page opens in a new browser window.
Setting a Legacy SPICE Cipher
SPICE consoles use FIPS-compliant encryption by default, with a cipher string. The default SPICE cipher string is:
kECDHE+FIPS:kDHE+FIPS:kRSA+FIPS:!eNULL:!aNULL
This string is generally sufficient. However, if you have a virtual machine with an older operating system or SPICE client, where either one or the other does not support FIPS-compliant encryption, you must use a weaker cipher string. Otherwise, a connection security error may occur if you install a new cluster or a new host in an existing cluster and try to connect to that virtual machine.
You can change the cipher string by using an Ansible playbook.
Changing the cipher string
-
On the Engine machine, create a file in the directory /usr/share/ovirt-engine/playbooks. For example:
# vim /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
-
Enter the following in the file and save it:
---
- name: oVirt - setup weaker SPICE encryption for old clients
  hosts: hostname
  vars:
    host_deploy_spice_cipher_string: 'DEFAULT:-RC4:-3DES:-DES'
  roles:
    - ovirt-host-deploy-spice-encryption
-
Run the file you just created:
# ansible-playbook -l hostname /usr/share/ovirt-engine/playbooks/change-spice-cipher.yml
Alternatively, you can reconfigure the host with the Ansible playbook ovirt-host-deploy using the --extra-vars option with the variable host_deploy_spice_cipher_string:
# ansible-playbook -l hostname \
--extra-vars host_deploy_spice_cipher_string="DEFAULT:-RC4:-3DES:-DES" \
/usr/share/ovirt-engine/playbooks/ovirt-host-deploy.yml
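To preview exactly which ciphers a given string permits, you can expand it with openssl on any host:
# openssl ciphers -v 'DEFAULT:-RC4:-3DES:-DES'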
Configuring Host Power Management Settings
Configure your host power management device settings to perform host life-cycle operations (stop, start, restart) from the Administration Portal.
You must configure host power management in order to utilize host high availability and virtual machine high availability. For more information about power management devices, see Power Management in the Technical Reference.
-
Click Compute → Hosts and select a host.
-
Click Management → Maintenance, and click OK to confirm.
-
When the host is in maintenance mode, click Edit.
-
Click the Power Management tab.
-
Select the Enable Power Management check box to enable the fields.
-
Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.
If you enable or disable Kdump integration on an existing host, you must reinstall the host for kdump to be configured.
-
Optionally, select the Disable policy control of power management check box if you do not want your host’s power management to be controlled by the Scheduling Policy of the host’s cluster.
-
Click the plus (+) button to add a new power management device. The Edit fence agent window opens.
-
Enter the User Name and Password of the power management device into the appropriate fields.
-
Select the power management device Type in the drop-down list.
-
Enter the IP address in the Address field.
-
Enter the Port number used by the power management device to communicate with the host.
-
Enter the Slot number used to identify the blade of the power management device.
-
Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries.
-
If both IPv4 and IPv6 IP addresses can be used (default), leave the Options field blank.
-
If only IPv4 IP addresses can be used, enter inet4_only=1.
-
If only IPv6 IP addresses can be used, enter inet6_only=1.
-
-
Select the Secure check box to enable the power management device to connect securely to the host.
-
Click Test to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification.
-
Click OK to close the Edit fence agent window.
-
In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Engine will search the host’s cluster and data center for a fencing proxy.
-
Click OK.
The Management → Power Management drop-down menu is now enabled in the Administration Portal.
Configuring Host Storage Pool Manager Settings
The Storage Pool Manager (SPM) is a management role given to one of the hosts in a data center to maintain access control over the storage domains. The SPM must always be available, and the SPM role will be assigned to another host if the SPM host becomes unavailable. As the SPM role uses some of the host’s available resources, it is important to prioritize hosts that can afford the resources.
The Storage Pool Manager (SPM) priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority.
-
Click Compute → Hosts and select a host.
-
Click Edit.
-
Click the SPM tab.
-
Use the radio buttons to select the appropriate SPM priority for the host.
-
Click OK.
Migrating a self-hosted engine host to a different cluster
You cannot migrate a host that is configured as a self-hosted engine host to a data center or cluster other than the one in which the self-hosted engine virtual machine is running. All self-hosted engine hosts must be in the same data center and cluster.
You need to disable the host from being a self-hosted engine host by undeploying the self-hosted engine configuration from the host.
-
Click Compute → Hosts and select the host.
-
Click Management → Maintenance. The host’s status changes to Maintenance.
-
Under Reinstall, select Hosted Engine UNDEPLOY.
-
Click Reinstall.
Alternatively, you can use the REST API undeploy_hosted_engine parameter.
-
Click Edit.
-
Select the target data center and cluster.
-
Click OK.
-
Click Management → Activate.
2.5.6. Explanation of Settings and Controls in the New Host and Edit Host Windows
Host General Settings Explained
These settings apply when editing the details of a host or adding new Enterprise Linux hosts and Satellite host provider hosts.
The General settings table contains the information required on the General tab of the New Host or Edit Host window.
Field Name | Description |
---|---|
Host Cluster |
The cluster and data center to which the host belongs. |
Use Foreman/Satellite |
Select or clear this check box to view or hide options for adding hosts provided by Satellite host providers. The following options are also available: Discovered Hosts, Provisioned Hosts. |
Name |
The name of the host. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. |
Comment |
A field for adding plain text, human-readable comments regarding the host. |
Hostname |
The IP address or resolvable host name of the host. If a resolvable hostname is used, you must ensure that all addresses that the hostname resolves to match the IP addresses, IPv4 and IPv6, used by the management network of the host. |
Password |
The password of the host’s root user. Set the password when adding the host. The password cannot be edited afterwards. |
Activate host after install |
Select this checkbox to activate the host after successful installation. This is enabled by default and required for the hypervisors to be activated successfully. After successful installation, you can clear this checkbox to switch the host status to Maintenance. This allows the administrator to perform additional configuration tasks on the hypervisors. |
Reboot host after install |
Select this checkbox to reboot the host after it is installed. This is enabled by default. |
SSH Public Key |
Copy the contents in the text box to the /root/.ssh/authorized_keys file on the host to use the Engine’s SSH key instead of a password to authenticate with a host. |
Automatically configure host firewall |
When adding a new host, the Engine can open the required ports on the host’s firewall. This is enabled by default. This is an Advanced Parameter. |
SSH Fingerprint |
You can fetch the host’s SSH fingerprint, and compare it with the fingerprint you expect the host to return, ensuring that they match. This is an Advanced Parameter. |
Host Power Management Settings Explained
The Power Management settings table contains the information required on the Power Management tab of the New Host or Edit Host windows. You can configure power management if the host has a supported power management card.
Field Name | Description |
---|---|
Enable Power Management |
Enables power management on the host. Select this check box to enable the rest of the fields in the Power Management tab. |
Kdump integration |
Prevents the host from fencing while performing a kernel crash dump, so that the crash dump is not interrupted. In Enterprise Linux 7.1 and later, kdump is available by default. If kdump is available on the host, but its configuration is not valid (the kdump service cannot be started), enabling Kdump integration will cause the host (re)installation to fail. If you enable or disable Kdump integration on an existing host, you must reinstall the host. |
Disable policy control of power management |
Power management is controlled by the Scheduling Policy of the host’s cluster. If power management is enabled and the defined low utilization value is reached, the Engine will power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Select this check box to disable policy control. |
Agents by Sequential Order |
Lists the host’s fence agents. Fence agents can be sequential, concurrent, or a mix of both.
Fence agents are sequential by default. Use the up and down buttons to change the sequence in which the fence agents are used. To make two fence agents concurrent, select one fence agent from the Concurrent with drop-down list next to the other fence agent. Additional fence agents can be added to the group of concurrent fence agents by selecting the group from the Concurrent with drop-down list next to the additional fence agent. |
Add Fence Agent |
Click the + button to add a new fence agent. The Edit fence agent window opens. See the table below for more information on the fields in this window. |
Power Management Proxy Preference |
By default, specifies that the Engine will search for a fencing proxy within the same cluster as the host, and if no fencing proxy is found, the Engine will search in the same dc (data center). Use the up and down buttons to change the sequence in which these resources are used. This field is available under Advanced Parameters. |
The following table contains the information required in the Edit fence agent window.
Field Name | Description |
---|---|
Address |
The address to access your host’s power management device. Either a resolvable hostname or an IP address. |
User Name |
User account with which to access the power management device. You can set up a user on the device, or use the default user. |
Password |
Password for the user accessing the power management device. |
Type |
The type of power management device in your host. Choose one of the following:
For more information about power management devices, see Power Management in the Technical Reference. |
Port |
The port number used by the power management device to communicate with the host. |
Slot |
The number used to identify the blade of the power management device. |
Service Profile |
The service profile name used to identify the blade of the power management device. This field appears instead of Slot when the device type is cisco_ucs. |
Options |
Power management device specific options. Enter these as 'key=value'. See the documentation of your host’s power management device for the options available. For Enterprise Linux 7 hosts, if you are using cisco_ucs as the power management device, you also need to append ssl_insecure=1 to the Options field. |
Secure |
Select this check box to allow the power management device to connect securely to the host. This can be done via ssh, ssl, or other authentication protocols depending on the power management agent. |
SPM Priority Settings Explained
The SPM settings table details the information required on the SPM tab of the New Host or Edit Host window.
Field Name | Description |
---|---|
SPM Priority |
Defines the likelihood that the host will be given the role of Storage Pool Manager (SPM). The options are Low, Normal, and High priority. Low priority means that there is a reduced likelihood of the host being assigned the role of SPM, and High priority means there is an increased likelihood. The default setting is Normal. |
Host Console Settings Explained
The Console settings table details the information required on the Console tab of the New Host or Edit Host window.
Field Name | Description |
---|---|
Override display address |
Select this check box to override the display addresses of the host. This feature is useful in a case where the hosts are defined by internal IP and are behind a NAT firewall. When a user connects to a virtual machine from outside of the internal network, instead of returning the private address of the host on which the virtual machine is running, the machine returns a public IP or FQDN (which is resolved in the external network to the public IP). |
Display address |
The display address specified here will be used for all virtual machines running on this host. The address must be in the format of a fully qualified domain name or IP. |
vGPU Placement |
Specifies the preferred placement of vGPUs:
|
Network Provider Settings Explained
The Network Provider settings table details the information required on the Network Provider tab of the New Host or Edit Host window.
Field Name | Description |
---|---|
External Network Provider |
If you have added an external network provider and want the host’s network to be provisioned by the external network provider, select one from the list. |
Kernel Settings Explained
The Kernel settings table details the information required on the Kernel tab of the New Host or Edit Host window. Common kernel boot parameter options are listed as check boxes so you can easily select them.
For more complex changes, use the free text entry field next to Kernel command line to add in any additional parameters required. If you change any kernel command line parameters, you must reinstall the host.
If the host is attached to the Engine, you must place the host into maintenance mode before making changes. After making the changes, reinstall the host to apply the changes.
Field Name | Description |
---|---|
Hostdev Passthrough & SR-IOV |
Enables the IOMMU flag in the kernel so a virtual machine can use a host device as if it is attached directly to the virtual machine. The host hardware and firmware must also support IOMMU. The virtualization extension and IOMMU extension must be enabled on the hardware. See Configuring a Host for PCI Passthrough. IBM POWER8 has IOMMU enabled by default. |
Nested Virtualization |
Enables the vmx or svm flag so that you can run virtual machines within virtual machines. This option is only intended for evaluation purposes and is not supported for production use. The vdsm-hook-nestedvt hook must be installed on the host. |
Unsafe Interrupts |
If IOMMU is enabled but the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling this option. Note that you should only enable this option if the virtual machines on the host are trusted; having the option enabled potentially exposes the host to MSI attacks from the virtual machines. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes. |
PCI Reallocation |
If your SR-IOV NIC is unable to allocate virtual functions because of memory issues, consider enabling this option. The host hardware and firmware must also support PCI reallocation. This option is only intended to be used as a workaround when using uncertified hardware for evaluation purposes. |
Blacklist Nouveau |
Blocks the nouveau driver. Nouveau is a community driver for NVIDIA GPUs that conflicts with vendor-supplied drivers. The nouveau driver should be blocked when vendor drivers take precedence. |
SMT Disabled |
Disables Simultaneous Multi Threading (SMT). Disabling SMT can mitigate security vulnerabilities, such as L1TF or MDS. |
Kernel command line |
This field allows you to append more kernel parameters to the default parameters. |
If the kernel boot parameters are grayed out, click the reset button and the options will be available.
Hosted Engine Settings Explained
The Hosted Engine settings table details the information required on the Hosted Engine tab of the New Host or Edit Host window.
Field Name | Description |
---|---|
Choose hosted engine deployment action |
Three options are available:
|
2.5.7. Host Resilience
Host High Availability
The oVirt Engine uses fencing to keep hosts in a cluster responsive. A Non Responsive host is different from a Non Operational host. Non Operational hosts can be communicated with by the Engine, but have an incorrect configuration, for example a missing logical network. Non Responsive hosts cannot be communicated with by the Engine.
Fencing allows a cluster to react to unexpected host failures and enforce power saving, load balancing, and virtual machine availability policies. You should configure the fencing parameters for your host’s power management device and test their correctness from time to time. In a fencing operation, a non-responsive host is rebooted, and if the host does not return to an active status within a prescribed time, it remains non-responsive pending manual intervention and troubleshooting.
To automatically check the fencing parameters, you can configure the PMHealthCheckEnabled (false by default) and PMHealthCheckIntervalInSec (3600 sec by default) engine-config options. When set to true, PMHealthCheckEnabled checks all host agents at the interval specified by PMHealthCheckIntervalInSec and raises warnings if it detects issues.
Power management operations can be performed by oVirt Engine after it reboots, by a proxy host, or manually in the Administration Portal. All the virtual machines running on the non-responsive host are stopped, and highly available virtual machines are started on a different host. At least two hosts are required for power management operations.
After the Engine starts up, it automatically attempts to fence non-responsive hosts that have power management enabled after the quiet time (5 minutes by default) has elapsed. The quiet time can be configured by updating the DisableFenceAtStartupInSec engine-config option.
The DisableFenceAtStartupInSec engine-config option helps prevent a scenario where the Engine attempts to fence hosts while they boot up. This can occur after a data center outage because a host’s boot process is normally longer than the Engine boot process.
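For example, to lengthen the quiet time to 10 minutes, set the option with engine-config and restart the Engine service for the change to take effect:
# engine-config -s DisableFenceAtStartupInSec=600
# systemctl restart ovirt-engine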
|
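For example, to extend the quiet time to 10 minutes (600 is an illustrative value), set the option and restart the Engine so that the change takes effect:
# engine-config -s DisableFenceAtStartupInSec=600
# systemctl restart ovirt-engine.service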
Hosts can be fenced automatically by the proxy host using the power management parameters, or manually by right-clicking on a host and using the options on the menu.
If a host runs virtual machines that are highly available, power management must be enabled and configured. |
Power Management by Proxy in oVirt
The oVirt Engine does not communicate directly with fence agents. Instead, the Engine uses a proxy to send power management commands to a host power management device. The Engine uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy.
You can select between:
-
Any host in the same cluster as the host requiring fencing.
-
Any host in the same data center as the host requiring fencing.
A viable fencing proxy host has a status of either UP or Maintenance.
Setting Fencing Parameters on a Host
The parameters for host fencing are set using the Power Management fields on the New Host or Edit Host windows. Power management enables the system to fence a troublesome host using an additional interface such as a Remote Access Card (RAC).
All power management operations are done using a proxy host, as opposed to directly by the oVirt Engine. At least two hosts are required for power management operations.
-
Click
and select the host. -
Click Edit.
-
Click the Power Management tab.
-
Select the Enable Power Management check box to enable the fields.
-
Select the Kdump integration check box to prevent the host from fencing while performing a kernel crash dump.
If you enable or disable Kdump integration on an existing host, you must reinstall the host.
-
Optionally, select the Disable policy control of power management check box if you do not want your host’s power management to be controlled by the Scheduling Policy of the host’s cluster.
-
Click the + button to add a new power management device. The Edit fence agent window opens.
-
Enter the Address, User Name, and Password of the power management device.
-
Select the power management device Type from the drop-down list.
-
Enter the SSH Port number used by the power management device to communicate with the host.
-
Enter the Slot number used to identify the blade of the power management device.
-
Enter the Options for the power management device. Use a comma-separated list of 'key=value' entries.
-
Select the Secure check box to enable the power management device to connect securely to the host.
-
Click the Test button to ensure the settings are correct. Test Succeeded, Host Status is: on will display upon successful verification.
Power management parameters (user ID, password, options, and so on) are tested by the oVirt Engine only during setup, and manually after that. If you choose to ignore alerts about incorrect parameters, or if the parameters are changed on the power management hardware without the corresponding change in the oVirt Engine, fencing is likely to fail when most needed. You can also verify a device by running its fence agent manually; see the example after this procedure.
-
Click OK to close the Edit fence agent window.
-
In the Power Management tab, optionally expand the Advanced Parameters and use the up and down buttons to specify the order in which the Engine will search the host’s cluster and data center for a fencing proxy.
-
Click OK.
You are returned to the list of hosts. Note that the exclamation mark next to the host’s name has now disappeared, signifying that power management has been successfully configured.
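To cross-check a power management device outside the Engine, you can also run its fence agent manually from another host. The following is a sketch only, assuming an ipmilan-type device with an illustrative address and credentials:
# fence_ipmilan --ip=192.0.2.10 --username=admin --password=secret --action=status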
fence_kdump Advanced Configuration
kdump
Click the name of a host to view the status of the kdump service in the General tab of the details view:
-
Enabled: kdump is configured properly and the kdump service is running.
-
Disabled: the kdump service is not running (in this case kdump integration will not work properly).
-
Unknown: shown only for hosts with an earlier VDSM version that does not report kdump status.
For more information on installing and using kdump, see the Enterprise Linux 7 Kernel Crash Dump Guide.
fence_kdump
Enabling Kdump integration in the Power Management tab of the New Host or Edit Host window configures a standard fence_kdump setup. If the environment’s network configuration is simple and the Engine’s FQDN is resolvable on all hosts, the default fence_kdump settings are sufficient for use.
However, there are some cases where advanced configuration of fence_kdump is necessary. Environments with more complex networking may require manual changes to the configuration of the Engine, fence_kdump listener, or both. For example, if the Engine’s FQDN is not resolvable on all hosts with Kdump integration enabled, you can set a proper host name or IP address using engine-config:
engine-config -s FenceKdumpDestinationAddress=A.B.C.D
The following example cases may also require configuration changes:
-
The Engine has two NICs, where one of these is public-facing, and the second is the preferred destination for fence_kdump messages.
-
You need to execute the fence_kdump listener on a different IP or port.
-
You need to set a custom interval for fence_kdump notification messages, to prevent possible packet loss.
Customized fence_kdump detection settings are recommended for advanced users only, as changes to the default configuration are only necessary in more complex networking setups.
fence_kdump listener Configuration
Edit the configuration of the fence_kdump listener. This is only necessary in cases where the default configuration is not sufficient.
-
Create a new file (for example, my-fence-kdump.conf) in /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/.
-
Enter your customization with the syntax OPTION=value and save the file.
The edited values must also be changed in
engine-config
as outlined in the fence_kdump Listener Configuration Options table in Configuring fence-kdump on the Engine. -
Restart the fence_kdump listener:
# systemctl restart ovirt-fence-kdump-listener.service
The following options can be customized if required:
Variable | Description | Default | Note |
---|---|---|---|
LISTENER_ADDRESS | Defines the IP address to receive fence_kdump messages on. | 0.0.0.0 | If the value of this parameter is changed, it must match the value of FenceKdumpDestinationAddress in engine-config. |
LISTENER_PORT | Defines the port to receive fence_kdump messages on. | 7410 | If the value of this parameter is changed, it must match the value of FenceKdumpDestinationPort in engine-config. |
HEARTBEAT_INTERVAL | Defines the interval in seconds of the listener’s heartbeat updates. | 30 | If the value of this parameter is changed, it must be half the size or smaller than the value of FenceKdumpListenerTimeout in engine-config. |
SESSION_SYNC_INTERVAL | Defines the interval in seconds to synchronize the listener’s host kdumping sessions in memory to the database. | 5 | If the value of this parameter is changed, it must be half the size or smaller than the value of KdumpStartedTimeout in engine-config. |
REOPEN_DB_CONNECTION_INTERVAL | Defines the interval in seconds to reopen the database connection which was previously unavailable. | 30 | - |
KDUMP_FINISHED_TIMEOUT | Defines the maximum timeout in seconds after the last received message from kdumping hosts after which the host kdump flow is marked as FINISHED. | 60 | If the value of this parameter is changed, it must be double the size or higher than the value of FenceKdumpMessageInterval in engine-config. |
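As an illustration, a drop-in file that moves the listener to a non-default port could look like the following (the file name and port number are assumptions for the example):
# cat /etc/ovirt-engine/ovirt-fence-kdump-listener.conf.d/my-fence-kdump.conf
LISTENER_PORT=7411
The same port must then be set on the Engine side, as described in Configuring fence_kdump on the Engine.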
Configuring fence_kdump on the Engine
Edit the Engine’s kdump configuration. This is only necessary in cases where the default configuration is not sufficient. The current configuration values can be found using:
# engine-config -g OPTION
-
Edit kdump’s configuration using the
engine-config
command:# engine-config -s OPTION=value
The edited values must also be changed in the fence_kdump listener configuration file as outlined in the
Kdump Configuration Options
table. See fence_kdump listener configuration. -
Restart the
ovirt-engine
service:# systemctl restart ovirt-engine.service
-
Reinstall all hosts with Kdump integration enabled, if required (see the table below).
The following options can be configured using engine-config:
Variable | Description | Default | Note |
---|---|---|---|
FenceKdumpDestinationAddress | Defines the hostname(s) or IP address(es) to send fence_kdump messages to. If empty, the Engine’s FQDN is used. | Empty string (Engine FQDN is used) | If the value of this parameter is changed, it must match the value of LISTENER_ADDRESS in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. |
FenceKdumpDestinationPort | Defines the port to send fence_kdump messages to. | 7410 | If the value of this parameter is changed, it must match the value of LISTENER_PORT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. |
FenceKdumpMessageInterval | Defines the interval in seconds between messages sent by fence_kdump. | 5 | If the value of this parameter is changed, it must be half the size or smaller than the value of KDUMP_FINISHED_TIMEOUT in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. |
FenceKdumpListenerTimeout | Defines the maximum timeout in seconds since the last heartbeat to consider the fence_kdump listener alive. | 90 | If the value of this parameter is changed, it must be double the size or higher than the value of HEARTBEAT_INTERVAL in the fence_kdump listener configuration file. |
KdumpStartedTimeout | Defines the maximum timeout in seconds to wait until the first message from the kdumping host is received (to detect that the host kdump flow has started). | 30 | If the value of this parameter is changed, it must be double the size or higher than the value of SESSION_SYNC_INTERVAL in the fence_kdump listener configuration file, and all hosts with Kdump integration enabled must be reinstalled. |
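Continuing the illustrative port change from the listener example above, the matching Engine-side configuration would be:
# engine-config -s FenceKdumpDestinationPort=7411
# systemctl restart ovirt-engine.service
Because FenceKdumpDestinationPort affects the hosts, all hosts with Kdump integration enabled must then be reinstalled.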
Soft-Fencing Hosts
Hosts can sometimes become non-responsive due to an unexpected problem, and though VDSM is unable to respond to requests, the virtual machines that depend upon VDSM remain alive and accessible. In these situations, restarting VDSM returns VDSM to a responsive state and resolves this issue.
"SSH Soft Fencing" is a process where the Engine attempts to restart VDSM via SSH on non-responsive hosts. If the Engine fails to restart VDSM via SSH, the responsibility for fencing falls to the external fencing agent if an external fencing agent has been configured.
Soft-fencing over SSH works as follows. Fencing must be configured and enabled on the host, and a valid proxy host (a second host, in an UP state, in the data center) must exist. When the connection between the Engine and the host times out, the following happens:
-
On the first network failure, the status of the host changes to "connecting".
-
The Engine then either makes three attempts to ask VDSM for its status, or waits for an interval determined by the load on the host, whichever is longer, to give VDSM the maximum amount of time to respond. The interval is calculated as: TimeoutToResetVdsInSeconds (default 60 seconds) + DelayResetPerVmInSeconds (default 0.5 seconds) × (number of virtual machines running on the host) + DelayResetForSpmInSeconds (default 20 seconds) × (1 if the host runs as the Storage Pool Manager (SPM), otherwise 0). See the worked example after this list.
-
If the host does not respond when that interval has elapsed,
vdsm restart
is executed via SSH. -
If
vdsm restart
does not succeed in re-establishing the connection between the host and the Engine, the status of the host changes to Non Responsive
and, if power management is configured, fencing is handed off to the external fencing agent.
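For example, with the default values, a host running 10 virtual machines that is also the SPM is given 60 + (0.5 × 10) + (20 × 1) = 85 seconds to respond before vdsm restart is attempted over SSH.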
Soft-fencing over SSH can be executed on hosts that have no power management configured. This is distinct from "fencing": fencing can be executed only on hosts that have power management configured. |
Using Host Power Management Functions
When power management has been configured for a host, you can access a number of options from the Administration Portal interface. While each power management device has its own customizable options, they all support the basic options to start, stop, and restart a host.
-
Click
and select the host. -
Click the Management drop-down menu and select one of the following Power Management options:
-
Restart: This option stops the host and waits until the host’s status changes to
Down
. When the agent has verified that the host is down, the highly available virtual machines are restarted on another host in the cluster. The agent then restarts this host. When the host is ready for use, its status displays as Up
. -
Start: This option starts the host and lets it join a cluster. When it is ready for use its status displays as
Up
. -
Stop: This option powers off the host. Before using this option, ensure that the virtual machines running on the host have been migrated to other hosts in the cluster. Otherwise the virtual machines will crash and only the highly available virtual machines will be restarted on another host. When the host has been stopped its status displays as
Non-Operational
. If Power Management is not enabled, you can restart or stop the host by selecting it, clicking the Management drop-down menu, and selecting an SSH Management option, Restart or Stop.
When two fencing agents are defined on a host, they can be used concurrently or sequentially. For concurrent agents, both agents have to respond to the Stop command for the host to be stopped; and when one agent responds to the Start command, the host will go up. For sequential agents, to start or stop a host, the primary agent is used first; if it fails, the secondary agent is used.
-
-
Click OK.
Manually Fencing or Isolating a Non-Responsive Host
If a host unpredictably goes into a non-responsive state, for example, due to a hardware failure, it can significantly affect the performance of the environment. If you do not have a power management device, or if it is incorrectly configured, you can reboot the host manually.
Do not select Confirm 'Host has been Rebooted' unless you have manually rebooted the host. Using this option while the host is still running can lead to virtual machine image corruption. |
-
In the Administration Portal, click
and confirm the host’s status is Non Responsive
. -
Manually reboot the host. This could mean physically entering the lab and rebooting the host.
-
In the Administration Portal, select the host and click More Actions (
), then click Confirm 'Host has been Rebooted'.
-
Select the Approve Operation check box and click OK.
-
If your hosts take an unusually long time to boot, you can set
ServerRebootTimeout
to specify how many seconds to wait before determining that the host is Non Responsive
:# engine-config --set ServerRebootTimeout=integer
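For example, to wait 10 minutes before a slow-booting host is considered non-responsive (600 is an illustrative value):
# engine-config --set ServerRebootTimeout=600
# systemctl restart ovirt-engine.service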
2.6. Storage
2.6.1. About oVirt storage
oVirt uses a centralized storage system for virtual disks, ISO files and snapshots. Storage networking can be implemented using:
-
Network File System (NFS)
-
GlusterFS exports
-
Other POSIX compliant file systems
-
Internet Small Computer System Interface (iSCSI)
-
Local storage attached directly to the virtualization hosts
-
Fibre Channel Protocol (FCP)
-
Parallel NFS (pNFS)
Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated.
As an oVirt system administrator, you create, configure, attach, and maintain storage for the virtualized enterprise. You must be familiar with the storage types and their use. Read your storage array vendor’s guides, and see Red Hat Enterprise Linux Managing storage devices for more information on the concepts, protocols, requirements, and general usage of storage.
To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up.
oVirt has three types of storage domains:
-
Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain.
The data domain cannot be shared across data centers. Data domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, provided they are all shared, rather than local, domains.
You must attach a data domain to a data center before you can attach domains of other types to it.
-
ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center’s need for physical media. An ISO domain can be shared across different data centers. ISO domains can only be NFS-based. Only one ISO domain can be added to a data center.
-
Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and oVirt environments. Export domains can be used to backup virtual machines. An export domain can be moved between data centers, however, it can only be active in one data center at a time. Export domains can only be NFS-based. Only one export domain can be added to a data center.
The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See Importing Existing Storage Domains for information on importing storage domains.
Only commence configuring and attaching storage for your oVirt environment once you have determined the storage needs of your data center(s). |
2.6.2. Understanding Storage Domains
A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems).
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
On NFS, all virtual disks, templates, and snapshots are files.
On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Configuring and managing logical volumes for more information on LVM.
Virtual disks can have one of two formats, either QCOW2 or raw. Disk allocation can be sparse or preallocated. Snapshots are always sparse but can be taken for disks of either format.
Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster.
2.6.3. Preparing and Adding NFS Storage
Preparing NFS Storage
Set up NFS shares on your file storage or remote server to serve as storage domains on oVirt hosts. After exporting the shares on the remote storage and configuring them in the oVirt Engine, the shares will be automatically imported on the oVirt hosts.
For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8.
Specific system user accounts and system user groups are required by oVirt so the Engine can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown
and chmod
steps for all of the directories you intend to use as storage domains in oVirt.
-
Install the nfs-utils package:
# dnf install nfs-utils -y
-
To check the enabled NFS versions:
# cat /proc/fs/nfsd/versions
-
Enable the following services:
# systemctl enable nfs-server
# systemctl enable rpcbind
-
Create the group
kvm
:# groupadd kvm -g 36
-
Create the user
vdsm
in the groupkvm
:# useradd vdsm -u 36 -g kvm
-
Create the
storage
directory and modify the access rights.# mkdir /storage # chmod 0755 /storage # chown 36:36 /storage/
-
Add the
storage
directory to/etc/exports
with the relevant permissions.# vi /etc/exports # cat /etc/exports /storage *(rw)
-
Restart the following services:
# systemctl restart rpcbind
# systemctl restart nfs-server
-
To see which exports are available for a specific IP address:
# exportfs
/nfs_server/srv 10.46.11.3/24
/nfs_server     <world>
If changes in /etc/exports have been made after starting the services, run the exportfs -ra command to reload the changes. |
Adding NFS Storage
This procedure shows you how to attach existing NFS storage to your oVirt environment as a data domain.
If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list.
-
In the Administration Portal, click
. -
Click New Domain.
-
Enter a Name for the storage domain.
-
Accept the default values for the Data Center, Domain Function, Storage Type, Format, and Host lists.
-
Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data.
-
Optionally, you can configure the advanced parameters:
-
Click Advanced Parameters.
-
Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
-
Click OK.
The new NFS data domain has a status of Locked
until the disk is prepared. The data domain is then automatically attached to the data center.
Increasing NFS Storage
To increase the amount of NFS storage, you can either create a new storage domain and add it to an existing data center, or increase the available free space on the NFS server. For the former option, see Adding NFS Storage. The following procedure explains how to increase the available free space on the existing NFS server.
-
Click
. -
Click the NFS storage domain’s name. This opens the details view.
-
Click the Data Center tab and click Maintenance to place the storage domain into maintenance mode. This unmounts the existing share and makes it possible to resize the storage domain.
-
On the NFS server, resize the storage. For Enterprise Linux 6 systems, see Enterprise Linux 6 Storage Administration Guide. For Enterprise Linux 7 systems, see Enterprise Linux 7 Storage Administration Guide. For Enterprise Linux 8 systems, see Resizing a partition.
-
In the details view, click the Data Center tab and click Activate to mount the storage domain.
2.6.4. Preparing and adding local storage
A virtual machine’s disk that uses a storage device that is physically installed on the virtual machine’s host is referred to as a local storage device.
A storage device must be part of a storage domain. The storage domain type for local storage is referred to as a local storage domain.
Configuring a host to use local storage automatically creates, and adds the host to, a new local storage domain, data center and cluster to which no other host can be added. Multiple-host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled.
Preparing local storage
On oVirt Node (oVirt Node), local storage should always be defined on a file system that is separate from /
(root).
Use a separate logical volume or disk to prevent possible loss of data during upgrades.
-
On the host, create the directory to be used for the local storage:
# mkdir -p /data/images
-
Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36):
# chown 36:36 /data /data/images
# chmod 0755 /data /data/images
Create the local storage on a logical volume:
-
Create a local storage directory:
# mkdir /data
# lvcreate -L $SIZE rhvh -n data
# mkfs.ext4 /dev/mapper/rhvh-data
# echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab
# mount /data
-
Mount the new local storage:
# mount -a
-
Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36):
# chown 36:36 /data
# chmod 0755 /data
Adding a local storage domain
When adding a local storage domain to a host, setting the path to the local storage directory automatically creates and places the host in a local data center, local cluster, and local storage domain.
-
Click
and select the host. -
Click
and OK. The host’s status changes to Maintenance. -
Click
. -
Click the Edit buttons next to the Data Center, Cluster, and Storage fields to configure and name the local storage domain.
-
Set the path to your local storage in the text entry field.
-
If applicable, click the Optimization tab to configure the memory optimization policy for the new local storage cluster.
-
Click OK.
The Engine sets up the local data center with a local cluster and a local storage domain, and changes the host’s status to Up.
-
Click
. -
Locate the local storage domain you just added.
The domain’s status should be Active, and the value in the Storage Type column should be Local on Host.
You can now upload a disk image in the new local storage domain.
2.6.5. Preparing and Adding POSIX-compliant File System Storage
Preparing POSIX-compliant File System Storage
POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP.
Any POSIX-compliant file system used as a storage domain in oVirt must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with oVirt.
For information on setting up and configuring POSIX-compliant file system storage, see Enterprise Linux Global File System 2.
Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. |
Adding POSIX-compliant File System Storage
This procedure shows you how to attach existing POSIX-compliant file system storage to your oVirt environment as a data domain.
-
Click
. -
Click New Domain.
-
Enter the Name for the storage domain.
-
Select the Data Center to be associated with the storage domain. The data center selected must be of type POSIX (POSIX compliant FS). Alternatively, select
(none)
. -
Select
Data
from the Domain Function drop-down list, andPOSIX compliant FS
from the Storage Type drop-down list. If applicable, select the Format from the drop-down menu.
-
Select a host from the Host drop-down list.
-
Enter the Path to the POSIX file system, as you would normally provide it to the
mount
command (see the example mount invocation after this procedure). -
Enter the VFS Type, as you would normally provide it to the
mount
command using the-t
argument. Seeman mount
for a list of valid VFS types. -
Enter additional Mount Options, as you would normally provide them to the
mount
command using the-o
argument. The mount options should be provided in a comma-separated list. Seeman mount
for a list of valid mount options. -
Optionally, you can configure the advanced parameters.
-
Click Advanced Parameters.
-
Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
-
Click OK.
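For reference, the Path, VFS Type, and Mount Options fields above map directly onto a manual mount invocation. The following is a sketch only, with a hypothetical GFS2 device and mount options:
# mount -t gfs2 -o noatime /dev/vg0/lv_data /mnt/data
Here /dev/vg0/lv_data corresponds to the Path, gfs2 to the VFS Type, and noatime to the Mount Options.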
2.6.6. Preparing and Adding Block Storage
Preparing iSCSI Storage
oVirt supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.
For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM, as described in Creating an LVM filter. |
oVirt currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. |
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:
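A sketch of such a drop-in file, assuming the boot LUN’s WWID is substituted for the placeholder:
# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid <boot_LUN_wwid>
        no_path_retry queue
    }
}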
|
Adding iSCSI Storage
This procedure shows you how to attach existing iSCSI storage to your oVirt environment as a data domain.
-
Click
. -
Click New Domain.
-
Enter the Name of the new storage domain.
-
Select a Data Center from the drop-down list.
-
Select Data as the Domain Function and iSCSI as the Storage Type.
-
Select an active host as the Host.
Communication to the storage domain is from the selected host and not directly from the Engine. Therefore, all hosts must have access to the storage device before the storage domain can be configured.
-
The Engine can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the next step.
-
Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.
LUNs used externally for the environment are also displayed.
You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs.
If you use the REST API method
discoveriscsi
to discover the iscsi targets, you can use an FQDN or an IP address, but you must use the iscsi details from the discovered targets results to log in using the REST API methodiscsilogin
. See discoveriscsi in the REST API Guide for more information. -
Enter the FQDN or IP address of the iSCSI host in the Address field.
-
Enter the port with which to connect to the host when browsing for targets in the Port field. The default is
3260
. -
If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information.
-
Click Discover.
-
Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets.
If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
When using the REST API
iscsilogin
method to log in, you must use the iscsi details from the discovered targets results in thediscoveriscsi
method. See iscsilogin in the REST API Guide for more information.
-
-
Click the + button next to the desired target. This expands the entry and displays all unused LUNs attached to the target.
-
Select the check box for each LUN that you are using to create the storage domain.
-
Optionally, you can configure the advanced parameters:
-
Click Advanced Parameters.
-
Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
-
-
Click OK.
If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding.
If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond.
Configuring iSCSI Multipathing
iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. Multiple network paths between the hosts and iSCSI storage prevent host downtime caused by network path failure.
The Engine connects each host in the data center to each target, using the NICs or VLANs that are assigned to the logical networks in the iSCSI bond.
You can create an iSCSI bond with multiple targets and logical networks for redundancy.
-
One or more iSCSI targets
-
One or more logical networks that meet the following requirements:
-
Not defined as Required or VM Network
-
Assigned a static IP address in the same VLAN and subnet as the other logical networks in the iSCSI bond
-
Multipath is not supported for Self-Hosted Engine deployments. |
-
Click
. -
Click the data center name. This opens the details view.
-
In the iSCSI Multipathing tab, click Add.
-
In the Add iSCSI Bond window, enter a Name and a Description.
-
Select a logical network from Logical Networks and a storage domain from Storage Targets. You must select all the paths to the same target.
-
Click OK.
The hosts in the data center are connected to the iSCSI targets through the logical networks in the iSCSI bond.
Migrating a Logical Network to an iSCSI Bond
If you have a logical network that you created for iSCSI traffic and configured on top of an existing network bond, you can migrate it to an iSCSI bond on the same subnet without disruption or downtime.
-
Modify the current logical network so that it is not Required:
-
Click
. -
Click the cluster name. This opens the details view.
-
In the Logical Networks tab, select the current logical network (
net-1
) and click Manage Networks. -
Clear the Require check box and click OK.
-
-
Create a new logical network that is not Required and not VM network:
-
Click Add Network. This opens the New Logical Network window.
-
In the General tab, enter the Name (
net-2
) and clear the VM network check box. -
In the Cluster tab, clear the Require check box and click OK.
-
-
Remove the current network bond and reassign the logical networks:
-
Click
. -
Click the host name. This opens the details view.
-
In the Network Interfaces tab, click Setup Host Networks.
-
Drag
net-1
to the right to unassign it. -
Drag the current bond to the right to remove it.
-
Drag
net-1
andnet-2
to the left to assign them to physical interfaces. -
Click the pencil icon of
net-2
. This opens the Edit Network window. -
In the IPv4 tab, select Static.
-
Enter the IP and Netmask/Routing Prefix of the subnet and click OK.
-
-
Create the iSCSI bond:
-
Click
. -
Click the data center name. This opens the details view.
-
In the iSCSI Multipathing tab, click Add.
-
In the Add iSCSI Bond window, enter a Name, select the networks,
net-1
andnet-2
, and click OK.
-
Your data center has an iSCSI bond containing the old and new logical networks.
Preparing FCP Storage
oVirt supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
oVirt system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information on setting up and configuring FCP or multipathing on Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM, as described in Creating an LVM filter. |
oVirt currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. |
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:
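As in the iSCSI section above, a sketch of such a drop-in file, assuming the boot LUN’s WWID is substituted for the placeholder:
# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid <boot_LUN_wwid>
        no_path_retry queue
    }
}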
|
Adding FCP Storage
This procedure shows you how to attach existing FCP storage to your oVirt environment as a data domain.
-
Click
. -
Click New Domain.
-
Enter the Name of the storage domain.
-
Select an FCP Data Center from the drop-down list.
If you do not yet have an appropriate FCP data center, select
(none)
. -
Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available.
-
Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
All communication to the storage domain is through the selected host and not directly from the oVirt Engine. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
-
The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
-
Optionally, you can configure the advanced parameters.
-
Click Advanced Parameters.
-
Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
-
-
Click OK.
The new FCP data domain remains in a Locked
status while it is being prepared for use. When ready, it is automatically attached to the data center.
Increasing iSCSI or FCP Storage
There are several ways to increase iSCSI or FCP storage size:
-
Add an existing LUN to the current storage domain.
-
Create a new storage domain with new LUNs and add it to an existing data center. See Adding iSCSI Storage.
-
Expand the storage domain by resizing the underlying LUNs.
For information about configuring or resizing FCP storage, see Using Fibre Channel Devices in Managing storage devices for Red Hat Enterprise Linux 8.
The following procedure explains how to expand storage area network (SAN) storage by adding a new LUN to an existing storage domain.
Prerequisites
-
The storage domain’s status must be
UP
. -
The LUN must be accessible to all the hosts whose status is
UP
, or else the operation will fail and the LUN will not be added to the domain. The hosts themselves, however, will not be affected. If a newly added host, or a host that is coming out of maintenance or a Non Operational state, cannot access the LUN, the host’s state will be Non Operational
.
Increasing an Existing iSCSI or FCP Storage Domain
-
Click
and select an iSCSI or FCP domain. -
Click Manage Domain.
-
Click
and click the Discover Targets expansion button. -
Enter the connection information for the storage server and click Discover to initiate the connection.
-
Click
and select the check box of the newly available LUN. -
Click OK to add the LUN to the selected storage domain.
This will increase the storage domain by the size of the added LUN.
When expanding the storage domain by resizing the underlying LUNs, the LUNs must also be refreshed in the Administration Portal.
Refreshing the LUN Size
-
Click
and select an iSCSI or FCP domain. -
Click Manage Domain.
-
Click
. -
In the Additional Size column, click the Add button of the LUN you want to refresh.
-
Click OK to refresh the LUN to indicate the new storage size.
Reusing LUNs
LUNs cannot be reused, as is, to create a storage domain or virtual disk. If you try to reuse the LUNs, the Administration Portal displays the following error message:
Physical device initialization failed. Please check that the device is empty and accessible by the host.
A self-hosted engine shows the following error during installation:
[ ERROR ] Error creating Volume Group: Failed to initialize physical device: ("[u'/dev/mapper/000000000000000000000000000000000']",)
[ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: ("[u'/dev/mapper/000000000000000000000000000000000']",)
Before the LUN can be reused, the old partitioning table must be cleared.
Procedure
You must run this procedure on the correct LUN so that you do not inadvertently destroy data. |
-
Delete the partition mappings in <LUN_ID>:
kpartx -dv /dev/mapper/<LUN_ID>
-
Erase the filesystem or RAID signatures in <LUN_ID>:
wipefs -a /dev/mapper/<LUN_ID>
-
Inform the operating system about the partition table changes on <LUN_ID>:
partprobe
Removing stale LUNs
When a storage domain is removed, stale LUN links can remain on the storage server. This can lead to slow multipath scans, cluttered log files, and LUN ID conflicts.
oVirt does not manage the iSCSI servers and, therefore, cannot automatically remove LUNs when a storage domain is removed. The administrator can manually remove stale LUN links with the remove_stale_lun.yml
Ansible role. This role removes stale LUN links from all hosts that belong to a given data center. For more information about this role and its variables, see the Remove Stale LUN role in the oVirt Ansible collection.
It is assumed that you are running remove_stale_lun.yml from the Engine machine, as the Engine’s SSH key is already added on all the hosts. If the playbook is not running on the Engine machine, a user’s SSH key must be added to all hosts that belong to the data center, or the user must provide an appropriate inventory file.
|
-
Click
. -
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance, then click OK.
-
Click Detach, then click OK.
-
Click Remove.
-
Click OK to remove the storage domain from the source environment.
-
Remove the LUN from the storage server.
-
Remove the stale LUNs from the host using Ansible:
# ansible-playbook --extra-vars "lun=<LUN>" /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/remove_stale_lun/examples/remove_stale_lun.yml
where LUN is the LUN removed from the storage server in the steps above.
If you remove the stale LUN from the host using Ansible without first removing the LUN from the storage server, the stale LUN will reappear on the host the next time VDSM performs an iSCSI rescan.
Creating an LVM filter
An LVM filter is a capability that can be set in /etc/lvm/lvm.conf
to accept devices into or reject devices from the list of volumes based on a regex query. For example, to ignore /dev/cdrom
you can use filter=["r|^/dev/cdrom$|"]
, or add the following parameter to the lvm
command: lvs --config 'devices{filter=["r|cdrom|"]}'
.
This provides a simple way to prevent a host from scanning and activating logical volumes that are not required directly by the host. In particular, the solution addresses logical volumes on shared storage managed by oVirt, and logical volumes created by a guest in oVirt raw volumes. This solution is needed because scanning and activating other logical volumes may cause data corruption, slow boot, or other issues.
The solution is to configure an LVM filter on each host, which allows the LVM on a host to scan only the logical volumes that are required by the host.
You can use the command vdsm-tool config-lvm-filter
to analyze the current LVM configuration and decide if a filter needs to be configured.
If the LVM filter has not yet been configured, the command generates an LVM filter option for the host, and adds the option to the LVM configuration.
On a host yet to be configured, the command automatically configures the LVM once the user confirms the operation:
# vdsm-tool config-lvm-filter
Analyzing host... Found these mounted logical volumes on this host:
logical volume: /dev/mapper/vg0-lv_home mountpoint: /home devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_root mountpoint: / devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_swap mountpoint: [SWAP] devices: /dev/vda2
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/vda2$|", "r|.*|" ]
This filter will allow LVM to access the local devices used by the hypervisor, but not shared storage owned by VDSM. If you add a new device to the volume group, you will need to edit the filter manually.
Configure LVM filter? [yes,NO] yes
Configuration completed successfully!
Please reboot to verify the LVM configuration.
If the host is already configured, the command simply informs the user that the LVM filter is already configured:
# vdsm-tool config-lvm-filter
Analyzing host... LVM filter is already configured for Vdsm
If the host configuration does not match the configuration required by VDSM, the LVM filter will need to be configured manually:
# vdsm-tool config-lvm-filter
Analyzing host... Found these mounted logical volumes on this host:
logical volume: /dev/mapper/vg0-lv_home mountpoint: /home devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_root mountpoint: / devices: /dev/vda2
logical volume: /dev/mapper/vg0-lv_swap mountpoint: [SWAP] devices: /dev/vda2
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/vda2$|", "r|.*|" ]
This filter will allow LVM to access the local devices used by the hypervisor, but not shared storage owned by VDSM. If you add a new device to the volume group, you will need to edit the filter manually.
This is the current LVM filter:
filter = [ "a|^/dev/vda2$|", "a|^/dev/vdb1$|", "r|.*|" ]
WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.
Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.
It is recommended to reboot after changing LVM filter.
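If you set the filter manually, the recommended value from the output above goes in the devices section of /etc/lvm/lvm.conf, for example:
devices {
    filter = [ "a|^/dev/vda2$|", "r|.*|" ]
}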
2.6.7. Preparing and Adding Gluster Storage
Preparing Gluster Storage
For information on setting up and configuring Gluster Storage, see the Gluster Storage Installation Guide.
Adding Gluster Storage
To use Gluster Storage with oVirt, see Configuring oVirt with Gluster Storage.
For the Gluster Storage versions that are supported with oVirt, see Red Hat Gluster Storage Version Compatibility and Support.
2.6.8. Importing Existing Storage Domains
Overview of Importing Existing Storage Domains
Aside from adding new storage domains, which contain no data, you can import existing storage domains and access the data they contain. By importing storage domains, you can recover data in the event of a failure in the Engine database, and migrate data from one data center or environment to another.
The following is an overview of importing each storage domain type:
- Data
-
Importing an existing data storage domain allows you to access all of the virtual machines and templates that the data storage domain contains. After you import the storage domain, you must manually import virtual machines, floating disk images, and templates into the destination data center. The process for importing the virtual machines and templates that a data storage domain contains is similar to that for an export storage domain. However, because data storage domains contain all the virtual machines and templates in a given data center, importing data storage domains is recommended for data recovery or large-scale migration of virtual machines between data centers or environments.
You can import existing data storage domains that were attached to data centers with the correct supported compatibility level. See Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions for more information.
- ISO
-
Importing an existing ISO storage domain allows you to access all of the ISO files and virtual diskettes that the ISO storage domain contains. No additional action is required after importing the storage domain to access these resources; you can attach them to virtual machines as required.
- Export
-
Importing an existing export storage domain allows you to access all of the virtual machine images and templates that the export storage domain contains. Because export domains are designed for exporting and importing virtual machine images and templates, importing export storage domains is the recommended method for migrating small numbers of virtual machines and templates inside an environment or between environments. For information on exporting and importing virtual machines and templates to and from export storage domains, see Exporting and Importing Virtual Machines and Templates in the Virtual Machine Management Guide.
The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center.
Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains.
Importing storage domains
Import a storage domain that was previously attached to a data center in the same environment or in a different environment. This procedure assumes the storage domain is no longer attached to any data center in any environment, to avoid data corruption. To import and attach an existing data storage domain to a data center, the target data center must be initialized.
-
Click
. -
Click Import Domain.
-
Select the Data Center you want to import the storage domain to.
-
Enter a Name for the storage domain.
-
Select the Domain Function and Storage Type from the drop-down lists.
-
Select a host from the Host drop-down list.
All communication to the storage domain is through the selected host and not directly from the oVirt Engine. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
-
Enter the details of the storage domain.
The fields for specifying the details of the storage domain change depending on the values you select in the Domain Function and Storage Type lists. These fields are the same as those available for adding a new storage domain.
-
Select the Activate Domain in Data Center check box to activate the storage domain after attaching it to the selected data center.
-
Click OK.
You can now import virtual machines and templates from the storage domain to the data center.
Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. |
Migrating Storage Domains between Data Centers in the Same Environment
Migrate a storage domain from one data center to another in the same oVirt environment to allow the destination data center to access the data contained in the storage domain. This procedure involves detaching the storage domain from one data center, and attaching it to a different data center.
Migrating a data storage domain to a data center that has a higher compatibility level than the original data center upgrades the storage domain’s storage format version. |
If you want to move the storage domain back to the original data center for any reason, such as to migrate virtual machines to the new data center, be aware that the higher version prevents reattaching the data storage domain to the original data center.
The Administration Portal prompts you to confirm that you want to update the storage domain format, for example, from V3 to V5. It also warns that you will not be able to attach it back to an older data center with a lower DC level.
To work around this issue, you can create a target data center that has the same compatibility version as the source data center. When you no longer need to maintain the lower compatibility version, you can increase the target data center’s compatibility version.
For details, see Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions.
-
Shut down all virtual machines running on the required storage domain.
-
Click
. -
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance, then click OK.
-
Click Detach, then click OK.
-
Click Attach.
-
Select the destination data center and click OK.
The storage domain is attached to the destination data center and is automatically activated. You can now import virtual machines and templates from the storage domain to the destination data center.
Migrating Storage Domains between Data Centers in Different Environments
Migrate a storage domain from one oVirt environment to another to allow the destination environment to access the data contained in the storage domain. This procedure involves removing the storage domain from one oVirt environment, and importing it into a different environment. To import and attach an existing data storage domain to a oVirt data center, the storage domain’s source data center must have the correct supported compatibility level.
Migrating a data storage domain to a data center that has a higher compatibility level than the original data center upgrades the storage domain’s storage format version. |
If you want to move the storage domain back to the original data center for any reason, such as to migrate virtual machines to the new data center, be aware that the higher version prevents reattaching the data storage domain to the original data center.
The Administration Portal prompts you to confirm that you want to update the storage domain format, for example, from V3 to V5. It also warns that you will not be able to attach it back to an older data center with a lower DC level.
To work around this issue, you can create a target data center that has the same compatibility version as the source data center. When you no longer need to maintain the lower compatibility version, you can increase the target data center’s compatibility version.
For details, see Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions.
-
Log in to the Administration Portal of the source environment.
-
Shut down all virtual machines running on the required storage domain.
-
Click Storage → Domains.
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance, then click OK.
-
Click Detach, then click OK.
-
Click Remove.
-
In the Remove Storage(s) window, ensure the Format Domain, i.e. Storage Content will be lost! check box is not selected. This step preserves the data in the storage domain for later use.
-
Click OK to remove the storage domain from the source environment.
-
Log in to the Administration Portal of the destination environment.
-
Click Storage → Domains.
-
Click Import Domain.
-
Select the destination data center from the Data Center drop-down list.
-
Enter a name for the storage domain.
-
Select the Domain Function and Storage Type from the appropriate drop-down lists.
-
Select a host from the Host drop-down list.
-
Enter the details of the storage domain.
The fields for specifying the details of the storage domain change depending on the value you select in the Storage Type drop-down list. These fields are the same as those available for adding a new storage domain.
-
Select the Activate Domain in Data Center check box to automatically activate the storage domain when it is attached.
-
Click OK.
The storage domain is attached to the destination data center in the new oVirt environment and is automatically activated. You can now import virtual machines and templates from the imported storage domain to the destination data center.
Upon attaching a storage domain to the destination data center, it may be upgraded to a newer storage domain format and may not re-attach to the source data center. This breaks the use of a data domain as a replacement for export domains. |
Importing Templates from Imported Data Storage Domains
Import a template from a data storage domain you have imported into your oVirt environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated.
-
Click Storage → Domains.
-
Click the imported storage domain’s name. This opens the details view.
-
Click the Template Import tab.
-
Select one or more templates to import.
-
Click Import.
-
For each template in the Import Template(s) window, ensure the correct target cluster is selected in the Cluster list.
-
Map external virtual machine vNIC profiles to profiles that are present on the target cluster(s):
-
Click vNic Profiles Mapping.
-
Select the vNIC profile to use from the Target vNic Profile drop-down list.
-
If multiple target clusters are selected in the Import Templates window, select each target cluster in the Target Cluster drop-down list and ensure the mappings are correct.
-
Click OK.
-
Click OK.
The imported templates no longer appear in the list under the Template Import tab.
2.6.9. Storage Tasks
Uploading Images to a Data Storage Domain
You can upload virtual disk images and ISO images to your data storage domain in the Administration Portal or with the REST API.
To upload images with the REST API, see IMAGETRANSFERS and IMAGETRANSFER in the REST API Guide. |
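As a rough sketch of the REST API flow (the disk ID, credentials, and engine FQDN below are placeholders; see the REST API Guide for the authoritative request format), an upload is started by creating an image transfer:
$ curl --cacert ca.pem --user admin@internal:password \
  --header "Content-Type: application/xml" \
  --data "<image_transfer><disk id='123'/><direction>upload</direction></image_transfer>" \
  https://engine.example.com/ovirt-engine/api/imagetransfers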
QEMU-compatible virtual disks can be attached to virtual machines. Virtual disk types must be either QCOW2 or raw. Disks created from a QCOW2 virtual disk cannot be shareable, and the QCOW2 virtual disk file must not have a backing file.
ISO images can be attached to virtual machines as CDROMs or used to boot virtual machines.
The upload function uses HTML 5 APIs, which require your environment to have the following:
-
A certificate authority imported into the web browser used to access the Administration Portal.
To import the certificate authority, browse to
https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
and enable all the trust settings. Refer to the instructions to install the certificate authority in Firefox, Internet Explorer, or Google Chrome. -
Browser that supports HTML 5, such as Firefox 35, Internet Explorer 10, Chrome 13, or later.
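If you prefer to fetch the certificate authority from the command line before importing it into the browser, you can download it from the same URL (engine_address is a placeholder):
$ curl -o engine-ca.pem 'https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'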
-
Click Storage → Disks.
-
Select Start from the Upload menu.
-
Click Choose File and select the image to upload.
-
Fill in the Disk Options fields. See Explanation of Settings in the New Virtual Disk Window for descriptions of the relevant fields.
-
Click OK.
A progress bar indicates the status of the upload. You can pause, cancel, or resume uploads from the Upload menu.
If the upload times out with the message Reason: timeout due to transfer inactivity, increase the timeout value and restart the ovirt-engine service.
|
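For example, assuming the inactivity timeout is governed by the TransferImageClientInactivityTimeoutInSeconds configuration key (verify the key name for your version with engine-config --list), you could run the following on the Engine machine:
# engine-config -s TransferImageClientInactivityTimeoutInSeconds=6000
# systemctl restart ovirt-engine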
Uploading the VirtIO image files to a storage domain
The virtio-win_version.iso
image contains the following for Windows virtual machines to improve performance and usability:
-
VirtIO drivers
-
an installer for the guest agents
-
an installer for the drivers
To install and upload the most recent version of virtio-win_version.iso
:
-
Install the image files on the Engine machine:
# dnf -y install virtio-win
After you install it on the Engine machine, the image file is
/usr/share/virtio-win/virtio-win_version.iso
-
Upload the image file to a data storage domain that was not created locally during installation. For more information, see Uploading Images to a Data Storage Domain in the Administration Guide.
-
Attach the image file to virtual machines.
The virtual machines can now use the virtio drivers and agents.
For information on attaching the image files to a virtual machine, see Installing the Guest Agents, Tools, and Drivers on Windows in the Virtual Machine Management Guide.
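If you want to confirm the exact name of the installed image file (the path is the one given above), list it on the Engine machine:
# ls -l /usr/share/virtio-win/virtio-win*.iso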
Uploading images to an ISO domain
The ISO domain is a deprecated storage domain type, and the ISO Uploader tool has been removed. Although the ISO domain is deprecated, this information is provided in case you must use an ISO domain. |
To upload an ISO image to an ISO storage domain in order to make it available from within the Engine, follow these steps.
-
Log in as root to the host that belongs to the data center where your ISO storage domain resides.
-
Get a directory tree of
/rhev/data-center
:
# tree /rhev/data-center
.
|-- 80dfacc7-52dd-4d75-ab82-4f9b8423dc8b
|   |-- 76d1ecba-b61d-45a4-8eb5-89ab710a6275 → /rhev/data-center/mnt/10.10.10.10:_rhevnfssd/76d1ecba-b61d-45a4-8eb5-89ab710a6275
|   |-- b835cd1c-111c-468d-ba70-fec5346af227 → /rhev/data-center/mnt/10.10.10.10:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227
|   |-- mastersd → 76d1ecba-b61d-45a4-8eb5-89ab710a6275
|   |-- tasks → mastersd/master/tasks
|   `-- vms → mastersd/master/vms
|-- hsm-tasks
`-- mnt
    |-- 10.10.10.10:_rhevisosd
    |   |-- b835cd1c-111c-468d-ba70-fec5346af227
    |   |   |-- dom_md
    |   |   |   |-- ids
    |   |   |   |-- inbox
    |   |   |   |-- leases
    |   |   |   |-- metadata
    |   |   |   `-- outbox
    |   |   `-- images
    |   |       `-- 11111111-1111-1111-1111-111111111111
    |   `-- lost+found [error opening dir]
(output trimmed)
-
Securely copy the image from the source location into the full path of
11111111-1111-1111-1111-111111111111
:
# scp root@isosource:/isos/example.iso /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111
-
File permissions for the newly copied ISO image should be 36:36 (vdsm:kvm). If they are not, change the user and group ownership of the ISO file to 36:36 (vdsm’s user and group):
# cd /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111
# chown 36:36 example.iso
The ISO image should now be available in the ISO domain in the data center.
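To verify the ownership afterwards, list the file; on a host, UID 36 and GID 36 map to vdsm and kvm, so the listing should show vdsm kvm:
# ls -l /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111/example.iso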
Moving Storage Domains to Maintenance Mode
A storage domain must be in maintenance mode before it can be detached and removed. This is required to redesignate another data domain as the master
data domain.
You cannot move a storage domain into maintenance mode if a virtual machine has a lease on the storage domain. The virtual machine needs to be shut down, or the lease needs to be removed or moved to a different storage domain first. See the Virtual Machine Management Guide for information about virtual machine leases. |
Expanding iSCSI domains by adding more LUNs can only be done when the domain is active.
-
Shut down all the virtual machines running on the storage domain.
-
Click Storage → Domains.
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance.
The Ignore OVF update failure check box allows the storage domain to go into maintenance mode even if the OVF update fails.
-
Click OK.
The storage domain is deactivated and has an Inactive
status in the results list. You can now edit, detach, remove, or reactivate the inactive storage domains from the data center.
You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details view of the data center it is associated with. |
Editing Storage Domains
You can edit storage domain parameters through the Administration Portal. Depending on the state of the storage domain, either active or inactive, different fields are available for editing. Fields such as Data Center, Domain Function, Storage Type, and Format cannot be changed.
-
Active: When the storage domain is in an active state, the Name, Description, Comment, Warning Low Space Indicator (%), Critical Space Action Blocker (GB), Wipe After Delete, and Discard After Delete fields can be edited. The Name field can only be edited while the storage domain is active. All other fields can also be edited while the storage domain is inactive.
-
Inactive: When the storage domain is in maintenance mode or unattached, thus in an inactive state, you can edit all fields except Name, Data Center, Domain Function, Storage Type, and Format. The storage domain must be inactive to edit storage connections, mount options, and other advanced parameters. This is only supported for NFS, POSIX, and Local storage types.
iSCSI storage connections cannot be edited via the Administration Portal, but can be edited via the REST API. See Updating Storage Connections in the REST API Guide. |
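As a sketch (the connection ID, address, and credentials are placeholders; the full set of updatable attributes is described in the REST API Guide), an iSCSI storage connection can be updated with a PUT request:
$ curl --request PUT --cacert ca.pem --user admin@internal:password \
  --header "Content-Type: application/xml" \
  --data "<storage_connection><address>new.iscsi.example.com</address></storage_connection>" \
  https://engine.example.com/ovirt-engine/api/storageconnections/123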
To edit an active storage domain:
-
Click Storage → Domains and select a storage domain.
-
Click Manage Domain.
-
Edit the available fields as required.
-
Click OK.
To edit an inactive storage domain:
-
Click Storage → Domains.
-
If the storage domain is active, move it to maintenance mode:
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance.
-
Click OK.
-
Click Manage Domain.
-
Edit the storage path and other details as required. The new connection details must be of the same storage type as the original connection.
-
Click OK.
-
Activate the storage domain:
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Activate.
Updating OVFs
By default, OVFs are updated every 60 minutes. However, if you have imported an important virtual machine or made a critical update, you can update OVFs manually.
-
Click Storage → Domains.
-
Select the storage domain and click More Actions, then click Update OVFs.
The OVFs are updated and a message appears in Events.
Activating Storage Domains from Maintenance Mode
If you have been making changes to a data center’s storage, you have to put storage domains into maintenance mode. Activate a storage domain to resume using it.
-
Click Storage → Domains.
-
Click an inactive storage domain’s name. This opens the details view.
-
Click the Data Centers tab.
-
Click Activate.
If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated. |
Detaching a Storage Domain from a Data Center
Detach a storage domain from one data center to migrate it to another data center.
-
Click Storage → Domains.
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance.
-
Click OK to initiate maintenance mode.
-
Click Detach.
-
Click OK to detach the storage domain.
The storage domain has been detached from the data center, ready to be attached to another data center.
Attaching a Storage Domain to a Data Center
Attach a storage domain to a data center.
-
Click Storage → Domains.
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Attach.
-
Select the appropriate data center.
-
Click OK.
The storage domain is attached to the data center and is automatically activated.
Removing a Storage Domain
You have a storage domain in your data center that you want to remove from the virtualized environment.
-
Click Storage → Domains.
-
Move the storage domain to maintenance mode and detach it:
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance, then click OK.
-
Click Detach, then click OK.
-
Click Remove.
-
Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
-
Click OK.
The storage domain is permanently removed from the environment.
Destroying a Storage Domain
A storage domain encountering errors may not be able to be removed through the normal procedure. Destroying a storage domain forcibly removes the storage domain from the virtualized environment.
-
Click Storage → Domains.
-
Select the storage domain and click More Actions, then click Destroy.
-
Select the Approve operation check box.
-
Click OK.
Creating a Disk Profile
Disk profiles define the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are created based on storage profiles defined under data centers, and must be manually assigned to individual virtual disks for the profile to take effect.
This procedure assumes you have already defined one or more storage quality of service entries under the data center to which the storage domain belongs.
-
Click Storage → Domains.
-
Click the data storage domain’s name. This opens the details view.
-
Click the Disk Profiles tab.
-
Click New.
-
Enter a Name and a Description for the disk profile.
-
Select the quality of service to apply to the disk profile from the QoS list.
-
Click OK.
Removing a Disk Profile
Remove an existing disk profile from your oVirt environment.
-
Click Storage → Domains.
-
Click the data storage domain’s name. This opens the details view.
-
Click the Disk Profiles tab.
-
Select the disk profile to remove.
-
Click Remove.
-
Click OK.
If the disk profile was assigned to any virtual disks, the disk profile is removed from those virtual disks.
Viewing the Health Status of a Storage Domain
Storage domains have an external health status in addition to their regular Status. The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the storage domain’s Name as one of the following icons:
-
OK: No icon
-
Info:
-
Warning:
-
Error:
-
Failure:
To view further details about the storage domain’s health status, click the storage domain’s name to open the details view, and then click the Events tab.
The storage domain’s health status can also be viewed using the REST API. A GET
request on a storage domain will include the external_status
element, which contains the health status.
You can set a storage domain’s health status in the REST API via the events
collection. For more information, see Adding Events in the REST API Guide.
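For example, assuming a storage domain ID of 123 and the usual API entry point (both placeholders), the following request returns the storage domain, including its external_status element:
$ curl --cacert ca.pem --user admin@internal:password https://engine.example.com/ovirt-engine/api/storagedomains/123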
Setting Discard After Delete for a Storage Domain
When the Discard After Delete check box is selected, a blkdiscard
command is called on a logical volume when it is removed and the underlying storage is notified that the blocks are free. The storage array can use the freed space and allocate it when requested. Discard After Delete only works on block storage. The flag is not available on the oVirt Engine for file storage, for example NFS.
Restrictions:
-
Discard After Delete is only available on block storage domains, such as iSCSI or Fibre Channel.
-
The underlying storage must support
Discard
.
Discard After Delete can be enabled either when creating a block storage domain or when editing one. See Preparing and Adding Block Storage and Editing Storage Domains.
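One generic way to check whether an underlying block device advertises discard support is to query its queue limits on the host (the device name is a placeholder); a non-zero value indicates support:
# cat /sys/block/sdb/queue/discard_max_bytes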
Enabling 4K support on environments with more than 250 hosts
By default, GlusterFS domains and local storage domains support 4K block size on oVirt environments with up to 250 hosts. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
The lockspace area that Sanlock allocates is 1 MB when the maximum number of hosts is the default 250. When you increase the maximum number of hosts when using 4K storage, the lockspace area is larger. For example, when using 2000 hosts, the lockspace area could be as large as 8 MB.
You can enable 4K block support on environments with more than 250 hosts by setting the engine configuration parameter MaxNumberOfHostsInStoragePool
.
-
On the Engine machine enable the required maximum number of hosts:
# engine-config -s MaxNumberOfHostsInStoragePool=NUMBER_OF_HOSTS
-
Restart the ovirt-engine service:
# systemctl restart ovirt-engine.service
For example, if you have a cluster with 300 hosts, enter:
# engine-config -s MaxNumberOfHostsInStoragePool=300
# systemctl restart ovirt-engine.service
View the value of the MaxNumberOfHostsInStoragePool
parameter on the Engine:
# engine-config --get=MaxNumberOfHostsInStoragePool
MaxNumberOfHostsInStoragePool: 250 version: general
Disabling 4K support
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
You can disable 4K block support.
-
Ensure that 4K block support is enabled.
$ vdsm-client Host getCapabilities
…
{
    "GLUSTERFS" : [
        0,
        512,
        4096,
    ]
…
-
Edit /etc/vdsm/vdsm.conf.d/gluster.conf and set enable_4k_storage to false. For example:
$ vi /etc/vdsm/vdsm.conf.d/gluster.conf

[gluster]
# Use to disable 4k support
# if needed.
enable_4k_storage = false
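VDSM reads this configuration at startup, so restart it on the host for the change to take effect (assuming the standard service name):
# systemctl restart vdsmd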
Monitoring available space in a storage domain
You can monitor available space in a storage domain and create an alert to warn you when a storage domain is nearing capacity. You can also define a critical threshold at which point the domain shuts down.
With Virtual Data Optimizer (VDO) and thin pool support, you might see more available space than is physically available. For VDO this behavior is expected, but the Engine cannot predict how much data you can actually write. The Warning Low Confirmed Space Indicator parameter notifies you when the domain is nearing physical space capacity and shows how much confirmed space remains. Confirmed space refers to the actual space available to write data.
-
In the Administration Portal, click Storage → Domains and click the name of a storage domain.
-
Click Manage Domain. The Manage Domains dialog box opens.
-
Expand Advanced Parameters.
-
For Warning Low Space Indicator (%), enter a percentage value. When the available space in the storage domain reaches this value, the Engine alerts you that the domain is nearing capacity.
-
For Critical Space Action Blocker (GB), enter a value in gigabytes. When the available space in the storage domain reaches this value, the domain shuts down.
-
For Warning Low Confirmed Space Indicator (%), enter a percentage value. When the available space in the storage domain reaches this value, the Engine alerts you that the actual space available to write data is nearing capacity.
2.7. Pools
2.7.1. Introduction to Virtual Machine Pools
A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users.
Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool if any are available. That virtual machine will have the same operating system and configuration as that of the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool depending on the configuration of that pool.
Virtual machine pools are stateless by default, meaning that virtual machine data and configuration changes are not persistent across reboots. However, the pool can be configured to be stateful, allowing changes made by a previous user to persist. Additionally, if a user configures console options for a virtual machine taken from a virtual machine pool, those options are set as the default for that user for that virtual machine pool.
Virtual machines taken from a pool are not stateless when accessed from the Administration Portal. This is because administrators need to be able to write changes to the disk if necessary. |
In principle, virtual machines in a pool are started when taken by a user, and shut down when the user is finished. However, virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines will consume system resources even while not in use due to being idle.
2.7.2. Creating a virtual machine pool
You can create a virtual machine pool containing multiple virtual machines based on a common template. See Templates in the Virtual Machine Management Guide for information about sealing a virtual machine and creating a template.
Sysprep File Configuration Options for Windows Virtual Machines
Several sysprep
file configuration options are available, depending on your requirements.
If your pool does not need to join a domain, you can use the default sysprep
file, located in /usr/share/ovirt-engine/conf/sysprep/
.
If your pool needs to join a domain, you can create a custom sysprep
for each Windows operating system:
-
Copy the relevant sections for each operating system from
/usr/share/ovirt-engine/conf/osinfo-defaults.properties
to a new file and save it as 99-defaults.properties
. -
In
99-defaults.properties
, specify the Windows product activation key and the path of your new custom sysprep file:
os.operating_system.productKey.value=Windows_product_activation_key
…
os.operating_system.sysprepPath.value = ${ENGINE_USR}/conf/sysprep/sysprep.operating_system
-
Create a new
sysprep
file, specifying the domain, domain password, and domain administrator:
<Credentials>
  <Domain>__AD_Domain__</Domain>
  <Password>__Domain_Password__</Password>
  <Username>__Domain_Administrator__</Username>
</Credentials>
If you need to configure different sysprep
settings for different pools of Windows virtual machines, you can create a custom sysprep
file in the Administration Portal (see Creating a Virtual Machine Pool below). See Using Sysprep to Automate the Configuration of Virtual Machines in the Virtual Machine Guide for more information.
-
Click Compute → Pools.
-
Click New.
-
Select a Cluster from the drop-down list.
-
Select a Template and version from the drop-down menu. A template provides standard settings for all the virtual machines in the pool.
-
Select an Operating System from the drop-down list.
-
Use Optimized for to optimize virtual machines for Desktop or Server.
High Performance optimization is not recommended for pools because a high performance virtual machine is pinned to a single host and concrete resources. A pool containing multiple virtual machines with such a configuration would not run well.
-
Enter a Name and, optionally, a Description and Comment.
The Name of the pool is applied to each virtual machine in the pool, with a numeric suffix. You can customize the numbering of the virtual machines with
?
as a placeholder.
Example 6. Pool Name and Virtual Machine Numbering Examples
-
Pool:
MyPool
Virtual machines:
MyPool-1
,MyPool-2
, …MyPool-10
-
Pool:
MyPool-???
Virtual machines:
MyPool-001
,MyPool-002
, …MyPool-010
-
Enter the Number of VMs for the pool.
-
Enter the number of virtual machines to be prestarted in the Prestarted field.
-
Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is 1.
-
Select the Delete Protection check box to enable delete protection.
-
If you are creating a pool of non-Windows virtual machines or if you are using the default
sysprep
, skip this step. If you are creating a customsysprep
file for a pool of Windows virtual machines:-
Click the Show Advanced Options button.
-
Click the Initial Run tab and select the Use Cloud-Init/Sysprep check box.
-
Click the Authentication arrow and enter the User Name and Password or select Use already configured password.
This
User Name
is the name of the local administrator. You can change its value from its default value (user
) here in the Authentication section or in a customsysprep
file. -
Click the Custom Script arrow and paste the contents of the default
sysprep
file, located in/usr/share/ovirt-engine/conf/sysprep/
, into the text box. -
You can modify the following values of the
sysprep
file:-
Key
. If you do not want to use the pre-defined Windows activation product key, replace<![CDATA[$ProductKey$]]>
with a valid product key:
<ProductKey>
  <Key><![CDATA[$ProductKey$]]></Key>
</ProductKey>
Example 7. Windows Product Key Example
<ProductKey>
  <Key>0000-000-000-000</Key>
</ProductKey>
-
Domain
that the Windows virtual machines will join, the domain’sPassword
, and the domain administrator’sUsername
:
<Credentials>
  <Domain>__AD_Domain__</Domain>
  <Password>__Domain_Password__</Password>
  <Username>__Domain_Administrator__</Username>
</Credentials>
Example 8. Domain Credentials Example
<Credentials>
  <Domain>addomain.local</Domain>
  <Password>12345678</Password>
  <Username>Sarah_Smith</Username>
</Credentials>
The
Domain
,Password
, andUsername
are required to join the domain. TheKey
is for activation. You do not necessarily need both.
The domain and credentials cannot be modified in the Initial Run tab.
-
FullName
of the local administrator:
<UserData>
  ...
  <FullName>__Local_Administrator__</FullName>
  ...
</UserData>
-
DisplayName
andName
of the local administrator:
<LocalAccounts>
  <LocalAccount wcm:action="add">
    <Password>
      <Value><![CDATA[$AdminPassword$]]></Value>
      <PlainText>true</PlainText>
    </Password>
    <DisplayName>__Local_Administrator__</DisplayName>
    <Group>administrators</Group>
    <Name>__Local_Administrator__</Name>
  </LocalAccount>
</LocalAccounts>
The remaining variables in the
sysprep
file can be filled in on the Initial Run tab.
-
Optional. Set a Pool Type:
-
Click the Type tab and select a Pool Type:
-
Manual - The administrator is responsible for explicitly returning the virtual machine to the pool.
-
Automatic - The virtual machine is automatically returned to the virtual machine pool.
-
-
Select the Stateful Pool check box to ensure that virtual machines are started in a stateful mode. This ensures that changes made by a previous user will persist on a virtual machine.
-
Click OK.
-
Optional. Override the SPICE proxy:
-
In the Console tab, select the Override SPICE Proxy check box.
-
In the Overridden SPICE proxy address text field, specify the address of a SPICE proxy to override the global SPICE proxy.
-
Click OK.
-
For a pool of Windows virtual machines, click Compute → Virtual Machines, select each virtual machine from the pool, and click Run.
If the virtual machine does not start and
Info [windeploy.exe] Found no unattend file
appears in%WINDIR%\panther\UnattendGC\setupact.log
, add the UnattendFile key to the registry of the Windows virtual machine that was used to create the template for the pool:-
Check that the Windows virtual machine has an attached secondary CD-ROM device with the unattend file, for example,
A:\Unattend.xml
. -
Select the virtual machine and click Run Once.
-
Under Boot Options, check Attach Windows guest tools CD.
-
Click Start, click Run, type
regedit
in the Open text box, and click OK. -
In the left pane, go to HKEY_LOCAL_MACHINE → SYSTEM → Setup.
-
Right-click the right pane and select New → String Value.
-
Enter UnattendFile as the key name.
-
Double-click the new key and enter the
unattend
file name and path, for example, A:\Unattend.xml, as the key’s value. -
Save the registry, seal the Windows virtual machine, and create a new template. See Templates in the Virtual Machine Management Guide for details.
You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in Compute → Virtual Machines, or by clicking the name of a pool to open its details view; a virtual machine in a pool is distinguished from independent virtual machines by its icon.
2.7.3. Explanation of Settings and Controls in the New Pool and Edit Pool Windows
New Pool and Edit Pool General Settings Explained
The following table details the information required on the General tab of the New Pool and Edit Pool windows that are specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine window.
Field Name | Description |
---|---|
Template |
The template and template sub-version on which the virtual machine pool is based. If you create a pool based on the latest sub-version of a template, all virtual machines in the pool, when rebooted, will automatically receive the latest template version. |
Description |
A meaningful description of the virtual machine pool. |
Comment |
A field for adding plain text human-readable comments regarding the virtual machine pool. |
Prestarted VMs |
Allows you to specify the number of virtual machines in the virtual machine pool that will be started before they are taken and kept in that state to be taken by users. The value of this field must be between 0 and the total number of virtual machines in the pool. |
Number of VMs/Increase number of VMs in pool by |
Allows you to specify the number of virtual machines to be created and made available in the virtual machine pool. In the edit window it allows you to increase the number of virtual machines in the virtual machine pool by the specified number. By default, the maximum number of virtual machines you can create in a pool is 1000. This value can be configured using the MaxVmsInPool key of the engine-config command. |
Maximum number of VMs per user |
Allows you to specify the maximum number of virtual machines a single user can take from the virtual machine pool at any one time. The value of this field must be between 1 and 32,767. |
Delete Protection |
Allows you to prevent the virtual machines in the pool from being deleted. |
Sealed |
Ensures that machine-specific settings from the template are not reproduced in virtual machines that are provisioned from the template. For more information about the sealing process, see Sealing a Windows Virtual Machine for Deployment as a Template. |
New Pool and Edit Pool Type Settings Explained
The following table details the information required on the Type tab of the New Pool and Edit Pool windows.
Field Name | Description |
---|---|
Pool Type |
This drop-down menu allows you to specify the type of the virtual machine pool. The following options are available: Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. Automatic - The virtual machine is automatically returned to the virtual machine pool.
|
Stateful Pool |
Specify whether the state of virtual machines in the pool is preserved when a virtual machine is passed to a different user. This means that changes made by a previous user will persist on the virtual machine. |
New Pool and Edit Pool Console Settings Explained
The following table details the information required on the Console tab of the New Pool or Edit Pool window that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine and Edit Virtual Machine windows.
Field Name | Description |
---|---|
Override SPICE proxy |
Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the VM Portal) is outside of the network where the hosts reside. |
Overridden SPICE proxy address |
The proxy by which the SPICE client connects to virtual machines. This proxy overrides both the global SPICE proxy defined for the oVirt environment and the SPICE proxy defined for the cluster to which the virtual machine pool belongs, if any. The address must be in the following format: protocol://[host]:[port]
|
Virtual Machine Pool Host Settings Explained
The following table details the options available on the Host tab of the New Pool and Edit Pool windows.
Field Name | Sub-element | Description | ||
---|---|---|---|---|
Start Running On |
Defines the preferred host on which the virtual machine is to run. Select either: Any Host in Cluster - The virtual machine can start and run on any available host in the cluster. Specific Host(s) - The virtual machine will start running on a particular host in the cluster, although it can be migrated to another host depending on its migration and high-availability settings.
|
|||
CPU options |
Pass-Through Host CPU |
When selected, allows virtual machines to use the host’s CPU flags. When selected, Migration Options is set to Allow manual migration only. |
||
Migrate only to hosts with the same TSC frequency |
When selected, this virtual machine can only be migrated to a host with the same TSC frequency. This option is only valid for High Performance virtual machines. |
|||
Migration Options |
Migration mode |
Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster’s policy. Allow manual and automatic migration - The virtual machine can be migrated automatically in accordance with the status of the environment, or manually by an administrator. Allow manual migration only - The virtual machine can only be migrated manually by an administrator. Do not allow migration - The virtual machine cannot be migrated, either automatically or manually.
|
||
Migration policy |
Defines the migration convergence policy. If the check box is left unselected, the host determines the policy.
|
|||
Enable migration encryption |
Allows the virtual machine to be encrypted during migration.
|
|||
Parallel Migrations |
Allows you to specify whether and how many parallel migration connections to use.
|
|||
Number of VM Migration Connections |
This setting is only available when Custom is selected. The preferred number of custom parallel migrations, between 2 and 255. |
|||
Configure NUMA |
NUMA Node Count |
The number of virtual NUMA nodes available in a host that can be assigned to the virtual machine. |
||
NUMA Pinning |
Opens the NUMA Topology window. This window shows the host’s total CPUs, memory, and NUMA nodes, and the virtual machine’s virtual NUMA nodes. You can manually pin virtual NUMA nodes to host NUMA nodes by clicking and dragging each vNUMA from the box on the right to a NUMA node on the left. You can also set Tune Mode for memory allocation: Strict - Memory allocation will fail if the memory cannot be allocated on the target node. Preferred - Memory is allocated from a single preferred node. If sufficient memory is not available, memory can be allocated from other nodes. Interleave - Memory is allocated across nodes in a round-robin algorithm. If you define NUMA pinning, Migration Options is set to Allow manual migration only. |
New Pool and Edit Pool Resource Allocation Settings Explained
The following table details the information required on the Resource Allocation tab of the New Pool and Edit Pool windows that are specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine window. See Virtual Machine Resource Allocation Settings Explained in the Virtual Machine Management Guide for more information.
Field Name | Sub-element | Description |
---|---|---|
Disk Allocation |
Auto select target |
Select this check box to automatically select the storage domain that has the most free space. The Target and Disk Profile fields are disabled. |
Format |
This field is read-only and always displays QCOW2. |
Editing a Virtual Machine Pool
After a virtual machine pool has been created, its properties can be edited. The properties available when editing a virtual machine pool are identical to those available when creating a new virtual machine pool except that the Number of VMs property is replaced by Increase number of VMs in pool by.
When editing a virtual machine pool, the changes introduced affect only new virtual machines. Virtual machines that existed already at the time of the introduced changes remain unaffected. |
-
Click Compute → Pools and select a virtual machine pool.
-
Click Edit.
-
Edit the properties of the virtual machine pool.
-
Click OK.
Prestarting Virtual Machines in a Pool
The virtual machines in a virtual machine pool are powered down by default. When a user requests a virtual machine from a pool, a machine is powered up and assigned to the user. In contrast, a prestarted virtual machine is already running and waiting to be assigned to a user, decreasing the amount of time a user has to wait before being able to access a machine. When a prestarted virtual machine is shut down it is returned to the pool and restored to its original state. The maximum number of prestarted virtual machines is the number of virtual machines in the pool.
Prestarted virtual machines are suitable for environments in which users require immediate access to virtual machines which are not specifically assigned to them. Only automatic pools can have prestarted virtual machines.
-
Click Compute → Pools and select the virtual machine pool.
-
Click Edit.
-
Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
-
Click the Type tab. Ensure Pool Type is set to Automatic.
-
Click OK.
Adding Virtual Machines to a Virtual Machine Pool
If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool.
-
Click Compute → Pools and select the virtual machine pool.
-
Click Edit.
-
Enter the number of additional virtual machines in the Increase number of VMs in pool by field.
-
Click OK.
Detaching Virtual Machines from a Virtual Machine Pool
You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool to become an independent virtual machine.
-
Click Compute → Pools.
-
Click the pool’s name. This opens the details view.
-
Click the Virtual Machines tab to list the virtual machines in the pool.
-
Ensure the virtual machine has a status of
Down
; you cannot detach a running virtual machine. -
Select one or more virtual machines and click Detach.
-
Click OK.
The virtual machine still exists in the environment and can be viewed and accessed from Compute → Virtual Machines. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine. |
Removing a Virtual Machine Pool
You can remove a virtual machine pool from a data center. You must first either delete or detach all of the virtual machines in the pool. Detaching virtual machines from the pool will preserve them as independent virtual machines.
-
Click Compute → Pools and select the virtual machine pool.
-
Click Remove.
-
Click OK.
2.8. Virtual Disks
2.8.1. Understanding Virtual Machine Storage
oVirt supports three storage types: NFS, iSCSI, and FCP.
In each type, a host known as the Storage Pool Manager (SPM) manages access between hosts and storage. The SPM host is the only node that has full access within the storage pool; the SPM can modify the storage domain metadata, and the pool’s metadata. All other hosts can only access virtual machine hard disk image data.
By default in an NFS, local, or POSIX compliant data center, the SPM creates the virtual disk using a thin provisioned format as a file in a file system.
In iSCSI and other block-based data centers, the SPM creates a volume group on top of the Logical Unit Numbers (LUNs) provided, and makes logical volumes to use as virtual disks. Virtual disks on block-based storage are preallocated by default.
If the virtual disk is preallocated, a logical volume of the specified size in GB is created. The virtual disk can be mounted on an Enterprise Linux server using kpartx, vgscan, vgchange, or mount to investigate the virtual machine’s processes or problems.
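A minimal sketch of such an investigation, assuming placeholder logical volume and guest volume group names:
# kpartx -a /dev/storage_domain_vg/lv_name   # map the partitions inside the logical volume
# vgscan                                     # scan for volume groups inside the guest image
# vgchange -ay guest_vg                      # activate the guest volume group
# mount -o ro /dev/guest_vg/root /mnt        # mount the guest file system read-only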
If the virtual disk is thinly provisioned, a 1 GB logical volume is created. The logical volume is continuously monitored by the host on which the virtual machine is running. As soon as the usage nears a threshold the host notifies the SPM, and the SPM extends the logical volume by 1 GB. The host is responsible for resuming the virtual machine after the logical volume has been extended. If the virtual machine goes into a paused state it means that the SPM could not extend the disk in time. This occurs if the SPM is too busy or if there is not enough storage space.
A virtual disk with a preallocated (raw) format has significantly faster write speeds than a virtual disk with a thin provisioning (QCOW2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-I/O intensive virtual machines. The preallocated format is recommended for virtual machines with high I/O writes. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.
2.8.2. Understanding Virtual Disks
oVirt features Preallocated (thick provisioned) and Sparse (thin provisioned) storage options.
-
Preallocated
A preallocated virtual disk allocates all the storage required for a virtual machine up front. For example, a 20 GB preallocated logical volume created for the data partition of a virtual machine will take up 20 GB of storage space immediately upon creation.
-
Sparse
A sparse allocation allows an administrator to define the total storage to be assigned to the virtual machine, but the storage is only allocated when required.
For example, a 20 GB thin provisioned logical volume would take up 0 GB of storage space when first created. When the operating system is installed it may take up the size of the installed file, and would continue to grow as data is added, up to a maximum size of 20 GB.
You can view a virtual disk’s ID in Storage → Disks. The ID is used to identify a virtual disk because its device name (for example, /dev/vda0) can change, causing disk corruption. You can also view a virtual disk’s ID in /dev/disk/by-id (see the listing below).
You can view the Virtual Size of a disk in Storage → Disks
and in the Disks tab of the details view for storage domains, virtual machines, and templates. The Virtual Size is the total amount of disk space that the virtual machine can use. It is the number that you enter in the Size(GB) field when you create or edit a virtual disk.You can view the Actual Size of a disk in the Disks tab of the details view for storage domains and templates. This is the amount of disk space that has been allocated to the virtual machine so far. Preallocated disks show the same value for Virtual Size and Actual Size. Sparse disks may show different values, depending on how much disk space has been allocated.
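From inside a guest, you can list the stable by-id names mentioned above (a generic Linux command; the output varies by guest):
$ ls -l /dev/disk/by-id/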
The possible combinations of storage types and formats are described in the following table.
Storage | Format | Type | Note |
---|---|---|---|
NFS |
Raw |
Preallocated |
A file with an initial size that equals the amount of storage defined for the virtual disk, and has no formatting. |
NFS |
Raw |
Sparse |
A file with an initial size that is close to zero, and has no formatting. |
NFS |
QCOW2 |
Sparse |
A file with an initial size that is close to zero, and has QCOW2 formatting. Subsequent layers will be QCOW2 formatted. |
SAN |
Raw |
Preallocated |
A block device with an initial size that equals the amount of storage defined for the virtual disk, and has no formatting. |
SAN |
QCOW2 |
Sparse |
A block device with an initial size that is much smaller than the size defined for the virtual disk (currently 1 GB), and has QCOW2 formatting for which space is allocated as needed (currently in 1 GB increments). |
2.8.3. Settings to Wipe Virtual Disks After Deletion
The wipe_after_delete
flag, viewed in the Administration Portal as the Wipe After Delete check box, replaces used data with zeros when a virtual disk is deleted. If it is set to false, which is the default, deleting the disk will open up those blocks for reuse but will not wipe the data. It is, therefore, possible for this data to be recovered because the blocks have not been returned to zero.
The wipe_after_delete
flag only works on block storage. On file storage, for example NFS, the option does nothing because the file system will ensure that no data exists.
Enabling wipe_after_delete
for virtual disks is more secure, and is recommended if the virtual disk has contained any sensitive data. This is a more intensive operation and users may experience degradation in performance and prolonged delete times.
The wipe after delete functionality is not the same as secure delete, and cannot guarantee that the data is removed from the storage, just that new disks created on the same storage will not expose data from old disks. |
The wipe_after_delete
flag default can be changed to true
during the setup process (see Configuring the oVirt Engine), or by using the engine-config
tool on the oVirt Engine. Restart the ovirt-engine
service for the setting change to take effect.
Changing the wipe_after_delete flag’s default setting to true does not change the Wipe After Delete property of disks that already exist. |
Setting SANWipeAfterDelete to Default to True Using the Engine Configuration Tool
-
Run the
engine-config
tool with the--set
action:
# engine-config --set SANWipeAfterDelete=true
-
Restart the
ovirt-engine
service for the change to take effect:
# systemctl restart ovirt-engine.service
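You can confirm the new default with the --get action, mirroring the verification shown for other configuration keys:
# engine-config --get=SANWipeAfterDelete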
The /var/log/vdsm/vdsm.log file located on the host can be checked to confirm that a virtual disk was successfully wiped and deleted.
For a successful wipe, the log file will contain the entry, storage_domain_id/volume_id was zeroed and will be deleted
. For example:
a9cb0625-d5dc-49ab-8ad1-72722e82b0bf/a49351a7-15d8-4932-8d67-512a369f9d61 was zeroed and will be deleted
For a successful deletion, the log file will contain the entry, finished with VG:storage_domain_id LVs: list_of_volume_ids, img: image_id
. For example:
finished with VG:a9cb0625-d5dc-49ab-8ad1-72722e82b0bf LVs: {'a49351a7-15d8-4932-8d67-512a369f9d61': ImgsPar(imgs=['11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d'], parent='00000000-0000-0000-0000-000000000000')}, img: 11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d
An unsuccessful wipe will display a log message zeroing storage_domain_id/volume_id failed. Zero and remove this volume manually
, and an unsuccessful delete will display Remove failed for some of VG: storage_domain_id zeroed volumes: list_of_volume_ids
.
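To search the log for these messages directly (log path as given above):
# grep 'was zeroed and will be deleted' /var/log/vdsm/vdsm.log
# grep 'Remove failed' /var/log/vdsm/vdsm.log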
2.8.4. Shareable Disks in oVirt
Some applications require storage to be shared between servers. oVirt allows you to mark virtual machine hard disks as Shareable and attach those disks to virtual machines. That way a single virtual disk can be used by multiple cluster-aware guests.
Shared disks are not suitable for every situation. They are appropriate for clustered database servers and other highly available services. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption because their reads and writes to the disk are not coordinated.
You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable.
You can mark a disk shareable either when you create it, or by editing the disk later.
Only RAW format disks can be made shareable. |
2.8.5. Read Only Disks in oVirt
Some applications require administrators to share data with read-only rights. You can do this when creating or editing a disk attached to a virtual machine: in the Disks tab of the virtual machine’s details view, select the Read Only check box. That way, a single disk can be read by multiple cluster-aware guests, while an administrator maintains writing privileges.
You cannot change the read-only status of a disk while the virtual machine is running.
Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual disks that contain such file systems (e.g. EXT3, EXT4, or XFS). |
2.8.6. Virtual Disk Tasks
Creating a Virtual Disk
Image disk creation is managed entirely by the Engine. Direct LUN disks require externally prepared targets that already exist.
You can create a virtual disk that is attached to a specific virtual machine. Additional options are available when creating an attached virtual disk, as specified in Explanation of Settings in the New Virtual Disk Window.
Creating a Virtual Disk Attached to a Virtual Machine
-
Click Compute → Virtual Machines.
-
Click the virtual machine’s name. This opens the details view.
-
Click the Disks tab.
-
Click New.
-
Click the appropriate button to specify whether the virtual disk will be an Image or Direct LUN disk.
-
Select the options required for your virtual disk. The options change based on the disk type selected. See Explanation of Settings in the New Virtual Disk Window for more details on each option for each disk type.
-
Click OK.
You can also create a floating virtual disk that does not belong to any virtual machines. You can attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable. Some options are not available when creating a floating virtual disk, as specified in Explanation of Settings in the New Virtual Disk Window.
Creating a Floating Virtual Disk
-
Click Storage → Disks.
-
Click New.
-
Click the appropriate button to specify whether the virtual disk will be an Image or Direct LUN disk.
-
Select the options required for your virtual disk. The options change based on the disk type selected. See Explanation of Settings in the New Virtual Disk Window for more details on each option for each disk type.
-
Click OK.
Explanation of settings in the New Virtual Disk window
Because the New Virtual Disk windows for creating floating and attached virtual disks are very similar, their settings are described in a single section.
Field Name | Description |
---|---|
Size(GB) |
The size of the new virtual disk in GB. |
Alias |
The name of the virtual disk, limited to 40 characters. |
Description |
A description of the virtual disk. This field is recommended but not mandatory. |
Interface |
This field only appears when creating an attached disk. The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but you can install them from the virtio-win ISO image. IDE and SATA devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. |
Data Center |
This field only appears when creating a floating disk. The data center in which the virtual disk will be available. |
Storage Domain |
The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. |
Allocation Policy |
The provisioning policy for the new virtual disk.
|
Disk Profile |
The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers. |
Activate Disk(s) |
This field only appears when creating an attached disk. Activate the virtual disk immediately after creation. |
Wipe After Delete |
Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted. |
Bootable |
This field only appears when creating an attached disk. Allows you to enable the bootable flag on the virtual disk. |
Shareable |
Allows you to attach the virtual disk to more than one virtual machine at a time. |
Read-Only |
This field only appears when creating an attached disk. Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. |
Enable Incremental Backup |
Enables incremental backup on the virtual disk. Incremental backup requires disks to be formatted in QCOW2 format instead of RAW format. See Incremental backup and restore. |
Enable Discard |
This field only appears when creating an attached disk. Allows you to shrink a thin provisioned disk while the virtual machine is up. For block storage, the underlying storage device must support discard calls, and the option cannot be used with Wipe After Delete unless the underlying storage supports the discard_zeroes_data property. For file storage, the underlying file system and the block device must support discard calls. If all requirements are met, SCSI UNMAP commands issued from guest virtual machines are passed on by QEMU to the underlying storage to free up the unused space. |
The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets. Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs.
Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add.
Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data.
The following considerations must be made when using a direct LUN as a virtual machine hard disk image:
-
Live storage migration of direct LUN hard disk images is not supported.
-
Direct LUN disks are not included in virtual machine exports.
-
Direct LUN disks are not included in virtual machine snapshots.
Field Name | Description |
---|---|
Alias |
The name of the virtual disk, limited to 40 characters. |
Description |
A description of the virtual disk. This field is recommended but not mandatory. By default the last 4 characters of the LUN ID are inserted into the field. The default behavior can be configured by setting the PopulateDirectLUNDiskDescriptionWithLUNId configuration key to the appropriate value using the engine-config command. |
Interface |
This field only appears when creating an attached disk. The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but they can be installed from the virtio-win ISO image. IDE and SATA devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. |
Data Center |
This field only appears when creating a floating disk. The data center in which the virtual disk will be available. |
Host |
The host on which the LUN will be mounted. You can select any host in the data center. |
Storage Type |
The type of external LUN to add. You can select from either iSCSI or Fibre Channel. |
Discover Targets |
This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected. Address - The host name or IP address of the target server. Port - The port by which to attempt a connection to the target server. The default port is 3260. User Authentication - The iSCSI server requires User Authentication. The User Authentication field is visible when you are using iSCSI external LUNs. CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. |
Activate Disk(s) |
This field only appears when creating an attached disk. Activate the virtual disk immediately after creation. |
Bootable |
This field only appears when creating an attached disk. Allows you to enable the bootable flag on the virtual disk. |
Shareable |
Allows you to attach the virtual disk to more than one virtual machine at a time. |
Read-Only |
This field only appears when creating an attached disk. Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. |
Enable Discard |
This field only appears when creating an attached disk. Allows you to shrink a thin provisioned disk while the virtual machine is up. With this option enabled, SCSI UNMAP commands issued from guest virtual machines are passed on by QEMU to the underlying storage to free up the unused space. |
Enable SCSI Pass-Through |
This field only appears when creating an attached disk. Available when the Interface is set to VirtIO-SCSI. Selecting this check box enables passthrough of a physical SCSI device to the virtual disk. A VirtIO-SCSI interface with SCSI passthrough enabled automatically includes SCSI discard support. Read-Only is not supported when this check box is selected. When this check box is not selected, the virtual disk uses an emulated SCSI device. Read-Only is supported on emulated VirtIO-SCSI disks. |
Allow Privileged SCSI I/O |
This field only appears when creating an attached disk. Available when the Enable SCSI Pass-Through check box is selected. Selecting this check box enables unfiltered SCSI Generic I/O (SG_IO) access, allowing privileged SG_IO commands on the disk. This is required for persistent reservations. |
Using SCSI Reservation |
This field only appears when creating an attached disk. Available when the Enable SCSI Pass-Through and Allow Privileged SCSI I/O check boxes are selected. Selecting this check box disables migration for any virtual machine using this disk, to prevent virtual machines that are using SCSI reservation from losing access to the disk. |
Mounting a journaled file system requires read-write access. Using the Read-Only option is not appropriate for virtual disks that contain such file systems (e.g. EXT3, EXT4, or XFS). |
Overview of Live Storage Migration
Virtual disks can be migrated from one storage domain to another while the virtual machine to which they are attached is running. This is referred to as live storage migration. When a disk attached to a running virtual machine is migrated, a snapshot of that disk’s image chain is created in the source storage domain, and the entire image chain is replicated in the destination storage domain. As such, ensure that you have sufficient storage space in both the source storage domain and the destination storage domain to host both the disk image chain and the snapshot. A new snapshot is created on each live storage migration attempt, even when the migration fails.
Consider the following when using live storage migration:
-
You can live migrate multiple disks at one time.
-
Multiple disks for the same virtual machine can reside across more than one storage domain, but the image chain for each disk must reside on a single storage domain.
-
You can live migrate disks between any two storage domains in the same data center.
-
You cannot live migrate direct LUN hard disk images or disks marked as shareable.
Moving a Virtual Disk
Move a virtual disk that is attached to a virtual machine, or that acts as a floating virtual disk, from one storage domain to another. You can move a virtual disk that is attached to a running virtual machine; this is referred to as live storage migration. Alternatively, shut down the virtual machine before continuing.
Consider the following when moving a disk:
-
You can move multiple disks at the same time.
-
You can move disks between any two storage domains in the same data center.
-
If the virtual disk is attached to a virtual machine that was created based on a template and used the thin provisioning storage allocation option, you must copy the disks for the template on which the virtual machine was based to the same storage domain as the virtual disk.
-
Click
and select one or more virtual disks to move. -
Click Move.
-
From the Target list, select the storage domain to which the virtual disk(s) will be moved.
-
From the Disk Profile list, select a profile for the disk(s), if applicable.
-
Click OK.
The virtual disks are moved to the target storage domain. During the move procedure, the Status column displays Locked
and a progress bar indicating the progress of the move operation.
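If you prefer the REST API, a disk exposes a move action. A minimal sketch, assuming an Engine at engine.example.com, the admin@internal user, and placeholder UUIDs for the disk and the target storage domain:
# curl --cacert /etc/pki/ovirt-engine/ca.pem \
    --user admin@internal:password \
    --request POST \
    --header 'Content-Type: application/xml' \
    --data '<action><storage_domain id="TARGET_STORAGE_DOMAIN_UUID"/></action>' \
    'https://engine.example.com/ovirt-engine/api/disks/DISK_UUID/move'
The Status column in the Administration Portal shows the same Locked state while an API-initiated move runs.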
Changing the Disk Interface Type
Users can change a disk’s interface type after the disk has been created. This enables you to attach an existing disk to a virtual machine that requires a different interface type. For example, a disk using the VirtIO
interface can be attached to a virtual machine requiring the VirtIO-SCSI
or IDE
interface. This provides flexibility to migrate disks for the purpose of backup and restore, or disaster recovery. The disk interface for shareable disks can also be updated per virtual machine. This means that each virtual machine that uses the shared disk can use a different interface type.
To update a disk interface type, all virtual machines using the disk must first be stopped.
-
Click
and stop the appropriate virtual machine(s). -
Click the virtual machine’s name. This opens the details view.
-
Click the Disks tab and select the disk.
-
Click Edit.
-
From the Interface list, select the new interface type and click OK.
You can attach a disk to a different virtual machine that requires a different interface type.
-
Click
and stop the appropriate virtual machine(s). -
Click the virtual machine’s name. This opens the details view.
-
Click the Disks tab and select the disk.
-
Click Remove, then click OK.
-
Go back to Virtual Machines and click the name of the new virtual machine that the disk will be attached to.
-
Click the Disks tab, then click Attach.
-
Select the disk in the Attach Virtual Disks window and select the appropriate interface from the Interface drop-down.
-
Click OK.
Copying a Virtual Disk
You can copy a virtual disk from one storage domain to another. The copied disk can be attached to virtual machines.
-
Click
and select the virtual disk(s). -
Click Copy .
-
Optionally, enter a new name in the Alias field.
-
From the Target list, select the storage domain to which the virtual disk(s) will be copied.
-
From the Disk Profile list, select a profile for the disk(s), if applicable.
-
Click OK.
The virtual disks have a status of Locked
while being copied.
Improving disk performance
In the Administration Portal, on the virtual machine’s Resource Allocation tab, the default I/O Threads Enabled setting is checked (enabled), and the number of threads is 1.
Suppose a virtual machine has multiple disks that have VirtIO controllers, and its workloads make significant use of those controllers. In that case, you can improve performance by increasing the number of I/O threads.
However, also consider that increasing the number of I/O threads decreases the virtual machine’s pool of threads. If your workloads do not use the VirtIO controllers and the threads you allocate to them, increasing the number of I/O threads might decrease overall performance.
To find the optimal number of threads, benchmark the performance of your virtual machine running workloads before and after you adjust the number of threads.
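To get comparable before-and-after numbers, you can run a short benchmark inside the guest. A minimal sketch using fio, assuming fio is installed in the guest and /dev/vdb is a scratch VirtIO disk that you can safely overwrite:
# fio --name=iothread-test --filename=/dev/vdb --direct=1 --rw=randrw \
    --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
Run the same job before and after changing the I/O thread count and compare the reported IOPS and latency.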
-
On
, Power Off the virtual machine. -
Click the name of the virtual machine.
-
In the details pane, click the Vm Devices tab.
-
Count the number of controllers whose Type is virtio or virtio-scsi. -
Click Edit.
-
In the Edit Virtual Machine window, click the Resource Allocation tab.
-
Confirm that I/O Threads Enabled is checked (enabled).
-
To the right of I/O Threads Enabled, increase the number of threads, but do not exceed the number of controllers whose type is virtio or virtio-scsi. -
Click OK.
-
In the details pane, click the Disks tab.
-
For each disk, use More Actions (
) to Deactivate and Activate the disk. This action remaps the disks to the controllers.
-
Click Run to start the virtual machine.
-
To see which controllers have an I/O thread, click Vm Devices in the details pane and look for
ioThreadid=
in the Spec Params column. -
To see the mapping of disks to controllers, log into the host machine and enter the following command:
# virsh -r dumpxml virtual_machine_name
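For example, to pull just the controller definitions, I/O thread assignments, and disk targets out of the XML (element names follow the standard libvirt domain schema):
# virsh -r dumpxml virtual_machine_name | grep -E 'iothread|<controller|<target dev'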
Uploading Images to a Data Storage Domain
You can upload virtual disk images and ISO images to your data storage domain in the Administration Portal or with the REST API. See Uploading Images to a Data Storage Domain for details.
Importing a Disk Image from an Imported Storage Domain
Import floating virtual disks from an imported storage domain.
Only QEMU-compatible disks can be imported into the Engine. |
-
Click
. -
Click the name of an imported storage domain. This opens the details view.
-
Click the Disk Import tab.
-
Select one or more disks and click Import.
-
Select the appropriate Disk Profile for each disk.
-
Click OK.
Importing an Unregistered Disk Image from an Imported Storage Domain
Import floating virtual disks from a storage domain. Floating disks created outside of an oVirt environment are not registered with the Engine. Scan the storage domain to identify unregistered floating disks to be imported.
Only QEMU-compatible disks can be imported into the Engine. |
-
Click
. -
Click the storage domain’s name. This opens the details view.
-
Click More Actions (
), then click Scan Disks so that the Engine can identify unregistered disks.
-
Click the Disk Import tab.
-
Select one or more disk images and click Import.
-
Select the appropriate Disk Profile for each disk.
-
Click OK.
Importing a Virtual Disk from an OpenStack Image Service
Virtual disks managed by an OpenStack Image Service can be imported into the oVirt Engine if that OpenStack Image Service has been added to the Engine as an external provider.
-
Click
. -
Click the OpenStack Image Service domain’s name. This opens the details view.
-
Click the Images tab and select an image.
-
Click Import.
-
Select the Data Center into which the image will be imported.
-
From the Domain Name drop-down list, select the storage domain in which the image will be stored.
-
Optionally, select a quota to apply to the image from the Quota drop-down list.
-
Click OK.
The disk can now be attached to a virtual machine.
Exporting a Virtual Disk to an OpenStack Image Service
Virtual disks can be exported to an OpenStack Image Service that has been added to the Engine as an external provider.
Virtual disks can only be exported if they do not have multiple volumes, are not thin provisioned, and do not have any snapshots. |
-
Click
and select the disks to export. -
Click More Actions (
), then click Export.
-
From the Domain Name drop-down list, select the OpenStack Image Service to which the disks will be exported.
-
From the Quota drop-down list, select a quota for the disks if a quota is to be applied.
-
Click OK.
Reclaiming Virtual Disk Space
Virtual disks that use thin provisioning do not automatically shrink after files are deleted from them. For example, if the actual disk size is 100GB and you delete 50GB of files, the allocated disk size remains at 100GB, and the freed 50GB is not returned to the host and therefore cannot be used by other virtual machines. The host can reclaim this unused disk space by performing a sparsify operation on the virtual machine’s disks. This transfers the free space from the disk image to the host. You can sparsify multiple virtual disks in parallel.
Perform this operation before cloning a virtual machine, creating a template based on a virtual machine, or cleaning up a storage domain’s disk space.
Limitations
-
NFS storage domains must use NFS version 4.2 or higher.
-
You cannot sparsify a disk that uses a direct LUN.
-
You cannot sparsify a disk that uses a preallocated allocation policy. If you are creating a virtual machine from a template, you must select Thin from the Storage Allocation field, or if selecting Clone, ensure that the template is based on a virtual machine that has thin provisioning.
-
You can only sparsify active snapshots.
Sparsifying a Disk
-
Click
and shut down the required virtual machine. -
Click the virtual machine’s name. This opens the details view.
-
Click the Disks tab. Ensure that the disk’s status is
OK
. -
Click More Actions (
), then click Sparsify.
-
Click OK.
A Started to sparsify
event appears in the Events tab during the sparsify operation and the disk’s status displays as Locked
. When the operation is complete, a Sparsified successfully
event appears in the Events tab and the disk’s status displays as OK
. The unused disk space has been returned to the host and is available for use by other virtual machines.
2.9. External Providers
2.9.1. Introduction to External Providers in oVirt
In addition to resources managed by the oVirt Engine itself, oVirt can also take advantage of resources managed by external sources. The providers of these resources, known as external providers, can provide resources such as virtualization hosts, virtual machine images, and networks.
oVirt currently supports the following external providers:
- Red Hat Satellite for Host Provisioning
-
Satellite is a tool for managing all aspects of the life cycle of both physical and virtual hosts. In oVirt, hosts managed by Satellite can be added to and used by the oVirt Engine as virtualization hosts. After you add a Satellite instance to the Engine, the hosts managed by the Satellite instance can be added by searching for available hosts on that Satellite instance when adding a new host. For more information on installing Red Hat Satellite and managing hosts using Red Hat Satellite, see the Red Hat Satellite Quick Start Guide and Red Hat Satellite Managing Hosts.
- KubeVirt/Openshift Virtualization
-
Openshift Virtualization (formerly container-native virtualization or "CNV") enables you to bring virtual machines (VMs) into containerized workflows so you can develop, manage, and deploy virtual machines side-by-side with containers and serverless workloads. In oVirt Engine, adding this provider is one of the requirements for using Openshift Virtualization. For details, see Adding KubeVirt/Openshift Virtualization as an external provider.
- OpenStack Image Service (Glance) for Image Management
-
OpenStack Image Service provides a catalog of virtual machine images. In oVirt, these images can be imported into the oVirt Engine and used as floating disks or attached to virtual machines and converted into templates. After you add an OpenStack Image Service to the Engine, it appears as a storage domain that is not attached to any data center. Virtual disks in an oVirt environment can also be exported to an OpenStack Image Service.
Support for OpenStack Glance is now deprecated. This functionality will be removed in a later release. |
- VMware for Virtual Machine Provisioning
-
Virtual machines created in VMware can be converted using V2V (
virt-v2v
) and imported into an oVirt environment. After you add a VMware provider to the Engine, you can import the virtual machines it provides. V2V conversion is performed on a designated proxy host as part of the import operation. - RHEL 5 Xen for Virtual Machine Provisioning
-
Virtual machines created in RHEL 5 Xen can be converted using V2V (
virt-v2v
) and imported into an oVirt environment. After you add a RHEL 5 Xen host to the Engine, you can import the virtual machines it provides. V2V conversion is performed on a designated proxy host as part of the import operation. - KVM for Virtual Machine Provisioning
-
Virtual machines created in KVM can be imported into an oVirt environment. After you add a KVM host to the Engine, you can import the virtual machines it provides.
- Open Virtual Network (OVN) for Network Provisioning
-
Open Virtual Network (OVN) is an Open vSwitch (OVS) extension that provides software-defined networks. After you add OVN to the Engine, you can import existing OVN networks, and create new OVN networks from the Engine. You can also automatically install OVN on the Engine using
engine-setup
.
2.9.2. Adding External Providers
Adding a Red Hat Satellite Instance for Host Provisioning
Add a Satellite instance for host provisioning to the oVirt Engine. oVirt 4.2 is supported with Red Hat Satellite 6.1.
-
Click
. -
Click Add.
-
Enter a Name and Description.
-
Select Foreman/Satellite from the Type drop-down list.
-
Enter the URL or fully qualified domain name of the machine on which the Satellite instance is installed in the Provider URL text field. You do not need to specify a port number.
IP addresses cannot be used to add a Satellite instance.
-
Select the Requires Authentication check box.
-
Enter the Username and Password for the Satellite instance. You must use the same user name and password as you would use to log in to the Satellite provisioning portal.
-
Test the credentials:
-
Click Test to test whether you can authenticate successfully with the Satellite instance using the provided credentials.
-
If the Satellite instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the Satellite instance provides to ensure the Engine can communicate with the instance.
-
-
Click OK.
Adding an OpenStack Image (Glance) Instance for Image Management
Support for OpenStack Glance is now deprecated. This functionality will be removed in a later release. |
Add an OpenStack Image (Glance) instance for image management to the oVirt Engine.
-
Click
. -
Click Add and enter the details in the General Settings tab. For more information on these fields, see Add Provider General Settings Explained.
-
Enter a Name and Description.
-
Select OpenStack Image from the Type drop-down list.
-
Enter the URL or fully qualified domain name of the machine on which the OpenStack Image instance is installed in the Provider URL text field.
-
Optionally, select the Requires Authentication check box and enter the Username and Password for the OpenStack Image instance user registered in Keystone. You must also define the authentication URL of the Keystone server by defining the Protocol (must be
HTTP
), Hostname, and API Port. Enter the Tenant for the OpenStack Image instance.
-
Test the credentials:
-
Click Test to test whether you can authenticate successfully with the OpenStack Image instance using the provided credentials.
-
If the OpenStack Image instance uses SSL, the Import provider certificates window opens. Click OK to import the certificate that the OpenStack Image instance provides to ensure the Engine can communicate with the instance.
-
-
Click OK.
Adding KubeVirt/Openshift Virtualization as an external provider
To run virtual machines in containers on OKD, add OpenShift as an external provider in oVirt.
This capability is known as OpenShift Virtualization. |
-
In the oVirt Administration Portal, go to
and click New. -
In Add Provider, set Type to KubeVirt/Openshift Virtualization.
-
Enter the Provider URL and Token, which are required.
-
Optional: Enter values for Advanced parameters such as Certificate Authority, Prometheus URL, and Prometheus Certificate Authority.
-
Click Test to verify the connection to the new provider.
-
Click OK to finish adding this new provider.
-
In the oVirt Administration Portal, click
. -
Click the name of the new cluster you just created. This cluster name, kubevirt for example, is based on the name of the provider. This action opens the cluster details view.
-
Click the Hosts tab to verify that the status of the OKD worker nodes is
up
. The status of the control plane nodes is
down
, even if they are running, because they cannot host virtual machines. -
Use
to deploy a virtual machine to the new cluster. -
In the OKD web console, in the Administrator perspective, use
to view the virtual machine you deployed.
Adding a VMware Instance as a Virtual Machine Provider
Add a VMware vCenter instance to import virtual machines from VMware to the oVirt Engine.
oVirt uses V2V to convert VMware virtual machines to the correct format before they are imported. The virt-v2v
package must be installed on at least one host. The virt-v2v
package is available by default on oVirt Nodes and is installed on Enterprise Linux hosts as a dependency of VDSM when they are added to the oVirt environment. Enterprise Linux hosts must be Enterprise Linux 7.2 or later.
-
Click
. -
Click Add.
-
Enter a Name and Description.
-
Select VMware from the Type drop-down list.
-
Select the Data Center into which VMware virtual machines will be imported, or select Any Data Center to instead specify the destination data center during individual import operations.
-
Enter the IP address or fully qualified domain name of the VMware vCenter instance in the vCenter field.
-
Enter the IP address or fully qualified domain name of the host from which the virtual machines will be imported in the ESXi field.
-
Enter the name of the data center in which the specified ESXi host resides in the Data Center field.
-
If you have exchanged the SSL certificate between the ESXi host and the Engine, leave the Verify server’s SSL certificate check box selected to verify the ESXi host’s certificate. If not, clear the check box.
-
Select a host in the chosen data center with
virt-v2v
installed to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the VMware vCenter external provider. If you selected Any Data Center above, you cannot choose the host here, but instead can specify a host during individual import operations. -
Enter the Username and Password for the VMware vCenter instance. The user must have access to the VMware data center and ESXi host on which the virtual machines reside.
-
Test the credentials:
-
Click Test to test whether you can authenticate successfully with the VMware vCenter instance using the provided credentials.
-
If the VMware vCenter instance uses SSL, the Import provider certificates window opens; click OK to import the certificate that the VMware vCenter instance provides to ensure the Engine can communicate with the instance.
-
-
Click OK.
To import virtual machines from the VMware external provider, see Importing a Virtual Machine from a VMware Provider in the Virtual Machine Management Guide.
Adding a RHEL 5 Xen Host as a Virtual Machine Provider
Add a RHEL 5 Xen host to import virtual machines from Xen to oVirt.
oVirt uses V2V to convert RHEL 5 Xen virtual machines to the correct format before they are imported. The virt-v2v
package must be installed on at least one host. The virt-v2v
package is available by default on oVirt Nodes and is installed on Enterprise Linux hosts as a dependency of VDSM when they are added to the oVirt environment. Enterprise Linux hosts must be Enterprise Linux 7.2 or later.
-
Enable public key authentication between the proxy host and the RHEL 5 Xen host:
-
Log in to the proxy host and generate SSH keys for the vdsm user.
# sudo -u vdsm ssh-keygen
-
Copy the vdsm user’s public key to the RHEL 5 Xen host. The proxy host’s known_hosts file will also be updated to include the host key of the RHEL 5 Xen host.
# sudo -u vdsm ssh-copy-id root@xenhost.example.com
-
Log in to the RHEL 5 Xen host to verify that the login works correctly.
# sudo -u vdsm ssh root@xenhost.example.com
-
-
Click
. -
Click Add.
-
Enter a Name and Description.
-
Select XEN from the Type drop-down list.
-
Select the Data Center into which Xen virtual machines will be imported, or select Any Data Center to specify the destination data center during individual import operations.
-
Enter the URI of the RHEL 5 Xen host in the URI field.
-
Select a host in the chosen data center with
virt-v2v
installed to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the RHEL 5 Xen external provider. If you selected Any Data Center above, you cannot choose the host here, but instead can specify a host during individual import operations. -
Click Test to test whether you can authenticate successfully with the RHEL 5 Xen host.
-
Click OK.
To import virtual machines from a RHEL 5 Xen external provider, see Importing a Virtual Machine from a RHEL 5 Xen Host in the Virtual Machine Management Guide.
Adding a KVM Host as a Virtual Machine Provider
Add a KVM host to import virtual machines from KVM to oVirt Engine.
-
Enable public key authentication between the proxy host and the KVM host:
-
Log in to the proxy host and generate SSH keys for the vdsm user.
# sudo -u vdsm ssh-keygen
-
Copy the vdsm user’s public key to the KVM host. The proxy host’s known_hosts file will also be updated to include the host key of the KVM host.
# sudo -u vdsm ssh-copy-id root@kvmhost.example.com
-
Log in to the KVM host to verify that the login works correctly.
# sudo -u vdsm ssh root@kvmhost.example.com
-
-
Click
. -
Click Add.
-
Enter a Name and Description.
-
Select KVM from the Type drop-down list.
-
Select the Data Center into which KVM virtual machines will be imported, or select Any Data Center to specify the destination data center during individual import operations.
-
Enter the URI of the KVM host in the URI field.
qemu+ssh://root@host.example.com/system
-
Select a host in the chosen data center to serve as the Proxy Host during virtual machine import operations. This host must also be able to connect to the network of the KVM external provider. If you selected Any Data Center in the Data Center field above, you cannot choose the host here. The field is greyed out and shows Any Host in Data Center. Instead you can specify a host during individual import operations.
-
Optionally, select the Requires Authentication check box and enter the Username and Password for the KVM host. The user must have access to the KVM host on which the virtual machines reside.
-
Click Test to test whether you can authenticate successfully with the KVM host using the provided credentials.
-
Click OK.
To import virtual machines from a KVM external provider, see Importing a Virtual Machine from a KVM Host in the Virtual Machine Management Guide.
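Optionally, before or after adding the provider, you can confirm from the proxy host that the vdsm user can reach the KVM host’s libvirt service over the same URI (the host name here is an example):
# sudo -u vdsm virsh -c qemu+ssh://root@kvmhost.example.com/system list --all
A listing of the KVM host’s virtual machines indicates that both the public key authentication and the URI are correct.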
Adding Open Virtual Network (OVN) as an External Network Provider
You can use Open Virtual Network (OVN) to create overlay virtual networks that enable communication among the virtual machines without adding VLANs or changing the infrastructure. OVN is an extension of Open vSwitch (OVS) that provides native support for virtual L2 and L3 overlays.
You can also connect an OVN network to a native oVirt network. See Connecting an OVN Network to a Physical Network for more information.
The ovirt-provider-ovn service
exposes an OpenStack Networking REST API. You can use this API to create networks, subnets, ports, and routers. For details, see OpenStack Networking API v2.0.
For more details, see the Open vSwitch Documentation and Open vSwitch Manpages.
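As a sketch, assuming ovirt-provider-ovn is running on the Engine machine on the default OpenStack Networking port (9696) and that you have already obtained an authentication token, you could list the OVN networks as follows; the FQDN and token handling are illustrative:
# curl -k -H "X-Auth-Token: $TOKEN" https://engine.example.com:9696/v2.0/networks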
Installing a New OVN Network Provider
Installing OVN using engine-setup
performs the following steps:
-
Sets up an OVN central server on the Engine machine.
-
Adds OVN to oVirt as an external network provider.
-
On the Default cluster only, sets the Default Network Provider to
ovirt-provider-ovn
.
-
Optional: If you use a preconfigured answer file with engine-setup, add the following entry to install OVN (see the example after this procedure):
OVESETUP_OVN/ovirtProviderOvn=bool:True
-
Run
engine-setup
on the Engine machine. -
If you do not use a preconfigured answer file, answer Yes when engine-setup asks:
Configuring ovirt-provider-ovn also sets the Default cluster's default network provider to ovirt-provider-ovn. Non-Default clusters may be configured with an OVN after installation. Configure ovirt-provider-ovn (Yes, No) [Yes]:
-
Answer the following question:
Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]?:
If Yes, engine-setup uses the default engine user and password specified earlier in the setup process. This option is only available during new installations.
oVirt OVN provider user[admin]:
oVirt OVN provider password[empty]:
You can use the default values or specify the oVirt OVN provider user and password.
To change the authentication method later, you can edit the |
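A minimal sketch of the answer-file invocation referenced in the first step of this procedure, assuming you saved the entry to /root/ovn-answers.conf (an example path):
# engine-setup --config-append=/root/ovn-answers.conf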
Before you can create virtual machines that use a newly-installed OVN network, complete these additional steps:
-
Add a network to the Default cluster.
-
While doing so, select the Create on external provider check box. This creates a network based on
ovirt-provider-ovn
. -
Optional: To connect the OVN network to a physical network, select the Connect to physical network check box and specify the oVirt network to use.
-
Optional: Determine whether the network should use a security group and select one from the Security Groups drop-down. For more information on the available options see Logical Network General Settings Explained.
-
-
Add hosts to or reinstall the hosts on the Default cluster so they use the cluster’s new Default Network Provider,
ovirt-provider-ovn
. -
Optional: Edit non-Default clusters and set Default Network Provider to
ovirt-provider-ovn
.-
Optional: Reinstall the hosts on each non-Default cluster so they use the cluster’s new Default Network Provider,
ovirt-provider-ovn
.
-
-
To configure your hosts to use an existing, non-default network, see Configuring Hosts for an OVN tunnel network.
Updating the OVN Tunnel Network on a Single Host
You can update the OVN tunnel network on a single host with vdsm-tool
:
# vdsm-tool ovn-config OVN_Central_IP Tunneling_IP_or_Network_Name Host_FQDN
The Host_FQDN must match the FQDN that is specified in the engine for this host. |
For example:
# vdsm-tool ovn-config 192.168.0.1 MyNetwork MyFQDN
Connecting an OVN Network to a Physical Network
You can create an external provider network that overlays a native oVirt network so that the virtual machines on each appear to be sharing the same subnet.
If you created a subnet for the OVN network, a virtual machine using that network will receive an IP address from there. If you want the physical network to allocate the IP address, do not create a subnet for the OVN network. |
-
The cluster must have OVS selected as the Switch Type. Hosts added to this cluster must not have any pre-existing oVirt networks configured, such as the ovirtmgmt bridge.
-
The physical network must be available on the hosts. You can enforce this by setting the physical network as required for the cluster (in the Manage Networks window, or the Cluster tab of the New Logical Network window).
-
Click
. -
Click the cluster’s name. This opens the details view.
-
Click the Logical Networks tab and click Add Network.
-
Enter a Name for the network.
-
Select the Create on external provider check box.
ovirt-provider-ovn
is selected by default. -
Select the Connect to physical network check box if it is not already selected by default.
-
Choose the physical network to connect the new network to:
-
Click the Data Center Network radio button and select the physical network from the drop-down list. This is the recommended option.
-
Click the Custom radio button and enter the name of the physical network. If the physical network has VLAN tagging enabled, you must also select the Enable VLAN tagging check box and enter the physical network’s VLAN tag.
The physical network’s name must not be longer than 15 characters, or contain special characters.
-
-
Click OK.
Add Provider General Settings Explained
The General tab in the Add Provider window allows you to register the core details of the external provider.
Setting | Explanation |
---|---|
Name |
A name to represent the provider in the Engine. |
Description |
A plain text, human-readable description of the provider. |
Type |
The type of external provider. Changing this setting alters the available fields for configuring the provider. The available types are: External Network Provider, Foreman/Satellite, KubeVirt/OpenShift Virtualization, OpenStack Image, OpenStack Volume, VMware, RHEL 5 Xen, and KVM. |
Test |
Allows users to test the specified credentials. This button is available to all provider types. |
2.9.3. Editing an External Provider
-
Click
and select the external provider to edit. -
Click Edit.
-
Change the current values for the provider to the preferred values.
-
Click OK.
2.9.4. Removing an External Provider
-
Click
and select the external provider to remove. -
Click Remove.
-
Click OK.
3. Administering the Environment
3.1. Administering the Self-Hosted Engine
3.1.1. Maintaining the Self-hosted engine
Self-hosted engine maintenance modes explained
The maintenance modes enable you to start, stop, and modify the Engine virtual machine without interference from the high-availability agents, and to restart and modify the self-hosted engine nodes in the environment without interfering with the Engine.
There are three maintenance modes:
-
global
- All high-availability agents in the cluster are disabled from monitoring the state of the Engine virtual machine. The global maintenance mode must be applied for any setup or upgrade operations that require the ovirt-engine
service to be stopped, such as upgrading to a later version of oVirt. -
local
- The high-availability agent on the node issuing the command is disabled from monitoring the state of the Engine virtual machine. The node is exempt from hosting the Engine virtual machine while in local maintenance mode; if the node is hosting the Engine virtual machine when it is placed into this mode, the Engine migrates to another node, provided one is available. The local maintenance mode is recommended when applying system changes or updates to a self-hosted engine node. -
none
- Disables maintenance mode, ensuring that the high-availability agents are operating.
Setting local maintenance mode
Enabling local maintenance mode stops the high-availability agent on a single self-hosted engine node.
-
Put a self-hosted engine node into local maintenance mode:
-
In the Administration Portal, click
and select a self-hosted engine node. -
Click
and OK. Local maintenance mode is automatically triggered for that node.
-
-
After you have completed any maintenance tasks, disable the maintenance mode:
-
In the Administration Portal, click
and select the self-hosted engine node. -
Click
.
-
-
Log in to a self-hosted engine node and put it into local maintenance mode:
# hosted-engine --set-maintenance --mode=local
-
After you have completed any maintenance tasks, disable the maintenance mode:
# hosted-engine --set-maintenance --mode=none
Setting global maintenance mode
Enabling global maintenance mode stops the high-availability agents on all self-hosted engine nodes in the cluster.
-
Put all of the self-hosted engine nodes into global maintenance mode:
-
In the Administration Portal, click
and select any self-hosted engine node. -
Click More Actions (
), then click Enable Global HA Maintenance.
-
-
After you have completed any maintenance tasks, disable the maintenance mode:
-
In the Administration Portal, click
and select any self-hosted engine node. -
Click More Actions (
), then click Disable Global HA Maintenance.
-
-
Log in to any self-hosted engine node and put it into global maintenance mode:
# hosted-engine --set-maintenance --mode=global
-
After you have completed any maintenance tasks, disable the maintenance mode:
# hosted-engine --set-maintenance --mode=none
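Whichever method you use, you can confirm the current mode from any node. While global maintenance is active, the status output includes a notice that the cluster is in global maintenance mode:
# hosted-engine --vm-status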
3.1.2. Administering the Engine Virtual Machine
The hosted-engine
utility provides many commands to help administer the Engine virtual machine. You can run hosted-engine
on any self-hosted engine node. To see all available commands, run hosted-engine --help
. For additional information on a specific command, run hosted-engine --command --help
.
Updating the Self-Hosted Engine Configuration
To update the self-hosted engine configuration, use the hosted-engine --set-shared-config
command. This command updates the self-hosted engine configuration on the shared storage domain after the initial deployment.
To see the current configuration values, use the hosted-engine --get-shared-config
command.
To see a list of all available configuration keys and their corresponding types, enter the following command:
# hosted-engine --set-shared-config key --type=type --help
Where type
is one of the following:
he_local | Sets values in the local instance of |
he_shared | Sets values in |
ha | Sets values in |
broker | Sets values in |
Configuring Email Notifications
You can configure email notifications using SMTP for any HA state transitions on the self-hosted engine nodes. The keys that can be updated include: smtp-server, smtp-port, source-email, destination-emails, and state_transition.
To configure email notifications:
-
On a self-hosted engine node, set the
smtp-server
key to the desired SMTP server address:
# hosted-engine --set-shared-config smtp-server smtp.example.com --type=broker
To verify that the self-hosted engine configuration file has been updated, run:
# hosted-engine --get-shared-config smtp-server --type=broker
broker : smtp.example.com, type : broker
-
Check that the default SMTP port (port 25) has been configured:
# hosted-engine --get-shared-config smtp-port --type=broker
broker : 25, type : broker
-
Specify an email address you want the SMTP server to use to send out email notifications. Only one address can be specified.
# hosted-engine --set-shared-config source-email source@example.com --type=broker
-
Specify the destination email address to receive email notifications. To specify multiple email addresses, separate each address by a comma.
# hosted-engine --set-shared-config destination-emails destination1@example.com,destination2@example.com --type=broker
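As with the other keys, you can read a value back to confirm it was stored on the shared storage, for example:
# hosted-engine --get-shared-config destination-emails --type=broker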
To verify that SMTP has been properly configured for your self-hosted engine environment, change the HA state on a self-hosted engine node and check if email notifications were sent. For example, you can change the HA state by placing HA agents into maintenance mode. See Maintaining the Self-Hosted Engine for more information.
3.1.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts
If the Engine virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Engine virtual machine to restart on or migrate to it. This memory can be reserved on multiple self-hosted engine nodes by using a scheduling policy. The scheduling policy checks if enough memory to start the Engine virtual machine will remain on the specified number of additional self-hosted engine nodes before starting or migrating any virtual machines. See Creating a Scheduling Policy in the Administration Guide for more information about scheduling policies.
To add more self-hosted engine nodes to the oVirt Engine, see Adding self-hosted engine nodes to the Engine.
Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts
-
Click
and select the cluster containing the self-hosted engine nodes. -
Click Edit.
-
Click the Scheduling Policy tab.
-
Click + and select HeSparesCount.
-
Enter the number of additional self-hosted engine nodes that will reserve enough free memory to start the Engine virtual machine.
-
Click OK.
3.1.4. Adding Self-Hosted Engine Nodes to the oVirt Engine
Add self-hosted engine nodes in the same way as a standard host, with an additional step to deploy the host as a self-hosted engine node. The shared storage domain is automatically detected and the node can be used as a failover host to host the Engine virtual machine when required. You can also attach standard hosts to a self-hosted engine environment, but they cannot host the Engine virtual machine. Have at least two self-hosted engine nodes to ensure the Engine virtual machine is highly available. You can also add additional hosts using the REST API. See Hosts in the REST API Guide.
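If you add the host with the REST API instead, the hosts collection accepts a deploy_hosted_engine parameter. A hedged sketch; the Engine FQDN, credentials, and host details are placeholders:
# curl --cacert /etc/pki/ovirt-engine/ca.pem \
    --user admin@internal:password \
    --request POST \
    --header 'Content-Type: application/xml' \
    --data '<host><name>host2</name><address>host2.example.com</address><root_password>RootPassword</root_password></host>' \
    'https://engine.example.com/ovirt-engine/api/hosts?deploy_hosted_engine=true'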
-
All self-hosted engine nodes must be in the same cluster.
-
If you are reusing a self-hosted engine node, remove its existing self-hosted engine configuration. See Removing a Host from a Self-Hosted Engine Environment.
-
In the Administration Portal, click
. -
Click New.
For information on additional host settings, see Explanation of Settings and Controls in the New Host and Edit Host Windows in the Administration Guide.
-
Use the drop-down list to select the Data Center and Host Cluster for the new host.
-
Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
-
Select an authentication method to use for the Engine to access the host.
-
Enter the root user’s password to use password authentication.
-
Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
-
-
Optionally, configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
-
Click the Hosted Engine tab.
-
Select Deploy.
-
Click OK.
3.1.5. Reinstalling an Existing Host as a Self-Hosted Engine Node
You can convert an existing, standard host in a self-hosted engine environment to a self-hosted engine node capable of hosting the Engine virtual machine.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. |
-
Click
and select the host. -
Click
and OK. -
Click
. -
Click the Hosted Engine tab and select DEPLOY from the drop-down list.
-
Click OK.
The host is reinstalled with self-hosted engine configuration, and is flagged with a crown icon in the Administration Portal.
3.1.6. Booting the Engine Virtual Machine in Rescue Mode
This topic describes how to boot the Engine virtual machine into rescue mode when it does not start. For more information, see Booting to Rescue Mode in the Enterprise Linux System Administrator’s Guide.
-
Connect to one of the hosted-engine nodes:
$ ssh root@host_address
-
Put the self-hosted engine in global maintenance mode:
# hosted-engine --set-maintenance --mode=global
-
Check if there is already a running instance of the Engine virtual machine:
# hosted-engine --vm-status
If an Engine virtual machine instance is running, connect to its host:
# ssh root@host_address
-
Shut down the virtual machine:
# hosted-engine --vm-shutdown
If the virtual machine does not shut down, execute the following command:
# hosted-engine --vm-poweroff
-
Start the Engine virtual machine in pause mode:
# hosted-engine --vm-start-paused
-
Set a temporary VNC password:
# hosted-engine --add-console-password
The command outputs the information you need to log in to the Engine virtual machine with VNC.
-
Log in to the Engine virtual machine with VNC. The Engine virtual machine is still paused, so it appears to be frozen.
-
Resume the Engine virtual machine with the following command on its host:
After you run the following command, the boot loader menu appears. You need to enter rescue mode before the boot loader proceeds with the normal boot process. Read the next step about entering rescue mode before proceeding with this command.
# /usr/bin/virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine
-
Boot the Engine virtual machine in rescue mode.
-
Disable global maintenance mode
# hosted-engine --set-maintenance --mode=none
You can now run rescue tasks on the Engine virtual machine.
3.1.7. Removing a Host from a Self-Hosted Engine Environment
To remove a self-hosted engine node from your environment, place the node into maintenance mode, undeploy the node, and optionally remove it. The node can be managed as a regular host after the HA services have been stopped, and the self-hosted engine configuration files have been removed.
-
In the Administration Portal, click
and select the self-hosted engine node. -
Click
and OK. -
Click
. -
Click the Hosted Engine tab and select UNDEPLOY from the drop-down list. This action stops the
ovirt-ha-agent
and ovirt-ha-broker
services and removes the self-hosted engine configuration file. -
Click OK.
-
Optionally, click Remove. This opens the Remove Host(s) confirmation window.
-
Click OK.
3.1.8. Updating a Self-Hosted Engine
To update a self-hosted engine from your current version to the latest version, you must place the environment in global maintenance mode and then follow the standard procedure for updating between minor versions.
Enabling global maintenance mode
You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Engine virtual machine.
-
Log in to one of the self-hosted engine nodes and enable global maintenance mode:
# hosted-engine --set-maintenance --mode=global
-
Confirm that the environment is in global maintenance mode before proceeding:
# hosted-engine --vm-status
You should see a message indicating that the cluster is in global maintenance mode.
Updating the oVirt Engine
-
On the Engine machine, check if updated packages are available:
# engine-upgrade-check
-
Update the setup packages:
# dnf update ovirt\*setup\*
-
Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.
# engine-setup
When the script completes successfully, the following message appears:
Execution of setup completed successfully
The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.
The update process might take some time. Do not stop the process before it completes.
-
Update the base operating system and any optional packages installed on the Engine:
# yum update --nobest
If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).
If any kernel packages were updated:
-
Disable global maintenance mode
-
Reboot the machine to complete the update.
-
Disabling global maintenance mode
-
Log in to the Engine virtual machine and shut it down.
-
Log in to one of the self-hosted engine nodes and disable global maintenance mode:
# hosted-engine --set-maintenance --mode=none
When you exit global maintenance mode, ovirt-ha-agent starts the Engine virtual machine, and then the Engine automatically starts. It can take up to ten minutes for the Engine to start.
-
Confirm that the environment is running:
# hosted-engine --vm-status
The listed information includes Engine Status. The value for Engine status should be:
{"health": "good", "vm": "up", "detail": "Up"}
When the virtual machine is still booting and the Engine hasn’t started yet, the Engine status is:
{"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
If this happens, wait a few minutes and try again.
3.1.9. Changing the FQDN of the Engine in a Self-Hosted Engine
You can use the ovirt-engine-rename
command to update records of the fully qualified domain name (FQDN) of the Engine.
For details see Renaming the Engine with the Ovirt Engine Rename Tool.
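The invocation is typically the following, run on the Engine machine (the tool path is as commonly packaged, and the new FQDN is an example; check --help on your installation):
# /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename --newname=newengine.example.com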
3.2. Backups and Migration
3.2.1. Backing Up and Restoring the oVirt Engine
Backing up oVirt Engine - Overview
Use the engine-backup
tool to take regular backups of the oVirt Engine. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine
service.
Syntax for the engine-backup Command
The engine-backup
command works in one of two basic modes:
# engine-backup --mode=backup
# engine-backup --mode=restore
These two modes are further extended by a set of options that allow you to specify the scope of the backup and different credentials for the engine database. Run engine-backup --help
for a full list of options and their function.
Basic Options
--mode
-
Specifies whether the command performs a backup operation or a restore operation. The available options are: backup (set by default), restore, and verify. You must define the mode option for verify or restore operations. --file
-
Specifies the path and name of a file (for example, file_name.backup) into which backups are saved in backup mode, and to be read as backup data in restore mode. The path is defined by default as
/var/lib/ovirt-engine-backup/
. --log
-
Specifies the path and name of a file (for example, log_file_name) into which logs of the backup or restore operation are written. The path is defined by default as
/var/log/ovirt-engine-backup/
. --scope
-
Specifies the scope of the backup or restore operation. There are four options: all, to back up or restore all databases and configuration data (set by default); files, to back up or restore only files on the system; db, to back up or restore only the Engine database; and dwhdb, to back up or restore only the Data Warehouse database.
The --scope option can be specified multiple times in the same engine-backup command.
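For example, a sketch that combines these options to verify an existing backup file without restoring it (the paths are illustrative):
# engine-backup --mode=verify --file=/var/lib/ovirt-engine-backup/engine.backup --log=/tmp/engine-verify.log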
Engine Database Options
The following options are only available when using the engine-backup
command in restore
mode. The option syntax below applies to restoring the Engine database. The same options exist for restoring the Data Warehouse database. See engine-backup --help
for the Data Warehouse option syntax.
--provision-db
-
Creates a PostgreSQL database for the Engine database backup to be restored to. This is a required option when restoring a backup on a remote host or fresh installation that does not have a PostgreSQL database already configured. When this option is used in restore mode, the
--restore-permissions
option is added by default. --provision-all-databases
-
Creates databases for all database dumps included in the archive. When enabled, this is the default.
--change-db-credentials
-
Allows you to specify alternate credentials for restoring the Engine database using credentials other than those stored in the backup itself. See
engine-backup --help
for the additional parameters required by this option. --restore-permissions
or --no-restore-permissions
-
Restores or does not restore the permissions of database users. One of these options is required when restoring a backup. When the --provision-* option is used in restore mode, --restore-permissions is applied by default.
If a backup contains grants for extra database users, restoring the backup with the --restore-permissions and --provision-db (or --provision-dwh-db) options creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See How to grant access to an extra database user after restoring Red Hat Virtualization from a backup.
You can back up the oVirt Engine with the engine-backup
command while the Engine is active. Append one of the following values to the --scope
option to specify what you want to back up:
all
-
A full backup of all databases and configuration files on the Engine. This is the default setting for the
--scope
option. files
-
A backup of only the files on the system
db
-
A backup of only the Engine database
dwhdb
-
A backup of only the Data Warehouse database
cinderlibdb
-
A backup of only the Cinderlib database
grafanadb
-
A backup of only the Grafana database
You can specify the --scope
option more than once.
You can also configure the engine-backup
command to back up additional files. It restores everything that it backs up.
To restore a database to a fresh installation of oVirt Engine, a database backup alone is not sufficient. The Engine also requires access to the configuration files. If you specify a scope other than all, you must also include the files scope. |
For a complete explanation of the engine-backup
command, enter engine-backup --help
on the Engine machine.
-
Log on to the Engine machine.
-
Create a backup:
# engine-backup
The following settings are applied by default:
--scope=all
--mode=backup
The command generates the backup in /var/lib/ovirt-engine-backup/file_name.backup
, and a log file in /var/log/ovirt-engine-backup/log_file_name
.
Use file_name.backup
to restore the environment.
The following examples demonstrate several different backup scenarios.
# engine-backup
# engine-backup --scope=files --scope=db
# engine-backup --scope=files --scope=dwhdb
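You can also name the backup and log files explicitly, which is useful for scheduled backups; the paths and date stamp below are illustrative:
# engine-backup --mode=backup --scope=all \
    --file=/var/lib/ovirt-engine-backup/engine-$(date +%Y%m%d).backup \
    --log=/var/log/ovirt-engine-backup/engine-backup-$(date +%Y%m%d).log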
-
Make a directory to store configuration customizations for the
engine-backup
command:
# mkdir -p /etc/ovirt-engine-backup/engine-backup-config.d
-
Create a text file in the new directory named
ntp-chrony.sh
with the following contents:
BACKUP_PATHS="${BACKUP_PATHS} /etc/chrony.conf /etc/ntp.conf /etc/ovirt-engine-backup"
-
When you run the engine-backup command, use --scope=files. The backup and restore includes /etc/chrony.conf, /etc/ntp.conf, and /etc/ovirt-engine-backup.
Restoring a Backup with the engine-backup Command
Restoring a backup using the engine-backup command involves more steps than creating a backup does, depending on the restoration destination. For example, the engine-backup
command can be used to restore backups to fresh installations of oVirt, on top of existing installations of oVirt, and using local or remote databases.
The version of the oVirt Engine (such as 4.4.8) used to restore a backup must be later than or equal to the oVirt Engine version (such as 4.4.7) used to create the backup. Starting with oVirt 4.4.7, this policy is strictly enforced by the engine-backup command. To view the version of oVirt contained in a backup file, unpack the backup file and read the value in the version file located in the root directory of the unpacked files. |
Restoring a Backup to a Fresh Installation
The engine-backup
command can be used to restore a backup to a fresh installation of the oVirt Engine. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the oVirt Engine have been installed, but the engine-setup
command has not yet been run. This procedure assumes that the backup file or files can be accessed from the machine on which the backup is to be restored.
-
Log on to the Engine machine. If you are restoring the engine database to a remote host, you will need to log on to and perform the relevant actions on that host. Likewise, if also restoring the Data Warehouse to a remote host, you will need to log on to and perform the relevant actions on that host.
-
Restore a complete backup or a database-only backup.
-
Restore a complete backup:
# engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db
When the --provision-* option is used in restore mode, --restore-permissions is applied by default.
If Data Warehouse is also being restored as part of the complete backup, provision the additional database:
-
Restore a database-only backup by restoring the configuration files and database backup:
# engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=log_file_name --provision-db
The example above restores a backup of the Engine database.
# engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --provision-dwh-db
The example above restores a backup of the Data Warehouse database.
If successful, the following output displays:
You should now run engine-setup. Done.
-
-
Run the following command and follow the prompts to configure the restored Engine:
# engine-setup
The oVirt Engine has been restored to the version preserved in the backup. To change the fully qualified domain name of the new oVirt system, see The oVirt Engine Rename Tool.
Restoring a Backup to Overwrite an Existing Installation
The engine-backup
command can restore a backup to a machine on which the oVirt Engine has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup.
Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes.
-
Log in to the Engine machine.
-
Remove the configuration files and clean the database associated with the Engine:
# engine-cleanup
The
engine-cleanup
command only cleans the Engine database; it does not drop the database or delete the user that owns that database. -
Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist.
-
Restore a full backup:
# engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
-
Restore a database-only backup by restoring the configuration files and the database backup:
# engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name --restore-permissions
To restore only the Engine database (for example, if the Data Warehouse database is located on another machine), you can omit the
--scope=dwhdb
parameter. If successful, the following output displays:
You should now run engine-setup. Done.
-
-
Reconfigure the Engine:
# engine-setup
Restoring a Backup with Different Credentials
The engine-backup
command can restore a backup to a machine on which the oVirt Engine has already been installed and set up, but the credentials of the database in the backup are different to those of the database on the machine on which the backup is to be restored. This is useful when you have taken a backup of an installation and want to restore the installation from the backup to a different system.
When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command first. |
-
Log in to the oVirt Engine machine.
-
Run the following command and follow the prompts to remove the Engine’s configuration files and to clean the Engine’s database:
# engine-cleanup
-
Change the password for the owner of the engine database if the credentials of that user are not known:
-
Enter the postgresql command line:
# su - postgres -c 'psql'
-
Change the password of the user that owns the engine database:
postgres=# alter role user_name encrypted password 'new_password';
Repeat this for the user that owns the
ovirt_engine_history
database if necessary.
-
-
Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Engine is localhost.
The following examples use a --*password option for each database without specifying a password, which prompts for a password for each database. Alternatively, you can use a --*passfile=password_file option for each database to pass the passwords to the engine-backup tool securely, without the need for interactive prompts (see the example at the end of this procedure).
-
Restore a complete backup:
# engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --no-restore-permissions
If Data Warehouse is also being restored as part of the complete backup, include the revised credentials for the additional database:
engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions
-
Restore a database-only backup by restoring the configuration files and the database backup:
# engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-password --no-restore-permissions
The example above restores a backup of the Engine database.
# engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --change-dwh-db-credentials --dwh-db-host=database_location --dwh-db-name=database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions
The example above restores a backup of the Data Warehouse database.
If successful, the following output displays:
You should now run engine-setup. Done.
-
-
Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured:
# engine-setup
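As noted above, the interactive password prompts can be avoided with password files. A minimal sketch, assuming /root/engine-db.pass is an example file containing only the new engine database password:
# engine-backup --mode=restore --file=file_name --log=log_file_name --change-db-credentials --db-host=database_location --db-name=database_name --db-user=engine --db-passfile=/root/engine-db.pass --no-restore-permissions
For the Data Warehouse database, the analogous option is --dwh-db-passfile=password_file.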
Backing up and Restoring a Self-Hosted Engine
You can back up a self-hosted engine and restore it in a new self-hosted environment. Use this procedure for tasks such as migrating the environment to a new self-hosted engine storage domain with a different storage type.
When you specify a backup file during deployment, the backup is restored on a new Engine virtual machine, with a new self-hosted engine storage domain. The old Engine is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment.
The backup and restore operation involves the following key actions:
-
Back up the original Engine with the engine-backup tool.
-
Deploy a new self-hosted engine and restore the backup on a new Engine virtual machine.
-
Enable the oVirt Engine repositories on the new Engine virtual machine.
-
Reinstall the self-hosted engine nodes to update their self-hosted engine configuration.
-
Remove the old self-hosted engine storage domain.
This procedure assumes that you have access and can make changes to the original Engine.
-
A fully qualified domain name prepared for your Engine and the host. Forward and reverse lookup records must both be set in the DNS. The new Engine must have the same fully qualified domain name as the original Engine.
-
The original Engine must be updated to the latest minor version. The version of the oVirt Engine (such as 4.4.8) used to restore a backup must be later than or equal to the oVirt Engine version (such as 4.4.7) used to create the backup. Starting with oVirt 4.4.7, this policy is strictly enforced by the engine-backup command. See Updating the oVirt Engine in the Upgrade Guide.
If you need to restore a backup, but do not have a new appliance, the restore process will pause, and you can log into the temporary Engine machine via SSH, register, subscribe, or configure channels as needed, and upgrade the Engine packages before resuming the restore process.
-
The data center compatibility level must be set to the latest version to ensure compatibility with the updated storage version.
-
There must be at least one regular host in the environment. This host (and any other regular hosts) will remain active to host the SPM role and any running virtual machines. If a regular host is not already the SPM, move the SPM role before creating the backup by selecting a regular host and clicking Management → Select as SPM.
If no regular hosts are available, there are two ways to add one:
-
Remove the self-hosted engine configuration from a node (but do not remove the node from the environment). See Removing a Host from a Self-Hosted Engine Environment.
-
Add a new regular host. See Adding standard hosts to the Engine host tasks.
-
Backing up the Original Engine
Back up the original Engine using the engine-backup
command, and copy the backup file to a separate location so that it can be accessed at any point during the process.
For more information about engine-backup --mode=backup
options, see Backing Up and Restoring the oVirt Engine in the Administration Guide.
-
Log in to one of the self-hosted engine nodes and move the environment to global maintenance mode:
# hosted-engine --set-maintenance --mode=global
-
Log in to the original Engine and stop the ovirt-engine service:
# systemctl stop ovirt-engine
# systemctl disable ovirt-engine
Though stopping the original Engine is not obligatory, it is recommended because it ensures that no changes are made to the environment after the backup is created. It also prevents the original Engine and the new Engine from simultaneously managing existing resources.
-
Run the engine-backup command, specifying the name of the backup file to create and the name of the log file that stores the backup log:
# engine-backup --mode=backup --file=file_name --log=log_file_name
-
Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that stores the backup until it is needed, and /backup/ is any designated folder or path.
# scp -p file_name log_file_name storage.example.com:/backup/
-
Log in to one of the self-hosted engine nodes and shut down the original Engine virtual machine:
# hosted-engine --vm-shutdown
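Before proceeding, you can confirm that the Engine virtual machine is down (a quick check; the exact output format varies by version, but the Engine virtual machine should be reported as down):
# hosted-engine --vm-status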
After backing up the Engine, deploy a new self-hosted engine and restore the backup on the new virtual machine.
Restoring the Backup on a New Self-Hosted Engine
Run the hosted-engine
script on a new host, and use the --restore-from-file=path/to/file_name
option to restore the Engine backup during the deployment.
If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail. To prevent this, update your iSCSI configuration before beginning the self-hosted engine deployment. Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
-
Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.
# scp -p file_name host.example.com:/backup/
-
Log in to the new host.
-
If you are deploying on oVirt Node, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Enterprise Linux, install the ovirt-hosted-engine-setup package:
# dnf install ovirt-hosted-engine-setup
-
Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption.
Install and run tmux:
# dnf -y install tmux
# tmux
-
Run the hosted-engine script, specifying the path to the backup file:
# hosted-engine --deploy --restore-from-file=backup/file_name
To exit the script at any time, press CTRL+D to abort the deployment.
-
Select Yes to begin the deployment.
-
Configure the network. The script detects possible NICs to use as a management bridge for the environment.
-
If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the Engine Appliance.
-
Enter the root password for the Engine.
-
Enter an SSH public key that will allow you to log in to the Engine as the root user, and specify whether to enable SSH access for the root user.
-
Enter the virtual machine’s CPU and memory configuration.
-
Enter a MAC address for the Engine virtual machine, or accept a randomly generated one. If you want to provide the Engine virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
-
Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Engine.
The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
-
Specify whether to add entries for the Engine virtual machine and the base host to the virtual machine’s
/etc/hosts
file. You must ensure that the host names are resolvable. -
Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
-
Enter a password for the admin@internal user to access the Administration Portal.
The script creates the virtual machine. This can take some time if the Engine Appliance needs to be installed.
If the host becomes non-operational, due to a missing required network or a similar problem, the deployment pauses and a message similar to the following is displayed:
[ INFO ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]
Pausing the process allows you to:
-
Connect to the Administration Portal using the provided URL.
-
Assess the situation, find out why the host is non-operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks.
-
Once everything looks OK, and the host status is Up, remove the lock file presented in the message above. The deployment continues.
-
-
Select the type of storage to use:
-
For NFS, enter the version, full address and path to the storage, and any mount options.
Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
-
For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
-
For Gluster storage, enter the full address and path to the storage, and any mount options.
Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:
gluster volume set VOLUME_NAME group virt
gluster volume set VOLUME_NAME performance.strict-o-direct on
gluster volume set VOLUME_NAME network.remote-dio off
gluster volume set VOLUME_NAME storage.owner-uid 36
gluster volume set VOLUME_NAME storage.owner-gid 36
gluster volume set VOLUME_NAME network.ping-timeout 30
-
For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
-
-
Enter the Engine disk size.
The script continues until the deployment is complete.
-
The deployment process changes the Engine’s SSH keys. To allow client machines to access the new Engine without SSH errors, remove the original Engine’s entry from the
.ssh/known_hosts
file on any client machines that accessed the original Engine.
When the deployment is complete, log in to the new Engine virtual machine and enable the required repositories.
Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled.
For oVirt 4.5: If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.
# dnf install -y centos-release-ovirt45
As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories, ensuring that the latest fixes for platform regressions are promptly available.
For oVirt 4.4:
# dnf install -y centos-release-ovirt44
The following steps apply to both 4.4 and 4.5, on Enterprise Linux 8 only:
You can check which repositories are currently enabled by running dnf repolist.
-
Enable the javapackages-tools module:
# dnf module -y enable javapackages-tools
-
Enable the pki-deps module:
# dnf module -y enable pki-deps
-
Enable version 12 of the postgresql module:
# dnf module -y enable postgresql:12
-
Enable version 2.3 of the mod_auth_openidc module:
# dnf module -y enable mod_auth_openidc:2.3
-
Enable version 14 of the nodejs module:
# dnf module -y enable nodejs:14
-
Synchronize installed packages to update them to the latest available versions.
# dnf distro-sync --nobest
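To verify the result, you can list the module streams that are now enabled (a quick check):
# dnf module list --enabled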
For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components.
The Engine and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Engine to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node.
Reinstalling Hosts
Reinstall oVirt Nodes and Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host, to avoid accidental initialization of these disks and potential data loss.
-
If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low.
-
Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance.
-
Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks.
-
Click Compute → Hosts and select the host.
-
Click Management → Maintenance, then click OK.
-
Click Installation → Reinstall. This opens the Install Host window.
-
Click the Hosted Engine tab and select DEPLOY from the drop-down list.
-
Click OK to reinstall the host.
After a host has been reinstalled and its status returns to Up, you can migrate virtual machines back to the host.
After you register an oVirt Node to the oVirt Engine and reinstall it, the Administration Portal may erroneously display its status as Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.
After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:
# hosted-engine --vm-status
During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain.
Removing a Storage Domain
You have a storage domain in your data center that you want to remove from the virtualized environment.
-
Click Storage → Domains and select the storage domain.
-
Move the storage domain to maintenance mode and detach it:
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance, then click OK.
-
Click Detach, then click OK.
-
-
Click Remove.
-
Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
-
Click OK.
The storage domain is permanently removed from the environment.
Recovering a Self-Hosted Engine from an Existing Backup
If a self-hosted engine is unavailable due to problems that cannot be repaired, you can restore it in a new self-hosted environment using a backup taken before the problem began, if one is available.
When you specify a backup file during deployment, the backup is restored on a new Engine virtual machine, with a new self-hosted engine storage domain. The old Engine is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment.
Restoring a self-hosted engine involves the following key actions:
-
Deploy a new self-hosted engine and restore the backup on a new Engine virtual machine.
-
Enable the oVirt Engine repositories on the new Engine virtual machine.
-
Reinstall the self-hosted engine nodes to update their self-hosted engine configuration.
-
Remove the old self-hosted engine storage domain.
This procedure assumes that you do not have access to the original Engine, and that the new host can access the backup file.
-
A fully qualified domain name prepared for your Engine and the host. Forward and reverse lookup records must both be set in the DNS. The new Engine must have the same fully qualified domain name as the original Engine.
Restoring the Backup on a New Self-Hosted Engine
Run the hosted-engine
script on a new host, and use the --restore-from-file=path/to/file_name
option to restore the Engine backup during the deployment.
If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail. To prevent this, update your iSCSI configuration before beginning the self-hosted engine deployment. Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
-
Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.
# scp -p file_name host.example.com:/backup/
-
Log in to the new host.
-
If you are deploying on oVirt Node, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Enterprise Linux, install the ovirt-hosted-engine-setup package:
# dnf install ovirt-hosted-engine-setup
-
Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption.
Install and run tmux:
# dnf -y install tmux
# tmux
-
Run the hosted-engine script, specifying the path to the backup file:
# hosted-engine --deploy --restore-from-file=backup/file_name
To exit the script at any time, press CTRL+D to abort the deployment.
-
Select Yes to begin the deployment.
-
Configure the network. The script detects possible NICs to use as a management bridge for the environment.
-
If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the Engine Appliance.
-
Enter the root password for the Engine.
-
Enter an SSH public key that will allow you to log in to the Engine as the root user, and specify whether to enable SSH access for the root user.
-
Enter the virtual machine’s CPU and memory configuration.
-
Enter a MAC address for the Engine virtual machine, or accept a randomly generated one. If you want to provide the Engine virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
-
Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Engine.
The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
-
Specify whether to add entries for the Engine virtual machine and the base host to the virtual machine’s
/etc/hosts
file. You must ensure that the host names are resolvable. -
Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications.
-
Enter a password for the admin@internal user to access the Administration Portal.
The script creates the virtual machine. This can take some time if the Engine Appliance needs to be installed.
If the host becomes non-operational, due to a missing required network or a similar problem, the deployment pauses and a message similar to the following is displayed:
[ INFO ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]
Pausing the process allows you to:
-
Connect to the Administration Portal using the provided URL.
-
Assess the situation, find out why the host is non-operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks.
-
Once everything looks OK, and the host status is Up, remove the lock file presented in the message above. The deployment continues.
-
-
Select the type of storage to use:
-
For NFS, enter the version, full address and path to the storage, and any mount options.
Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
-
For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
-
For Gluster storage, enter the full address and path to the storage, and any mount options.
Do not use the old self-hosted engine storage domain’s mount point for the new storage domain, as you risk losing virtual machine data.
Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:
gluster volume set VOLUME_NAME group virt
gluster volume set VOLUME_NAME performance.strict-o-direct on
gluster volume set VOLUME_NAME network.remote-dio off
gluster volume set VOLUME_NAME storage.owner-uid 36
gluster volume set VOLUME_NAME storage.owner-gid 36
gluster volume set VOLUME_NAME network.ping-timeout 30
-
For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
-
-
Enter the Engine disk size.
The script continues until the deployment is complete.
-
The deployment process changes the Engine’s SSH keys. To allow client machines to access the new Engine without SSH errors, remove the original Engine’s entry from the
.ssh/known_hosts
file on any client machines that accessed the original Engine.
When the deployment is complete, log in to the new Engine virtual machine and enable the required repositories.
Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled.
For oVirt 4.5: If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.
# dnf install -y centos-release-ovirt45
As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories, ensuring that the latest fixes for platform regressions are promptly available.
For oVirt 4.4:
# dnf install -y centos-release-ovirt44
The following steps apply to both 4.4 and 4.5, on Enterprise Linux 8 only:
You can check which repositories are currently enabled by running dnf repolist.
-
Enable the javapackages-tools module:
# dnf module -y enable javapackages-tools
-
Enable the pki-deps module:
# dnf module -y enable pki-deps
-
Enable version 12 of the postgresql module:
# dnf module -y enable postgresql:12
-
Enable version 2.3 of the mod_auth_openidc module:
# dnf module -y enable mod_auth_openidc:2.3
-
Enable version 14 of the nodejs module:
# dnf module -y enable nodejs:14
-
Synchronize installed packages to update them to the latest available versions.
# dnf distro-sync --nobest
For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components.
The Engine and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Engine to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node.
Reinstalling Hosts
Reinstall oVirt Nodes and Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host, to avoid accidental initialization of these disks and potential data loss.
-
If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low.
-
Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance.
-
Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks.
-
Click Compute → Hosts and select the host.
-
Click Management → Maintenance, then click OK.
-
Click Installation → Reinstall. This opens the Install Host window.
-
Click the Hosted Engine tab and select DEPLOY from the drop-down list.
-
Click OK to reinstall the host.
After a host has been reinstalled and its status returns to Up, you can migrate virtual machines back to the host.
After you register an oVirt Node to the oVirt Engine and reinstall it, the Administration Portal may erroneously display its status as Install Failed. Click Management → Activate, and the host will change to an Up status and be ready for use.
After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:
# hosted-engine --vm-status
During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain.
Removing a Storage Domain
You have a storage domain in your data center that you want to remove from the virtualized environment.
-
Click Storage → Domains and select the storage domain.
-
Move the storage domain to maintenance mode and detach it:
-
Click the storage domain’s name. This opens the details view.
-
Click the Data Center tab.
-
Click Maintenance, then click OK.
-
Click Detach, then click OK.
-
-
Click Remove.
-
Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain.
-
Click OK.
The storage domain is permanently removed from the environment.
Overwriting a Self-Hosted Engine from an Existing Backup
If a self-hosted engine is accessible, but is experiencing an issue such as database corruption, or a configuration error that is difficult to roll back, you can restore the environment to a previous state using a backup taken before the problem began, if one is available.
Restoring a self-hosted engine’s previous state involves the following steps:
-
Enable global maintenance mode.
-
Restore the backup to overwrite the existing installation.
-
Disable global maintenance mode.
For more information about engine-backup --mode=restore
options, see Backing Up and Restoring the Engine.
Enabling global maintenance mode
You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Engine virtual machine.
-
Log in to one of the self-hosted engine nodes and enable global maintenance mode:
# hosted-engine --set-maintenance --mode=global
-
Confirm that the environment is in global maintenance mode before proceeding:
# hosted-engine --vm-status
You should see a message indicating that the cluster is in global maintenance mode.
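The exact wording varies by version, but the status output typically includes a banner similar to the following:
!! Cluster is in GLOBAL MAINTENANCE mode !!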
Restoring a Backup to Overwrite an Existing Installation
The engine-backup
command can restore a backup to a machine on which the oVirt Engine has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup.
Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes.
-
Log in to the Engine machine.
-
Remove the configuration files and clean the database associated with the Engine:
# engine-cleanup
The engine-cleanup command only cleans the Engine database; it does not drop the database or delete the user that owns that database.
-
Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist.
-
Restore a full backup:
# engine-backup --mode=restore --file=file_name --log=log_file_name --restore-permissions
-
Restore a database-only backup by restoring the configuration files and the database backup:
# engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file=file_name --log=log_file_name --restore-permissions
To restore only the Engine database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter.
If successful, the following output displays:
You should now run engine-setup. Done.
-
-
Reconfigure the Engine:
# engine-setup
Disabling global maintenance mode
-
Log in to the Engine virtual machine and shut it down.
-
Log in to one of the self-hosted engine nodes and disable global maintenance mode:
# hosted-engine --set-maintenance --mode=none
When you exit global maintenance mode, ovirt-ha-agent starts the Engine virtual machine, and then the Engine automatically starts. It can take up to ten minutes for the Engine to start.
-
Confirm that the environment is running:
# hosted-engine --vm-status
The listed information includes Engine status. The value for Engine status should be:
{"health": "good", "vm": "up", "detail": "Up"}
When the virtual machine is still booting and the Engine hasn’t started yet, the Engine status is:
{"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
If this happens, wait a few minutes and try again.
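Rather than re-running the command manually, you can poll the status until the Engine reports as up (a convenience sketch, assuming the watch utility is installed):
# watch -n 60 hosted-engine --vm-status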
When the environment is running again, you can start any virtual machines that were stopped, and check that the resources in the environment are behaving as expected.
3.2.2. Migrating the Data Warehouse to a Separate Machine
This section describes how to migrate the Data Warehouse database and service from the oVirt Engine machine to a separate machine. Hosting the Data Warehouse service on a separate machine reduces the load on each individual machine, and avoids potential conflicts caused by sharing CPU and memory resources with other processes.
Although you can technically install each of these components on a separate machine, oVirt only supports installing the Data Warehouse database, the Data Warehouse service, and Grafana together on the same machine.
You have the following migration options:
-
You can migrate the Data Warehouse service away from the Engine machine and connect it to the existing Data Warehouse database (ovirt_engine_history).
-
You can migrate the Data Warehouse database away from the Engine machine and then migrate the Data Warehouse service.
Migrating the Data Warehouse Database to a Separate Machine
Migrate the Data Warehouse database (ovirt_engine_history) before you migrate the Data Warehouse service. Use engine-backup to create a database backup and restore it on the new database machine. For more information on engine-backup, run engine-backup --help.
Although you can technically install each of these components on a separate machine, oVirt only supports installing the Data Warehouse database, the Data Warehouse service, and Grafana together on the same machine.
The new database server must have Enterprise Linux 8 installed.
Enable the required repositories on the new database server.
Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled.
For oVirt 4.5: If you are going to install on RHEL or derivatives, follow Installing on RHEL or derivatives first.
# dnf install -y centos-release-ovirt45
As discussed on the oVirt Users mailing list, we suggest that the user community use the oVirt master snapshot repositories, ensuring that the latest fixes for platform regressions are promptly available.
For oVirt 4.4:
# dnf install -y centos-release-ovirt44
The following steps apply to both 4.4 and 4.5, on Enterprise Linux 8 only:
You can check which repositories are currently enabled by running dnf repolist.
-
Enable the javapackages-tools module:
# dnf module -y enable javapackages-tools
-
Enable version 12 of the postgresql module:
# dnf module -y enable postgresql:12
-
Enable version 2.3 of the mod_auth_openidc module:
# dnf module -y enable mod_auth_openidc:2.3
-
Enable version 14 of the nodejs module:
# dnf module -y enable nodejs:14
-
Synchronize installed packages to update them to the latest available versions.
# dnf distro-sync --nobest
For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components.
Migrating the Data Warehouse Database to a Separate Machine
-
Create a backup of the Data Warehouse database and configuration files on the Engine:
# engine-backup --mode=backup --scope=grafanadb --scope=dwhdb --scope=files --file=file_name --log=log_file_name
-
Copy the backup file from the Engine to the new machine:
# scp /tmp/file_name root@new.dwh.server.com:/tmp
-
Install engine-backup on the new machine:
# dnf install ovirt-engine-tools-backup
-
Install the PostgreSQL server package:
# dnf install postgresql-server postgresql-contrib
-
Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:
# su - postgres -c 'initdb'
# systemctl enable postgresql
# systemctl start postgresql
-
Restore the Data Warehouse database on the new machine. file_name is the backup file copied from the Engine.
# engine-backup --mode=restore --scope=files --scope=grafanadb --scope=dwhdb --file=file_name --log=log_file_name --provision-dwh-db
When the --provision-* option is used in restore mode, --restore-permissions is applied by default.
The Data Warehouse database is now hosted on a separate machine from that on which the Engine is hosted. After successfully restoring the Data Warehouse database, a prompt instructs you to run the engine-setup
command. Before running this command, migrate the Data Warehouse service.
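Before migrating the service, you can confirm that the restored database exists on the new machine (a quick check; ovirt_engine_history should appear in the list of databases):
# su - postgres -c 'psql -l'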