This chapter describes the advantages, limitations, and available options for various {virt-product-fullname} components.
Host Types
Use the host type that best suits your environment. You can also use both types of host in the same cluster if required.
All managed hosts within a cluster must have the same CPU type. Intel and AMD CPUs cannot co-exist within the same cluster.
{hypervisor-fullname}s
{hypervisor-fullname}s ({hypervisor-shortname}) have the following advantages over {enterprise-linux-host-fullname}s:
- {hypervisor-shortname} is included in the subscription for {virt-product-fullname}. {enterprise-linux-host-fullname}s may require additional subscriptions.
- {hypervisor-shortname} is deployed as a single image. This results in a streamlined update process; the entire image is updated as a whole, as opposed to packages being updated individually.
- Only the packages and services needed to host virtual machines or manage the host itself are included. This streamlines operations and reduces the overall attack vector; unnecessary packages and services are not deployed and, therefore, cannot be exploited.
- The Cockpit web interface is available by default and includes extensions specific to {virt-product-fullname}, including virtual machine monitoring tools and a GUI installer for the self-hosted engine. Cockpit is supported on {enterprise-linux-host-fullname}s, but must be manually installed.
{enterprise-linux-host-fullname}s
{enterprise-linux-host-fullname}s have the following advantages over {hypervisor-fullname}s:
- {enterprise-linux-host-fullname}s are highly customizable, so they may be preferable if, for example, your hosts require a specific file system layout.
- {enterprise-linux-host-fullname}s are better suited for frequent updates, especially if additional packages are installed. Individual packages can be updated, rather than a whole image.
Storage Types
Each data center must have at least one data storage domain. An ISO storage domain per data center is also recommended. Export storage domains are deprecated, but can still be created if necessary.
A storage domain can be made of either block devices (iSCSI or Fibre Channel) or a file system.
By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO.
{virt-product-fullname} currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
The storage types described in the following sections are supported for use as data storage domains. ISO and export storage domains only support file-based storage types. The ISO domain supports local storage when used in a local storage data center.
See:
- Storage in the Administration Guide.
NFS
NFS versions 3 and 4 are supported by {virt-product-fullname} 4. Production workloads require an enterprise-grade NFS server, unless NFS is only being used as an ISO storage domain. When enterprise NFS is deployed over 10GbE, segregated with VLANs, and individual services are configured to use specific ports, it is both fast and secure.
As NFS exports are grown to accommodate more storage needs, {virt-product-fullname} recognizes the larger data store immediately. No additional configuration is necessary on the hosts or from within {virt-product-fullname}. This gives NFS a slight edge over block storage from a scale and operational perspective.
See:
- Preparing and Adding NFS Storage in the Administration Guide.
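For reference, an NFS export intended for use as a storage domain must be readable and writable by the vdsm user and kvm group (UID and GID 36), which is how the hosts access the storage. A minimal sketch of preparing such an export on the NFS server follows; the export path and client options are illustrative:

```shell
# Create the export directory (the path is an example).
mkdir -p /exports/data

# Hosts access NFS storage as vdsm:kvm (UID 36, GID 36).
chown 36:36 /exports/data
chmod 0755 /exports/data

# Publish the export and reload the NFS exports table.
echo "/exports/data *(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -r
```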
iSCSI
Production workloads require an enterprise-grade iSCSI server. When enterprise iSCSI is deployed over 10GbE, segregated with VLANs, and utilizes CHAP authentication, it is both fast and secure. iSCSI can also use multipathing to improve high availability.
{virt-product-fullname} supports 1500 logical volumes per block-based storage domain. No more than 300 LUNs are permitted.
See:
- Adding iSCSI Storage in the Administration Guide.
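As an illustration of the discovery step that precedes adding an iSCSI storage domain, targets on a portal can be listed and tested from a host with iscsiadm. The portal address and target IQN below are placeholders:

```shell
# Discover iSCSI targets on the storage server (address is an example).
iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Optionally log in to a discovered target to verify connectivity;
# the IQN below is hypothetical.
iscsiadm -m node -T iqn.2024-01.com.example:storage.lun1 -p 192.0.2.10:3260 --login
```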
Fibre Channel
Fibre Channel is both fast and secure, and should be taken advantage of if it is already in use in the target data center. It also has the advantage of low CPU overhead as compared to iSCSI and NFS. Fibre Channel can also use multipathing to improve high availability.
{virt-product-fullname} supports 1500 logical volumes per block-based storage domain. No more than 300 LUNs are permitted.
See:
- Adding FCP Storage in the Administration Guide.
Fibre Channel over Ethernet
To use Fibre Channel over Ethernet (FCoE) in {virt-product-fullname}, you must enable the fcoe key on the {engine-name}, and install the vdsm-hook-fcoe package on the hosts.
{virt-product-fullname} supports 1500 logical volumes per block-based storage domain. No more than 300 LUNs are permitted.
See:
- How to Set Up {virt-product-fullname} {engine-name} to Use FCoE in the Administration Guide.
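A sketch of the two FCoE steps described above, assuming the standard engine-config key and hook package names:

```shell
# On the {engine-name} machine: define the fcoe custom network property,
# then restart the engine service so the change takes effect.
engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*$'
systemctl restart ovirt-engine

# On each host: install the VDSM FCoE hook.
yum install vdsm-hook-fcoe
```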
oVirt Hyperconverged Infrastructure
oVirt Hyperconverged Infrastructure combines {virt-product-fullname} and {gluster-storage-fullname} on the same infrastructure, instead of connecting {virt-product-fullname} to a remote {gluster-storage-fullname} server. This compact option reduces operational expenses and overhead.
See:
- Deploying oVirt Hyperconverged Infrastructure for Virtualization
- Deploying oVirt Hyperconverged Infrastructure for Virtualization On A Single Node
- Automating Virtualization Deployment
POSIX-Compliant FS
Other POSIX-compliant file systems can be used as storage domains in {virt-product-fullname}, as long as they are clustered file systems, such as Red Hat Global File System 2 (GFS2), and support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with {virt-product-fullname}.
See:
- Global File System 2
- Adding POSIX Compliant File System Storage in the Administration Guide.
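Direct I/O support can be checked before committing to a file system. A common probe is a dd write with oflag=direct, which fails on file systems (such as CIFS) that lack O_DIRECT support. The mount point below is illustrative:

```shell
# Attempt a 4 KiB direct-I/O write on the candidate file system.
# A failure here indicates the file system cannot be used as a storage domain.
dd if=/dev/zero of=/mnt/candidate-fs/directio-probe bs=4096 count=1 oflag=direct

# Clean up the probe file.
rm -f /mnt/candidate-fs/directio-probe
```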
Local Storage
Local storage is set up on an individual host, using the host’s own resources. When you set up a host to use local storage, it is automatically added to a new data center and cluster that no other hosts can be added to. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled.
For {hypervisor-fullname}s, local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk.
See: Preparing and Adding Local Storage in the Administration Guide.
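On a {hypervisor-shortname} host, the separate file system recommended above could be carved out as a dedicated logical volume, sketched here with illustrative volume group, size, and mount point names:

```shell
# Create a dedicated logical volume for local storage
# (volume group name and size are examples).
lvcreate -L 100G -n local-storage vg0
mkfs.xfs /dev/vg0/local-storage

# Mount it outside of / (root) and make the mount persistent.
mkdir -p /data
echo "/dev/vg0/local-storage /data xfs defaults 0 0" >> /etc/fstab
mount /data
```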
Networking Considerations
Familiarity with network concepts and their use is highly recommended when planning and setting up networking in a {virt-product-fullname} environment. Read your network hardware vendor’s guides for more information on managing networking.
Logical networks can be supported using physical devices such as NICs, or logical devices such as network bonds. Bonding improves high availability and fault tolerance, because all network interface cards in the bond must fail for the bond itself to fail. Bonding modes 1, 2, 3, and 4 support both virtual machine and non-virtual machine network types. Modes 0, 5, and 6 only support non-virtual machine networks. {virt-product-fullname} uses mode 4 by default.
It is not necessary to have one device for each logical network, as multiple logical networks can share a single device by using Virtual LAN (VLAN) tagging to isolate network traffic. To make use of this feature, VLAN tagging must also be supported at the switch level.
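Outside of the Administration Portal, the equivalent host-side configuration can be sketched with nmcli: a mode 4 (802.3ad) bond over two NICs, with a tagged VLAN on top so that several logical networks can share the bond. Device names and the VLAN ID are examples:

```shell
# Create an 802.3ad (mode 4) bond; this is the mode used by default.
nmcli connection add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"

# Enslave two physical NICs (interface names are examples).
nmcli connection add type ethernet ifname enp1s0 master bond0
nmcli connection add type ethernet ifname enp2s0 master bond0

# Add a tagged VLAN (ID 100 is an example) on top of the bond;
# the switch ports must also be configured for this VLAN.
nmcli connection add type vlan ifname bond0.100 dev bond0 id 100
```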
The limits that apply to the number of logical networks that you may define in a {virt-product-fullname} environment are:
- The number of logical networks attached to a host is limited to the number of available network devices combined with the maximum number of Virtual LANs (VLANs), which is 4096.
- The number of networks that can be attached to a host in a single operation is currently limited to 50.
- The number of logical networks in a cluster is limited to the number of logical networks that can be attached to a host, as networking must be the same for all hosts in a cluster.
- The number of logical networks in a data center is limited only by the number of clusters it contains in combination with the number of logical networks permitted per cluster.
Take additional care when modifying the properties of the Management network (ovirtmgmt).
If you plan to use {virt-product-fullname} to provide services for other environments, remember that the services will stop if the {virt-product-fullname} environment stops operating.
{virt-product-fullname} is fully integrated with Cisco Application Centric Infrastructure (ACI), which provides comprehensive network management capabilities, eliminating the need to manually configure the {virt-product-fullname} networking infrastructure. The integration is performed by configuring {virt-product-fullname} on Cisco's Application Policy Infrastructure Controller (APIC) version 3.1(1) and later, according to Cisco's documentation.
Directory Server Support
During installation, {virt-product-fullname} {engine-name} creates a default admin user in a default internal domain. This account is intended for use when initially configuring the environment, and for troubleshooting. You can create additional users on the internal domain using ovirt-aaa-jdbc-tool. User accounts created on local domains are known as local users. See Administering User Tasks From the Command Line in the Administration Guide.
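For example, a local user can be created on the internal domain and given an initial password with ovirt-aaa-jdbc-tool, run on the {engine-name} machine. The user name, attributes, and validity date below are illustrative:

```shell
# Create a local user on the internal domain (names are examples).
ovirt-aaa-jdbc-tool user add jdoe --attribute=firstName=Jane --attribute=lastName=Doe

# Set an initial password; the validity date is an example.
ovirt-aaa-jdbc-tool user password-reset jdoe --password-valid-to="2030-01-01 00:00:00Z"
```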
You can also attach an external directory server to your {virt-product-fullname} environment and use it as an external domain. User accounts created on external domains are known as directory users. Attachment of more than one directory server to the {engine-name} is also supported.
The following directory servers are supported for use with {virt-product-fullname}. For more detailed information on installing and configuring a supported directory server, see the vendor’s documentation.
A user with permissions to read all users and groups must be created in the directory server specifically for use as the {virt-product-fullname} administrative user. Do not use the administrative user for the directory server as the {virt-product-fullname} administrative user.
See: Users and Roles in the Administration Guide.
Infrastructure Considerations
Local or Remote Hosting
The following components can be hosted on either the {engine-name} or a remote machine. Keeping all components on the {engine-name} machine is easier and requires less maintenance, so is preferable when performance is not an issue. Moving components to a remote machine requires more maintenance, but can improve the performance of both the {engine-name} and Data Warehouse.
- Data Warehouse database and service

  To host Data Warehouse on the {engine-name}, select Yes when prompted by engine-setup.

  To host Data Warehouse on a remote machine, select No when prompted by engine-setup, and see Installing and Configuring Data Warehouse on a Separate Machine in Installing {virt-product-fullname} as a standalone {engine-name} with remote databases.

  To migrate Data Warehouse post-installation, see Migrating Data Warehouse to a Separate Machine in the Data Warehouse Guide.

  You can also host the Data Warehouse service and the Data Warehouse database separately from one another.
- {engine-name} database

  To host the {engine-name} database on the {engine-name}, select Local when prompted by engine-setup.

  To host the {engine-name} database on a remote machine, see Preparing a Remote PostgreSQL Database in Installing {virt-product-fullname} as a standalone {engine-name} with remote databases before running engine-setup on the {engine-name}.

- Websocket proxy

  To host the websocket proxy on the {engine-name}, select Yes when prompted by engine-setup.
Self-hosted engine environments use an appliance to install and configure the {engine-name} virtual machine, so Data Warehouse, the {engine-name} database, and the websocket proxy can only be made external post-installation.
Remote Hosting Only
The following components must be hosted on a remote machine:
- DNS

  Due to the extensive use of DNS in a {virt-product-fullname} environment, running the environment's DNS service as a virtual machine hosted in the environment is not supported.

- Storage

  With the exception of local storage, the storage service must not be on the same machine as the {engine-name} or any host.

- Identity Management

  IdM (ipa-server) is incompatible with the mod_ssl package, which is required by the {engine-name}.