oVirt 4.1.0 Release Notes

The oVirt Project is pleased to announce the availability of the 4.1.0 release, as of February 1st, 2017.

oVirt is an open source alternative to VMware™ vSphere™, and provides an awesome KVM management interface for multi-node virtualization. This release is available now for Red Hat Enterprise Linux 7.3, CentOS Linux 7.3 (or similar).

To find out more about features which were added in previous oVirt releases, check out the previous versions' release notes. For a general overview of oVirt, read the Quick Start Guide and the about oVirt page.

Updated documentation is provided by our downstream [Red Hat Virtualization](https://access.redhat.com/documentation/en/red-hat-virtualization?version=4.0/)

  1. oVirt 4.1.0 Release Notes
    1. Install / Upgrade from previous versions
      1. Fedora / CentOS / RHEL
      2. oVirt Hosted Engine
      3. EPEL
    2. What's New in 4.1.0?
      1. Enhancements
        1. oVirt Engine
          1. Storage:
          2. Gluster
          3. Infra
          4. Integration
          5. Network
          6. SLA
          7. UX
          8. Virt
        2. oVirt Engine Dashboard
        3. oVirt Release Package
        4. VDSM
          1. Gluster
          2. Infra
          3. Network
          4. Storage
          5. Virt
        5. oVirt Hosted Engine Setup
        6. oVirt Hosted Engine HA
        7. oVirt Windows Guest Agent
        8. oVirt Cockpit Plugin
        9. imgbased
        10. oVirt Engine SDK 4 Java
        11. oVirt Engine SDK 4 Python
        12. oVirt image transfer daemon and proxy
        13. oVirt Release Package
        14. oVirt Engine
          1. Infra
          2. Integration
          3. SLA
          4. Virt
        15. VDSM
        16. oVirt Hosted Engine Setup
      2. Release Note
        1. oVirt Hosted Engine Setup
      3. Unclassified
        1. oVirt image transfer daemon and proxy
        2. oVirt Engine
          1. Gluster
          2. Infra
          3. Integration
          4. Network
          5. SLA
          6. Storage
          7. UX
          8. Virt
        3. oVirt Host Deploy
          1. Gluster
          2. Integration
        4. OTOPI
        5. VDSM JSON-RPC Java
        6. oVirt Engine Dashboard
        7. VDSM
          1. Gluster
          2. Infra
          3. Network
          4. SLA
          5. Storage
          6. Virt
        8. oVirt Hosted Engine Setup
          1. Storage
        9. oVirt Hosted Engine HA
          1. Integration
          2. SLA
        10. oVirt Windows Guest Agent
        11. oVirt Cockpit Plugin
          1. Gluster
          2. Node
          3. Virt
        12. oVirt Engine SDK 4 Ruby
        13. imgbased
        14. oVirt Engine SDK 4 Python
    3. Bug fixes
      1. oVirt image transfer daemon and proxy
      2. oVirt Engine
        1. Gluster
        2. Infra
        3. Integration
        4. Network
        5. SLA
        6. Storage
        7. UX
        8. Virt
      3. oVirt Host Deploy
      4. oVirt Engine DWH
      5. oVirt Setup Lib
      6. VDSM
        1. Infra
        2. Network
        3. Storage
        4. Virt
      7. oVirt Hosted Engine Setup
      8. oVirt Hosted Engine HA
      9. oVirt Cockpit Plugin
      10. imgbased
      11. Deprecated Functionality
        1. oVirt Host Deploy

Install / Upgrade from previous versions

Fedora / CentOS / RHEL

To install oVirt on a clean system, install the release package:

# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm

and then follow our Installation Guide

If you're upgrading from a previous release on Enterprise Linux 7, you just need to execute:

  # yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm
  # yum update "ovirt-*-setup*"
  # engine-setup

Upgrade on Fedora 23 is not supported and you should follow our Migration Guide in order to migrate to Fedora 24.

oVirt Hosted Engine

If you're going to install oVirt as Hosted Engine on a clean system please follow Hosted_Engine_Howto#Fresh_Install guide or the corresponding Red Hat Virtualization [Self Hosted Engine Guide](https://access.redhat.com/documentation/en/red-hat-virtualization/4.0/paged/self-hosted-engine-guide/)

If you're upgrading an existing Hosted Engine setup, please follow Hosted_Engine_Howto#Upgrade_Hosted_Engine guide or the corresponding Red Hat Virtualization Upgrade Guide

EPEL

TL;DR Don't enable all of EPEL on oVirt machines.

The ovirt-release package enables the EPEL repositories and includes several specific packages that are required from there. It also enables and uses the CentOS OpsTools SIG repos for other packages.

EPEL currently includes collectd 5.7.1, and the collectd package there includes the write_ plugin.

OpsTools currently includes collectd 5.7.0, and the write_ plugin is packaged separately.

ovirt-release does not use collectd from EPEL, so if you only use the packages it pulls in, you should be fine.
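For example, a sketch of an EPEL repo entry that keeps collectd out (the section name and file path may differ on your system):

  [epel]
  name=Extra Packages for Enterprise Linux 7
  enabled=1
  excludepkgs=collectd*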

If you want to use other packages from EPEL, you should make sure not to include collectd. Either use includepkgs and add the packages you need, or use excludepkgs=collectd*.

What's New in the 4.1.0 Async release?

On February 3rd, 2017 the oVirt team issued an async release of the ovirt-engine package, including a fix for:
  • [BZ 1417597](https://bugzilla.redhat.com/1417597) Failed to update template

What's New in 4.1.0?

Enhancements

oVirt Engine

Storage:
  • [BZ 1342919](https://bugzilla.redhat.com/1342919) [RFE] Make discard configurable by a storage domain rather than a host

    This feature makes it possible to configure "Discard After Delete" (DAD) per block storage domain.
    Up until now, one could get similar functionality by configuring the discard_enable parameter in the VDSM config file (please refer to BZ 981626 for more info). That caused each logical volume (disk or snapshot) that was about to be removed by this specific host to be discarded first.
    Now, one can enable DAD for a block storage domain rather than for a host, thereby decoupling the functionality from the execution. That is, no matter which host actually removes the logical volume, if DAD is enabled for a storage domain, each logical volume under this domain will be discarded before it is removed.

    For more information, please refer to the feature page:
    http://www.ovirt.org/develop/release-management/features/storage/discard-after-delete/
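    As a sketch (the attribute name is an assumption based on the feature page and the oVirt 4.1 API), DAD could be enabled on a block storage domain via the REST API:

    PUT /ovirt-engine/api/storagedomains/123
    Content-Type: application/xml

    <storage_domain>
      <discard_after_delete>true</discard_after_delete>
    </storage_domain>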
  • [BZ 1380365](https://bugzilla.redhat.com/1380365) [RFE][HC] - allow forcing import of a VM from a storage domain, even if some of its disks are not accessible.
    Feature: Add the ability to import partial VM
    Reason: The HCI DR solution is based on the concept that only data disks are replicated, while system disks are not. Currently, if some of the VM's disks are not replicated, the import of the VM fails. Since some of the disks have snapshots, they cannot be imported as floating disks.
    To allow DR to work, we need to force the import of a VM from a storage domain, even if some of its disks are not accessible.
    Result: The ability to import partial VMs is added, through REST only.
    The following is a REST request for importing a partial unregistered VM (Same goes for Template):
    POST /api/storagedomains/xxxxxxx-xxxx-xxxx-xxxxxx/vms/xxxxxxx-xxxx-xxxx-xxxxxx/register HTTP/1.1
    Accept: application/xml
    Content-type: application/xml

    <action>
    <cluster id='bf5a9e9e-5b52-4b0d-aeba-4ee4493f1072'></cluster>
    <allow_partial_import>true</allow_partial_import>
    </action>
  • [BZ 1241106](https://bugzilla.redhat.com/1241106) [RFE] Allow TRIM from within the guest to shrink thin-provisioned disks on iSCSI and FC storage domains
    Previously, discard commands (UNMAP SCSI commands) that were sent from the guest were ignored by qemu and were not passed on to the underlying storage. This meant that storage that was no longer in use could not be freed up.
    In this release, it is now possible to pass discard commands on to the underlying storage. A new property called Pass Discard was added to the Virtual Disk window. When it is selected, and all the restrictions are met, discard commands sent from the guest are no longer ignored by qemu and are passed on to the underlying storage. Unused blocks in the underlying storage's thinly provisioned LUNs are then marked as free, and the reported consumed space is reduced.
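    As a sketch (the element name is an assumption based on the oVirt 4.1 API), the same option could be set through the REST API on the disk attachment:

    PUT /ovirt-engine/api/vms/123/diskattachments/456
    Content-Type: application/xml

    <disk_attachment>
      <pass_discard>true</pass_discard>
    </disk_attachment>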
  • [BZ 1317429](https://bugzilla.redhat.com/1317429) [RFE] Improve HA failover, so that even when power fencing is not available, automatic HA will work without manual confirmation on host rebooted.
  • [BZ 1314387](https://bugzilla.redhat.com/1314387) [RFE][Tracker] - Provide streaming API for oVirt
    This feature adds the possibility to download oVirt images (e.g. VM disks) using oVirt's API.
  • [BZ 1246114](https://bugzilla.redhat.com/1246114) [RFE][scale] Snapshot deletion of poweredoff VM takes longer time.
    Previously, when the Virtual Machine was powered down, deleting a snapshot could potentially be a very long process. This was due to the need to copy the data from the base snapshot to the top snapshot, where the base snapshot is usually larger than the top snapshot.
    Now, when deleting a snapshot when the Virtual Machine is powered down, data is copied from the top snapshot to the base snapshot, which significantly reduces the time required to delete the snapshot.
  • [BZ 1302185](https://bugzilla.redhat.com/1302185) [RFE] Allow attaching shared storage domains to a local DC
    Feature: Allow attaching shared storage domains to a local DC
    Reason: With the ability to attach and detach a data domain (introduced in 3.5), data domains became a better option for moving VMs/Templates around than an export domain. In order to gain this ability in local DCs, it should be possible to attach a Storage Domain of a shared type to that DC.
    Result: The user will now have the ability to change an initialized Data Center type (Local vs Shared). The following updates will be available:
    1. Shared to Local - only for a Data Center that does not contain more than one host or more than one cluster, since a local Data Center does not support them. The engine validates and blocks this operation with the following messages:
    CLUSTER_CANNOT_ADD_MORE_THEN_ONE_HOST_TO_LOCAL_STORAGE
    VDS_CANNOT_ADD_MORE_THEN_ONE_HOST_TO_LOCAL_STORAGE
    2. Local to Shared - only for a Data Center that does not contain a local Storage Domain. The engine validates and blocks this operation with the following message: ERROR_CANNOT_CHANGE_STORAGE_POOL_TYPE_WITH_LOCAL.
  • [BZ 827529](https://bugzilla.redhat.com/827529) [RFE] QCOW2 v3 Image Format
    This release introduces QCOW2 v3, which has a compatibility level of 1.1. This enables QEMU to use such volumes in a more efficient way, with its improved performance capabilities. In addition, it is fully backwards-compatible with the QCOW2 feature set, it is easy to upgrade from QCOW2 v2 to QCOW2 v3, and it supports extensibility.
  • [BZ 1379771](https://bugzilla.redhat.com/1379771) Introduce a 'force' flag for updating a storage server connection
    To allow updating a storage server connection regardless of the associated storage domain's status (i.e. updating also when the domain is not in Maintenance), a 'force' flag was introduced.
    For example:
    PUT /ovirt-engine/api/storageconnections/123?force
  • [BZ 1408876](https://bugzilla.redhat.com/1408876) Deactivating a storage domain containing leases of running VMs should be blocked
    This release enables Virtual Machines to lease areas on the storage domain. If a Virtual Machine has a lease on a storage domain, it will not be possible to move this storage domain into maintenance mode.
    If the user attempts to do so, an error message will appear explaining that a virtual machine currently has a lease on this storage.
Gluster
  • [BZ 1398593](https://bugzilla.redhat.com/1398593) RFE: Integrate geo-replication based DR sync for storage domain
    This feature integrates the setup for data sync to a remote location using geo-replication for Gluster-based storage domains, to improve disaster recovery. A user is able to schedule data sync to a remote location from the Red Hat Virtualization UI.
  • [BZ 1196433](https://bugzilla.redhat.com/1196433) [RFE] [HC] entry into maintenance mode should consider whether self-heal is ongoing
    Previously, in GlusterFS, if a node went down and then returned, GlusterFS would automatically initiate a self-heal operation. During this operation, which could be time-consuming, a subsequent maintenance mode action within the same GlusterFS replica set could result in a split-brain scenario.
    In this release, if a Gluster host is performing a self-heal activity, administrators will not be able to move it into maintenance mode. In extreme cases, administrators can use the force option to forcefully move a host into maintenance mode.
  • [BZ 1182369](https://bugzilla.redhat.com/1182369) [RFE][HC] - glusterfs volume create/extend should fail when bricks from the same server
    Previously in a hyper-converged cluster environment containing gluster and virt nodes, it was possible to create a replica set containing bricks from the same server. A warning appeared but the action was enabled even though there was a risk of losing data or service. In this release, it will no longer be possible to create a replica set containing multiple bricks from the same server in a hyper-converged environment.
  • [BZ 1177782](https://bugzilla.redhat.com/1177782) [RFE][HC] – link to gluster volumes while creating storage domains
    This update provides a link to the gluster volume when creating a gluster storage domain, and enables a single unified flow.

    This enables the backup-volfile-servers mount option to be auto-populated, and paves the way for integration features like Disaster Recovery setup using gluster geo-replication.
  • [BZ 1364999](https://bugzilla.redhat.com/1364999) [RFE] Show gluster volume info in ovirt dashboard
    The Red Hat Virtualization dashboard now displays gluster volume information. This enables the user to see a summary of all gluster volumes in the system.
Infra
  • [BZ 1347631](https://bugzilla.redhat.com/1347631) [RFE] adding logging to REST API calls
    This feature adds the /var/log/<httpd/ovirt-requests-log> file, which logs all requests made to the oVirt engine via HTTPS and how long each request took. The 'Correlation-Id' header is included, for easier correlation of requests with engine.log.
    Correlation-Ids are now generated automatically for every request and can be passed to the REST API via the Correlation-Id header or the correlation_id query parameter.
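    For example, a caller could tag a request for later correlation with engine.log (the header name is taken from above; the id value is arbitrary):

    GET /ovirt-engine/api/vms HTTP/1.1
    Correlation-Id: my-debug-42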
  • [BZ 1024063](https://bugzilla.redhat.com/1024063) [RFE] Provide way to reboot host without using Power Management
    Previously, it was impossible to reboot a host without using Power Management. In this release, it is now possible to shut down and reboot a host without it. From the Management menu, a new option called SSH Management is available, enabling administrators to select either Restart or Stop.
  • [BZ 1406814](https://bugzilla.redhat.com/1406814) [RFE] Add ability to disable automatic checks for upgrades on hosts
    This fix allows administrators to set the engine-config option "HostPackagesUpdateTimeInHours" to 0, which disables automatic periodic checks for host upgrades. Automatic periodic checks are not always needed, for example when managing hosts using Satellite.
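    For example (a sketch; run on the engine machine, then restart the engine service for the change to take effect):

    # engine-config -s HostPackagesUpdateTimeInHours=0
    # systemctl restart ovirt-engine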
  • [BZ 1279378](https://bugzilla.redhat.com/1279378) [RFE] Add manual execution of 'Check for upgrades' into webadmin and RESTAPI
    A new menu item 'Check for Upgrade' has been added to Webadmin Installation menu. This can be used to trigger checking for upgrades on the host.

    The check for upgrades can also be triggered via the REST API, using the host's upgradecheck endpoint.
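    As a sketch (the endpoint name is taken from above), the check can be invoked for a host with id 123:

    POST /ovirt-engine/api/hosts/123/upgradecheck
    Content-Type: application/xml

    <action/>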
  • [BZ 1286632](https://bugzilla.redhat.com/1286632) [RFE] When editing fence agents, options displayed should be specific to that agent
    In this release, a link has been added to the Edit fence agent window which opens the online help and displays information about the parameters that can be set for fence agents.
  • [BZ 1343562](https://bugzilla.redhat.com/1343562) Updates should not be checked on hosts on maintenance
    Feature:
    Before this patch, all hosts in status Up, Maintenance or NonOperational were checked for updates. Unfortunately, hosts in status Maintenance may not be reachable, which caused unnecessary errors to be shown in Events.
    From now on, only hosts in status Up or NonOperational are checked for upgrades.
  • [BZ 1295678](https://bugzilla.redhat.com/1295678) [RFE] better error messages for beanvalidation validation failures.
  • [BZ 1092907](https://bugzilla.redhat.com/1092907) [RFE][notifier] Implement logging of successful sending of mail notification
    Previously, when notification emails were successfully sent to a configured SMTP server, a success message did not appear in the notifier.log file. In this release, when a message is successfully sent to an SMTP server, the following message appears in the notifier.log file:
    E-mail subject='…' to='…' sent successfully
  • [BZ 1126753](https://bugzilla.redhat.com/1126753) [RFE]Map PM iLO3 and iLO4 to their native agents
Integration
  • [BZ 1270719](https://bugzilla.redhat.com/1270719) [RFE] Add an option to automatically accept defaults
    Feature: Add an option '--accept-defaults' to engine-setup that makes it not prompt for answers to questions that supply a default one, but instead accept the default.
    Reason:
    1. Save users from repeatedly pressing Enter if they already know that the defaults are good enough for them.
    2. Lower the maintenance burden for other tools that want to run engine-setup unattended - if they use this option, they will not break when a question with a default answer is added in the future.
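    For example, an unattended run could look like:

    # engine-setup --accept-defaults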
  • [BZ 1235200](https://bugzilla.redhat.com/1235200) [RFE] Make it easier to remove hosts when restoring hosted-engine from backup
    Previously, when restoring a backup of a hosted engine on a different environment, for disaster recovery purposes, administrators were sometimes required to remove the previous hosts from the engine. This was accomplished from within the engine's database, which is a risk-prone procedure.
    In this release, a new CLI option can be used during the restore procedure to enable administrators to remove the previous host directly from the engine backup.
  • [BZ 1300947](https://bugzilla.redhat.com/1300947) engine-backup user experience need to be improved
Network
  • [BZ 994283](https://bugzilla.redhat.com/994283) [RFE] Per cluster MAC address pool
    Feature: MAC Pool association was altered, so that it's possible to attach different MAC Pool to each individual cluster.
  • [BZ 1038550](https://bugzilla.redhat.com/1038550) [RFE] RHEV-M portal should highlight primary interface in bond configured using 'primary' option in custom mode.
  • [BZ 1317447](https://bugzilla.redhat.com/1317447) [RFE] Ability to choose new Mac address from pool when importing VMs from data storage domain.
    Feature: The feature allows a user to request oVirt to assign a new MAC address when importing a VM from a data storage domain (disaster recovery) and the current MAC address is bad.
    Reason: Importing a VM with a bad MAC address might cause a MAC collision in the target LAN.
    A MAC address is considered "bad" when it is already in use in the target oVirt cluster or is out of the range of the target cluster's MAC pool.
    Result: A user is able to request oVirt to assign a new MAC address when importing a VM from a data storage domain.
  • [BZ 1277675](https://bugzilla.redhat.com/1277675) [RFE] Ability to change network information in a VM import from storage domain in DR scenario
    Feature: The feature enables mapping external vNIC profiles that are defined on the imported VM to those present in the cluster the VM is going to be imported to.
    Reason: The current solution exchanges all external vNIC profiles that are not present in the target cluster for the empty profile, which leaves such an imported VM without network functionality.
    Result: After importing a VM from a data domain (disaster recovery flow), it is configured properly according to the vNIC profiles defined in the cluster the VM was imported to.
  • [BZ 1226206](https://bugzilla.redhat.com/1226206) [RFE] Ability to choose new Mac address from pool when importing VMs from data storage domain.
    Feature: The feature allows a user to request oVirt to assign a new MAC address when importing a VM from a data storage domain (disaster recovery) and the current MAC address is bad.
    Reason: Importing a VM with a bad MAC address might cause a MAC collision in the target LAN.
    A MAC address is considered "bad" when it is already in use in the target oVirt cluster or is out of the range of the target cluster's MAC pool.
    Result:
    A user is able to request oVirt to assign a new MAC address in the flow of importing a VM from a data storage domain.
SLA
  • [BZ 1392393](https://bugzilla.redhat.com/1392393) [RFE] Soft host to VM affinity support
    Support for virtual machine to host affinity has been added. This enables users to create affinity groups for virtual machines to be associated with designated hosts. Virtual machine host affinity can be disabled or enabled on request.

    Virtual machine to host affinity is useful in the following scenarios:
    - Hosts with specific hardware are required by certain virtual machines.
    - Virtual machines that form a logical management unit can be run on a certain set of hosts for SLA or management reasons, for example a separate rack for each customer.
    - Virtual machines with licensed software must run on specific physical machines to avoid scheduling virtual machines to hosts that need to be decommissioned or upgraded.
  • [BZ 1404660](https://bugzilla.redhat.com/1404660) VM affinity: enforcement mechanism adjustments
    This feature adds rule enforcement support for VM to host affinity. VM to host affinity groups require the affinity rule enforcer to handle them in addition to the existing enforcement of VM to VM affinity. The rule enforcer will now be able to find VM to host affinity violations and choose a VM to migrate according to these violations.
  • [BZ 1392418](https://bugzilla.redhat.com/1392418) [RFE] - improve usability of global maintenance buttons for HE environments.
    The user experience for HA global maintenance has been improved in the UI by moving the options to a more logical location, and providing a visual indication about the current state of HA global maintenance for a given host.

    The "Enable HA Global Maintenance" and "Disable HA Global Maintenance" buttons are now displayed on the right-click menu for hosts instead of virtual machines, and reflect the global maintenance state of the host by disabling the button matching the host's current HA global maintenance state.

    The previous method of displaying the options for virtual machines was unintuitive; additionally, both the enable and disable options remained available regardless of whether or not the host was in HA global maintenance mode.
  • [BZ 1392407](https://bugzilla.redhat.com/1392407) [RFE] - HE hosts should have indicators and a way to filter them from the rest of the hosts.
  • [BZ 1392412](https://bugzilla.redhat.com/1392412) [RFE] - HE storage should have a indicator.
  • [BZ 1135976](https://bugzilla.redhat.com/1135976) Edit pinned vm placement option clear vm cpu pinning options without any error message
    Feature: Added a dialog warning the user about losing CPU pinning information when saving a VM.

    Reason: Previously, CPU pinning information was silently lost.

    Result:
    Now the user is notified before it is lost, with a chance to cancel the operation.
  • [BZ 1306263](https://bugzilla.redhat.com/1306263) Normalize policy unit weights
    The weighting for virtual machine scheduling has been updated. The best host for the virtual machine is now selected using a weighted rank algorithm instead of the pure sum of weights. A separate rank is calculated for the policy unit and host, and the weight multiplier is then used to multiply the ranks for the given policy unit. The host with the highest number is selected.

    The reason for the change is that current weight policy units do not use a common result value range. Each unit reports numbers as needed, and this causes issues with user configured preferences. For example, memory (which has high numbers) always wins over CPU (0-100).

    This update ensures that the impact of the policy unit multiplier for the scheduling policy configuration is more predictable. However, users that are using it should check their configuration for sanity when upgrading.
UX
  • [BZ 1353556](https://bugzilla.redhat.com/1353556) UX: login to the admin portal is going first to the VMs tab, then hops to the dashboard UI plugin
    Feature: oVirt 4.0 introduced a new "Dashboard" tab in the WebAdmin UI. This tab is implemented via an oVirt UI plugin (ovirt-engine-dashboard) and is therefore loaded asynchronously.
    Reason: When loading the WebAdmin UI, the user landed at the "Virtual Machines" tab, followed by an immediate switch to the "Dashboard" tab. This hindered the overall user experience, since the general intention is to have the user land at the "Dashboard" tab.
    Result: The UI plugin infrastructure was improved to allow pre-loading UI plugins such as ovirt-engine-dashboard. The end result is the user landing directly at the "Dashboard" tab, with no intermediate switch to "Virtual Machines".
Virt
  • [BZ 734120](https://bugzilla.redhat.com/734120) [RFE] use virt-sparsify to reduce image size
    See "Sparsifying a Virtual Disk" in http://www.ovirt.org/documentation/admin-guide/administration-guide/
  • [BZ 1344521](https://bugzilla.redhat.com/1344521) [RFE] when GA data are missing, a warning should be shown in webadmin asking the user to install/start the GA
    Previously, if the guest agent was not running or was out of date, the hover text next to the exclamation mark for the problematic Virtual Machine informed the user that the operating system did not match or that the timezone configuration was incorrect. In this release, the hover text correctly displays a message informing the user that the guest agent needs to be installed and running in the guest.
  • [BZ 1097589](https://bugzilla.redhat.com/1097589) [RFE] [7.3] Hot Un-Plug CPU - Support dynamic virtual CPU deallocation
    This release adds support for CPU hot unplug to Red Hat Virtualization. Note that the guest operating system must also support the feature, and only previously hot plugged CPUs can be hot unplugged.
  • [BZ 1036221](https://bugzilla.redhat.com/1036221) [RFE] Automatic prompt for cert import for HTML5 console
    If the web console (noVNC or SPICE HTML5) cannot connect to the websocket proxy server, a popup is shown suggesting what should be checked. The popup contains a link to the default CA certificate.
  • [BZ 1294629](https://bugzilla.redhat.com/1294629) Improve loading external VMs speed
    Feature: Improve the loading performance of external VMs from external server. Done for the following sources: VMware, KVM, Xen.

    Reason: To display the list of VMs to import in the first dialog, there is no need to ask libvirt for the full information for each VM; since that takes a few seconds per VM, load time is improved by retrieving only the VM names in that phase.

    Result: Only the VM names are displayed in the first import dialog; the full VM data is retrieved and shown in the second dialog only after the user selects the VMs to import and clicks the "Next" button.
  • [BZ 1388724](https://bugzilla.redhat.com/1388724) [RFE] Guest Support for Windows Server 2016 in RHV.
    Added Guest support for Windows Server 2016 in RHV/oVirt
  • [BZ 1381184](https://bugzilla.redhat.com/1381184) [RFE] allow starting VMs without graphical console (headless)
    Red Hat Virtualization now supports headless virtual machines that run without a graphical console and display device. Headless mode is also supported for templates, pools and instance types. This feature supports running a headless virtual machine from start, or after the initial setup (after "Run Once"). Headless mode can be enabled or disabled for a new or existing virtual machine at any time.
  • [BZ 1360983](https://bugzilla.redhat.com/1360983) Setting VM name as hostname automatically missing in RunOnce
    Feature: Host name is set automatically to VM name in RunOnce

    Reason: More user-friendly

    Result: The host name is set to VM name by default in RunOnce dialog. The user can change it, if needed.
  • [BZ 1374227](https://bugzilla.redhat.com/1374227) Add /dev/urandom as entropy source for virtio-rng
    The random number generator source '/dev/random' is no longer optional (the checkbox in cluster dialogs was removed) and is required on all hosts.

    A random number generator (RNG) device was added to the Blank template and the predefined instance types. This means that new VMs will have an RNG device by default.

    Note: The RNG device was not added to user-created instance types or templates (to avoid unexpected changes in behavior), so if you want new VMs that are created based on custom instance types or templates to have an RNG device, it needs to be added to those instance types / templates manually.
  • [BZ 1392872](https://bugzilla.redhat.com/1392872) [RFE] Add Skylake CPU model
    Intel Skylake family CPUs are now supported
  • [BZ 1399142](https://bugzilla.redhat.com/1399142) [RFE] Change disk default interface to virtio-scsi
    Feature: Change default disk interface type from virtio-blk to virtio-scsi.

    Reason: Motivate users to use better and more modern default for disk interfaces. (virtio-blk will still be supported)

    Result: Now, when creating or attaching a disk to a VM, the virtio-scsi interface type is selected by default.
  • [BZ 1081536](https://bugzilla.redhat.com/1081536) [RFE] Making VM pools able to allocate VMs to multiple storage domains to balance disk usage
    With this release, when creating virtual machine pools using a template that is present in more than one storage domain, virtual machine disks can be distributed to multiple storage domains by selecting "Auto select target" in New Pool -> Resource Allocation -> Disk Allocation.
  • [BZ 1161625](https://bugzilla.redhat.com/1161625) [RFE] Expose creator of vm via api and/or gui
    Feature: Search VMs on CREATED_BY_USER_ID

    Reason: The user can query VMs on CREATED_BY_USER_ID (REST API).

    Result:
    The REST API search query is extended for:
    …/api/vms?search=created_by_user_id%3D[USER_ID]

    The User ID can be retrieved, e.g., by the following REST call:
    …/api/users

    Please note, the user may have been removed from the system since the VM was created.

    In addition, the Administration Portal shows the creator's name (or login) in the VM's General subtab.
  • [BZ 1364456](https://bugzilla.redhat.com/1364456) VM's cluster compatibility version override does not change the default machine type
    A virtual machine snapshot with memory from a previous cluster version can now be previewed.

    The virtual machine's custom compatibility version will be temporarily set to the previous cluster version. The custom compatibility version is reverted by undoing the preview, or via a cold reboot (shut down and restart).
  • [BZ 1388245](https://bugzilla.redhat.com/1388245) [RFE] Configurable maximum memory size
    This release adds the ability to specify a Maximum Memory value in all VM-like dialogs (Virtual Machine, Template, Pool, and Instance Type). It is accessible in the {vm, template, instance_type}/memory_policy/max tag in the REST API. The value defines the upper limit to which memory hot plug can be performed. The default value is 4x memory size.
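    As a sketch (using the tag path given above; the value is in bytes), the maximum memory of a VM could be set via the REST API:

    PUT /ovirt-engine/api/vms/123
    Content-Type: application/xml

    <vm>
      <memory_policy>
        <max>4294967296</max>
      </memory_policy>
    </vm>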
  • [BZ 1337101](https://bugzilla.redhat.com/1337101) [RFE] enable virtio-rng /dev/urandom by default
    Previously, when creating a cluster, selecting /dev/random as the random number generator source was optional. In this release, this source is no longer optional, as it is required by all hosts; it has therefore been removed from the relevant windows. The random number generator (RNG) device has been added to the Blank template and to predefined instance types, so new virtual machines will have the RNG device by default.
    Note that the RNG device was not added to user-created instance types or templates, and administrators must manually add the RNG device to new virtual machines based on these instance types or templates.
  • [BZ 1383342](https://bugzilla.redhat.com/1383342) [RFE] API ticket support in graphics devices
    Feature: Allow requesting console ticket for specific graphics device via REST API.

    Reason: The existing endpoint /api/vms/{vmId}/ticket defaulted to SPICE when SPICE+VNC was configured as the graphics protocol, making it impossible to request a VNC ticket.

    Result: A ticket action was added to the /api/vms/{vmId}/graphicsconsoles/{consoleId} resource, making it possible to request a ticket for a specific console. This per-console endpoint should be preferred from now on, and the preexisting per-VM endpoint /api/vms/{vmId}/ticket should be considered deprecated.
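The two endpoints can be contrasted as follows (a sketch; the IDs are placeholders, and the per-console path assumes the usual oVirt REST action layout):

```python
# Sketch: deprecated per-VM ticket endpoint vs. the per-console one
# added by this change. IDs are placeholders.
def legacy_ticket_path(vm_id):
    # Deprecated: defaults to SPICE when SPICE+VNC is configured.
    return "/api/vms/{}/ticket".format(vm_id)

def console_ticket_path(vm_id, console_id):
    # Preferred: requests a ticket for one specific graphics console.
    return "/api/vms/{}/graphicsconsoles/{}/ticket".format(vm_id, console_id)

print(console_ticket_path("vm1", "console1"))
```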
  • [BZ 1333436](https://bugzilla.redhat.com/1333436) [RFE] drop Legacy USB
    Previously, support for Legacy USB was deprecated and the UI displayed three options: Native, Legacy (Deprecated) and Disabled. In this release, the Legacy option has been completely removed and the UI now displays two options: Enabled and Disabled.
  • [BZ 1333045](https://bugzilla.redhat.com/1333045) original template field is not exposed to REST API
    Feature: A new 'original_template' property is introduced for the 'vm' REST API resource.

    Reason: A cloned VM had its template set to Blank, regardless of which template the original VM was based on.

    Result: Users can now get information about the template the VM was based on before cloning.
  • [BZ 1349321](https://bugzilla.redhat.com/1349321) [RFE] Implement option for adding XEN as external providers
    Users can save a provider for an external Xen on RHEL connection in the Providers tree section.
    When importing a VM from Xen on RHEL into the oVirt environment, the saved provider address can be used instead of re-entering the address.
  • [BZ 1348107](https://bugzilla.redhat.com/1348107) [RFE] Implement option for adding KVM as external providers
    Users can save a provider for an external libvirt connection in the Providers tree section.
    When importing a VM from libvirt+KVM into the oVirt environment, the saved provider address can be used instead of re-entering the address.
  • [BZ 1341153](https://bugzilla.redhat.com/1341153) [RFE] 'Remove' template dialog on an export domain should show subversion name
    Feature: Include the template's subversion name and subversion number in the "Remove Template(s)" dialogs.

    Reason: When choosing templates to remove, the dialog showed only the template name, making it hard to distinguish between template subversions.

    Result: After the fix, the two template removal dialogs display the following:
    Are you sure you want to remove the following items?
    - template-name (Version: subversion-name(subversion-number))
  • [BZ 1373223](https://bugzilla.redhat.com/1373223) Use nec-xhci USB controller by default on ppc64
    If SPICE USB redirection is enabled (VM-like dialog > Console > USB Support), the behavior remains unchanged: each VM has a quadruple of USB controllers: ich9-ehci1, ich9-uhci1, ich9-uhci2, ich9-uhci3.
    If SPICE USB redirection is disabled, the VM now has the USB controller specified in the osinfo-defaults.properties configuration file; that is, it is configurable per guest operating system and effective cluster version. Previously, no USB controller was sent to libvirt, and libvirt created a default USB controller.

    The default for all Intel (x86, x86-64) operating systems is "piix3-uhci"; for ppc64 systems it is "nec-xhci".

    The osinfo key is "devices.usb.controller"; an example configuration line:

    os.other.devices.usb.controller.value = piix3-uhci

    Allowed configuration values are:
    "piix3-uhci" | "piix4-uhci" | "ehci" | "ich9-ehci1" | "ich9-uhci1" | "ich9-uhci2" | "ich9-uhci3" | "vt82c686b-uhci" | "pci-ohci" | "nec-xhci" | "qusb1" | "qusb2" | "none".
    Partially documented at https://libvirt.org/formatdomain.html#elementsControllers.
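A sketch of validating such a configuration line against the allowed values listed above (the helper name is illustrative, not part of oVirt):

```python
# Sketch: build an osinfo USB-controller line, rejecting values outside
# the allowed set listed above. Helper name is illustrative.
ALLOWED_USB_CONTROLLERS = {
    "piix3-uhci", "piix4-uhci", "ehci", "ich9-ehci1", "ich9-uhci1",
    "ich9-uhci2", "ich9-uhci3", "vt82c686b-uhci", "pci-ohci",
    "nec-xhci", "qusb1", "qusb2", "none",
}

def osinfo_usb_line(os_key, controller):
    """Return a line like 'os.other.devices.usb.controller.value = piix3-uhci'."""
    if controller not in ALLOWED_USB_CONTROLLERS:
        raise ValueError("unsupported USB controller: " + controller)
    return "os.{}.devices.usb.controller.value = {}".format(os_key, controller)

print(osinfo_usb_line("other", "nec-xhci"))
```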

oVirt Engine Dashboard

  • [BZ 1353556](https://bugzilla.redhat.com/1353556) UX: login to the admin portal is going first to the VMs tab, then hops to the dashboard UI plugin
    Feature: oVirt 4.0 introduced a new "Dashboard" tab in the WebAdmin UI. This tab is implemented via an oVirt UI plugin (ovirt-engine-dashboard) and is therefore loaded asynchronously.
    Reason: When loading the WebAdmin UI, the user landed on the "Virtual Machines" tab, followed by an immediate switch to the "Dashboard" tab. This hindered the overall user experience, since the general intention is for the user to land on the "Dashboard" tab.
    Result: The UI plugin infrastructure has been improved to allow pre-loading UI plugins such as ovirt-engine-dashboard. The end result is the user landing directly on the "Dashboard" tab (with no intermediate switch to "Virtual Machines").

oVirt Release Package

VDSM

Gluster
  • [BZ 1361115](https://bugzilla.redhat.com/1361115) [RFE] Add fencing policies for gluster hosts
    Feature: Add Gluster-related fencing policies for hyper-converged clusters.
    Reason: The currently available fencing policies do not take Gluster processes into account. In hyper-converged mode, fencing policies are needed to ensure that a host is not fenced if:
    1. there is a brick process running
    2. shutting down the host with an active brick would cause loss of quorum
    Result:
    The following fencing policies have been added for hyper-converged clusters:
    1. SkipFencingIfGlusterBricksUp
    Fencing is skipped if bricks are running and can be reached from other peers.
    2. SkipFencingIfGlusterQuorumNotMet
    Fencing is skipped if bricks are running and shutting down the host would cause loss of quorum.
Infra
  • [BZ 1141422](https://bugzilla.redhat.com/1141422) [RFE] Show vdsm thread name in system monitoring tools
    Feature: Show the thread name in system monitoring tools.
    Reason: VDSM uses many threads; this makes it easier to track the resource usage of each thread.
    Result: VDSM now uses descriptive system names for its threads.
Network
Storage
  • [BZ 1317429](https://bugzilla.redhat.com/1317429) [RFE] Improve HA failover, so that even when power fencing is not available, automatic HA will work without manual confirmation on host rebooted.
  • [BZ 1246114](https://bugzilla.redhat.com/1246114) [RFE][scale] Snapshot deletion of poweredoff VM takes longer time.
    Previously, when the Virtual Machine was powered down, deleting a snapshot could potentially be a very long process. This was due to the need to copy the data from the base snapshot to the top snapshot, where the base snapshot is usually larger than the top snapshot.

    Now, when deleting a snapshot when the Virtual Machine is powered down, data is copied from the top snapshot to the base snapshot, which significantly reduces the time required to delete the snapshot.
  • [BZ 1342919](https://bugzilla.redhat.com/1342919) [RFE] Make discard configurable by a storage domain rather than a host
    This feature makes it possible to configure "Discard After Delete" (DAD) per block storage domain.

    Up until now, similar functionality could be obtained by configuring the discard_enable parameter in the VDSM configuration file (refer to BZ 981626 for more information). That caused each logical volume (disk or snapshot) that was about to be removed by that specific host to be discarded first.
    Now, DAD can be enabled for a block storage domain rather than for a host, thereby decoupling the functionality from the execution. That is, no matter which host actually removes the logical volume, if DAD is enabled for a storage domain, each logical volume under that domain will be discarded before it is removed.

    For more information, please refer to the feature page:
    http://www.ovirt.org/develop/release-management/features/storage/discard-after-delete/
  • [BZ 1241106](https://bugzilla.redhat.com/1241106) [RFE] Allow TRIM from within the guest to shrink thin-provisioned disks on iSCSI and FC storage domains
    Previously, discard commands (UNMAP SCSI commands) that were sent from the guest were ignored by qemu and were not passed on to the underlying storage. This meant that storage that was no longer in use could not be freed up.
    In this release, it is now possible to pass on discard commands to the underlying storage. A new property called Pass Discard was added to the Virtual Disk window. When it is selected, and if all the restrictions are met, discard commands sent from the guest are no longer ignored by qemu and are passed on to the underlying storage. The reported unused blocks in the underlying storage's thinly provisioned LUNs are marked as free, and the reported consumed space is reduced.
  • [BZ 827529](https://bugzilla.redhat.com/827529) [RFE] QCOW2 v3 Image Format
    This release introduces QCOW2 v3, which has a compatibility level of 1.1. This enables QEMU to use the volume more efficiently, taking advantage of its improved performance capabilities. In addition, it is fully backwards-compatible with the QCOW2 feature set, upgrading from QCOW2 v2 to QCOW2 v3 is easy, and it supports extensibility.
Virt
  • [BZ 1354343](https://bugzilla.redhat.com/1354343) [RFE] Add support for post copy migration (tech preview)
    This update includes the Post-copy migration policy, which is available as a Technology Preview feature. The policy is similar to the Minimal Downtime policy, and enables the virtual machine to start running on the destination host as soon as possible. During the final phase of the migration (the post-copy phase), the missing parts of the memory content are transferred between the hosts on demand. This guarantees that the migration will eventually converge with very little downtime. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. If anything goes wrong during the post-copy phase, such as a network failure between the hosts, the running virtual machine instance will be lost. It is therefore not possible to abort a migration during the post-copy phase.
  • [BZ 734120](https://bugzilla.redhat.com/734120) [RFE] use virt-sparsify to reduce image size
    See "Sparsifying a Virtual Disk" in http://www.ovirt.org/documentation/admin-guide/administration-guide/
  • [BZ 1294629](https://bugzilla.redhat.com/1294629) Improve loading external VMs speed
    Feature: Improve the loading performance of external VMs from an external server. Implemented for the following sources: VMware, KVM, Xen.

    Reason: To display the list of VMs to import in the first dialog, there is no need to ask libvirt for the full information for each VM; since that takes a few seconds per VM, this can be improved by retrieving only the VM name in that phase.

    Result: Only VM names are displayed in the first phase, i.e. in the first import dialog. The full VM data is displayed in the second dialog only after the VMs to import are chosen and the "Next" button is clicked.
  • [BZ 1356161](https://bugzilla.redhat.com/1356161) [RFE] prefer numa nodes close to host devices when using hostdev passthrough
    This RFE is related to host devices and should be reflected in the virtual machine management guide as a note (somewhere close to Procedure 6.15, Adding Host Devices to a Virtual Machine).

    For some context, the feature makes a best effort to implement https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-NUMA-NUMA_and_libvirt.html#sect-Virtualization_Tuning_Optimization_Guide-NUMA-Node_Locality_for_PCI
    If the user does not specify any NUMA mapping, oVirt now tries to prefer the NUMA node where the device's MMIO is located. The main constraint is that such a node is only preferred, rather than memory being strictly required from it. The implication is that the optimization may or may not be active depending on the host's memory load, and it only works as long as all assigned devices are on a single NUMA node.
  • [BZ 1350465](https://bugzilla.redhat.com/1350465) [RFE] Store detailed log of virt-v2v when importing VM
    Previously, when importing a Virtual Machine, if the import failed, the output of the virt-v2v tool was not available for investigating the reason for the failure, and the import had to be reproduced manually. In this release, the output of virt-v2v is now stored in the /var/log/vdsm/import directory. All logs older than 30 days are automatically removed.
  • [BZ 1321010](https://bugzilla.redhat.com/1321010) [RFE] use virtlogd as introduced in libvirt >= 1.3.0
  • [BZ 1349907](https://bugzilla.redhat.com/1349907) RFE: Guest agent hooks for hibernation should be always executed.
    With this feature, the before_hibernation / after_hibernation hooks are always executed on the guest operating system (via the oVirt guest agent) when suspending / resuming a virtual machine.

oVirt Hosted Engine Setup

  • [BZ 1001181](https://bugzilla.redhat.com/1001181) [RFE] Provide clean up script for complete cleaning the hosted engine VM installation after failed installation.
    Provides a cleanup script for completely cleaning the host after a failed attempt to install the hosted engine.
  • [BZ 1393918](https://bugzilla.redhat.com/1393918) Move ancillary commands to jsonrpc
    Some ancillary hosted-engine commands were still based on XML-RPC; they have been moved to JSON-RPC.
  • [BZ 1349301](https://bugzilla.redhat.com/1349301) [RFE] Successfully complete hosted engine setup without appliance pre-installed.
    Feature: Let the user install the appliance RPM directly from ovirt-hosted-engine-setup.

    Reason: ovirt-hosted-engine-setup now supports only the appliance-based flow.

    Result:
    The user can install ovirt-engine-appliance directly from ovirt-hosted-engine-setup.
  • [BZ 1331858](https://bugzilla.redhat.com/1331858) [RFE] Allow user to enable ssh access for RHEV-M appliance during hosted-engine deploy
    Feature: Let the user optionally enable SSH access for the RHEV-M appliance during hosted-engine deployment.
    The user can choose between yes, no, and without-password.
    The user can also pass a public SSH key for the root user at hosted-engine-setup time.
  • [BZ 1366183](https://bugzilla.redhat.com/1366183) [RFE] - Remove all bootstrap flows other than appliance and remove addition of additional hosts via CLI.
    Now that additional hosted-engine hosts can be deployed from the engine with host-deploy, the capability to deploy additional hosted-engine hosts from hosted-engine setup is no longer required and has been removed.
    The engine appliance has proven to be the easiest flow for getting a working hosted-engine environment; all other bootstrap flows have been removed.
  • [BZ 1300591](https://bugzilla.redhat.com/1300591) [RFE] let the user customize the engine VM disk size also using the engine-appliance
    Lets the user customize the engine VM disk size also when choosing to use the engine appliance.
  • [BZ 1402435](https://bugzilla.redhat.com/1402435) HE still uses 6.5-based machine type
    Upgrades the machine type, since the engine VM is guaranteed to be running on el7.
  • [BZ 1365022](https://bugzilla.redhat.com/1365022) [RFE] hosted-engine --deploy question ordering improvements
  • [BZ 1318350](https://bugzilla.redhat.com/1318350) [RFE] configure the timezone for the engine VM as the host one via cloudinit
    Feature: Ask customer about NTP configuration inside the appliance

    Reason:
    Result:
  • [BZ 1301681](https://bugzilla.redhat.com/1301681) [RFE] - Once HE deployed, it's not possible to change notifications settings later on shared storage.
    Feature: Allow editing configuration stored on shared storage.
    Reason: There was no way of changing the stored configuration.
    Result: The configuration can be edited on the shared storage.
    Full design and documentation can be found here:
    http://www.ovirt.org/develop/release-management/features/sla/hosted-engine-edit-configuration-on-shared-storage/

oVirt Hosted Engine HA

  • [BZ 1001181](https://bugzilla.redhat.com/1001181) [RFE] Provide clean up script for complete cleaning the hosted engine VM installation after failed installation.
    Provides a cleanup script for completely cleaning the host after a failed attempt to install the hosted engine.
  • [BZ 1396672](https://bugzilla.redhat.com/1396672) modify output of the hosted engine CLI to show info on auto import process
    Since Red Hat Enterprise Virtualization 3.6, ovirt-ha-agent has read its configuration, and the Manager virtual machine specification, from shared storage. Previously, these were just local files replicated on each involved host. This enhancement modifies the output of hosted-engine --vm-status to show whether the configuration and the Manager virtual machine specification have been correctly read from the shared storage on each reported host.
  • [BZ 1101554](https://bugzilla.redhat.com/1101554) [RFE] HE-ha: use vdsm api instead of vdsClient
    vdsClient uses XML-RPC, which was deprecated in 4.0. The VDSM API is now used directly to take advantage of JSON-RPC.

  • [BZ 1301681](https://bugzilla.redhat.com/1301681) [RFE] - Once HE deployed, it's not possible to change notifications settings later on shared storage.
    Feature: Allow editing configuration stored on shared storage

    Reason: There was no way of changing the stored configuration.

    Result:
    The configuration can be edited on the shared storage.

    Full design and documentation can be found here:
    http://www.ovirt.org/develop/release-management/features/sla/hosted-engine-edit-configuration-on-shared-storage/

oVirt Windows Guest Agent

  • [BZ 1310621](https://bugzilla.redhat.com/1310621) [RFE] oVirt Guest Tools name should include version in install apps list
  • [BZ 1398560](https://bugzilla.redhat.com/1398560) [RFE] add virtio-rng driver to installer
    An updated Windows Guest Tools ISO is now available.

    Changes compared to the 4.0 version:
    - Uninstall fixes
    - Correct path to QEMU GA MSI files
    - Add Display Version as a postfix to the Display Name
    - Add Windows 10 support
    - Update to latest virtio-win/vdagent releases
    - Install virtio-rng driver

oVirt Cockpit Plugin

  • [BZ 1325864](https://bugzilla.redhat.com/1325864) [RFE][HC] Cockpit plugin for gdeploy
    This update adds support for deploying Gluster storage during the self-hosted engine deployment through the Cockpit UI. Previously, the user needed to first deploy the Gluster storage using gdeploy and then deploy the self-hosted engine using the Cockpit UI, and configuration files had to be manually updated.

imgbased

  • [BZ 1361230](https://bugzilla.redhat.com/1361230) [RFE] Simple mechanism to apply rpms after upgrades
    Red Hat Virtualization Host (RHVH) 4.0 allows users to install RPMs, however installed RPMs are lost after upgrading RHVH.

    RHVH 4.1 now includes a yum plugin which saves and reinstalls RPM packages after upgrading, to ensure that installed RPMs are no longer lost after upgrading.

    This will not work when upgrading from RHVH 4.0 to RHVH 4.1.
  • [BZ 1338744](https://bugzilla.redhat.com/1338744) [RFE] Validate pre-conditions during installation
  • [BZ 1331278](https://bugzilla.redhat.com/1331278) [RFE] Raise a meaningful error if the layout cannot be created (i.e. no thinpool available)

oVirt Engine SDK 4 Java

oVirt Engine SDK 4 Python

oVirt image transfer daemon and proxy

oVirt Release Package

oVirt Engine

Infra
Integration
SLA
Virt

VDSM

oVirt Hosted Engine Setup

Release Note

oVirt Hosted Engine Setup

  • [BZ 1343882](https://bugzilla.redhat.com/1343882) Now, with the appliance flow, drop the virt-viewer dependency and just document this requirement
    Currently, hosted-engine-setup requires virt-viewer, which pulls in a graphics stack (and many megabytes of packages).
    With the appliance flow in place, virt-viewer will no longer be required by default.

Unclassified

oVirt image transfer daemon and proxy

oVirt Engine

Gluster
Infra
Integration
Network
SLA
Storage
UX
Virt

oVirt Host Deploy

Gluster
Integration

OTOPI

VDSM JSON-RPC Java

oVirt Engine Dashboard

VDSM

Gluster
Infra
Network
SLA
Storage
Virt

oVirt Hosted Engine Setup

Storage
  • [BZ 1397305](https://bugzilla.redhat.com/1397305) [hosted-engine-setup] Deployment is broken for FC: "Failed to execute stage 'Environment customization': 'Plugin' object has no attribute '_customize_mnt_options'"

oVirt Hosted Engine HA

Integration
SLA

oVirt Windows Guest Agent

oVirt Cockpit Plugin

Gluster
Node
Virt

oVirt Engine SDK 4 Ruby

imgbased

oVirt Engine SDK 4 Python

Bug fixes

oVirt image transfer daemon and proxy

oVirt Engine

Gluster

Infra

Integration

Network

SLA

Storage

UX

Virt

oVirt Host Deploy

oVirt Engine DWH

oVirt Setup Lib

VDSM

Infra

Network

Storage

Virt

oVirt Hosted Engine Setup

oVirt Hosted Engine HA

oVirt Cockpit Plugin

imgbased

Deprecated Functionality

oVirt Host Deploy

  • [BZ 1372237](https://bugzilla.redhat.com/1372237) Remove workaround for vdsm-jsonrpc deprecation warning
    This release removes a no-longer-needed workaround for the vdsm-jsonrpc deprecation warning.