Feature pages are design documents that developers have created while collaborating on oVirt.

Most of them are outdated, but provide historical design context.

They are not user documentation and should not be treated as such.

Documentation is available here.

Cinder Integration


OpenStack Cinder (Ceph) Integration


Detailed Description

Managing OpenStack Cinder volumes provisioned by Ceph storage through the oVirt engine. In the initial phase, the integration should support creating and deleting volumes on a Cinder storage domain, while monitoring the relevant statuses using the CoCo (Command Coordinator) mechanism. The engine/vdsm should allow running VMs with attached Ceph volumes through the librbd library, using libvirt's native RBD support (see libvirt with Ceph RBD). As for security, running VMs can authenticate using the Cephx protocol when required (secret management will be handled in the engine/vdsm). Note that there is a known issue in OpenStack when deleting a snapshot that has dependent volumes based on it; to avoid this bug, Cinder's Ceph backend should be configured with rbd_flatten_volume_from_snapshot set to True (see Open Issues below).

  • The Woorea OpenStack Java SDK should be updated and expanded to include cinder-model/cinder-client modules (needed to provide an interface for interacting with the Cinder REST API commands).

Documentation / External references


  • CRUD for OpenStack Volume (Cinder) provider.
  • CRUD for adding/deleting Cinder disks (including monitoring).
  • CRUD for snapshots with Cinder disks.
  • Fetching Volume Types - ceph/lvm/etc.
  • Running VMs with Cinder disks attached.
  • CEPHX integration for using volumes securely.
  • Import (from Cinder to engine DB).
  • Permissions (MLA).
  • Templates
    • Add template - clone volume/create volume from snapshot - use clone volume and flatten volume (if available).
    • Add VM from template - create volume from source volume (thin).

Future Work

  • Move VM disk/Copy Template disk (cinder-to-cinder?/cinder-to-vdsm?/vdsm-to-cinder?).
  • Retype volume (volume-retype) - not supported for RBD yet.
  • Upload to Image (glance).
  • CRUD for volume types.
  • Quota (Cinder/Engine).
  • Import/Export (VMs/Templates).
  • Disk profiles.
  • Live snapshots.
  • Live storage migration.
  • Sync Cinder data with engine DB.
  • Cinder storage domain monitoring.
  • Support multiple backends (lvm/etc).
  • OVF disk / disaster recovery support

Relevant Flows

  • Add/Remove/Edit OpenStack volume provider
  • Add/Remove/Update/Extend Cinder Disk
  • Attach/Detach Storage Domain
  • Activate/Deactivate Storage Domain
  • Remove VM
  • Add Template
  • Remove Template
  • Add VM from Template
  • Add VM Pool
  • Attach Cinder Disks
  • Plug/Unplug (Cold/Hot)
  • List Cinder Disks
  • Register Cinder Disks
  • Run VM - [multiple ceph monitors support / Cephx auth (secrets)]
  • Add/Remove Snapshot
  • Preview/Undo/Commit Snapshot
  • Custom Preview Snapshot
  • Clone VM from Snapshot
  • Clone VM
  • Remove Disk Snapshots

Open Issues

  • Verify limits/quota against Cinder on Disk creation.
  • VM removal - deleted disks remain in ‘locked’ status (as opposed to images, which are deleted immediately); i.e. a failure would leave disks in ‘illegal’ status.
  • There is a known issue in OpenStack when deleting a snapshot that has dependent volumes based on it. To avoid this bug, Cinder's Ceph backend should be configured with rbd_flatten_volume_from_snapshot set to True.
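
The workaround above amounts to a one-line change in the Ceph backend section of cinder.conf (the section name shown here is illustrative; it must match the backend name configured in your deployment):

    # /etc/cinder/cinder.conf
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_flatten_volume_from_snapshot = True

With this set, volumes created from a snapshot are flattened (fully copied) instead of remaining clones of it, so the snapshot can later be deleted without dependent-volume errors.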


Topic Branch: Cinder


Flow Illustration



Add Provider: POST /api/openstackvolumeproviders
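
A request body for adding a provider might look like the following (a sketch; field names follow the oVirt REST API conventions, and the authentication fields are only needed when Cinder requires Keystone credentials):

    <openstack_volume_provider>
        <name>cinder</name>
        <url>http://{cinder-host}:8776</url>
        <requires_authentication>true</requires_authentication>
        <username>{keystone-user}</username>
        <password>{password}</password>
        <tenant_name>{tenant}</tenant_name>
        <data_center id="{data_center_id}"/>
    </openstack_volume_provider>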


Get Volume Provider: GET /api/openstackvolumeproviders/{provider_id} (All-Content: true)

    <openstack_volume_provider href="/api/openstackvolumeproviders/{id}" id="{id}">
        <data_center href="/api/datacenters/{id}" id="{id}"/>
    </openstack_volume_provider>

Get Volume Type: GET /api/openstackvolumeproviders/{provider_id}/volumetypes

    <openstack_volume_type href="/api/openstackvolumeproviders/{id}/volumetypes/{volume_type_id}" id="{id}">
        <openstack_volume_provider href="/api/openstackvolumeproviders/{provider_id}" id="{id}"/>
    </openstack_volume_type>

Get Authentication Keys: GET /api/openstackvolumeproviders/{provider_id}/authenticationkeys

    <description>my ceph secret</description>

Create an Authentication Key: POST /api/openstackvolumeproviders/{provider_id}/authenticationkeys

    <description>my ceph secret</description>
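
A fuller request body might look like the following (a sketch based on the oVirt openstack_volume_authentication_key model; exact field names may differ):

    <openstack_volume_authentication_key>
        <uuid>{uuid}</uuid>
        <value>{ceph-secret-key-value}</value>
        <usage_type>ceph</usage_type>
        <description>my ceph secret</description>
    </openstack_volume_authentication_key>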

Create a Cinder disk on a specific Volume Type: POST /api/vms/{vm_id}/disks
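
A request body for this call might look like the following (a sketch; the openstack_volume_type element selects the Cinder volume type, and the storage domain must be a Cinder domain):

    <disk>
        <name>cinder_disk</name>
        <provisioned_size>1073741824</provisioned_size>
        <interface>virtio</interface>
        <format>raw</format>
        <storage_domains>
            <storage_domain id="{cinder_storage_domain_id}"/>
        </storage_domains>
        <openstack_volume_type>
            <name>ceph</name>
        </openstack_volume_type>
    </disk>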


Get Unregistered Disks: GET /api/storagedomains/{storage_domain_id}/disks;unregistered


Register Disk: POST /api/storagedomains/{storage_domain_id}/disks;unregistered

    <disk id="{disk_id}"></disk>

Delete Entity (Disk/VM/Template)

Cinder disks are deleted asynchronously, hence the ‘;async’ flag can be passed as part of the URL to get a 202 Accepted return status.

E.g. DELETE /api/disks/{disk_id};async



  • Add librbd1 package as dependency to vdsm.spec file.
  • Refactor Drive -> getXML() to support multiple hosts (represents Ceph monitors) in disk’s source element:
<disk type="network" device="disk">
    <source protocol="rbd" name="{pool-name}/{volume-name}">
        <host name="{monitor-host-1}" port="6789"/>
        <host name="{monitor-host-2}" port="6789"/>
    </source>
    <target dev="vda" bus="virtio"/>
</disk>

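The proposed getXML() refactoring can be sketched as follows. This is an illustrative standalone sketch, not vdsm's actual code; the function and parameter names (build_rbd_disk_xml, monitors, auth_uuid) are invented for the example:

```python
# Sketch of building a libvirt <disk> element for a Ceph/RBD volume with
# multiple monitor hosts, as proposed for vdsm's Drive -> getXML() refactoring.
# Names here are illustrative, not vdsm's actual API.
import xml.etree.ElementTree as ET


def build_rbd_disk_xml(pool, volume, monitors, dev="vda", auth_uuid=None):
    """Return libvirt disk XML for an RBD volume backed by several monitors."""
    disk = ET.Element("disk", type="network", device="disk")
    source = ET.SubElement(disk, "source", protocol="rbd",
                           name="%s/%s" % (pool, volume))
    # One <host> element per Ceph monitor, so the guest can fail over
    # between monitors.
    for host, port in monitors:
        ET.SubElement(source, "host", name=host, port=str(port))
    if auth_uuid is not None:
        # Cephx: reference a libvirt secret previously defined on the host.
        auth = ET.SubElement(disk, "auth", username="cinder")
        ET.SubElement(auth, "secret", type="ceph", uuid=auth_uuid)
    ET.SubElement(disk, "target", dev=dev, bus="virtio")
    return ET.tostring(disk, encoding="unicode")


xml = build_rbd_disk_xml("volumes", "volume-1234",
                         [("mon1.example.com", 6789),
                          ("mon2.example.com", 6789)])
print(xml)
```

The key point is that the source element becomes a container for an arbitrary number of host entries instead of a single one.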

Screenshots

  • OpenStack Volume Providers
  • OpenStack Volume Provider Dialog
  • Cinder Storage Domains
  • Cinder Disk Dialog
  • Cinder Disks attached to a VM
  • Cinder Disks List (under Storage)
  • Register Cinder Disks (under Storage)
  • Cinder Disks List
  • Cinder Authentication Keys
  • Authentication Key Dialog

Authentication Keys

When client Ceph authentication (Cephx) is enabled, authentication keys should be configured as follows:

  • (1) Create a new secret key on Ceph using ‘ceph auth get-or-create’ - see the example in “Configuring client for Nova/Cinder” in the Ceph documentation.
    • E.g.1. ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
    • E.g.2. ceph auth get-or-create client.vdsm | tee ‘my_pass’
  • (2) Navigate to the ‘Authentication Keys’ sub-tab (under the ‘Providers’ main-tab).
  • (3) Click ‘New’ to open the create dialog.
  • (4) In the ‘Value’ text-box, enter the value of the secret key created in step (1).
    • The value can be retrieved with ‘ceph auth get client.cinder’.
  • (5) From the ‘UUID’ text-box, copy the automatically generated UUID (or enter a new one), and add it to cinder.conf.

     E.g. '/etc/cinder/cinder.conf':
     rbd_secret_uuid = 148eb4bc-c47c-4ffe-b14e-3a0fb6c76833
     rbd_user = cinder

Note: client authentication keys are only used upon running a VM; i.e. authentication for Ceph volume manipulation should be configured solely on the Cinder side.
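
For reference, the libvirt secret that vdsm would register on the host before starting the VM looks roughly like the following (a sketch; the exact fields are managed internally by engine/vdsm, and the secret is defined with ‘virsh secret-define’ and given its value with ‘virsh secret-set-value’):

    <secret ephemeral="no" private="yes">
        <uuid>{auth_key_uuid}</uuid>
        <usage type="ceph">
            <name>{auth_key_uuid}</name>
        </usage>
    </secret>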