Monday, December 15, 2014

Setting up a minimal rbd/ceph server for libvirt testing

In my last post I talked about setting up a minimal gluster server. Similarly this will describe how I set up a minimal single node rbd/ceph server in a VM for libvirt network storage testing.

I pulled info from a few different places and a lot of other reading, but things just weren't working on F21: 'systemctl start ceph' produced no output, and all the ceph cli commands hung. I had better success with F20.

The main difficulty was figuring out a working ceph.conf. My VM's IP address is 192.168.124.101, and its hostname is 'localceph', so here's what I ended up with:

[global]
auth cluster required = none
auth service required = none
auth client required = none
osd crush chooseleaf type = 0
osd pool default size = 1

[mon]
mon data = /data/$name
[mon.0]
mon addr = 192.168.124.101
host = localceph

[mds]
keyring = /data/keyring.$name
[mds.0]
host = localceph

[osd]
osd data = /data/$name
osd journal = /data/$name/journal
osd journal size = 1000
[osd.0]
host = localceph

Ceph setup steps:
  • Cloned an existing F20 VM I had kicking around, using virt-manager's clone wizard. I called it f20-ceph.
  • In the VM, disable firewalld and set selinux to permissive. Not strictly required but I wanted to make this as simple as possible.
  • Setup the ceph server:
    • yum install ceph
    • I needed to set a hostname for my VM since ceph won't accept 'localhost': hostnamectl set-hostname localceph
    • mkdir -p /data/mon.0 /data/osd.0
    • Overwrite /etc/ceph/ceph.conf with the content listed above.
    • mkcephfs -a -c /etc/ceph/ceph.conf
    • service ceph start
    • Prove it works from my host with: sudo mount -t ceph $VM_IPADDRESS:/ /mnt
  • Add some storage for testing:
    • Libvirt only connects to Ceph's block device interface, RBD. The above mount example is _not_ what libvirt will see; it just proves we can talk to the server.
    • Import files within the VM like: rbd import $filename
    • List files with: rbd list
Notable here is that no ceph auth is used. Libvirt supports ceph auth but at this stage I didn't want to deal with it for testing. This setup doesn't match what a real deployment would ever look like.

Here's the pool definition I passed to virsh pool-define on my host:

<pool type='rbd'>
  <name>f20-ceph</name>
  <source>
    <host name='$VM_IPADDRESS'/>
    <name>rbd</name>
  </source>
</pool>
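
Once that's written out, wiring the pool up from the host is just a few virsh commands. A rough sketch, assuming the XML above is saved as f20-ceph.xml (the file name is only an example):

 virsh pool-define f20-ceph.xml
 virsh pool-start f20-ceph
 virsh vol-list f20-ceph     # should show any images previously added with 'rbd import'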

Thursday, December 11, 2014

Setting up a minimal gluster server for libvirt testing

Recently I've been working on virt-install/virt-manager support for libvirt network storage pools like gluster and rbd/ceph. For testing I set up a single node minimal gluster server in an F21 VM. I mostly followed the gluster quickstart and hit only a few minor hiccups.

Steps for the gluster setup:
  • Cloned an existing F21 VM I had kicking around, using virt-manager's clone wizard. I called it f21-gluster.
  • In the VM, disable firewalld and set selinux to permissive. Not strictly required but I wanted to make this as simple as possible.
  • Setup the gluster server
    • yum install glusterfs-server
    • Edit /etc/glusterfs/glusterd.vol, add: option rpc-auth-allow-insecure on
    • systemctl start glusterd; systemctl enable glusterd
  • Create the volume:
    • mkdir -p /data/brick1/gv0
    • gluster volume create gv0 $VM_IPADDRESS:/data/brick1/gv0
    • gluster volume start gv0
    • gluster volume set gv0 allow-insecure on
  • From my host machine, I verified things were working: sudo mount -t glusterfs $VM_IPADDRESS:/gv0 /mnt
  • I added a couple example files to the directory: a stub qcow2 file, and a boot.iso.
  • Verified that qemu can access the ISO: qemu-system-x86_64 -cdrom gluster://$VM_IPADDRESS/gv0/boot.iso
  • Once I had a working setup, I used virt-manager to create a snapshot of the running VM config. So anytime I want to test gluster, I just start the VM snapshot and I know things are all nicely setup.
The bits about 'allow-insecure' are so that an unprivileged client can access the gluster share; see this bug for more info. The gluster docs also have a section about it but the steps don't appear to be complete.

The final bit is setting up a storage pool with libvirt. The XML I passed to virsh pool-define on my host looks like:

<pool type='gluster'>
  <name>f21-gluster</name>
  <source>
    <host name='$VM_IPADDRESS'/>
    <dir path='/'/>
    <name>gv0</name>
  </source>
</pool>
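
As with the rbd pool, here's a quick sketch of loading this with virsh, assuming the XML is saved as f21-gluster.xml (the file name is just an example):

 virsh pool-define f21-gluster.xml
 virsh pool-start f21-gluster
 virsh vol-list f21-gluster    # should list the boot.iso and stub qcow2 file added earlier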

Wednesday, December 10, 2014

qemu-2.2.0 in rawhide, virt-preview disabled for F20

qemu-2.2.0 was released yesterday, check the release announcement and fine grained changelog. Packages are available in rawhide and fedora-virt-preview for Fedora 21.

But now that Fedora 21 is out, there won't be any new builds for F20 virt-preview. If you want to play with the latest and greatest virt bits, you'll need to update to F21.

Friday, December 5, 2014

virt-manager 1.0 creates qcow2 images that don't work on RHEL6

One of the big features we added in virt-manager 1.0 was snapshot support. As part of this change, we switched to using the QCOW2 disk image format for new VMs. We also enable the QCOW2 lazy_refcounts feature that improves performance of some snapshot operations.

However, not all versions of QEMU in the wild can handle lazy_refcounts, and will refuse to use the disk image, particularly RHEL6 QEMU. So by default, a disk image from a VM created with Fedora 20 virt-manager will not run on RHEL6 QEMU, throwing an error like:

... uses a qcow2 feature which is not supported by this qemu version: QCOW version 3 

This has generated some user confusion.

The 'QCOW version 3' part is a bit misleading here: while using lazy_refcounts does set a bit in the QCOW2 file header marking it as QCOW3, qemu-img and libvirt still call it QCOW2, just with a different 'compat' setting.

Kevin Wolf, one of the QEMU block maintainers, explains it in this mail:
qemu versions starting with 1.1 can use [lazy_refcounts] which require an incompatible on-disk format. Between version 1.1 and 1.6, they needed to be specified explicitly during image creation, like this:

qemu-img create -f qcow2 -o compat=1.1 test.qcow2 8G

Starting with qemu 1.7, compat=1.1 became the default, so that newly created images can't be read by older qemu versions by default. If you need to read them in older version, you now need to be explicit about using the old format:

qemu-img create -f qcow2 -o compat=0.10 test.qcow2 8G

With the same release, qemu 1.7, a new qemu-img subcommand was introduced that allows converting between both versions, so you can downgrade your existing v3 image to the format known by RHEL 6 like this:

qemu-img amend -f qcow2 -o compat=0.10 test.qcow2

As explained, qemu-img with QEMU 1.7+ defaults to lazy_refcounts/compat=1.1, but also provides the 'qemu-img amend' subcommand to easily convert between the two formats.

Unfortunately that command is not available on Fedora 20 and older, however you can use the pre-existing 'qemu-img convert' command:

qemu-img convert -f qcow2 -O qcow2 -o compat=0.10 $ORIGPATH $NEWPATH

Beware though, converting between two qcow2 images will drop all internal snapshots in the new image, so only use that option if you don't need to preserve any snapshot data. 'qemu-img amend' will preserve snapshot data.
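
If you want to double check which variant an image ended up as, newer qemu-img versions report the compat level under 'Format specific information' (exact output varies by qemu-img version; the path here is just an example):

 qemu-img info /var/lib/libvirt/images/f20-vm.qcow2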

(Unfortunately at this time virt-manager doesn't provide any way in the UI to _not_ use lazy_refcounts, but you could always use qemu-img/virsh to create a disk image outside of virt-manager, and select it when creating a new VM.)

Friday, November 28, 2014

x2apic on by default with qemu 2.0+, and some history

x2apic is a performance and scalability feature available in many modern Intel CPUs. Though regardless of whether your host CPU supports it, KVM can unconditionally emulate it for x86 guests, giving an easy performance win with no downside. This feature has existed since 2009 and been a regular recommendation for tuning a KVM VM.

As of qemu 2.0.0, x2apic is enabled automatically (more details at the end).
Prior to that, actually benefiting from x2apic required a tool like virt-manager to explicitly enable the flag, which has had a long bumpy road.

x2apic is exposed on the qemu command line as a CPU feature, like:

qemu -cpu $MODEL,+x2apic ...

And there isn't any way to specify a feature flag without specifying the CPU model. So enabling x2apic required hardcoding a CPU model where traditionally tools (and libvirt) deferred to qemu's default.

A Fedora 13 feature page was created to track the change, and we enabled it in python-virtinst for f13/rawhide. The implementation attempted to hardcode the CPU model name that libvirt detected for the host machine, which unfortunately has some problems as I explained in a previous post. This led to some issues installing 64bit guests, and after trying to hack around it, I gave up and reverted the change.

(In retrospect, we likely could have made it work by just trying to duplicate the default CPU model logic that qemu uses, however that might have hit issues if the CPU default ever changed, like on RHEL for example.)

Later on virt-manager and virt-install gained UI for enabling x2apic, but a user had to know what they were doing and hunt it down.
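
For reference, the explicit opt-in looked roughly like the sketch below (the CPU model is only an example, and the exact suboption syntax depends on your virt-install version); inside a Linux guest, /proc/cpuinfo shows whether the flag made it through:

 virt-install --cpu core2duo,+x2apic ...   # hypothetical pre-qemu-2.0 opt-in
 grep x2apic /proc/cpuinfo                 # run inside the guest; lists the flag if exposed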

As mentioned above, as of qemu 2.0.0 any x86 KVM VM will have x2apic automatically enabled, so there's no explicit need to opt in. From qemu.git:
commit ef02ef5f4536dba090b12360a6c862ef0e57e3bc
Author: Eduardo Habkost
Date:   Wed Feb 19 11:58:12 2014 -0300

    target-i386: Enable x2apic by default on KVM

Sunday, November 23, 2014

Updated instructions for using QEMU, UEFI, and Secureboot

Last year I started a wiki page about testing Fedora's Secureboot support with KVM. Just now I've cleaned up the page and modernized it for the current state of virt packages in F21:

https://fedoraproject.org/wiki/Using_UEFI_with_QEMU

The Secureboot steps are now at:

https://fedoraproject.org/wiki/Using_UEFI_with_QEMU#Testing_Secureboot_in_a_VM

The main change is that nowadays the virt tools know how to create persistent configuration storage for UEFI, so you can setup Secureboot once. Previously you had to do all sorts of crazy things to turn on Secureboot for each restart of the VM.

Friday, November 21, 2014

Running F21 aarch64 with QEMU, libvirt, and UEFI

I just wrote up a wiki page describing how to run F21 aarch64 bits with QEMU, libvirt, and UEFI:

https://fedoraproject.org/wiki/Architectures/AArch64/Install_with_QEMU

This was tested on x86 but the same steps should work if running on real aarch64 HW.

Wednesday, November 19, 2014

Run linaro aarch64 images with f21 virt-install + libvirt

Linaro generates some minimal openembedded based aarch64 disk images, which are useful for virt testing. There are simple instructions over here for running them with qemu on an x86 host. But with Fedora 21 packages, you can also run these images with virt-install + libvirt + qemu.

Output looks like:
 Starting install...
Creating domain...                                          |    0 B  00:00   
Connected to domain linaro-aarch64
Escape character is ^]
[    0.000000] Linux version 3.17.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+141022120835 SMP PREEMPT Wed Oct 22 12:09:19 UTC 20
[    0.000000] CPU: AArch64 Processor [411fd070] revision 0
[    0.000000] Detected PIPT I-cache on CPU0
[    0.000000] Memory limited to 1024MB
...
Last login: Wed Nov 19 17:16:22 UTC 2014 on tty1
root@genericarmv8:~#
(Maybe you're wondering, what about fedora images? They are a bit different, since they expect to run with UEFI. I'll blog about that soon once I finish some testing)

Tuesday, November 18, 2014

Booting Fedora 21 ARM with QEMU and U-Boot

Running Fedora ARM with qemu is a bit of a pain because you need to pull the kernel and initrd out of the disk image and manually pass them to qemu; you can't just point qemu at the disk image and expect it to boot. The latter is how x86 qemu handles it (via a bundled seabios build).

On physical arm hardware, the bit that typically handles fetching the kernel/initrd from disk is U-Boot. However there are no U-Boot builds shipped with qemu for us to take advantage of.

Well that's changed a bit now. I was talking to Gerd about this at KVM Forum last month, and after some tinkering he got a working U-Boot build for the Versatile Express board that qemu emulates.

Steps to use it:
  • Grab a Fedora 21 ARM image (I used the F21 beta 'Minimal' image from here)
  • Enable Gerd's upstream firmware repo
  • Install u-boot.git-arm (this just installs some binaries in /usr/share, doesn't mess with any host boot config)
To use it with libvirt, you can do:

sudo virt-install --name f21-arm-a9-uboot \
  --ram 512 \
  --arch armv7l --machine vexpress-a9 \
  --os-variant fedora21 \
  --boot kernel=/usr/share/u-boot.git/arm/vexpress-a9/u-boot \
  --disk Fedora-Minimal-armhfp-21_Beta-1-sda.raw

For straight QEMU, you can do:

qemu-system-arm -machine vexpress-a9 \
  -m 512 \
  -nographic \
  -kernel /usr/share/u-boot.git/arm/vexpress-a9/u-boot \
  -sd Fedora-Minimal-armhfp-21_Beta-1-sda.raw

Monday, September 22, 2014

Fedora 21 Virt Test Day is Thu Sep 25!

The Fedora 21 Virt Test Day is this coming Thu Sep 25. Check out the test day landing page:

https://fedoraproject.org/wiki/Test_Day:2014-09-25_Virtualization

If you're interested in trying out some new virt functionality, there's step by step instructions for:
  • Q35 Chipset
  • Import AArch64 image as a VM on x86
  • Install VM using OVMF/UEFI
Even if you aren't interested in testing new features, we still need you! The test day is the perfect time to make sure your virt workflow is working fine on Fedora 21, as there will be several developers on hand to answer any questions, help with debugging, provide patches, etc. No requirement to run through test cases on the wiki, just show up and let us know what works (or breaks).

And to be clear, while it is preferred that you have a physical machine running Fedora 21, participating in the test day does NOT require it: you can test the latest virt bits on the latest Fedora release courtesy of the virt-preview repo. For more details, as well as easy instructions on updating to Fedora 21, see:

https://fedoraproject.org/wiki/Test_Day:2014-09-25_Virtualization#What.27s_needed_to_test

If you can't make the date of the test day, adding test case results to the wiki anytime next week is fine as well. Though if you do plan on showing up to the test day, add your name to the participant list on the wiki, and when the day arrives, pop into #fedora-test-day on freenode and give us a shout!

Sunday, September 7, 2014

virt-manager 1.1.0 released!

virt-manager 1.1.0 is released! ... and should show up in F21 and rawhide shortly.

This release includes:
  • Switch to libosinfo as OS metadata database (Giuseppe Scrivano)
  • Use libosinfo for OS detection from CDROM media labels (Giuseppe Scrivano)
  • Use libosinfo for improved OS defaults, like recommended disk size (Giuseppe Scrivano)
  • virt-image tool has been removed, as previously announced
  • Enable Hyper-V enlightenments for Windows VMs
  • Revert virtio-console default, back to plain serial console
  • Experimental q35 option in new VM 'customize' dialog
  • UI for virtual network QoS settings (Giuseppe Scrivano)
  • virt-install: --disk discard= support (Jim Minter)
  • addhardware: Add spiceport UI (Marc-André Lureau)
  • virt-install: --events on_poweroff etc. support (Chen Hanxiao)
  • cli --network portgroup= support and UI support
  • cli --boot initargs= and UI support
  • addhardware: allow setting controller model (Chen Hanxiao)
  • virt-install: support setting hugepage options (Chen Hanxiao)

Friday, September 5, 2014

Fedora 21 virt test day moved yet again, now Thursday September 25

Third time's the charm! With the ongoing F21 alpha delay, I requested that the virt test day be pushed back yet again. Now it's on Thursday September 25th. Check out the landing page for more info on the test day:

https://fedoraproject.org/wiki/Test_Day:2014-09-25_Virtualization

Since I'm tired of making these announcements, even if F21 is delayed some more I won't be rescheduling the test day again, so September 25th is the super official date. :)

Tuesday, August 19, 2014

Fedora 21 virt test day moved one day to September 11

To avoid two back to back test days, we've moved the Fedora 21 virt test day to September 11th. Landing page is now here:

https://fedoraproject.org/wiki/Test_Day:2014-09-11_Virtualization

Wednesday, August 6, 2014

Speaking at Flock 2014





I'm in Prague for Fedora Flock 2014. I'll be giving a talk on Saturday at 11am titled 'Virtualization for Fedora Packagers and Developers'. The talk will explain some intermediate virt tips and tricks to simplify certain common Fedora developer tasks like testing packages across Fedora versions, reproducing reported bugs, and others.

If you see me at Flock, feel free to ask me anything virt related!

Wednesday, July 30, 2014

Fedora 21 virt test day rescheduled to September 10th

Due to Fedora 21 slipping 3 weeks, the virt test day has been rescheduled to September 10th. Landing page is now here:

https://fedoraproject.org/wiki/Test_Day:2014-09-10_Virtualization

Sunday, July 27, 2014

virt-install: create disk image without an explicit path

For most of its life, virt-install has required specifying an explicit disk path when creating storage, like:

virt-install --disk /path/to/my/new/disk.img,size=10 ...

However there's a shortcut since version 1.0.0, just specify the size:

virt-install --disk size=10 ...

virt-install will create a disk image in the default pool, and name it based on the VM name and disk image format, typically $vmname.qcow2
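
size= can also be combined with other --disk suboptions if you want a bit more control. A sketch, with arbitrary example values:

 virt-install --disk size=10,format=qcow2,pool=default ...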

Tuesday, July 15, 2014

virt-manager: changing the default storage path and default virtual network

When creating a new virtual machine via virt-manager or virt-install, the tools make some assumptions about the default location for disk images, and the default network source.

For example, in the 'New VM' wizard, the storage page will offer to create a disk image in the default location:


The default location for most uses of virt-manager is /var/lib/libvirt/images, which is created by libvirt and has the expected selinux labelling and permissions to run QEMU/KVM VMs.

Behind the scenes, virt-manager is using a libvirt storage pool for creating disk images. When the 'New VM' wizard is first run, virt-manager looks for a storage pool named 'default'; if it doesn't find that it will create a storage pool named 'default' pointing to /var/lib/libvirt/images. It then uses that 'default' pool for the disk provisioning page.

The default virtual network works similarly. The libvirt-daemon-config-network package will dynamically create a libvirt virtual network named 'default'. You can see the XML definition over here in libvirt.git.

When virt-manager reaches the last page of the 'New VM' wizard, if there's a virtual network named 'default', we automatically select it as the network source:


It's also the network source used when no explicit network configuration is passed to virt-install.

Every now and then someone asks how to make virt-manager/virt-install use a different storage pool or network as the default. As the above logic describes, just name the desired virtual network or storage pool 'default', and the tools will do the right thing.
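
If you'd rather do this from the command line, here's a rough sketch of pointing a directory-backed pool named 'default' at a custom path (the path is only an example, and any existing 'default' pool would need to be removed or renamed first):

 virsh pool-destroy default && virsh pool-undefine default
 virsh pool-define-as default dir --target /mnt/storage/images
 virsh pool-build default
 virsh pool-start default
 virsh pool-autostart default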

You can rename storage pools and virtual networks using virt-manager's UI from Edit->Connection Details. It only works on a stopped object though. Here's an example renaming a virtual network 'default' to 'new-name':


Monday, July 7, 2014

Enabling Hyper-V enlightenments with KVM

Windows has support for several paravirt features that it will use when running on Hyper-V, Microsoft's hypervisor. These features are called enlightenments. Many of the features are similar to paravirt functionality that exists with Linux on KVM (virtio, kvmclock, PV EOI, etc.)

Nowadays QEMU/KVM can also enable support for several Hyper-V enlightenments. When enabled, Windows VMs running on KVM will use many of the same paravirt optimizations they would use when running on Hyper-V. For detailed info, see Vadim's presentation from KVM Forum 2012.

From the QEMU/KVM developers, the recommended configuration is:

 -cpu ...,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time  

Which maps to the libvirt XML:

 <features>  
  <hyperv>  
   <relaxed state='on'/>  
   <vapic state='on'/>  
   <spinlocks state='on' retries='8191'/>  
  </hyperv>  
 </features>
   
 <clock ...>  
  <timer name='hypervclock' present='yes'/>  
 </clock>  

Some details about the individual features:
  • relaxed/hv_relaxed. Available in libvirt 1.0.0+ (commit) and qemu 1.1+ (commit). This bit disables a Windows sanity check that commonly results in a BSOD when the VM is running on a heavily loaded host (example bugs here, here, and here). Sounds similar to the Linux kernel option no_timer_check, which is automatically enabled when Linux is running on KVM.
  • vapic/hv_vapic. Available in libvirt 1.1.0+ (commit) and qemu 1.1+ (commit).
  • spinlocks/hv_spinlocks. Available in libvirt 1.1.0+ (commit) and qemu 1.1+ (commit)
  • hypervclock/hv_time. Available in libvirt 1.2.2+ (commit) and qemu 2.0+ (commit). Sounds similar to kvmclock, a paravirt time source which is used when Linux is running on KVM.

It should be safe to enable these bits for all Windows VMs, though only Vista/Server 2008 and later will actually make use of the features.

(In fact, Linux also has support for using these Hyper-V features, like the paravirt device drivers and hyperv_clocksource. Though these are really only for running Linux on top of Hyper-V. With Linux on KVM, the natively developed paravirt extensions are understandably preferred).

The next version of virt-manager will enable Hyper-V enlightenments when creating a Windows VM (git commit). virt-xml can also be used to enable these bits easily from the command line for an existing VM:

 sudo virt-xml $VMNAME --edit --features hyperv_relaxed=on,hyperv_vapic=on,hyperv_spinlocks=on,hyperv_spinlocks_retries=8191
 sudo virt-xml $VMNAME --edit --clock hypervclock_present=yes  

The first invocation will work with virt-manager 1.0.1, the second invocation requires virt-manager.git. In my testing this didn't upset my existing Windows VMs and they worked fine after a reboot.
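
To sanity check that the features actually landed in the config, you can dump the XML and look for the hyperv and clock bits, for example:

 sudo virsh dumpxml $VMNAME | grep -A 4 hyperv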

Other tools aren't enabling these features yet, though there are bugs tracking this for the big ones.
(edit 2014-09-08: This change was released in virt-manager-1.1.0)

Saturday, July 5, 2014

Fedora 21 Virt Test Day scheduled for August 20th 2014

Just a quick note that the Fedora 21 Virt Test Day is scheduled for Wednesday, August 20th 2014. The in-progress landing page is at:

https://fedoraproject.org/wiki/Test_Day:2014-08-20_Virtualization

So if you're interested in helping test new virt features, or you want to make sure that the stuff you care about isn't broken, please mark your calendars.

Sunday, June 1, 2014

python-bugzilla 1.1.0 released

I just released python-bugzilla 1.1.0, you can see the release announcement over here. Updates in progress for f19, f20, rawhide, and epel6.

This release includes:

- Support for bugzilla tokens (Arun Babu Nelicattu)
- bugzilla: Add query/modify --tags
- bugzilla --login: Allow to login and run a command in one shot
- bugzilla --no-cache-credentials: Don't use or save cached credentials
  when using the CLI
- Show bugzilla errors when login fails
- Don't pull down attachments in bug.refresh(), need to get
  bug.attachments manually
- Add Bugzilla bug_autorefresh parameter.

Some of these changes deserve a bit more explanation. This is just adapted from the release announcement, but I wanted to give these changes a bit more attention here on the blog.

Bugzilla tokens

 

Sometime later in the year, bugzilla.redhat.com will be updated to a new version of bugzilla that replaces API cookie authentication with a non-cookie token system. I don't fully understand the reasoning so don't ask :) Regardless, this release supports the new token infrastructure alongside the existing cookie handling.

Users shouldn't notice any difference: we cache the token in ~/.bugzillatoken so things work the same as the current cookie auth.

If you use cookiefile=None in the API to tell bugzilla _not_ to cache any login credentials, you will now also want to specify tokenfile=None (hint hint fedora infrastructure).

bugzilla --login and bugzilla --no-cache-credentials

 

Right now, performing an authenticated bugzilla command on a new machine requires running a one time 'bugzilla login' to cache credentials before running the desired command.

Now you can just do 'bugzilla --login <command>' and the login process will be initiated before invoking the command.

Additionally, the --no-cache-credentials option will tell the bugzilla tool to _not_ save any credentials to ~/.bugzillacookies or ~/.bugzillatoken.
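
As a rough example of how these fit together (the query arguments are arbitrary placeholders):

 bugzilla --login query --product Fedora --component virt-manager
 bugzilla --no-cache-credentials --login query --product Fedora --component virt-manager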


Bugzilla.bug_autorefresh

 

When interacting with a Bug object, if you attempt to access a property (say, bugobj.component) that hasn't already been fetched from the bugzilla instance, python-bugzilla will do an automatic 'refresh'/getbug to pull down every property for said bug in an attempt to satisfy the request.

This is convenient for a one off API invocation, but for recurring scripts this is a waste of time and bugzilla bandwidth. The autorefresh can be avoided by passing a properly formatted include_fields to your query request, where include_fields contains every Bug property you will access in your script.

However it's still quite easy to extend a script with a new property usage and forget to adjust include_fields. Things will continue to work due to the autorefresh feature but your script will be far slower and wasteful.

A new Bugzilla property has been added, bug_autorefresh. Set this to False to disable the autorefresh feature for newly fetched bugs. This will cause an explicit error to be raised if your code is depending on autorefresh.

Please consider setting this property for your recurring scripts. Example:

  bzapi = Bugzilla("bugzilla.redhat.com")
  bzapi.bug_autorefresh = False
  ...

autorefresh can be disabled for individual bugs with:

  bug.autorefresh = False

Tuesday, May 20, 2014

Invoking a bugzilla query URL from the command line

The /usr/bin/bugzilla tool provided by python-bugzilla is quite handy for managing batch actions on bugs or quickly performing simple queries. However when you want to perform a crazy query with all sorts of boolean rules that the web UI exposes, the command line gets pretty unwieldy.

Fortunately, there's a simple workaround: generate a query URL using the bugzilla web UI, and pass it to /usr/bin/bugzilla like:

 bugzilla query --from-url "$url"  

That's it! Then you can tweak the output as you want using --outputformat or similar. This works for savedsearch URLs as well.
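
For example, a sketch of combining --from-url with a custom output format (the field names are assumptions based on the Bug attributes, adjust as needed):

 bugzilla query --from-url "$url" --outputformat "%{id}: %{summary}"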

So, say I go to the bugzilla web UI and say 'show me all open, upstream libvirt bugs that haven't received a comment since 2013'. It generates a massive URL. Here's what the URL looks like:

https://bugzilla.redhat.com/buglist.cgi?bug_status=NEW&bug_status=ASSIGNED&bug_status=POST&bug_status=MODIFIED&bug_status=ON_DEV&bug_status=ON_QA&bug_status=VERIFIED&bug_status=RELEASE_PENDING&classification=Community&component=libvirt&f1=longdesc&list_id=2372821&n1=1&o1=changedafter&product=Virtualization%20Tools&query_format=advanced&v1=2013-12-31

Just pass that thing as $url in the above command, and you should see the same results as the web search (if your results aren't the same, you might need to do 'bugzilla login' first to cache credentials).

This is also easy to do via the python API:

#!/usr/bin/python

import bugzilla
bzapi = bugzilla.Bugzilla("bugzilla.redhat.com")
buglist = bzapi.query(bzapi.url_to_query("URL"))

for bug in buglist:
    print bug.summary
    ....

The caveat: as of now this only works with bugzilla.redhat.com, which has an API extension that allows it to interpret the URL syntax as regular query parameters. My understanding is that this may be available upstream at some point, so other bugzilla instances will benefit as well.

Tuesday, May 13, 2014

virt-manager 1.0: reduced polling and CPU usage

A lot of work was done for virt-manager 1.0 to reduce the amount of libvirt polling and API calls we make for common operations. Up until this point, virt-manager had to poll libvirt at regular intervals to update the domain list, domain status, and domain XML. By default we would poll once a second (configurable in the Preferences dialog).

Although this burned far more CPU than necessary, things generally worked fine when talking to libvirtd on the local machine. However things really fell apart when connecting to a remote host with a lot of VMs, or over a high latency link: the polling would just saturate the connection and the app would be quite sluggish. Since the latter scenario is a pretty common setup for some of my remote colleagues at Red Hat, I heard about this quite a bit over the years :)

One of the major hurdles to reducing needless polling was that virt-manager and virtinst were separate code bases, as I explained in a previous post. For example, there is one routine in virtinst that will check if a new disk path is already in use by another VM: it does this by checking the path against the XML of every VM. Since virtinst was separate code, it had to do all this polling and XML fetching from scratch, despite the fact that we had this information cached in virt-manager. We could have taught virtinst about the virt-manager cache or some similar solution, but it was cumbersome to make changes like that while maintaining back compatibility with older virtinst users.

Well, with virt-manager 0.10.0 we deprecated the public virtinst API and merged the code into virt-manager git. This allowed us to do a ton of code cleanup and simplification during the virt-manager 1.0 cycle to remove much of the API spamming.

The other major piece we added in virt-manager 1.0 is use of asynchronous libvirt events. The initial events support in libvirt was added way back in October 2008 by a couple folks from VirtualIron. That's quite a while ago, so supporting this in virt-manager was long overdue. Though waiting a long time had the nice side effect of letting other projects like oVirt shake all the bugs out of libvirt's event implementations :)

Regardless, virt-manager 1.0 will use domain (and network) events now if connected to a sufficiently new version of libvirt and the driver supports it. We still maintain the old polling code for really old libvirt, and libvirt drivers that don't support the event APIs. Even on latest libvirt some polling is still needed since not all libvirt objects support event APIs, although now we poll on demand which reduces our CPU and network usage.

Tuesday, May 6, 2014

A brief-ish history of virtinst and virt-install

virt-install is a command line tool for creating new virtual machines via libvirt. It's an important piece of the libvirt ecosystem that has shipped in RHEL5.0 and up, and over a dozen Fedora versions.

It wasn't always called virt-install though: it started life as xenguest-install.py written by Jeremy Katz. I think it was just an internal Red Hat only thing for a brief period, until it first surfaced as part of the Fedora 'xen' package in January 2006:

 commit 02687b4e3f7fa0db5de34280a2cb7e1a8eb8ff18  
 Author: Stephen Tweedie <sct@fedoraproject.org>  
 Date:  Tue Jan 31 16:59:19 2006 +0000  
   
   Add xenguest-install.py in /usr/sbin  

(Strangely, the file isn't actually in git... I assume this is some accident of the CVS->git conversion. You can see the version shipped with Fedora Core 5 in the archived RPM).

Check out the original set of arguments:

 Options:  
  -h, --help      show this help message and exit  
  -n NAME, --name=NAME Name of the guest instance  
  -f DISKFILE, --file=DISKFILE  
             File to use as the disk image  
  -s DISKSIZE, --file-size=DISKSIZE  
             Size of the disk image (if it doesn't exist) in  
             gigabytes  
  -r MEMORY, --ram=MEMORY  
             Memory to allocate for guest instance in megabytes  
  -m MAC, --mac=MAC   Fixed MAC address for the guest; if none is given a  
             random address will be used  
  -v, --hvm       This guest should be a fully virtualized guest  
  -c CDROM, --cdrom=CDROM  
             File to use a virtual CD-ROM device for fully  
             virtualized guests  
  -p, --paravirt    This guest should be a paravirtualized guest  
  -l LOCATION, --location=LOCATION  
             Installation source for paravirtualized guest (eg,  
             nfs:host:/path, http://host/path, ftp://host/path)  
  -x EXTRA, --extra-args=EXTRA  
             Additional arguments to pass to the installer with  
             paravirt guests  

All those bits are still working with virt-install today, although many are deprecated and hidden from the --help output by default.

In early 2006, libvirt barely even existed, so xenguest-install.py was generating xen xm config files (basically just raw python code) in /etc/xen. Fedora CVS was the canonical home of the script.

In March 2006, Dan Berrangé started work on virt-manager. It was very briefly called gnome-vm-manager, then settled into gnome-virt-manager until July 2006 when it was renamed to virt-manager.

In August 2006, xenguest-install moved to its own repo, python-xeninst:

 commit 1e2e1aa0ca0b5ed8669be61aa4271a3e8c1d7333  
 Author: Jeremy Katz <katzj@redhat.com>  
 Date:  Tue Aug 8 21:37:49 2006 -0400  
   
   first pass at breaking up xenguest-install to have more of a usable API.  
   currently only works for paravirt and some of the bits after the install  
   gets started are still a little less than ideal  

Much of the logic was moved to a 'xeninst' module. There were some initial bits for generating libvirt XML, but the primary usage was still generating native xen configuration.

(Both repositories were hosted in mercurial at hg.et.redhat.com for many years. We eventually transitioned to git in March 2011. Actually it's amazing it was only 3 years ago: I've pretty much entirely forgotten how to use mercurial despite using it for 4 years prior.)

In October 2006, the project was renamed python-virtinst and the tool renamed to virt-install. By this point virt-manager was using the xeninst module for guest creation and needed to handle the rename as well.

So now python-virtinst was its own standalone package, providing virt-install and a python library named virtinst. Over the next couple years the repo accumulated a few more tools: virt-clone in May 2007, virt-image in June 2007, and virt-convert (originally virt-unpack) in July 2008.

However over the next few years we had some growing pains with the virtinst module. It wasn't exactly a planned API, rather a collection of code that grew organically from a quick hack of a script. It never received too much thought for future compatibility. The fact that it ended up as a public API was more historic accident than anything. Once we accumulated external users (cobbler in March 2007 and koji in July 2010) we were stuck with the API in the name of back compatibility.

Then there was the general frustration of doing virt-manager development when it evolved in lockstep with virtinst: running upstream virt-manager always required running up to date python-virtinst, which was a barrier to upstream contribution.

So in February 2012 I laid out some reasons for dropping virtinst as a public API and merging the code into virt-manager.git, though it didn't fully happen until April 2013 during the virt-manager 0.10.0 cycle. In the intervening year, I sent patches to koan and koji to move off virtinst to calling the needed virt-* tool directly.

So nowadays virtinst, virt-install, etc. all live with virt-manager.git. If you're looking for a library that helps handle libvirt XML or create libvirt VMs, check out libvirt-designer and libvirt-gobject/libvirt-gconfig.

Tuesday, April 29, 2014

Github doesn't support pull-request notifications to mailing lists

Recently I played around with github a bit, with the intention of finding a useful setup for hosting python-bugzilla (and possibly virt-manager). However I want to preserve the traditional open source mailing list driven development model.

github's whole development model is built around pull requests. Personally I kinda like the setup, but for a pre-existing project built around patches on a mailing list it's quite a different workflow.

github doesn't allow disabling the pull-request option. This is understandable since it's pretty central to their entire model. However my main sticking point is that github doesn't provide a straightforward way to send pull request notifications to a mailing list. I don't want everyone on a mailing list to have to opt in to watching the repo on github to be notified of pull requests. I want non-github users to be able to contribute to pull request discussions on a mailing list. I want pull requests on the project mailing list since it's already a part of my workflow. I don't want my project to be one of those that accumulates ignored pull-requests because it isn't part of the project workflow and no one is watching the output.

Googling about this was quite frustrating; it was difficult to find a clear answer. I eventually found an abandoned pull request to github-services that made everything clear. But not before I tried quite a few things. Here's what I tried:
  • Opening a github account using the mailing list address, 'watch' the repository. It works, but yeah, not too safe since anyone can just trigger a 'forgot password' reset email.
  • Put the repo in an 'organization', add the mailing list as a secondary address to your account, have all notifications for the organization go to the mailing list. But even secondary addresses work for the password reset, so that's out.
  • 'email' webhook/service: At your github repo, go to settings->webhooks & services->configure services->email. Hey, this looks promising. The problem is it's quite limited in scope, only supporting email notifications for repo pushes and when  a public repo is added.
  • The actual webhook configuration is quite elaborate and allows notifying of pull-requests and everything else you would want to know, but that requires running an actual web service somewhere. But I have no interest in maintaining a public service just to proxy some email.
There's a public repo for the bit of github that the 'email' webhook lives under. I stuck some thoughts on an open issue that more or less tracks the RFE to extend the email capabilities.

Someone out there with some spare time and ruby-fu want to take a crack at this? I think many old school open source projects would be thankful for it :)

Tuesday, April 22, 2014

virt-convert command line has been reworked


One of the changes we made with virt-manager 1.0 was a large reworking of the virt-convert command line interface.

virt-convert started life as a tool for converting back and forth between different VM configuration formats. Originally it was just between vmx and virt-image(5), but it eventually grew ovf input support. However, for the common usage of trying to convert a vmx/ovf appliance into a libvirt guest, this involved an inconvenient two step process:

* Convert to virt-image(5) with virt-convert
* Convert to a running libvirt guest with virt-image(1)

Well, since virt-image didn't really have any users, it's planned for removal. So we took the opportunity to improve virt-convert in the process. Running virt-convert is now as simple as:

 virt-convert fedora18.ova  

or

 virt-convert centos6.tar.gz  

And we convert directly to libvirt XML and launch the guest. Standard libvirt options are allowed, like --connect for specifying the libvirt driver.
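
For instance, targeting the system libvirt instance explicitly:

 virt-convert fedora18.ova --connect qemu:///system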

The tool hasn't been heavily used and there's definitely still a lot of details we are missing, so if you hit any issues please file a bug report.

(Long term it sounds like gnome-boxes may grow a similar feature as mentioned over here, so maybe virt-convert isn't long for this world since there will likely be a command line interface for it as well)

Thursday, April 17, 2014

qemu 2.0.0 built for rawhide, but no qemu-system-aarch64 in fedora yet

qemu 2.0.0 was released today, and I've built it for rawhide now. Which means it will be in the fedora-virt-preview repo tomorrow when my scripts pick it up (need to convert to copr one of these days...).

The 2.0 release number is fairly arbitrary here and not related to any particular feature or development. Though each qemu release tends to have some newsworthy goodies in it :)

The top highlighted change in the announcement is about qemu-system-aarch64, though it isn't built in the Fedora package yet. Right now qemu-system-aarch64 _only_ works on aarch64 machines, but most people that will try and give it a spin now will be trying to do aarch64 emulation on x86. So I decided not to build it yet to save myself some bug reports :)

I suspect the next qemu release will have something usable for x86 usage, so in roughly 2 months time (if things stay on schedule) when qemu 2.1 rc0 is out, I'll add the qemu-system-aarch64 sub package. This is being tracked as a 'change' for Fedora 21: https://fedoraproject.org/wiki/Changes/Virt_64bit_ARM_on_x86

If anyone working on Fedora aarch64 actually cares about qemu-system-aarch64 at this stage and wants it packaged up, ping me and we can talk.

Tuesday, April 15, 2014

Deprecating little used tool virt-image(1)

In the recent virt-manager 1.0 release, we've taken a step towards deprecating one of the command line tools we ship, virt-image(1). This shouldn't have any real end user effect because I'm pretty sure near zero people are using it.

virt-image was created in June 2007 as an XML schema and command line tool for distributing VM images as appliances. The format would describe the fundamental needs of a VM image, like how many disk devices it wants, but leave the individual configuration details up to the user. The virt-image(1) tool would take the XML as input and kick off a libvirt VM.

While the idea was reasonable, the XML format would only be useful if people actually used it, which never happened. All desktop VM appliance usage nowadays is shipped with native VMware config as .vmx files, or with .ovf configuration.

In the past, appliance-tools generated virt-image XML, but that was dropped. Same with boxgrinder.

But most users of it in the past few years have been by way of the (also little used) virt-convert tool that we ship with virt-manager (which I'll cover in a later post). Historically virt-convert would output virt-image XML by default. Well, that too was changed in 1.0: virt-convert generates direct libvirt XML now. This makes virt-convert more convenient, and left us with no good reason to keep virt-image around anymore.

So the plan is to drop virt-image before the next major release of virt-manager. Likely sometime in the next 6 months.

(edit 2014-09-08: virt-image was removed in virt-manager-1.1.0)

Tuesday, April 8, 2014

pylint in Fedora 20 supports gobject introspection

GObject introspection is the magical plumbing that enables building multiple language bindings for a GObject-based library using not much more than API documentation. This is used by PyGObject to give us python access to gtk3, gdk, glib, etc. Pretty sweet.

Unfortunately the automagic nature of this confused pylint: it would claim your 'from gi.repository import Gtk' import didn't exist, and lose many of its nice features when interacting with objects provided by introspection derived bindings.

I love pylint and it's a critical part of my python development workflow. So last year I decided to do my part and submit a patch to add some gobject introspection knowledge to pylint.

It works by actually importing the module (like 'Gtk' above), inspecting all its classes and functions, and building stub code that pylint can analyze. It's not perfect, but it will catch things like misspelled method names. (Apparently newer python-astroid has some infrastructure to inspect living objects, so likely the plugin will use that one day).

This support was released in python-astroid-1.0.1, which hit Fedora 20 at the beginning of March. Unfortunately a bug was causing a bunch of false positives with gobject-introspection, but that should be fixed with python-astroid-1.0.1-3 heading to F20.

Tuesday, April 1, 2014

Spice USB redirection in virt-manager

A new feature we added in virt-manager 1.0 is out of the box support for Spice USB redirection.

When connected to a VM's graphical display, any USB device plugged in to your physical host will be automatically redirected to the VM. This is great for easily sharing a usb drive with your VM. Existing devices can also be manually attached via the VM window menu option 'Virtual Machine->Redirect USB Device'

The great thing about Spice USB redirection is that it doesn't require configuring the spice agent or any special drivers inside the VM, so for example it will 'just work' for your existing windows VMs. And since the streaming is done via the spice display widget, you can easily share a local USB device with a VM on a remote host.

This feature is only properly enabled for KVM VMs that are created with virt-manager 1.0 or later. Configuring an existing VM requires 3 changes:
  1. Set the graphics type to Spice
  2. Set the USB controller model to USB2
  3. Add a 'USB Redirection' device to the VM. Add multiple redirection devices to allow redirecting multiple host USB devices simultaneously.
All those bits should be fairly straightforward to do with the UI in virt-manager 1.0.
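
If you'd rather poke at an existing VM from the command line, step 3 above can likely be done with virt-xml as well; an untested sketch, where $VMNAME is your VM and each invocation adds one redirection device:

 sudo virt-xml $VMNAME --add-device --redirdev usb,type=spicevmc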

For more details, like how to do this using libvirt XML or the qemu command line, check the documentation over here:

http://people.freedesktop.org/~teuf/spice-doc/html/

Wednesday, March 26, 2014

python-bugzilla 1.0.0 released

I released python-bugzilla 1.0.0 yesterday. Since Arun led the charge to get python 3 support merged, it seemed like as good a time as any to go 1.0 :)

python-bugzilla provides the /usr/bin/bugzilla command line tool that is heavily used in Fedora and internally at Red Hat. The library itself is used by parts of Fedora infrastructure like pkgdb and bodhi.

This major changes in this release:

- Python 3 support (Arun Babu Neelicattu)
- Port to python-requests (Arun Babu Neelicattu)
- bugzilla: new: Add --keywords, --assigned_to, --qa_contact (Lon
  Hohberger)
- bugzilla: query: Add --quicksearch, --savedsearch
- bugzilla: query: Support saved searches with --from-url
- bugzilla: --sub-component support for all relevant commands

The sub component stuff is a recent bugzilla.redhat.com extension that hasn't been used much yet. I believe the next bugzilla.redhat.com update will add sub components for the Fedora kernel package, so bugs can be assigned to things like 'Kernel->USB', 'Kernel->KVM', etc. Quite helpful for complicated packages.

But I don't know if it's going to be opened up as an option for any Fedora package, I guess we'll just have to wait and see. I assume there will be an announcement about it at some point.

Tuesday, March 25, 2014

virt-manager: Improved CPU model default

In virt-manager 1.0 we improved many of the defaults we set when creating a new virtual machine. One of the major changes was to how we choose the CPU model that's exposed to the VM OS.

CPU model here means something like 'Pentium 3' or 'Opteron 4' and all the CPU flags that go along with that. For KVM, every CPU flag that we expose to the VM has to be something provided by the host CPU, so we can't just unconditionally tell the VM to use the latest and greatest features. The newer the CPU that's exposed to the guest, the more features the guest kernel and userspace can use, which improves performance.

There are a few trade offs however: if live migrating your VM, the destination host CPU needs to be able to represent all the features that have been exposed to the VM. So if your VM is using 'Opteron G5', but your destination host is only an 'Opteron G4', the migration will fail. And changing the VM CPU after OS installation can cause Windows VMs to require reactivation, which can be problematic. So in some instances you are stuck with the CPU model the VM was installed with.

Prior to virt-manager 1.0, new VMs received the hypervisor default CPU model. For qemu/kvm, that's qemu64, a made up CPU with very few feature flags. This leads to less than optimal guest OS performance but maximum migration compatibility.

But the reality is that cross host migration is not really a major focus of virt-manager. Migration is all about preserving uptime of a server VM, but most virt-manager users are managing VMs for personal use. It's a bigger win to maximize out of the box performance.

For virt-manager 1.0, we wanted new VMs to have an identical copy of the host CPU. There's two ways to do that via libvirt:
  1. mode=host-passthrough: This maps to the 'qemu -cpu host' command line. However, this option is explicitly not recommended for libvirt usage. libvirt wants to fully specify a VM's hardware configuration, to insulate the VM from any hardware layout changes when qemu is updated on the host. '-cpu host' defers to qemu's detection logic, which violates that principle.
  2. mode=host-model: This is libvirt's attempted reimplementation of '-cpu host', and is the recommended solution in this case. However the current implementation has quite a few problems. The issues won't affect most users, and they are being worked on, but for now host-model isn't safe to use as a general default.
So for virt-manager 1.0, we compromised to using the nearest complete CPU model of the host CPU. This requires a bit of explanation. There are multiple CPUs on the market that are labelled as 'core2duo'. They all share a fundamental core of features that define what 'core2duo' means. But some models also have additional features. virt-manager 1.0 will ignore those extra bits and just pass 'core2duo' to your VM. This is the best we can do for performance at the moment without hitting the host-model issues mentioned above.

However this default is configurable: if you want to return to the old method that maximizes migration compatibility, or you want to try out host-model, you can change the default for new VMs in Edit->Preferences:
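
If you're scripting new VMs with virt-install, the same choices can be made explicitly via --cpu. A sketch (the special values depend on your virt-install version):

 virt-install --cpu host-passthrough ...   # qemu's '-cpu host', not recommended by libvirt
 virt-install --cpu host-model ...         # libvirt's reimplementation of '-cpu host'
 virt-install --cpu core2duo ...           # a specific named model, like virt-manager's nearest-match default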

Saturday, March 22, 2014

virt-manager 1.0.1 release

I've just released virt-manager 1.0.1. This was mostly a bug fix release to gather up the sizeable number of bug fixes that accumulated since virt-manager 1.0.0 was released last month.

Though there were a few mini features added:
  • virt-install/virt-xml: New --memorybacking option (Chen Hanxiao)
  • virt-install/virt-xml: New --memtune option (Chen Hanxiao)
  • virt-manager: UI for LXC (Chen Hanxiao)
  • virt-manager: gsettings key to disable keygrab (Kjö Hansi Glaz)
  • virt-manager: Show domain state reason in the UI (Giuseppe Scrivano)
  • Support for many more device live update operations, like changing the network source for a running VM, etc.
Builds are in progress for F20 and rawhide.

Monday, March 17, 2014

Snapshot support in virt-manager


The biggest feature we added in virt-manager 1.0 is VM snapshot support. Users have been asking us to expose this in the UI for quite a long time. In this post I'll walk you through the new UI.

Let's start with some use cases for VM snapshots:
  1. I want to test some changes to my VM, and either throw them away, or use them permanently.
  2. I want to have a single Fedora 20 VM, but multiple snapshots with mutually exclusive OS changes in each. One snapshot might have F20 libvirt installed, but another snapshot will have libvirt.git installed. I want to switch between them for development, bug triage, etc.
  3. I encountered a bug in the VM. I want to save the running state of the VM in case developers want further debugging information, but I also want to restart the VM and continue to use it in the meantime.
The libvirt APIs support two different types of snapshots with qemu/kvm.

Internal snapshots


Internal snapshots are the snapshots that QEMU has supported for a long time. Libvirt refers to them as 'internal' because all the data is stored as part of the qcow2 disk image: if you have a VM with a single qcow2 disk image and take 10 snapshots, you still have only one file to manage. This is the default snapshot mode if using the 'virsh snapshot-*' commands.

These snapshots can combine disk and VM memory state for 'checkpointing', so you can jump back and forth between saved running VM states. A snapshot of an offline VM can also be performed, and only the disk contents will be saved.
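
For reference, a minimal sketch of driving internal snapshots from virsh (the VM and snapshot names are just examples):

 virsh snapshot-create-as f20-vm before-upgrade "state before installing libvirt.git"
 virsh snapshot-list f20-vm
 virsh snapshot-revert f20-vm before-upgrade
 virsh snapshot-delete f20-vm before-upgrade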

Cons:
  • Only works with qcow2 disk images. Since virt-manager has historically used raw images, pre-existing VMs may not be able to work with this type.
  • They are non-live, meaning the VM is paused while all the state is saved. For end users this likely isn't a problem, but if you are managing a public server, minimizing downtime is essential.
  • Historically they were quite slow, but this has improved quite a bit with QEMU 1.6+

External snapshots


External snapshots are about safely creating copy-on-write overlay files for a running VM's disk images. QEMU has supported copy-on-write overlay files for a long time, but the ability to create them for a running VM is only a couple years old. They are called 'external' because every snapshot creates a new overlay file.

While the overlay files have to be qcow2, these snapshots will work with any base disk image. They can also be performed with very little VM downtime, at least under a second. The external nature also allows different use cases like live backup: create a snapshot, back up the original backing image, when backup completes, merge the overlay file changes back into the backing image.

However that's mostly where the benefits end. Compared to internal snapshots, which are an end to end solution with all the complexity handled in QEMU, external snapshots are just a building block for handling the use cases I described above... and many of the pieces haven't been filled in yet. Libvirt still needs a lot of work to reach feature parity with what internal snapshots already provide. This is understandable, as the main driver for external snapshot support was for features like live backup that internal snapshots weren't suited for. Once that point was reached, there hasn't been much of a good reason to do the difficult work of filling in the remaining support when internal snapshots already fit the bill.

virt-manager support


Understandably we decided to go with internal snapshots in virt-manager's UI. To facilitate this, we've changed the default disk image for new qemu/kvm VMs to qcow2.

The snapshot UI is reachable via the VM details toolbar and menu:


That button will be disabled with an informative tool tip if snapshots aren't supported, such as if the disk image isn't qcow2, or using a libvirt driver like xen which doesn't have snapshot support wired up.

Here's what the main screen looks like:


It's pretty straightforward. The column on the left lists all the snapshots. The 'VM State' means the state the VM was in when the snapshot was taken. So running/reverting to a 'Running' snapshot means the VM will end up in a running state; a 'Shutoff' snapshot will end up with the VM shutoff, etc.

The check mark indicates the last applied snapshot, which could be the most recently created snapshot or the most recently run/reverted snapshot. The VM disk contents are basically 'the snapshot contents' + 'whatever changes I've made since then'. It's just an indicator of where you are.

Internal snapshots are all independent of one another. You can take 5 successive snapshots, delete 2-4, and snapshots 1 and 5 will still be completely independent. Any notion of a snapshot hierarchy is really just metadata, and we decided not to expose it in the UI. That may change in the future.

Run/revert to a snapshot with the play button along the bottom. Create a new snapshot by hitting the + button. The wizard is pretty simple:


That's about it. Give it a whirl in virt-manager 1.0 and file a bug if you hit any issues.

Monday, March 10, 2014

Extending the virt-xml command line

As previously explained, virt-manager 1.0.0 shipped with a tool called virt-xml, which enables editing libvirt XML from the command line in one shot. This post will walk through an example of patching virt-xml to support a new libvirt XML value.

A bit of background: libvirt VM configuration is in XML format. It has quite an extensive XML schema. For QEMU/KVM guests, most of the XML attributes map to qemu command line values. QEMU is always adding new emulated hardware and new features, which in turn require the XML schema to be extended. Example: this recent libvirt change to allow turning off Spice drag + drop support with a <filetransfer enable='no'/> option.

For this example, we are going to expose a different property: defaultMode, also part of the graphics device. defaultMode can be used to tell qemu to open all spice channels in secure TLS mode. But for the purpose of this example, what defaultMode actually does and how it works isn't important. For virt-xml, the only important bits are getting the value from the command line, writing it correctly as XML, and unit testing the XML generation.
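
Concretely, the attribute lives on the <graphics> element itself, so once everything is wired up the generated XML should look something like <graphics type='spice' defaultMode='secure'> (the rest of the element will vary with your graphics configuration).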

You can see the completed virt-xml git commit over here.



Step 0: Run the test suite

The virt-manager test suite aims to always 100% pass, but depending on your host libvirt version things can occasionally be broken. Run 'python setup.py test' and note if any tests fail. The important bit here is that after we make all the following changes, the test suite shouldn't regress at all.


Step 1: XML generation

 diff --git a/virtinst/devicegraphics.py b/virtinst/devicegraphics.py  
 index 37f268a..a87b71c 100644  
 --- a/virtinst/devicegraphics.py  
 +++ b/virtinst/devicegraphics.py  
 @@ -204,6 +204,7 @@ class VirtualGraphics(VirtualDevice):  
    passwdValidTo = XMLProperty("./@passwdValidTo")  
    socket = XMLProperty("./@socket")  
    connected = XMLProperty("./@connected")  
 +  defaultMode = XMLProperty("./@defaultMode")

    listens = XMLChildProperty(_GraphicsListen)  
    def remove_listen(self, obj):  

Starting with virt-manager git, first we extend the internal API to map a python class property to its associated XML value.

The virtinst/ directory contains the internal XML building API used by all the tools shipped with virt-manager. There's generally a single file and class per XML object, for example:
  • devicegraphics.py: <graphics> device
  • cpu.py: <cpu> block
  • osxml.py: <os> block
  • And so on
If you aren't sure what file or class you need to alter, try grepping for a property you know virt-install already supports. For example, from the virt-install --graphics=? output I can see there's a property named passwdValidTo. Doing 'git grep passwdValidTo' points to virtinst/devicegraphics.py.

'XMLProperty' is some custom glue that maps a python class property to an XML value, for both reading and writing. The value passed to XMLProperty is an XML xpath. If you don't know how xpaths work, google around, or try to find an existing example in the virtinst code.
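
For example, the xpath './@defaultMode' used above just means 'the defaultMode attribute on the device's top level element', i.e. on <graphics> itself, while a path like './image/@compression' would point at an attribute of a nested child element.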

Notice that this doesn't do much else, like validate that the value passed to defaultMode is actually valid. The general rule is to leave this up to libvirt to complain.


Step 2: Command line handling

 diff --git a/virtinst/cli.py b/virtinst/cli.py  
 index 826663a..41d6a8c 100644  
 --- a/virtinst/cli.py  
 +++ b/virtinst/cli.py  
 @@ -1810,6 +1810,7 @@ class ParserGraphics(VirtCLIParser):  
      self.set_param("passwd", "password")  
      self.set_param("passwdValidTo", "passwordvalidto")  
      self.set_param("connected", "connected")  
 +    self.set_param("defaultMode", "defaultMode")  
    
    def _parse(self, opts, inst):  
      if opts.fullopts == "none":  

The next step is to set up command line handling. In this case we are adding a sub option to the --graphics command. Open up virtinst/cli.py and search for '--graphics'; you'll find a comment followed by the ParserGraphics class definition. That's where we plug in new sub options.

The 'self.set_param' call registers the sub option: the first argument is the name on the cli, the second argument is the property name we defined above. In this case they are the same.

Some options do extra validation or need to do special handling. If you need extra functionality, look at examples that pass setter_cb to set_param.

After this bit is applied, you'll see defaultMode appear in the --graphics=? output, and everything will work as expected. But we need to add a unit test to validate the XML generation.

An easy way to test that this is working is with a command line like:

./virt-install --connect test:///default --name foo --ram 64 \
               --nodisks --boot network --print-xml \
               --graphics spice,defaultMode=secure

That will use libvirt's 'test' driver, which is made for unit testing, and doesn't affect the host at all. The --print-xml command will output the new XML. Verify that your new command line option works as expected before continuing. See the HACKING file for additional tips for using the test driver.
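
A quick way to eyeball the result is to grep the generated XML for the new attribute; assuming the patch above is applied, something like this should print the <graphics> line containing defaultMode='secure':

./virt-install --connect test:///default --name foo --ram 64 \
               --nodisks --boot network --print-xml \
               --graphics spice,defaultMode=secure | grep defaultMode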


Step 3: Unit test XML generation

 diff --git a/tests/xmlparse.py b/tests/xmlparse.py  
 index a2448d2..397da45 100644  
 --- a/tests/xmlparse.py  
 +++ b/tests/xmlparse.py  
 @@ -559,6 +559,7 @@ class XMLParseTest(unittest.TestCase):  
      check("channel_cursor_mode", "any", "any")  
      check("channel_playback_mode", "any", "insecure")  
      check("passwdValidTo", "2010-04-09T15:51:00", "2011-01-07T19:08:00")  
 +    check("defaultMode", None, "secure")  
    
      self._alter_compare(guest.get_xml_config(), outfile)  

tests/xmlparse.py tests reading and writing XML, so it will exercise the change we made in virtinst/devicegraphics.py. Before you make any changes to tests/, run 'python setup.py test --testfile xmlparse.py': you should see a new error. This is because xmlparse.py emits a test failure if there's a new XML property in virtinst/ that isn't explicitly tested!

Similar to how you found what virtinst/ file to edit by grepping for a known graphics property like passwdValidTo, do the same in xmlparse.py to find the pre-existing graphics test function. The check() invocation is a small wrapper for setting and reading a value: the first argument is the python property name we are poking, the second argument is what the initial value should be, and the final argument is the new value we are setting.

The initial XML comes from tests/xmlparse-xml/*, and is initialized at the start of the function. But in our case, we don't need to manually alter that. So make the change, and rerun 'python setup.py test --testfile xmlparse.py' and...

Things broke! That's because the generated XML output changed, and contains our new defaultMode value. So we need to update the known-good XML files we compare against. The easiest way to do that is to run 'python setup.py test --testfile xmlparse.py --regenerate-output'. Run 'git diff' afterwards to ensure that only the graphics file was changed.

Finally, run 'python setup.py test' and ensure the rest of the test suite doesn't regress compared to the initial run you did in Step 0.

For cases where you added non-trivial command line handling, take a look at tests/clitest.py, where we run a battery of command line parsing tests. You likely want to extend this to verify your command line works as expected.
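
The --testfile trick from earlier should work for running just those tests, for example:

python setup.py test --testfile clitest.py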

Also, if you want to add an entirely new command line option that maps to an XML block, this commit adding the --memtune option is a good reference.


Step 4: Documentation?

For each new option sub property, the general rule is we don't need to explicitly list it in the man page or virt-install/virt-xml --help output. The idea is that command line introspection and libvirt XML documentation should be sufficient. However, if your command line option has some special behavior, or is particularly important, consider extending man/virt-install.pod.


Step 5: Submit the patch!

So your patch is done! git commit -a && git send-email -1 --to virt-tools-list@redhat.com, or simply drop it in a bug report. If you have any questions or need any assistance, drop us a line.

(update 2015-09-04: Point git links to github)

Tuesday, March 4, 2014

virt-xml: Edit libvirt XML from the command line

We shipped a new tool with virt-manager 1.0.0 called virt-xml. virt-xml uses virt-install's command line options to allow building and editing libvirt domain XML. A few basic examples:

Change the <description> of domain 'example':
# virt-xml example --edit --metadata description="my new description"  

Enable the boot device menu for domain 'example':
# virt-xml example --edit --boot bootmenu=on  

Hotplug host USB device 001.003 to running domain 'f19':
# virt-xml f19 --add-device --host-device 001.003 --update

The virt-xml man page also has a comprehensive set of examples.

While I doubt anyone would call it sexy, virt-xml fills a real need in the libvirt ecosystem. Prior to virt-xml, a question like 'how do I change the cache mode for my VM disk' had two possible answers:

1) Use 'virsh edit'

'virsh edit' drops you into $EDITOR and allows you to edit the XML manually. Now, ignoring the fact that editing XML by hand is a pain, 'virsh edit' requires the user to know the exact XML attribute or property name, and where to put it. And if it's in the wrong place or mis-named, in most cases libvirt will happily ignore it with no feedback (this is actually useful at the API level, but not very friendly for direct user interaction).

But more than that, have you ever seen what happens when you drop a less than savvy user into vim for the first time? It doesn't end well :) And this happens more than you might expect.

2) Use virt-manager

It's the more newbie-friendly option for sure: the UI is intuitive enough that people can usually find the XML bit they want to twiddle... provided it actually exists in virt-manager.

And that's the problem: over time these types of requests put pressure on virt-manager to expose many kind-of-obscure-but-not-so-obscure-that-virsh-edit-is-an-acceptable-answer XML properties in the UI. It was unclear where to draw the line on what should be in the UI and what shouldn't, and we ended up with various UI bits that very few people were actually interacting with.

Enter virt-xml

So here's virt-xml, which lets us make these types of XML changes with a single command. This takes the pressure off virt-manager, and provides a friendly middle ground between the GUI and 'virsh edit'. It also greatly simplifies documentation and wiki pages (like fedora test day test cases).
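
For example, the disk cache question from earlier becomes a one-liner; a hedged sketch (the domain name and 'target=vda' selector are placeholders, see the virt-xml man page for the exact selector syntax):

# virt-xml example --edit target=vda --disk cache=writeback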

The CLI API surface is huge compared to virt-manager's UI. There's no reason virt-xml can't expand to support every XML property exposed by libvirt. And we've worked on making it trivially easy to extend the tool to handle new XML options: in many cases, it's only 3 lines of code to add a new --disk/--network/... sub option, including unit testing, command line introspection, and virt-install support.