Wednesday, January 20, 2016

Tips for querying git tags


With package maintenance, bug triage, and email support, I often need to look at a project's git tags to know about the latest releases, when they were released, and what releases contain certain features. Here are a couple of workflow tips that make my life easier.

Better git tag listing

Based on Peter Hutterer's 'git bi' alias for improved branch listing (which is great and highly recommended), I made one for improved tag output that I mapped as 'git tags'. The output:

  • Shows tag name, commit message, commit ID, and date, all colorized. Commit message is redundant for many projects that tag the release commit, but it's interesting in some cases.
  • Tags are listed by date rather than alphabetically. Some projects change tag string formats or versioning schemes, which then don't sort correctly when listed alphabetically. Sorting by date makes it easy to see the latest tag. Often I just want to know what the latest tag or latest stable release is; this makes it easy.
The alias code is:
 [alias]  
   tags = "!sh -c ' \  
 git for-each-ref --format=\"%(refname:short)\" refs/tags | \  
 while read tag; do \  
   git --no-pager log -1 --format=format:\"$tag %at\" $tag; echo; \  
 done | \  
 sort -k 2 | cut -f 1 --delimiter=\" \" | \  
 while read tag; do \  
   fmt=\"%Cred$tag:%Cblue %s %Cgreen%h%Creset (%ai)\"; \  
   git --no-pager log -1 --format=format:\"$fmt\" $tag; echo; \  
 done'"  

Find the first tag that contains a commit

This seems to come up quite a bit for me. An example is here; a user was asking about a virt-install feature, and I wanted to tell them what version it appeared in. I grepped git log, found the commit, then ran:

 $ git describe --contains 87a611b5470d9b86bf57a71ce111fa1d41d8e2cd  
 v1.0.0~201  

That shows me that v1.0.0 was the first release with the feature they wanted; just take whatever is to the left of the tilde.
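
If you just want the tag name for scripting, something like this does the trick (the sed bit strips the ~/^ suffix):

 $ git describe --contains 87a611b5470d9b86bf57a71ce111fa1d41d8e2cd | sed 's/[~^].*//'  
 v1.0.0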

This often comes in handy with backporting as well: a developer will point me at a bug fix commit ID, and I run git describe to see what upstream version it was released in, so I know which Fedora package versions are lacking the fix.

Another tip here is to use the --match option to only search tags matching a particular glob. I've used this to avoid matching tags from a maintenance or bugfix release branch, when I only wanted to search major version releases.
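
For example, something like this (the glob here is just an illustration, and combining --match with --contains needs a reasonably recent git):

 $ git describe --contains --match "v[0-9].[0-9].[0-9]" $commit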

Don't pull tags from certain remotes

For certain repos like qemu.git, I add a lot of git remotes pointing to individual developers' trees for occasional patch testing. However, if those trees carry lots of non-upstream tags, like tags used for pull-requests, they can interfere with my workflow for backporting patches. The --no-tags option avoids pulling them in:

 $ git remote add --no-tags $name $url
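
If the remote already exists, the per-remote tagOpt setting should do the same thing, something like:

 $ git config remote.$name.tagOpt --no-tags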

Friday, January 15, 2016

Using CPU host-passthrough with virt-manager

I described virt-manager's CPU model default in this post. In that post I explained the difficulties of using either of the libvirt options for mirroring the host CPU: mode=host-model still has operational issues, and mode=host-passthrough isn't recommended for use with libvirt over supportability concerns.

Unfortunately, since writing that post the situation hasn't improved any, and since host-passthrough is the only reliable way to expose the full capabilities of the host CPU to the VM, users regularly want to enable it. This is particularly apparent when trying to do nested virt, which often doesn't work on Intel CPUs unless host-passthrough is used.

We don't explicitly expose this option in virt-manager's UI since it's not generally recommended for libvirt usage, but you can still enable it:
  • Navigate to VM Details->CPU
  • Enter host-passthrough in the CPU model field
  • Click Apply
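
The resulting libvirt XML (check with 'virsh dumpxml $vmname') should look roughly like:

 <cpu mode='host-passthrough'/>

From the command line, virt-install's --cpu host-passthrough option should give the same result.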

Wednesday, January 13, 2016

github 'hub' command line tool

I don't often need to contribute patches to code hosted on github; most of the projects I contribute to are either old school and don't use github for anything but mirroring their main git repo, or are small projects I entirely maintain so I don't submit pull-requests.

But when I do need to submit patches, github's hub tool makes my life a lot simpler, allowing you to fork repositories and submit pull-requests very easily from the command line.

The 'hub' tool wants to be installed as an alias for 'git'. I originally tried that, but it made my bash prompt insanely slow since I show the current git branch and dirty state in my bash prompt. When I first encountered this, I filed a bug against the hub tool (with a bogus workaround), and nowadays it seems they have a disclaimer in their README.

Their recommended fix is to s/git/command git/g in git-prompt.sh, which doesn't work too well if you use the linked Fedora suggestion of pointing at the package-installed file in /usr/share, so I avoid the alias. You can run 'hub' standalone, but instead I like to do:

 sudo dnf install hub  
 ln -s /usr/bin/hub /usr/libexec/git-core/git-hub  

Then I can git hub fork and git hub pull-request all I want :)
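
A rough sketch of the resulting workflow (repo and branch names are placeholders):

 $ git clone https://github.com/someproject/somerepo  
 $ cd somerepo  
 $ git hub fork            # forks the repo on github and adds your fork as a remote  
 $ git checkout -b my-fix  
 # ...hack, commit...  
 $ git push $your_username my-fix  
 $ git hub pull-request    # files the pull-request against the upstream repo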

Monday, January 11, 2016

qemu:///system vs qemu:///session

If you've spent time using libvirt apps like virt-manager, you've likely seen references to libvirt URIs. The URI is how users or apps tell libvirt what hypervisor (qemu, xen, lxc, etc) to connect to, what host it's on, what authentication method to use, and a few other bits. 

For QEMU/KVM (and a few other hypervisors), there's a concept of system URI vs session URI:
  • qemu:///system: Connects to the system libvirtd instance, the one launched by systemd. libvirtd is running as root, so has access to all host resources. qemu VMs are launched as the unprivileged 'qemu' user. Daemon config is in /etc/libvirt, VM logs and other bits are stored in /var/lib/libvirt. virt-manager and big management apps like Openstack and oVirt use this by default.
  • qemu:///session: Connects to a libvirtd instance running as the app user, the daemon is auto-launched if it's not already running. libvirt and all VMs run as the app user. All config and logs and disk images are stored in $HOME. This means each user has their own qemu:///session VMs, separate from all other users. gnome-boxes and libguestfs use this by default.
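
For example, pointing standard tools at one or the other:

 $ virsh --connect qemu:///system list --all  
 $ virsh --connect qemu:///session list --all  
 $ virt-manager --connect qemu:///session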

That describes the 'what', but the 'why' of it is a bigger story. The privilege levels of the daemon and VMs have pros and cons depending on your use case. The easiest way to understand the benefit of one over the other is to list the problems with each setup.


qemu:///system runs libvirtd as root, and access is mediated by polkit. This means if you are connecting to it as a regular user (like when launching virt-manager), you need to enter the host root password, which is annoying and not generally friendly for desktop use cases. There are ways to work around it, but they require explicit admin configuration.

Desktop use cases also suffer since VMs are running as one user, but the app (like virt-manager) is running as your local user. For example, say you download an ISO to $HOME and want to attach it to a VM. The VM is running as unprivileged user=qemu that can't access your $HOME, so libvirt has to change the ISO file owner to qemu:qemu and virt-manager has to give search access to $HOME for the user=qemu. It's a pain for apps to handle, and it's confusing for users, but after dealing with it for a while in virt-manager we've made it generally work. (Though try giving a VM access to a file on a fat32 USB drive that was automounted by your desktop session...)


qemu:///session runs libvirtd as your unprivileged user. This integrates better with desktop use cases since permissions aren't an issue, no root password is required, and each user has their own separate pool of VMs.

However because nothing in the chain is privileged, any VM setup tasks that need host admin privileges aren't an option. Unfortunately this includes most general purpose networking options.

The default qemu mode in this case is usermode networking (or SLIRP). This is an IP stack implemented in userspace. This has many drawbacks: the VM is not accessible by the outside world, the VM can access the outside world but only over a limited number of networking protocols, and it's very slow.

There is an option for qemu:///session VMs to use a privileged networking setup, via the setuid qemu-bridge-helper. Basically the host admin sets up a bridge, adds it to a whitelist at /etc/qemu/bridge.conf, then it's available for unprivileged qemu instances. By default on Fedora this contains 'virbr0' which is the default virtual network bridge provided by the system libvirtd instance, and what qemu:///system VMs typically use.
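
As a concrete sketch (assuming the bridge is already configured and the distro ships the setuid helper), the whitelist entry and the VM's interface XML look something like:

 # /etc/qemu/bridge.conf  
 allow virbr0

 <interface type='bridge'>  
   <source bridge='virbr0'/>  
 </interface>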

gnome-boxes originally used usermode networking, but switched around the Fedora 21 timeframe to using virbr0 via the bridge-helper. But that's dependent on virbr0 being set up correctly by the host admin, or via package install (the libvirt-daemon-config-network package on Fedora).


qemu:///session also misses some less common features that require host admin privileges, like host PCI device assignment. VM autostart doesn't work as expected either, because the session daemon itself isn't autostarted.


Apps have to decide for themselves which libvirtd mode to use, depending on their use cases.

qemu:///system is completely fine for big apps like oVirt and OpenStack that require admin access to the virt hosts anyway.

virt-manager largely defaults to qemu:///system because that's what it has always done, and that default long predates qemu-bridge-helper. We could switch, but it would just trade one set of issues for another. virt-manager can be used with qemu:///session though (or any URI for that matter).

libguestfs uses qemu:///session since it avoids all the permission issues and the VM appliance doesn't really care about networking.

gnome-boxes prioritized desktop integration from day 1, so qemu:///session was the natural choice. But they've struggled with the networking issues in various forms.

Other apps are in a pickle: they would like to use qemu:///session to avoid the permission issues, but they also need to tweak the network setup. This is the case vagrant-libvirt currently finds itself in.

Friday, January 8, 2016

Running KVM arm 32 on AArch64

Just a little tip: libvirt 1.2.17 fixed the last bits necessary to run 32bit arm VMs on AArch64 hosts with KVM acceleration. We just needed to make sure libvirt advertised the capability; all the lower level qemu and kernel bits were already in place.

On an aarch64 host, just select armv7l in virt-manager's UI when creating a new VM, or pass --arch armv7l to virt-install, and KVM will be used if it's available.
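
A minimal virt-install sketch, assuming an aarch64 host and a Fedora armhfp install tree (the URL and sizes are placeholders; depending on the virt-install version you may also need --machine virt):

 $ virt-install --name f23-armv7l --arch armv7l \  
     --memory 1024 --disk size=8 \  
     --location $fedora_armhfp_tree_url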

In my (very brief) testing the VM seems to be much faster than 32-on-32 KVM, but I don't think that's a surprise given the speed difference between the host machines.

Update: Marcin posted some virt-manager screenshots and performance info.