For using a Windows VM via libvirt/KVM/QEMU/virt-manager
Here is a thread about excessive context switching, with a (supposed) solution. That thread also proposes removing the USB tablet device to reduce context switching, so I did that.
The only packages really needed (Ubuntu Server 22.04) are `libvirt`, `qemu-kvm`, and `libvirt-daemon-system`.
Add users who will use the VM to the `libvirt` group.
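On Ubuntu Server 22.04 that setup boils down to something like this (package and group names as above; the user name is a placeholder):

```shell
# Install the virtualisation stack (pulls in libvirtd and QEMU/KVM)
sudo apt install qemu-kvm libvirt-daemon-system

# Let a user (here the hypothetical "alice") manage VMs without root
sudo usermod -aG libvirt alice
```

The group change only takes effect on the user's next login.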
If `libvirtd.service` is running (it should be), just connect and create the VM using `virt-manager`, either locally or remotely via SSH (e.g. `virt-manager -c qemu+ssh://[ip]/system`).
I also needed the virtio Windows drivers, attached as a disc during install so that Windows could see the virtio hard drive. After install I ran the setup file on that disc, which enabled some GPU stuff.
Install the SPICE guest tools to get console resizing on window resize, plus clipboard integration.
I found that the virtio video driver still has some bugs: SPICE stops resizing after the first reboot following the guest tools install, and never works again. QXL video works perfectly.
By default libvirt/virt-manager/something puts the .qcow2 disks in `/var/lib/libvirt/images` (root access required to read).
Windows is a nightmare and sucks up craploads of CPU all the time because of course it does.
Useful commands to debug some of this:

```
# perf kvm --host top -p `pidof qemu-system-x86_64`
```

Shows where the guest's CPU time is going: `[k]` marks kernel space and `[.]` user space. The top symbol should be something like `vmx_vcpu_run`, but it might differ.

```
# perf stat -e 'kvm:*' -a -- sleep 1
```

Counts KVM events such as `VM_EXIT` over one second. Values should be roughly "less than 1000".

```
# perf kvm --host stat live
```

Most of the time should be spent in `HLT`, otherwise it's not idle. Low `HLT` time indicates the machine is waking up and going to sleep a lot (context switching?).
It may also be a good idea to use `powercfg` inside the guest to find out why the OS is waking up a lot, if it is.
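A few `powercfg` invocations that should help with that, run from an elevated prompt inside the guest (I believe these flags are correct, but check `powercfg /?`):

```
rem Trace energy/wake behaviour for 60 seconds and write a report
powercfg /energy /duration 60

rem List active wake timers
powercfg /waketimers

rem Show what last woke the machine
powercfg /lastwake
```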
Match the host CPU topology in the VM (the number of cores can be changed as desired). Enable `host-passthrough` to disable CPU emulation as much as possible.
```xml
<domain type="kvm">
  ...
  <cpu mode="host-passthrough" check="none" migratable="on">
    <topology sockets="1" dies="1" cores="3" threads="2"/>
  </cpu>
  ...
</domain>
```
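To see the host topology you're matching, something like this works (exact `lscpu` output format varies by version):

```shell
# Sockets, cores per socket, and threads per core on the host
lscpu | grep -E 'Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core'
```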
Add one or more threads dedicated to I/O (libvirt recommends max 1 per core).
```xml
<domain type="kvm">
  ...
  <iothreads>1</iothreads>
  ...
</domain>
```
Set up the vCPUs for the guest by pinning them to host CPUs. See `virsh capabilities`. Note that Intel CPU sibling sets are specified as e.g. (0,4), (1,5), (2,6), (3,7) for a 4-core/8-thread CPU (i.e. the hyperthread siblings are split across the bottom and top halves of the cpuset range). For example, this is how I set up the i7-3770, keeping physical core 0 for emulation/IO/host.
```xml
<domain type="kvm">
  ...
  <vcpu placement="static">6</vcpu>
  <cputune>
    <vcpupin vcpu="0" cpuset="1"/>
    <vcpupin vcpu="1" cpuset="2"/>
    <vcpupin vcpu="2" cpuset="3"/>
    <vcpupin vcpu="3" cpuset="5"/>
    <vcpupin vcpu="4" cpuset="6"/>
    <vcpupin vcpu="5" cpuset="7"/>
    <emulatorpin cpuset="0,4"/>
    <iothreadpin cpuset="0,4"/>
  </cputune>
  ...
</domain>
```
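To sanity-check the pairing rule rather than eyeball it, here is a tiny sketch. It assumes siblings are (i, i+cores) on this kind of Intel CPU, which is not universal; check `/sys/devices/system/cpu/cpu*/topology/thread_siblings_list` on your own host. It prints the sibling cpusets while reserving physical core 0 for the emulator/host:

```shell
# Hypothetical helper: for a CPU with $cores physical cores and 2 threads
# per core where thread siblings are (i, i+cores), print the host cpusets
# available for pinning, skipping physical core 0.
cores=4
for core in $(seq 1 $((cores - 1))); do
  echo "core $core -> cpuset $core,$((core + cores))"
done
```

For the i7-3770 (4 cores, 8 threads) this yields cpusets {1,5}, {2,6}, {3,7}, i.e. exactly the host CPUs used in the `<vcpupin>` entries above.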
These settings (Hyper-V enlightenments) tell Windows it's running in a VM.
In the XML for the machine, change the `<features>` and `<clock>` sections to the following:
Features:
```xml
<features>
  <acpi/>
  <apic/>
  <pae/>
  <hyperv>
    <relaxed state="on"/>
    <vapic state="on"/>
    <spinlocks state="on" retries="8191"/>
    <vpindex state="on"/>
    <runtime state="on"/>
    <synic state="on"/>
    <stimer state="on"/>
    <reset state="on"/>
    <vendor_id state="on" value="KVM Hv"/>
    <frequencies state="on"/>
    <reenlightenment state="on"/>
    <tlbflush state="on"/>
    <ipi state="on"/>
    <evmcs state="on"/>
  </hyperv>
</features>
```
Clock:
```xml
<clock offset="utc">
  <timer name="hpet" present="no"/>
  <timer name="hypervclock" present="yes"/>
</clock>
```
Setting `hpet` to `yes` causes higher idle usage, so I opted for `no`.
Additionally, some sources say to keep the following lines:
```xml
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
```
However, some recommend removing them, which I opted for.
For me these enlightenments reduced idle CPU by 4-5x (from 45%+ to ~10-12% while running Milestone XProtect in the background, with no cams set up).
Modify the netplan configuration to add a bridge; as I understand it, the host will now connect via the bridge rather than directly through the physical interface:
`/etc/netplan/01-netcfg.yaml`:

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      dhcp6: no
      #addresses: [192.168.0.20/24]
      #gateway4: 192.168.0.1
      #nameservers:
      #  addresses: [192.168.0.10,1.1.1.1]
  bridges:
    br0:
      dhcp4: no
      dhcp6: yes
      addresses: [192.168.0.20/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.10,1.1.1.1]
      parameters:
        stp: true
        forward-delay: 4
      interfaces:
        - enp2s0
```
Apply with `# netplan apply`.
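To check the bridge came up afterwards (`br0` and `enp2s0` are the names from my config above):

```shell
# Show the bridge's addresses
ip -br addr show br0

# Confirm the physical NIC is enslaved to the bridge
ip link show master br0
```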
Configuration to disable netfilter for bridges:
`/etc/sysctl.d/99-netfilter-bridge.conf`:

```
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
```
Also ensure `br_netfilter` is loaded at boot:

`/etc/modules-load.d/br_netfilter.conf`:

```
br_netfilter
```
Apply with:

```
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/99-netfilter-bridge.conf
```
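You can verify both took effect with something like:

```shell
# The module should be listed
lsmod | grep br_netfilter

# All three should print 0
sysctl net.bridge.bridge-nf-call-iptables \
       net.bridge.bridge-nf-call-ip6tables \
       net.bridge.bridge-nf-call-arptables
```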
Create a new network in virt-manager (right-click the connection, Details, Virtual Networks) and replace the XML with:
```xml
<network>
  <name>bridged-network</name>
  <forward mode="bridge" />
  <bridge name="br0" />
</network>
```
Start it and set it to autostart. Now in guests just set the network to “Virtual network 'bridged-network'”.
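In the guest's XML this ends up as an `<interface>` section along these lines (the virtio model is my choice here, not required):

```xml
<interface type="network">
  <source network="bridged-network"/>
  <model type="virtio"/>
</interface>
```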
Apparently Windows Server doesn't auto-install updates by default (it just downloads them). Run `sconfig` to configure this behaviour.