FIXME
See also [[https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/|this guru3d thread]] for something to investigate about interrupts.

[[http://forum.proxmox.com/threads/5770-Windows-guest-high-context-switch-rate-when-idle|Here is a thread about excessive context switching]]. There is (supposedly) a solution [[https://docs.microsoft.com/en-GB/troubleshoot/windows-server/performance/programs-queryperformancecounter-function-perform-poorly|here]]. That thread also proposes removing the USB tablet device to reduce context switching (so I did that).
</callout>
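
To see whether the guest really is context switching excessively, one option (a rough sketch, assuming a single ''qemu-system-x86_64'' process and the ''sysstat'' package installed) is to watch the QEMU process's context-switch rate from the host:

  pidstat -w -p $(pidof qemu-system-x86_64) 1   # voluntary/involuntary context switches per second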
  
Useful commands to debug some of this:
  
  * ''# perf kvm %%--%%host top -p `pidof qemu-system-x86_64`''
    * Shows how often qemu is executing various functions, ''[k]'' for kernel space and ''[.]'' for user space.
    * There is also one function used for making the switch to guest space, and it accounts for all time spent there. On a 4.14 kernel with an Intel CPU that function is ''vmx_vcpu_run'', but it might differ.
  * ''# perf stat -e 'kvm:*' -a %%--%% sleep 1''
    * This should show the reasons the VM is doing ''VM_EXIT''s. Values should be roughly "less than 1000".
  * ''# perf kvm %%--%%host stat live''
    * This one should show what the VM is doing. Apparently most of the time % should be in ''HLT'', otherwise the VM is not idle.
    * High ''HLT'' time indicates the machine is waking up and going to sleep a lot (context switching?).
  
It may also be a good idea to use ''powercfg'' to find out why the OS is waking up so often, if it is.
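
For example (a sketch from memory, not verified against every Windows version), in an elevated prompt inside the guest:

  powercfg /energy /duration 60
  powercfg /requests

As far as I know, ''/energy'' traces the system for the given number of seconds and writes an ''energy-report.html'' listing things like processes requesting a high platform timer resolution, while ''/requests'' lists outstanding power requests from applications and drivers.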

===== Tweaks for performance =====
See https://libvirt.org/formatdomain.html and [[https://leduccc.medium.com/improving-the-performance-of-a-windows-10-guest-on-qemu-a5b3f54d9cf5|this guide]].

==== CPU Topology & Configuration ====
Match the VM's CPU topology to the host's (the number of cores can be changed as desired). Use ''host-passthrough'' so the guest sees the host CPU directly, disabling CPU emulation as much as possible.

  <domain type="kvm">
    ...
    <cpu mode="host-passthrough" check="none" migratable="on">
      <topology sockets="1" dies="1" cores="3" threads="2"/>
    </cpu>
    ...
  </domain>
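
To see what the host topology actually is before copying it, ''virsh capabilities'' (also used for the pinning step below) or something like the following works:

  lscpu -e   # one row per logical CPU; the SOCKET/CORE columns show which logical CPUs are hyper-thread siblings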

Add one or more threads dedicated to I/O (libvirt recommends at most one per core).

  <domain type="kvm">
    ...
    <iothreads>1</iothreads>
    ...
  </domain>
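
As I understand it, a disk also has to reference the iothread from its ''<driver>'' element before it will actually be used. Once the guest is running, the iothreads and their CPU affinity can be checked with ''virsh'' (''win10'' is a placeholder domain name):

  virsh iothreadinfo win10   # lists each iothread ID and the host CPUs it may run on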

Set up the guest's vCPUs by pinning them to host CPUs. See ''virsh capabilities''. Note that Intel CPU sets are numbered as e.g. (0,4); (1,5); (2,6); (3,7) on a 4-core CPU, i.e. the two threads of each core are split across the bottom and top halves of the cpuset range. For example, this is how I set up the i7-3770, keeping core 0 for emulation/IO/host.

  <domain type="kvm">
    ...
    <vcpu placement="static">6</vcpu>
    <cputune>
      <vcpupin vcpu="0" cpuset="1"/>
      <vcpupin vcpu="1" cpuset="2"/>
      <vcpupin vcpu="2" cpuset="3"/>
      <vcpupin vcpu="3" cpuset="5"/>
      <vcpupin vcpu="4" cpuset="6"/>
      <vcpupin vcpu="5" cpuset="7"/>
      <emulatorpin cpuset="0,4"/>
      <iothreadpin iothread="1" cpuset="0,4"/>
    </cputune>
    ...
  </domain>
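
Once the domain is running, the effective pinning can be double-checked from the host (''win10'' again being a placeholder domain name):

  virsh vcpupin win10       # current vCPU -> host CPU pinning
  virsh emulatorpin win10   # where the emulator threads are allowed to run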
  
==== Enlightenments ====
    <pae/>
    <hyperv>
      <relaxed state="on"/>
      <vapic state="on"/>
      <spinlocks state="on" retries="8191"/>
      <vpindex state="on"/>
      <runtime state="on"/>
      <synic state="on"/>
      <stimer state="on"/>
      <reset state="on"/>
      <vendor_id state="on" value="KVM Hv"/>
      <frequencies state="on"/>
      <reenlightenment state="on"/>
      <tlbflush state="on"/>
      <ipi state="on"/>
      <evmcs state="on"/>
    </hyperv>
  </features>
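
A quick way to confirm which enlightenments actually reached QEMU is to look at the ''-cpu'' argument of the running process, where they show up as ''hv-*'' flags (older QEMU versions used ''hv_*''). A rough sketch, assuming a single ''qemu-system-x86_64'' process:

  tr '\0' '\n' < /proc/$(pidof qemu-system-x86_64)/cmdline | grep -E 'hv[-_]'   # prints the -cpu argument containing the hv flags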
  
For me these enlightenments reduced idle CPU usage by 4-5x (from 45%+ to ~10-12%, while running Milestone XProtect in the background with no cameras set up).

===== Networking =====
Modify the netplan configuration to add a bridge; as I understand it, the host will then connect via the bridge rather than directly through the physical interface:

''/etc/netplan/01-netcfg.yaml'':
<file>
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      dhcp6: no
      #addresses: [192.168.0.20/24]
      #gateway4: 192.168.0.1
      #nameservers:
      #  addresses: [192.168.0.10,1.1.1.1]
  bridges:
    br0:
      dhcp4: no
      dhcp6: yes
      addresses: [192.168.0.20/24]
      gateway4: 192.168.0.1
      nameservers:
        addresses: [192.168.0.10,1.1.1.1]
      parameters:
        stp: true
        forward-delay: 4
      interfaces:
        - enp2s0
</file>

Apply with ''# netplan apply''.
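
After applying, it is worth checking that the bridge came up with the expected address and that the physical NIC is enslaved to it. Something like:

  networkctl status br0     # systemd-networkd's view of the bridge (address, carrier)
  bridge link show          # should list enp2s0 with "master br0"
  ip -br addr show br0      # one-line summary of the bridge's addresses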

Configuration to disable netfilter for bridges:

''/etc/sysctl.d/99-netfilter-bridge.conf'':
<file>
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
</file>

Also ensure ''br_netfilter'' is loaded at boot:

''/etc/modules-load.d/br_netfilter.conf'':
<file>
br_netfilter
</file>

Apply with:\\
''# modprobe br_netfilter''\\
''# sysctl -p /etc/sysctl.d/99-netfilter-bridge.conf''
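
To confirm the settings took effect:

  sysctl net.bridge.bridge-nf-call-iptables   # should print 0
  lsmod | grep br_netfilter                   # the module should be listed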

<callout>
FIXME Removed networks and replaced with a bridge to the device directly.
</callout>

Create a new network in virt-manager (right-click the connection, Details, Virtual Networks) and replace the XML with:
<file>
<network>
    <name>bridged-network</name>
    <forward mode="bridge" />
    <bridge name="br0" />
</network>
</file>

Start it and set it to autostart. Now in guests just set the network to "Virtual network 'bridged-network'".
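
The same thing can be done from the command line with ''virsh'' instead of virt-manager (assuming the XML above is saved as ''bridged-network.xml''):

  virsh net-define bridged-network.xml
  virsh net-start bridged-network
  virsh net-autostart bridged-network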

===== Updates =====
Apparently Windows Server doesn't auto-update by default (it only downloads updates). Run ''sconfig'' to configure this behaviour.