Isolate CPU Resources in a NUMA Node on KVM

You can improve the performance of VM-Series on KVM by isolating the CPU resources of the guest VM to a single non-uniform memory access (NUMA) node. On KVM, you can view the NUMA topology with the virsh command. The example in step 1 below is from a two-node NUMA system.
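
As a quick cross-check, lscpu also reports which CPU IDs belong to each NUMA node. The output below is an illustrative sketch for the same two-node, hyperthreaded layout; the exact CPU numbering depends on your hardware.

    % lscpu | grep -i numa
    NUMA node(s):          2
    NUMA node0 CPU(s):     0,2,4,6,8,10,12,14
    NUMA node1 CPU(s):     1,3,5,7,9,11,13,15
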
  1. View the NUMA topology. In the example below, there are two NUMA nodes (sockets), each with a four-core CPU with hyperthreading enabled. All the even-numbered CPU IDs belong to one node and all the odd-numbered CPU IDs belong to the other node.
    % virsh capabilities
    <…>
      <topology>
        <cells num='2'>
          <cell id='0'>
            <memory unit='KiB'>33027228</memory>
            <pages unit='KiB' size='4'>8256807</pages>
            <pages unit='KiB' size='2048'>0</pages>
            <distances>
              <sibling id='0' value='10'/>
              <sibling id='1' value='20'/>
            </distances>
            <cpus num='8'>
              <cpu id='0' socket_id='1' core_id='0' siblings='0,8'/>
              <cpu id='2' socket_id='1' core_id='1' siblings='2,10'/>
              <cpu id='4' socket_id='1' core_id='2' siblings='4,12'/>
              <cpu id='6' socket_id='1' core_id='3' siblings='6,14'/>
              <cpu id='8' socket_id='1' core_id='0' siblings='0,8'/>
              <cpu id='10' socket_id='1' core_id='1' siblings='2,10'/>
              <cpu id='12' socket_id='1' core_id='2' siblings='4,12'/>
              <cpu id='14' socket_id='1' core_id='3' siblings='6,14'/>
            </cpus>
          </cell>
          <cell id='1'>
            <memory unit='KiB'>32933812</memory>
            <pages unit='KiB' size='4'>8233453</pages>
            <pages unit='KiB' size='2048'>0</pages>
            <distances>
              <sibling id='0' value='20'/>
              <sibling id='1' value='10'/>
            </distances>
            <cpus num='8'>
              <cpu id='1' socket_id='0' core_id='0' siblings='1,9'/>
              <cpu id='3' socket_id='0' core_id='1' siblings='3,11'/>
              <cpu id='5' socket_id='0' core_id='2' siblings='5,13'/>
              <cpu id='7' socket_id='0' core_id='3' siblings='7,15'/>
              <cpu id='9' socket_id='0' core_id='0' siblings='1,9'/>
              <cpu id='11' socket_id='0' core_id='1' siblings='3,11'/>
              <cpu id='13' socket_id='0' core_id='2' siblings='5,13'/>
              <cpu id='15' socket_id='0' core_id='3' siblings='7,15'/>
            </cpus>
          </cell>
        </cells>
  2. To pin the vCPUs in a KVM guest to specific physical CPUs, use the cpuset attribute in the guest XML definition. In this example, all eight vCPUs are pinned to physical CPUs in the first NUMA node. If you do not want to explicitly pin the vCPUs, you can omit the cputune block; in that case, all vCPUs are restricted to the range of CPUs specified in cpuset but are not explicitly mapped one-to-one to physical CPUs. After editing the guest definition, you can verify the pinning as shown in the sketch after this procedure.
    <vcpu cpuset='0,2,4,6,8,10,12,14'>8</vcpu> 
    <cputune> 
      <vcpupin vcpu='0' cpuset='0'/> 
      <vcpupin vcpu='1' cpuset='2'/> 
      <vcpupin vcpu='2' cpuset='4'/> 
      <vcpupin vcpu='3' cpuset='6'/> 
      <vcpupin vcpu='4' cpuset='8'/> 
      <vcpupin vcpu='5' cpuset='10'/> 
      <vcpupin vcpu='6' cpuset='12'/> 
      <vcpupin vcpu='7' cpuset='14'/> 
    </cputune>
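
If you want to confirm that the pinning took effect, the following sketch shows one way to do it with virsh. The domain name PA-VM is a placeholder; substitute the name of your guest.

    % virsh edit PA-VM        # add the <vcpu> and <cputune> elements shown above
    % virsh vcpupin PA-VM     # list the current vCPU-to-physical-CPU pinning
    % virsh vcpuinfo PA-VM    # show per-vCPU details, including CPU affinity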