I need a little clarification on how Hyper-V utilizes the hardware, particularly the CPU.
If I understand this correctly (and correct me if I'm wrong), the hypervisor uses a round-robin approach to give all VMs equal CPU time. For example, say I have two quad-core processors (sockets A and B) with Hyper-Threading support, i.e. 8 cores and 16 logical processors (LPs). If I set up 2 VMs and assign each VM 16 virtual processors (VPs), overcommitting the host, then the hypervisor will first give VM1 access to all 16 LPs, then take them away from VM1 and give all 16 LPs to VM2, then go back to VM1, and the cycle continues in an equal round-robin fashion (unless I increase or decrease the relative weight of a given VM).
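To make the mental model concrete, here is a deliberately naive sketch of the strict-alternation scheme I'm describing. This is NOT Hyper-V's actual scheduler (which dispatches individual VPs onto LPs and factors in relative weight and reserve settings); it only simulates the "whole VM gets a turn" picture from the paragraph above, and the function name is my own invention:

```python
from collections import deque

def round_robin_slices(vms, num_lps, num_slices):
    """Naive model: hand whole scheduling slices to VMs in turn.

    vms: dict mapping VM name -> number of VPs assigned to that VM.
    Returns a list of (slice_index, vm_name, lps_used) tuples.
    """
    order = deque(vms)          # turn order: VM1, VM2, ...
    schedule = []
    for t in range(num_slices):
        vm = order[0]
        order.rotate(-1)        # next slice goes to the next VM
        # A VM can never occupy more LPs in one slice than it has VPs.
        lps_used = min(vms[vm], num_lps)
        schedule.append((t, vm, lps_used))
    return schedule

# Two VMs, 16 VPs each, on a 16-LP host: every slice is fully consumed,
# so the VMs strictly alternate, each idle half the time.
print(round_robin_slices({"VM1": 16, "VM2": 16}, num_lps=16, num_slices=4))
```

In this toy model, giving each VM only 8 VPs leaves `lps_used = 8` per slice, which is exactly what question 1 below is probing: whether the real scheduler can then run both VMs' VPs concurrently instead of alternating.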
So I’ve got a couple of questions:
1. What if I assign only 8 VPs to each VM? When VM1 is given CPU time, it only uses half of the host's capacity (8 LPs). Does this mean that both VM1 and VM2 can now continuously be given equal CPU time at the "same" time?
2. If the above is not true: when I turn on VM1 with its 8 VPs, say it grabs LPs 9 through 16. Let's say VM1 is a very heavy CPU consumer, keeping all 8 of those LPs pegged at 90%, while the remaining LPs (1 through 8) sit practically idle. When VM2 is given CPU time for its 8 VPs, which LPs will it use? And how does the hypervisor ensure that VM1 can continuously meet its 90% usage demand without affecting VM2?
3. What is considered a healthy value for the Context Switches counter (under Hyper-V Hypervisor Logical Processor)?
4. Is there a "processor queue length" counter for VPs? If not, does the System\Processor Queue Length counter reflect the VM or only the parent partition? And if neither, how can I monitor or determine whether a VM has a CPU bottleneck/queuing issue?
5. And one last question, about the network: do the \Network Interface\Output Queue Length and \Network Interface\Bytes Total/sec counters (when added only for the physical NIC that is bound to the virtual switch) reflect the reality of what the VMs are experiencing?
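For reference, these are the counter paths I'm asking about, written out as Performance Monitor paths (the instance names in angle brackets are placeholders for the actual NIC/LP instances on a given host, and I'm assuming the Hyper-V Hypervisor Virtual Processor counters are the closest thing to a per-VP queue signal):

```
\Hyper-V Hypervisor Logical Processor(*)\Context Switches/sec
\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time
\Hyper-V Hypervisor Virtual Processor(*)\CPU Wait Time Per Dispatch
\System\Processor Queue Length
\Network Interface(<physical NIC bound to the vSwitch>)\Output Queue Length
\Network Interface(<physical NIC bound to the vSwitch>)\Bytes Total/sec
```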