Channel: Hyper-V forum

Poor Performance over Converged Networks using LBFO Team


Hi,

I have built a Hyper-V 2012 R2 cluster with converged networking and the correct logical networks.

However, I notice that I am not able to utilise more than approximately 1 Gbps over the converged network for live migration, cluster, management, etc., even though a total of 8 × 1 Gb physical adapters are available (switch independent, dynamic load balancing).
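For anyone wanting to confirm the teaming mode and load-balancing algorithm described above, a minimal diagnostic sketch is below. The team name "ConvergedTeam" is a placeholder, not the actual name on these hosts:

```powershell
# Show the team's mode and load-balancing algorithm.
# "ConvergedTeam" is a placeholder for the real team name.
Get-NetLbfoTeam -Name "ConvergedTeam" |
    Format-List Name, TeamingMode, LoadBalancingAlgorithm

# Confirm all eight 1 Gb members are up and at full link speed.
Get-NetLbfoTeamMember -Team "ConvergedTeam" |
    Format-Table Name, OperationalStatus, LinkSpeed
```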

For example:

Hyper-V host1 and Hyper-V host2 each have 4 × 1 Gb pNICs connected to switch1 and 4 × 1 Gb pNICs connected to switch2. Switch1 and switch2 have a port channel configured between them.

If I carry out a file copy between Hyper-V host1 and Hyper-V host2 (both configured identically), the Management, Live-Migration and Cluster vNICs on the host each use approximately 300 Mbps during the copy, but never get anywhere near full utilisation. I would expect to see something close to the team's total bandwidth of 8 Gbps, as there is no other traffic using these networks.

Is it because the host vNICs cannot use vRSS (VMQ)? I struggle to see this as the issue, because I don't see any CPU cores maxing out during the copy. Or can anyone recommend anything else to check?
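To rule the VMQ/vRSS question in or out, the settings can be inspected directly. A hedged sketch of the checks (output will depend entirely on the hosts' NICs and drivers):

```powershell
# Check whether VMQ is enabled on the physical adapters and how
# queues are spread across processors.
Get-NetAdapterVmq |
    Format-Table Name, Enabled, BaseProcessorNumber, MaxProcessors

# List the host (management OS) vNICs and their VMQ weight;
# a weight of 0 means VMQ is disabled for that vNIC.
Get-VMNetworkAdapter -ManagementOS |
    Format-Table Name, VmqWeight, Status
```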


Microsoft Partner

