Hi, hoping someone can point me in the right direction here. I have an underperforming external vSwitch.
I am testing with Lantest in client/server mode, so the transfers run in memory and storage is taken out of the equation. The setup is a two-node Hyper-V 2012 R2 cluster, fully patched, with antivirus removed and the firewall disabled. Over the past few days I have been through all the usual steps: turning off VMQ, turning off the offloads, and so on, and nothing makes a difference. The NICs are Intel I350-T4s running the latest Intel driver (I have also tried the second-latest and the Microsoft one), with VMQ disabled but all other offloads enabled, and all integration services are up to date. Here are the figures that are odd and not up to par with what should be happening:
Physical server (not an HV host) to the NIC on the Hyper-V host (assigned to the external vSwitch): 800 Mbps read, 800 Mbps write.
Physical server to a virtual guest (2008 R2): 120 Mbps read, 750 Mbps write (odd that the write is so different here), so it's my guest receiving traffic that seems sluggish.
Physical server to each node's external vSwitch NIC (after enabling "Allow management operating system to share this network adapter"): 800 Mbps both ways.
VM to VM on the same host over a private switch: 125 Mbps both ways.
VM to VM on separate hosts over the external vSwitch: around 125 Mbps both ways.
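In case it's useful, below is the sort of raw socket test I could also run between the same endpoints to take Lantest itself out of the picture. It's only a sketch: the port (5001), the 512 MB transfer size and the 64 KB buffer are arbitrary picks for illustration, not anything from my actual setup.

```python
# Minimal memory-to-memory throughput check (no disk involved), as a
# cross-check on the Lantest numbers. Run "server" on one end and
# "client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001                    # arbitrary test port
CHUNK = b"\0" * 65536          # 64 KB buffer, sent repeatedly
TOTAL = 512 * 1024 * 1024      # move 512 MB per run

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while received < TOTAL:
                data = conn.recv(len(CHUNK))
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"received {received / 1e6:.0f} MB in {secs:.1f}s "
                  f"= {received * 8 / secs / 1e6:.0f} Mbps")

def client(host):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, PORT))
        sent = 0
        start = time.time()
        while sent < TOTAL:
            cli.sendall(CHUNK)
            sent += len(CHUNK)
        secs = time.time() - start
        print(f"sent {sent / 1e6:.0f} MB in {secs:.1f}s "
              f"= {sent * 8 / secs / 1e6:.0f} Mbps")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

If that shows the same ~120 Mbps ceiling into a guest, it would at least rule out the test tool itself.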
VM performance is great within an RDP session: low CPU, low disk queue, etc. Network latency is good, but throughput is terrible.
There is no bandwidth management set in the guest settings and no QoS policies are in place. If I deliberately throttle a guest using bandwidth management I can see it taking effect; if I remove it, I get the figures above.
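For completeness, a quick way I could double-check that nothing is capping the switch ports outside of the GUI would be something like the following. It's a sketch only, assuming the pywin32-based wmi package on the host and the Hyper-V WMI v2 namespace (root\virtualization\v2); if no bandwidth management is configured you should see either no objects or zeroed limits.

```python
# Sketch: list any per-port bandwidth settings the Hyper-V switch provider
# knows about, to confirm nothing is silently capping the vSwitch ports.
import wmi  # pip install wmi; run on the Hyper-V host

conn = wmi.WMI(namespace=r"root\virtualization\v2")
caps = conn.Msvm_EthernetSwitchPortBandwidthSettingData()

if not caps:
    print("No per-port bandwidth settings found")
for cap in caps:
    # Limit and Reservation are in bits per second; 0 generally means unset.
    print(cap.InstanceID, "limit:", cap.Limit, "reservation:", cap.Reservation)
```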
Seeing that I can get great performance straight to the NIC I have assigned (it's actually now a team; I have tested it single and teamed), the only factor that sits in the middle is the vSwitch. And considering the poor performance even on a private switch, which should absolutely fly, there is either something horribly wrong or something I have overlooked.
Thanks for reading; hopefully someone can point me in the right direction.