
Hyper-V Virtual Network Switch = 50% Drop in Network Performance


I am experiencing a very frustrating problem with Windows Server 2008 R2 Hyper-V. When I deployed a 10 Gb SAN and 10 Gb LAN environment, I was expecting a SIGNIFICANT increase over 4 Gb FC and 1 Gb LAN. What I see instead is a VM environment that is little better than what I had.

Physical servers are Dell M610s with dual X5570 processors, 64 GB RAM, and Broadcom 57711 iSCSI cards (latest firmware/driver); the backplane consists of Dell M8024 10 Gb switches (latest firmware). SAN and servers are connected via 10 Gb switching with jumbo frames enabled and <1 ms latency, using default global netsh options and default advanced NIC driver properties (TCP offload, large send, TCP checksum, etc.). Cabling is CAT6a, all runs under 10 ft.
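
For reference, by "default global netsh options" I mean whatever the built-in views report; a minimal way to dump them for comparison (standard commands on 2008 R2, nothing custom):

    netsh int tcp show global
    netsh int ip show offload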

Sustained throughput from one non-Hyper-V physical server to another peaks at 8.2 Gbit/s. It's fast!

Hyper-V, not so much.

I have three Windows Server 2008 R2 Datacenter servers on this hardware setup. When I enable the Hyper-V virtual switch, the physical server's throughput drops to about 2.5 Gbit/s. All VMs respond with about the same sustainable throughput of 1.5-2.2 Gbit/s.

I took one of the Hyper-V servers and tweaked all the advanced settings: enable/disable large send offload, TCP checksum offload, etc. Every change either did nothing or dropped the network throughput further.

I have been reading many articles. Some say that LSO v2 is not supported on Windows Server 2008 R2 Hyper-V, so I tried disabling it; no change. Same with TCP connection offloading: no change. Disabling TCP/UDP checksum offload cut throughput roughly in half again (to about 1.2 Gbit/s). Toggling jumbo frames on/off made no significant change, and neither did disabling/enabling VMQ. (The global toggles I used are shown below.)
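
In case anyone wants to reproduce this, a sketch of the global toggles via netsh (the per-adapter LSO/checksum/VMQ switches live in the Broadcom driver's Advanced tab, not in netsh):

    rem Disable TCP Chimney (connection offload); set back to automatic to restore
    netsh int tcp set global chimney=disabled
    rem Disable IP-layer task offload (checksum/segmentation) globally
    netsh int ip set global taskoffload=disabled
    rem Verify what actually took effect
    netsh int tcp show global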

I have also read that it's recommended to uncheck "Allow management operating system to share this network adapter" on the virtual network, but this made no difference in the throughput test to the VMs. With it unchecked, I obviously can't test the physical machine on that 10 Gb NIC.

To perform the tests I use iperf.exe with the default settings (command lines below). I've also done large file copies to confirm sustained throughput.
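
The invocations are essentially these (10.0.0.10 is a placeholder for the receiver's address). I've included a heavier variant with standard iperf 2 flags, since the default TCP window can understate a 10 Gb link:

    rem On the receiving server:
    iperf -s
    rem On the sending server, default settings:
    iperf -c 10.0.0.10
    rem Heavier variant: 256 KB window, 4 parallel streams, 30-second run
    iperf -c 10.0.0.10 -w 256k -P 4 -t 30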

All problems relate to the Microsoft virtual network switch being enabled. If I remove the external virtual network from the host server, I can again sustain an average of 7-8 Gbit/s. However, the guest servers all require external network access.

Does ANYONE have any idea how I can get better performance out of this setup? I'm phasing out old file servers next, and I want to see at least 7-8 Gbit/s sustained file transfers if possible.

K


