Hi, maybe someone can point me in the right direction. I have 10 servers (5 Dell R210s and 5 Dell R320s) that I have basically converted to standalone Hyper-V 2012 servers, so there is no clustering on any of them at the moment.
Each server is configured with two 1 Gb NICs teamed behind a virtual switch. When I copy files between server 1 and server 2, for example, I see 100 MB/s throughput, but if I copy a file to server 3 at the same time, the two copy processes split that 100 MB/s between them. I was under the impression that if I copied two files to two totally different servers the load would be spread across the two NICs, effectively giving me 2 Gb/s of throughput, but this does not seem to be the case. I have played around with TCP/IP large send offload, jumbo packets, and disabling VMQ on the cards (they are Broadcoms :-( ), but none of these settings really seem to make a difference.
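In case it helps, this is how I've been checking the team configuration. These are the standard LBFO cmdlets; the team name "Team1" is just a placeholder, not my actual name. As I understand it, the teaming mode and load-balancing algorithm together decide whether two simultaneous copies can ever land on different team members:

```powershell
# Show the teaming mode (SwitchIndependent / LACP / Static) and the
# load-balancing algorithm for each LBFO team on the host.
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm

# Change the hashing mode on a team ("Team1" is a placeholder name).
# TransportPorts hashes per TCP/UDP flow, so two copies to two different
# servers can use different team members; with HyperVPort, all traffic
# from a given virtual switch port sticks to a single member.
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm TransportPorts
```

I'm not certain this is the cause of what I'm seeing, but it's the setting I'd expect to matter for the "never more than 1 Gb total" behaviour.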
The other issue is that if I live migrate a 12 GB VM running with only 2 GB of RAM, effectively just an OS, it takes between 15 and 20 minutes to migrate. I have played around with the advanced settings (SMB, compression, TCP/IP) with no real change, BUT if I shut down the VM and migrate it offline, it takes just under three and a half minutes to move across.
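For reference, these are the live-migration knobs I've been toggling, via the host-level Hyper-V cmdlets (the performance option parameter is only there on 2012 R2 and later, so this assumes R2):

```powershell
# Show the current live-migration settings on this host.
Get-VMHost | Format-List VirtualMachineMigrationEnabled,
    MaximumVirtualMachineMigrations,
    VirtualMachineMigrationPerformanceOption

# Switch the migration transport between TCPIP, Compression and SMB.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
```

None of the three options changed the 15-20 minute figure for me, which is why the 3.5-minute offline move is so confusing.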
I am really stumped here. I am busy with a test phase of Hyper-V but can't find any definitive documents relating to this stuff.