Having a heck of a time finding a solution, so while I wait for Microsoft to get back to me, I figured I would ask here.
Built a 3-node cluster, all running 2012 R2. Each node has an LSI HBA connected with 6Gb/s SAS cables to a DataOn DNS-1640D JBOD. The JBOD is loaded with a combination of WD 6Gb/s SAS drives and Hitachi SSDs. Everything is on the Windows HCL for Storage Spaces.
In Storage Spaces, I have one storage pool with a couple of tiered virtual disks. The disks have volumes that were added as CSVs for the Hyper-V cluster storage. In addition to the Hyper-V guests running on the cluster, there is also a Scale-Out File Server (SoFS) role running, configured as a generic file share.
What I can't figure out is why my read and write speeds are so bad from the guest VMs. When I test speeds from any of the 3 host nodes to the shared storage, I get great numbers, almost 900Mb/s at times, especially when testing against a 6-column, 2-way mirror virtual disk. But when I go into a guest VM and do a file copy, the speed drops to 20-30Mb/s, even though the VHDX is stored on that same virtual disk. No matter what I try, I can't get anywhere near the speed the host achieves.

The biggest problem is a VM running SQL Server. The VM is just a 2012 R2 box with SQL Server 2012 loaded and a small 3 GB database. The database sits on a small VHDX stored on a CSV along with the other guest VHDX files. I have tried pinning the SQL VHDX to the SSD tier, and I still can't break 50Mb/s reads or writes to that volume.
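For what it's worth, the numbers above come from plain file copies, which aren't very repeatable. A small timed sequential write gives a more consistent host-vs-guest comparison; here's a rough sketch of the kind of thing I mean (the path, sizes, and function name are just placeholders, not my exact test). Run it once against the CSV path on the host and once against the SQL data volume inside the guest:

```python
import os
import time

def write_throughput_mb_s(path, total_mb=128, chunk_mb=4):
    """Sequentially write total_mb of random data and return the observed MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk so caching doesn't inflate the number
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return total_mb / elapsed

# Example: on the host, point it at the CSV, e.g.
#   write_throughput_mb_s(r"C:\ClusterStorage\Volume1\tp_test.bin")
# and inside the guest, at a path on the VHDX-backed volume.
```

Same caveat as any quick script: this only measures buffered sequential writes, so it won't show random-I/O behavior, but it's enough to see the host/guest gap consistently.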
I have read a lot about how CSV is to blame, but I'm not sure why. Why can the host read and write to the cluster storage with huge numbers while the guest VMs on that same host get only 5% of that speed? Where do I go next?