We recently purchased a new server to host our VMs, since the old one had no redundant drives. Unfortunately, performance is far worse now that I've moved the VMs over to the new and improved server! I'm not really sure why, but I can only assume it has to do with the drive configuration.
Basically, when the VMs are running on the new Hyper-V host, just logging in to them takes a long time (with or without a roaming profile). General use once you are logged in seems OK, but it is much slower than before. One of the VMs is a domain controller, and while it was on the new server it took so long to return AD group membership information that applications requesting this information performed very badly. As a result, I had to move that VM back to the old server.
The new server has the following specs:
Dell PowerEdge R620
2x Xeon E5-2630v2 2.6GHz processors
64GB RAM
PERC H710 integrated RAID controller, 512MB NV CACHE
6x 1TB 7.2K RPM 6Gbps SAS 2.5" HDD
O/S (C drive) configured on RAID 1 using 2x 1TB drives, GPT partition
Hyper-V storage (D drive) configured as RAID 10 using 4x 1TB drives across 2 spans; GPT partition on a simple volume in Windows, formatted with a 64KB cluster size
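For reference, this is how I confirmed the cluster size on the Hyper-V volume (run from an elevated prompt on the host; D: is the RAID 10 volume described above):

```
:: "Bytes Per Cluster" in the output should read 65536 for a 64KB cluster size
fsutil fsinfo ntfsinfo D:
```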
Running Windows Server 2012 R2 Datacenter with Hyper-V
The old server has the following specs:
Dell PowerEdge 2950 (2007 model)
2x Xeon E5335 2.0GHz
24GB RAM
Unknown RAID controller
4x 300GB 15K SAS drives with NO RAID. Each drive holds 2-3 separate VMs.
Running Windows Server 2012 Datacenter with Hyper-V
The new server is currently running only 6 VMs and is already showing the performance problems; ideally we need to host more than 10. The current 6 VMs are: our production Exchange 2010 server (only 10 staff, 80GB VHD), a Windows Update (WSUS) server (100GB VHD), a lightly used SQL Server 2000 box (100GB VHD), an FTP server with minor use, a build server for .NET development with infrequent use, and a basic server providing licensing/connection services for applications we develop (low use).
The only difference I can think of is the disk/RAID configuration. But this configuration seems quite popular, so I would expect it to easily handle 6 VMs given the specs of the server?
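To check whether storage really is the bottleneck, I could sample disk latency on the host while the VMs are busy. These are the standard Windows performance counter paths; the D: instance name is just an assumption based on my volume layout:

```
:: Sample read/write latency on the Hyper-V volume every 5 seconds
typeperf "\LogicalDisk(D:)\Avg. Disk sec/Read" "\LogicalDisk(D:)\Avg. Disk sec/Write" -si 5
```

My understanding is that sustained values much above 20-30ms would point at the array itself, e.g. the H710 cache policy being set to Write-Through instead of Write-Back, or the 7.2K drives simply delivering far fewer IOPS per spindle than the old 15K disks.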
Any help much appreciated.