I have a Windows Server 2016 Datacenter Hyper-V cluster with 4 nodes and 181 VMs running on it. With the newest node that I added yesterday, I have switched to a converged networking model that carries all Ethernet traffic over two 10 Gb NICs; storage traffic runs over Fibre Channel. I created a team of the two 10 Gb NICs with Teaming mode: Switch Independent, Load balancing mode: Dynamic, and Standby adapter: None (all adapters Active). I then created a virtual switch in Hyper-V on top of the team and created virtual NICs via PowerShell for Management, Cluster, and Live Migration.
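For reference, this is roughly how the team, switch, and host vNICs were created (adapter, team, and switch names below are placeholders, not my exact names):

```powershell
# Team the two 10Gb NICs: switch independent, dynamic load balancing, all adapters active
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# External virtual switch bound to the team; host vNICs are added explicitly below
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false

# Host (ManagementOS) vNICs for each traffic type
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
```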
Everything is set up and working, so I did some testing with Live Migration to see how fast it would go and how much network bandwidth would be used. I set up a Performance Monitor counter on the Live Migration vNIC to capture peak bandwidth utilization. When live migrating multiple VMs with large amounts of RAM assigned, I hit 14 Gbps at one point, which tells me that load balancing across the team is working. Since all my Ethernet traffic now flows over the same NIC team, and the Cluster vNIC is extremely latency sensitive with regard to maintaining consistent heartbeats between nodes, I want to guarantee that the Cluster vNIC will always have the bandwidth it needs to maintain communication between nodes.
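What I was considering is setting relative minimum bandwidth weights on the host vNICs, assuming the Weight-mode vSwitch QoS described in the 2012 documentation still applies to 2016 (the weight values here are just examples, not a recommendation):

```powershell
# Relative weights: each vNIC is guaranteed at least its share (weight / sum of weights)
# of the team's bandwidth under contention. Only takes effect if the switch's
# MinimumBandwidthMode is Weight (which I understand is the default when SR-IOV is off).
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```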
I found Microsoft documentation on network QoS for Server 2012 (the QoS Overview for 2012), but nothing covering network QoS in Server 2016 or 2019. Is this approach no longer recommended in Server 2016? What is the correct/best-practice method for setting up networking in general for Hyper-V clusters in Server 2016?