Hello,
I'm having issues with iSCSI MPIO on an ALUA-enabled, dual-controller SAN (Promise VessR2600tiD).
The device is certified for Windows Server 2012 R2 (http://www.windowsservercatalog.com/item.aspx?idItem=752f327a-05c8-ec2a-95c5-671460e6b7a9&bCatID=1282) and runs current firmware. We are running VM workloads on two LUNs at the moment, over dedicated 10G iSCSI connections (Intel X540-T1/T2 NICs), in a 2-node Hyper-V cluster.
The NICs have stripped-down protocol bindings (IPv4 only, with jumbo frames) and sit on a dedicated subnet.
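In case it helps, this is roughly how I can sanity-check the MPIO/iSCSI side of that setup on each node (just a sketch; the MPIO disk number below is a placeholder):

Get-WindowsFeature Multipath-IO             # MPIO feature installed?
Get-MSDSMSupportedHW                        # hardware IDs claimed by the Microsoft DSM
Get-MSDSMGlobalDefaultLoadBalancePolicy     # global load-balance policy
Get-IscsiTargetPortal                       # portals on both controllers present?
Get-IscsiSession | Format-List TargetNodeAddress, IsConnected
mpclaim.exe -s -d                           # list MPIO disks and their LB policy
mpclaim.exe -s -d 1                         # per-path detail for MPIO Disk 1 (example number)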
Everything works fine until MPIO kicks in. When I try to fail over a controller on the SAN (change the preferred controller), I get strange BSODs in msdsm.sys.
What's even stranger is a test I tried recently:
2x Hyper-V servers in a cluster, no MPIO, with LUN1 and LUN2 on CTRL1
1x STANDALONE server WITH MPIO on LUN3 (LUN masking turned on) on CTRL2
If I change the ALUA preference on LUN3 -> CTRL1, the Hyper-V cluster servers get a BSOD, not the standalone server. Also,
on the standalone server, the Path State stays Active/Optimized, even though the TPG State changes to Standby and the second portal becomes active.
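For reference, the per-path state can also be pulled from the DSM side with mpclaim (a sketch; the MPIO disk number is just an example):

mpclaim.exe -s -d       # enumerate MPIO disks
mpclaim.exe -s -d 2     # path list for MPIO Disk 2 - this is where Path State still reads Active/Optimized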
The vendor wasn't able to help me and I'm kinda at my wit's end here. It seems that no matter which server I set up MPIO on, the cluster servers get a BSOD, even if the server with MPIO is not part of the cluster and doesn't share a LUN with the cluster. There's nothing in either the server or SAN logs.
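In case someone asks for them, the bugcheck details can be pulled off a cluster node after the reboot with something like this (a sketch; paths and values are the Windows defaults, and the dump itself still needs WinDbg's !analyze -v to point at msdsm.sys):

# Is a kernel/complete dump actually being written, and where?
Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl |
    Select-Object CrashDumpEnabled, DumpFile

# The bugcheck code and parameters land in the System log (event 1001) after the reboot
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 1001 } -MaxEvents 5 |
    Format-List TimeCreated, Message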