I have a SOFS cluster with two servers and two volumes on Server 2012 R2. One of the two volumes does not show any physical disks associated with it in either PowerShell or Server Manager, although it shows as Healthy and seems to be working fine in all regards.
This volume shows what I would expect:
PS C:\> get-virtualdisk -friendlyname "volume 1" | get-physicaldisk
FriendlyName   CanPool OperationalStatus HealthStatus Usage       Size
------------   ------- ----------------- ------------ -----       ----
PhysicalDisk10 False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk2  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk3  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk16 False   OK                Healthy      Auto-Select 1.82 TB
PhysicalDisk7  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk14 False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk1  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk6  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk11 False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk9  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk13 False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk4  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk12 False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk8  False   OK                Healthy      Auto-Select 930.75 GB
PhysicalDisk5  False   OK                Healthy      Auto-Select 930.75 GB
PS C:\>
But this one does not, even though it should be backed by largely the same set of drives:
PS C:\> get-virtualdisk -friendlyname "volume 2" | get-physicaldisk PS C:\>
Interestingly, though, the association is visible when queried from the other direction:
PS C:\> get-physicaldisk physicaldisk1 | get-virtualdisk
FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size
------------ --------------------- ----------------- ------------ -------------- ----
Volume 2     Mirror                OK                Healthy      True           4 TB
Quorum Disk  Mirror                OK                Healthy      True           4 GB
Volume 1     Mirror                OK                Healthy      True           2.5 TB
PS C:\>
I don't know whether this is just a problem with the tools, and I'm not sure how to diagnose it further. I don't see any event log messages that look relevant to this. I have tried rebooting each SOFS cluster node (one at a time) with no change; the cluster and volumes failed over and continued to work across the reboots as expected.
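One thing I have not tried yet is cross-checking the association by going through the storage pool instead, and refreshing the storage provider cache first in case the view is just stale. Something like the following is what I had in mind, though I don't know whether either step would actually show anything different:
PS C:\> # Refresh the provider cache in case the disk associations are stale
PS C:\> Update-StorageProviderCache -DiscoveryLevel Full
PS C:\> # Walk from the virtual disk to its pool, then list the pool's physical disks
PS C:\> Get-VirtualDisk -FriendlyName "Volume 2" | Get-StoragePool | Get-PhysicalDisk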
Any ideas?