VMFS Datastores failing to create on StarWind VSAN storage
Posted: Sun Mar 28, 2021 2:30 am
Hello everyone. I'm a new user, currently setting up a demo environment for my group to experiment with, but I'm running into an odd issue that didn't occur during my earlier small-scale tests.
I've provisioned two StarWind 8 servers on Windows Server 2019 Core for storage; the compute side is two ESXi 6.5 servers in a cluster. I'd like to create VMFS datastores backed by StarWind.
I've got it to the point where both of my ESXi servers can "see" the target IQN, but when I attempt to create a VMFS datastore, both hosts fail with "Cannot change the host configuration".
To test, I connected from a spare Windows Server 2019 VM using the iSCSI initiator, and I was able to connect (through the management address) and create a usable NTFS volume.
I then tried creating the VMFS datastore over the same management address instead of the iSCSI network, just as a test, and it still wouldn't bind. I'm a bit perplexed as to what could be causing this; has anyone come across an issue like this before?
I believe I've followed all of the recommendations and specifications: the virtual disks are set to a 512-byte sector size, since 4096 wouldn't even show up in the discovery pool on my ESXi servers; I've tried L1 and L2 caching (and none at all); and jumbo frames are enabled on all iSCSI and sync interfaces on both the ESXi and StarWind boxes.
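For reference, here are the diagnostic checks I can run from the ESXi shell (a sketch assuming SSH access to the hosts; the vmkernel interface name `vmk1` and the StarWind iSCSI IP `172.16.10.10` are placeholders for my actual values):

```shell
# Verify jumbo frames work end-to-end from the ESXi iSCSI vmkernel port
# to the StarWind target (-d = don't fragment, -s 8972 = 9000 MTU minus
# 28 bytes of IP/ICMP headers, -I = source vmkernel interface)
vmkping -d -s 8972 -I vmk1 172.16.10.10

# Confirm the StarWind LUN is visible and check its reported sector size
esxcli storage core device list | grep -i -A 10 starwind

# Watch the vmkernel log while retrying the datastore creation; the
# generic "Cannot change the host configuration" error in the UI usually
# hides a more specific failure reason here
tail -f /var/log/vmkernel.log
```

If the vmkping with `-d -s 8972` fails while a plain vmkping succeeds, jumbo frames aren't actually passing end-to-end (often a switch-port MTU mismatch), which can break datastore creation even when discovery works.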