Logical Sector Size for Flash and HDD Storage Pool Virtual Disks

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

ticktok
Posts: 21
Joined: Sat Mar 24, 2012 2:12 am

Fri Apr 21, 2017 2:03 pm

I am partway through the purchase of StarWind VSAN Standard edition.
Running Windows Server 2016.
I have two storage pools: one with a 1TB virtual disk on two mirrored SSDs, and the other with a 2TB virtual disk on two mirrored HDDs. Does it matter that the SSDs I will use for the flash cache have a 512-byte logical sector while the HDDs have a 4096-byte logical sector?
Does it matter if all the disks, both SSD and HDD, have a 512-byte physical sector?
I do not have 10Gb NICs, but I have 4x 1Gb ports on each server. How would I configure this setup for a small office lab?
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Fri Apr 21, 2017 4:59 pm

Hello Ticktok,
Thanks for your interest in the StarWind solution.
If you are asking about the StarWind virtual disk sector size, I would recommend choosing 4K for a Microsoft Hyper-V environment and 512 for ESXi or Xen ones.
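If you want to double-check what sector sizes Windows reports for the underlying volumes before creating the device, a quick Python sketch like this works (the drive letter "E:" is just a placeholder for your storage pool volume):

    import subprocess

    # Ask Windows for the sector geometry of the volume backing the device.
    out = subprocess.run(
        ["fsutil", "fsinfo", "sectorinfo", "E:"],
        capture_output=True, text=True, check=True,
    ).stdout

    # fsutil prints one "Name : Value" pair per line; keep the sector fields.
    for line in out.splitlines():
        if "BytesPerSector" in line:
            print(line.strip())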
As for the network configuration, I think it makes sense to use 2x 1Gb dedicated networks for StarWind Synchronization and 1x 1Gb for iSCSI/Heartbeat. I would recommend the Failover Only MPIO policy, so that the loopback (127.0.0.1) connection is active and the partner connection is standby.
You can find all the information about hyperconverged Hyper-V over here: https://www.starwindsoftware.com/starwi ... -v-cluster
Also, you can find all the technical documentation at the following link: https://www.starwindsoftware.com/resource-library
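As a rough sanity check on what those 1Gb links can carry, here is a back-of-envelope sketch (theoretical line rates only; the 10% framing allowance is an assumption, real numbers will vary):

    # Theoretical ceilings for the suggested 2x sync + 1x iSCSI layout.
    GBIT = 1000**3 / 8        # 1 Gbit/s expressed in bytes per second
    OVERHEAD = 0.9            # rough allowance for TCP/iSCSI framing (assumption)

    sync_ceiling = 2 * GBIT * OVERHEAD    # 2x 1Gb dedicated Synchronization links
    iscsi_ceiling = 1 * GBIT * OVERHEAD   # 1x 1Gb iSCSI/Heartbeat link

    print(f"Synchronization ceiling: {sync_ceiling / 1e6:.0f} MB/s")
    print(f"iSCSI ceiling:           {iscsi_ceiling / 1e6:.0f} MB/s")
    # With Failover Only and the loopback path active, reads stay local,
    # while writes are bounded by the synchronization channel.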
Alexey Sergeev
Posts: 26
Joined: Mon Feb 13, 2017 12:48 pm

Thu Apr 27, 2017 6:33 am

And what about this KB article? https://knowledgebase.starwindsoftware. ... lock-size/

"The logical block size should basically be equal or more than the physical block size of the drive".

I'm asking because I had some strange results using a 4096-byte sector size for a StarWind device:
sequential reads with 64K blocks were almost 3 times slower than on the same device with a 512-byte sector size.
Random reads with 64K blocks were equal.
We used RAID 10 with 8 SAS HDDs (146GB, 10K, 6Gb/s, 512b). Stripe size 64K, read-ahead, write-back, disk cache disabled. OS: Windows Server 2012 R2.
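For reference, the runs looked roughly like this (the path and parameters here are illustrative, not the exact ones we used); we recreated the StarWind device with the other sector size between runs:

    import subprocess

    # Sequential 64K reads against a test file on the StarWind device.
    # diskspd defaults to sequential access; -r would switch to random.
    target = r"E:\diskspd-test.dat"     # placeholder path on the device
    subprocess.run(
        ["diskspd.exe",
         "-c2G",     # create a 2 GiB test file
         "-b64K",    # 64K block size
         "-d30",     # 30-second run
         "-o8",      # 8 outstanding I/Os per thread
         "-t4",      # 4 worker threads
         "-w0",      # 100% reads
         "-Sh",      # disable software caching and hardware write caching
         "-L",       # collect latency statistics
         target],
        check=True,
    )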
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Wed May 03, 2017 7:40 pm

Hello Alexey,
Thanks for your interest in the StarWind solution.
I assume the reason for the drop on 64K sequential blocks with a 4K StarWind virtual block size is that the blocks are not aligned when operating with the drive.
Therefore, when the logical block size is equal to the physical block size, the performance is better.
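A toy illustration of the idea, assuming the cost of a request grows with the number of device blocks it straddles:

    def blocks_touched(offset, length, block_size):
        """How many device blocks an I/O of the given size crosses."""
        first = offset // block_size
        last = (offset + length - 1) // block_size
        return last - first + 1

    io = 64 * 1024
    print(blocks_touched(0, io, 4096))    # aligned 64K request  -> 16 blocks
    print(blocks_touched(512, io, 4096))  # shifted by 512 bytes -> 17 blocks
    print(blocks_touched(512, io, 512))   # 512B logical blocks  -> 128, always aligned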
Alexey Sergeev
Posts: 26
Joined: Mon Feb 13, 2017 12:48 pm

Thu May 04, 2017 6:29 am

Yeah, it looks like our controllers based on the LSI 2208 are compatible with 4096-byte sector disks, but the disks we're using in the RAID are 512-byte.
So in our case the "best practice" for the StarWind device logical block size would be 512 bytes.

We've also noticed another performance issue with 4K block operations on StarWind devices.
Even without synchronization, when the device is connected only through the loopback address, the performance drop is significant in comparison with local storage.
We had a remote session with your engineer and couldn't find a solution for this problem.

As I understand it, StarWind uses 64K blocks for reads and writes during synchronization between nodes.
Could this performance degradation with 4K become a serious problem for our HA storage?
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Mon May 08, 2017 7:46 pm

Hello Alexey,
Could you give me more information about this 4K block performance degradation and about your environment in general (MPIO policy, NICs, disks, scenario, etc.)?
Yes, StarWind Synchronization works on 64K blocks.
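To put the 64K figure in perspective, here is a rough estimate of a full synchronization (the device size and link speed are placeholder assumptions, not measurements):

    # Rough full-sync estimate, assuming the device is streamed in 64K blocks.
    DEVICE_SIZE = 2 * 1024**4      # 2 TiB HA device (placeholder)
    BLOCK = 64 * 1024              # synchronization block size
    SYNC_RATE = 200e6              # ~200 MB/s sync channel (assumption)

    blocks = DEVICE_SIZE // BLOCK
    hours = DEVICE_SIZE / SYNC_RATE / 3600
    print(f"{blocks:,} blocks to transfer, roughly {hours:.1f} h for a full sync")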
Alexey Sergeev
Posts: 26
Joined: Mon Feb 13, 2017 12:48 pm

Wed May 24, 2017 11:42 am

I've attached the diskspd results. As you can see, the performance of the local storage and of the StarWind device placed on it is nearly equal.
Only sequential operations with a 4K block size show strange degradation.

Since StarWind synchronization works with 64K blocks and Hyper-V also uses them, I wonder: is this a real problem for us, or can we live with it?

We're planning to use Virtual SAN in a hyperconverged scenario with only 2 Hyper-V hosts.
Both nodes have 8 SAS HDDs (300GB, 10K, 512b) in RAID 10 (64K stripe size, write-back, read-ahead, disk cache disabled).
The iSCSI/Heartbeat and Synchronization channels are connected through a dual-port 10Gb NIC.
The MPIO policy is set to Failover Only with the loopback address active.
Attachments
vs01.7z
(7.89 KiB) Downloaded 289 times
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Wed May 24, 2017 5:06 pm

Hello Alexey,
If you are using one 10Gb port for iSCSI, try changing the MPIO policy from Failover Only to Least Queue Depth.
Also, you can try adding one more iSCSI session over the loopback connection.
I would suggest checking the logical CPU core load during the test; in some cases it can be a bottleneck.
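For the per-core check, a small sketch like this can run alongside the test (it assumes the third-party psutil package is installed):

    import psutil

    # Sample per-core load once a second for 30 seconds; a single core
    # pinned at ~100% often caps small-block IOPS long before the disks do.
    for _ in range(30):
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        print(" ".join(f"{c:5.1f}" for c in per_core), f"| max {max(per_core):.1f}%")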
Please keep us updated about it.