Slow writes, good reads

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

jay1
Posts: 3
Joined: Sat Jul 15, 2023 7:41 am

Sat Jul 15, 2023 7:50 am

Hi,

I have 2 identical ESXi 8 hosts that have the same issue.

Both are using an active/active (synchronous) StarWind vSAN with a 2.5 Gbps sync network (gigabits per second).
The drives are 990 Pro NVMe drives

Testing directly on the drive in VMware with CrystalDiskMark, I get about 2.5 GBps (gigabytes per second) read and write. If I then add this disk to StarWind vSAN and run the same test, I get the same reads but only 300 MBps write speeds.

Any idea what this may be?
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Sun Jul 16, 2023 4:00 am

Hi,

Welcome to StarWind Forum. 2.5 Gbps ~ 315 MBps.
Several suggestions and questions from my side:
1. What is the underlying storage (Pass-through, RDM, VMDK)?
2. What is the iSCSI link bandwidth? Make sure you are not mixing the traffic.
3. What is the testing scenario? (likely to be a Windows-based VM)
4. Do not use teaming (see more at https://forums.starwindsoftware.com/vie ... f=5&t=5285 and https://www.starwindsoftware.com/best-p ... practices/)
5. iSCSI Initiator can be a bottleneck (see more at https://www.starwindsoftware.com/blog/v ... ne-better/)
6. How do you benchmark StarWind's VSAN performance? For Windows-based VM, connect the disk over iSCSI via Loopback (at least 3x loopback sessions+3x Partner connections; try local paths to be preferred while MPIO is set to Round Robin with Subset). For VMware VM, carry out cumulative tests in 4 VMs (should have at least 8xvCPUs and 8GB RAM each) simultaneously on StarWind HA datastore. See more at https://www.starwindsoftware.com/best-p ... practices/.
7. Any caching involved?
8. Please use DiskSPD as it has way better flexibility of settings. What is the test file size by the way?
9. How do you add storage to StarWind VSAN VMs? (RDM, VMDK)
10. Remark to 6 and 9. Please note that a single VM's VMDK cannot saturate disk performance. Carry out cumulative tests or connect the storage from StarWind VM over iSCSI into the client VM using multiple iSCSI sessions.
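The "2.5 Gbps ~ 315 MBps" conversion above is worth spelling out, since it is the crux of the thread. A minimal sketch (the helper name is illustrative, not part of any StarWind tooling):

```python
# Rough sanity check: a 2.5 Gbps link caps sequential writes well below
# what a local NVMe drive can deliver. Line-rate figures ignore protocol
# overhead, so real sync-channel throughput is somewhat lower still.

def gbps_to_mbps(gbps: float) -> float:
    """Convert gigabits per second to megabytes per second (decimal units)."""
    return gbps * 1000 / 8

print(gbps_to_mbps(2.5))   # 312.5  -> lines up with the ~300 MBps writes observed
print(gbps_to_mbps(25.0))  # 3125.0 -> headroom for a ~2.5 GBps NVMe drive
```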
jay1
Posts: 3
Joined: Sat Jul 15, 2023 7:41 am

Tue Jul 18, 2023 2:20 pm

Hi, thanks for the response.

1. What is the underlying storage (Pass-through, RDM, VMDK)?

I have tried both passthrough and VMDK

2. What is the iSCSI link bandwidth? Make sure you are not mixing the traffic.

1 Gb between the hosts, but the hosts only use their local appliance, so there is no cross-host iSCSI traffic.

3. What is the testing scenario? (likely to be a Windows-based VM)

Windows VM

4. Do not use teaming (see more at viewtopic.php?f=5&t=5285 and https://www.starwindsoftware.com/best-p ... practices/)

No

5. iSCSI Initiator can be a bottleneck (see more at https://www.starwindsoftware.com/blog/v ... ne-better/)
6. How do you benchmark StarWind's VSAN performance? For Windows-based VM, connect the disk over iSCSI via Loopback (at least 3x loopback sessions+3x Partner connections; try local paths to be preferred while MPIO is set to Round Robin with Subset). For VMware VM, carry out cumulative tests in 4 VMs (should have at least 8xvCPUs and 8GB RAM each) simultaneously on StarWind HA datastore. See more at https://www.starwindsoftware.com/best-p ... practices/.

7. Any caching involved?

No

8. Please use DiskSPD as it has way better flexibility of settings. What is the test file size by the way?

4GB

9. How do you add storage to StarWind VSAN VMs? (RDM, VMDK)

VMDK

10. Remark to 6 and 9. Please note that a single VM's VMDK cannot saturate disk performance. Carry out cumulative tests or connect the storage from StarWind VM over iSCSI into the client VM using multiple iSCSI sessions.


The issue is 100% related to the HA sync in some way. With a standalone disk I get full read and write speeds, but as soon as it is changed to an HA device with sync, the write speeds drop to 300 MBps; this happens with a VMDK and also when passing the NVMe SSD straight through to the VM. The fact that the sync interface is 2.5 Gbps suggests that writes are throttled to the max link speed of the sync channel to keep the nodes from getting out of sync. I can't see this in the documentation, though.

I just read this:

"Synchronization channel recommendations
The synchronization channel is a critical part of the HA configuration. The synchronization channel is used to mirror every write operation addressing the HA storage. It is mandatory to have synchronization link throughput equal or higher than the total throughput of all links between the client servers and the Virtual SAN Cluster."
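The quoted recommendation can be turned into a quick sizing calculation: since the sync channel mirrors every write, it must be at least as fast as the total client-facing write throughput. A sketch, with an assumed overhead margin (the function name and the 20% figure are illustrative, not from StarWind documentation):

```python
# Sizing sketch for the HA sync channel, per the quoted recommendation:
# sync link throughput must be >= total write throughput toward the cluster.

def min_sync_link_gbps(target_write_mbps: float, overhead: float = 0.2) -> float:
    """Minimum sync link speed (Gbps) to sustain a target write rate (MBps).

    'overhead' is an assumed margin for protocol framing and bursts.
    """
    return target_write_mbps * 8 / 1000 * (1 + overhead)

# To sustain the ~2500 MBps the NVMe drive can write locally:
print(min_sync_link_gbps(2500))  # 24.0 -> a 2.5 Gbps link is ~10x too slow;
                                 #         a 25 Gbps sync link fits
```

This matches the outcome later in the thread: upgrading the sync network from 2.5 Gbps to 25 Gbit/s removed the write bottleneck.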


[image attachment]
jay1
Posts: 3
Joined: Sat Jul 15, 2023 7:41 am

Sat Jul 22, 2023 7:39 pm

I added a 25 Gbit/s network for the sync, and the speeds are now fine.

[image attachment]
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Sat Jul 22, 2023 8:15 pm

Hi,

Thanks for your feedback!