Really poor write performance?

ale624
Posts: 4
Joined: Mon Mar 07, 2022 11:05 pm

Mon Mar 07, 2022 11:40 pm

Hi,

I am trialling vSan as a potential replacement for an aging EqualLogic I have in the lab.
Currently I have vSan Free set up on a two-node failover cluster.

I have a disk that spans two 4-disk RAID 5 arrays, one on each node. I set it up as per the PowerShell templates; no settings were changed other than networking.
I have the three networks: sync, heartbeat, and iSCSI traffic. These are all separate VLANs on two teamed 10Gig connections, so 20Gig max throughput and up to 10Gig per single transfer.
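
For reference, the team/VLAN layout can be checked with something like this (the team and interface names below are placeholders, not the exact ones in this lab):

# Placeholder team/interface names - adjust before running.
Get-NetLbfoTeam                                    # the team over the 2 x 10Gig NICs
Get-NetLbfoTeamNic -Team "Team1"                   # team interfaces / VLANs on the team
Get-NetAdapter | Where-Object InterfaceDescription -like "*Multiplexor*"
Get-NetIPAddress -InterfaceAlias "Team1 - iSCSI"   # e.g., the iSCSI VLAN interface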

I have configured the iSCSI connections as shown in the guides, yet I get really terrible write performance:

vSan on the left, Raw RAID on the right
Image

MPIO is clearly functioning, as the vSan read speeds are far higher than the raw RAID drive performance.

Is there any way I can at least get the writes within the region of the raw RAID stats?

If anyone is curious, these are my EQ stats vs. the vSan stats, so it's not a network issue, as these use the same NICs.
Image

Thanks!
Alex
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Tue Mar 08, 2022 7:48 am

Welcome to the StarWind Forum.
We do not recommend using any teaming or VLANs for iSCSI and sync networks.
Could you also benchmark the NTFS (i.e., not the CSVFS) performance?
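
For consistency between runs, something like diskspd could be used against the NTFS volume (the tool, test file path, and parameters below are only an example, not a required test):

# 64 KiB blocks, 60 s duration, 8 outstanding I/Os, 4 threads, 50% writes,
# random access, caching disabled, 10 GiB test file on the volume under test.
diskspd.exe -b64K -d60 -o8 -t4 -w50 -r -Sh -c10G D:\testfile.dat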
ale624
Posts: 4
Joined: Mon Mar 07, 2022 11:05 pm

Tue Mar 08, 2022 9:48 pm

Hi,

OK, I was unaware that teaming and VLANs were unsupported for the sync and iSCSI networks.
I have now thrown up a couple of 10Gig links between these servers and run some additional testing with a fresh disk setup.

Left: new 4Gig disk with the new "raw" links
Middle: 400Gig disk (tested yesterday with CSVFS, now NTFS)
Right: Raw RAID disk
Image

Performance as NTFS is a lot better; I'm not entirely sure why this would be the case.
These disks need to be able to support VMs, so they will need to be CSVFS eventually.

Removing the VLANs and teaming actually seems to have made performance slightly worse: my original network setup peaked at ~95 MB/s, and the new setup struggles to get close to ~75 MB/s.
That's a far cry from the raw disk performance, which averages around 150 MB/s with peaks of ~210 MB/s.

Is there anything else I should be looking at?
These are both Dell PowerEdge R610s with dual 6-core Xeons and 72 GB RAM each.
https://i.imgur.com/At3WSuc.png
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Wed Mar 09, 2022 11:16 am

"Performance as NTFS is a lot better; I'm not entirely sure why this would be the case."
CSVFS is a cluster-aware file system. The performance drop might be related to some internal Windows Server aspects.
Try the following (a rough PowerShell sketch of steps 2-6 follows the list):
1. Vary the MPIO policies:
- set the local-to-local iSCSI connections (e.g., 172.16.10.10 to 172.16.10.10) as the preferred connections with Round Robin with Subset;
- set 127.0.0.1 as the preferred connection;
- set the partner connection as preferred.
2. Tweak HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\000X\Parameters by setting FirstBurstLength to 262144 (decimal).
3. Disable VMQ, SR-IOV, and Energy-Efficient Ethernet on the iSCSI & sync NICs.
4. Set the power profile to High Performance.
5. Optimize RSS.
- The recommendation is to dedicate processor cores to each NIC: set RSS Max Processors and the RSS Base Processor so that each NIC has its own CPU cores. In one case we started with RSS Max Processors at "4", but ended up leaving it at the default of "8" and setting the RSS Base Processor to "2" instead of the default of "0"; this worked properly without any further failures.
6. Set the receive & send buffers to their maximum values.
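
A rough PowerShell sketch of steps 2-6 (the adapter names "iSCSI1"/"Sync1", the buffer values, and the vendor-specific advanced-property display names are assumptions and will differ per system):

# Assumed adapter names - replace with the real iSCSI/sync NICs.
$nics = "iSCSI1", "Sync1"

# Step 2: FirstBurstLength = 262144 under the storage-class key
# (only the iSCSI initiator instance actually consumes the value).
$class = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}"
Get-ChildItem $class | ForEach-Object {
    $params = Join-Path $_.PSPath "Parameters"
    if (Test-Path $params) {
        Set-ItemProperty -Path $params -Name FirstBurstLength -Value 262144 -Type DWord
    }
}

# Step 3: disable VMQ, SR-IOV, and Energy-Efficient Ethernet (display name is vendor-specific).
foreach ($nic in $nics) {
    Set-NetAdapterVmq   -Name $nic -Enabled $false
    Set-NetAdapterSriov -Name $nic -Enabled $false
    Set-NetAdapterAdvancedProperty -Name $nic -DisplayName "Energy-Efficient Ethernet" -DisplayValue "Disabled" -ErrorAction SilentlyContinue
}

# Step 4: High Performance power plan.
powercfg /setactive SCHEME_MIN

# Step 5: RSS - base processor 2, max processors 8 (the values mentioned above).
foreach ($nic in $nics) {
    Set-NetAdapterRss -Name $nic -BaseProcessorNumber 2 -MaxProcessors 8
}

# Step 6: receive/send buffers - display names and maximum values are vendor-specific.
foreach ($nic in $nics) {
    Set-NetAdapterAdvancedProperty -Name $nic -DisplayName "Receive Buffers"  -DisplayValue "4096" -ErrorAction SilentlyContinue
    Set-NetAdapterAdvancedProperty -Name $nic -DisplayName "Transmit Buffers" -DisplayValue "4096" -ErrorAction SilentlyContinue
}

A reboot (or at least reconnecting the iSCSI sessions) may be needed for the registry change to take effect.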
ale624
Posts: 4
Joined: Mon Mar 07, 2022 11:05 pm

Tue Mar 22, 2022 1:35 am

Thanks for the suggestions. I have not had a huge amount of time to play around with this setup, but I did get a chance to fiddle with my NIC settings and that reg key, and got slightly better results with the last few of your suggestions.

As of right now I cannot change the MPIO policy on either node, as I get the error "The parameter is incorrect". I'm not sure if this is a vSan issue or a Windows issue.
So I will have to troubleshoot that before looking at changing those settings.
Image

Fresh benchmark:
Image
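
For reference, the per-disk policy can also be checked and set from an elevated prompt with something like this (the disk number below is just an example):

# List MPIO disks and their current load-balance policies.
mpclaim.exe -s -d

# Show the paths for one MPIO disk (number taken from the listing above).
mpclaim.exe -s -d 0

# Set the policy for that disk: 2 = Round Robin, 3 = Round Robin with Subset, 4 = Least Queue Depth.
mpclaim.exe -l -d 0 2

# Global default for newly claimed devices (MPIO PowerShell module).
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR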
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Tue Mar 22, 2022 2:42 pm

I believe there is some misconfiguration in the setup that prevents you from setting the custom MPIO policy.
Please make sure to have the system configured as we suggest here https://www.starwindsoftware.com/best-p ... practices/ and https://www.starwindsoftware.com/resour ... rver-2016/.
ale624
Posts: 4
Joined: Mon Mar 07, 2022 11:05 pm

Wed Apr 13, 2022 8:38 pm

As an update to this:

I realised the partner server had a different RAID card, which was what was causing the poor performance.
I totally forgot these were different and overlooked it when setting this up initially.

I am now getting much, much better write performance, in line with what I would expect! :D

Left: raw, right: vSan (different disks, so slightly different raw numbers than before).
Image
yaroslav (staff)
Staff
Posts: 2361
Joined: Mon Nov 18, 2019 11:11 am

Thu Apr 14, 2022 2:13 pm

Greetings,

Thanks for your update. Please let me know if you need additional assistance.