Horrendous performance on LSFS HA

Software-based VM-centric and flash-friendly VM storage + free version

jes.brouillette
Posts: 4
Joined: Tue Aug 28, 2018 2:19 pm

Tue Aug 28, 2018 2:55 pm

I am having severe performance issues with storage on my 2-node hyper-converged cluster. My NVMe L2 cache drive gets 680+ MB/s and the data drive 400+ MB/s, but the vSAN device manages only 2.3 MB/s. I followed the setup guide at https://www.starwindsoftware.com/resour ... erver-2016 when I initially built the cluster.

Both servers are identical:
2x Xeon E5-2620v4
128GB Ram
1x Intel 256G M.2 (OS)
1x Intel RS3DC080 backed by 6x Seagate ST3000NM0025 3TB 12Gb/s SAS 7200RPM 128MB-cache drives in RAID 10 (Data)
1x Intel 400GB DC P3500 NVMe (L2 Cache)
2x Intel X550-T2 (one port from each for storage replication; one port from each for OS/VMs)
StarWind Virtual SAN v8.0.0 (Build 11456, [SwSAN], Win64)

I have a single 8 TB image file for data on the 8.2 TB RS3DC080 array, set up as an LSFS HA iSCSI device with a 300 GB L2 cache on the P3500 NVMe.
D:\ = NVMe
E:\ = RS3DC080
V:\ = StarWind LSFS image on iSCSI

Attached are the diskspd results: diskspd.zip (5.18 KiB)
NVMe:

Code:

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |     21631303680 |      5281080 |     687.64 |  176036.67 |    0.011 |     0.023 | D:\TestFile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:       21631303680 |      5281080 |     687.64 |  176036.67 |    0.011 |     0.023
RS3DC080:

Code:

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |     12700012544 |      3100589 |     403.71 |  103349.74 |    0.019 |     0.215 | E:\TestFile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:       12700012544 |      3100589 |     403.71 |  103349.74 |    0.019 |     0.215
StarWind:

Code:

Total IO
thread |       bytes     |     I/Os     |     MB/s   |  I/O per s |  AvgLat  | LatStdDev |  file
-----------------------------------------------------------------------------------------------------
     0 |        74895360 |        18285 |       2.38 |     609.50 |    3.281 |    30.480 | V:\TestFile.dat (1024MB)
-----------------------------------------------------------------------------------------------------
total:          74895360 |        18285 |       2.38 |     609.50 |    3.281 |    30.480
**EDIT**
All diskspd tests were run directly on the Hyper-V host.

Attached configs: storage_config.zip (2.31 KiB)
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Tue Aug 28, 2018 3:39 pm

Since you are running the trial StarWind version, please contact your account manager or submit a ticket here.
jes.brouillette
Posts: 4
Joined: Tue Aug 28, 2018 2:19 pm

Tue Aug 28, 2018 3:44 pm

Oleg(staff) wrote: Since you are running the trial StarWind version, please contact your account manager or submit a ticket here.
I'm not running the trial; I'm running the Free version. The trial expired several months ago.
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Tue Aug 28, 2018 4:22 pm

Oh, I got it.
As far as I can see from your explanation, your system configuration does not meet the LSFS requirements. Please check this article.
You have an 8 TB LSFS device on 8.2 TB of underlying storage. If the LSFS device cannot find enough space to put a segment file on the disk, the device is put into read-only mode.
Please check performance with a thick-provisioned StarWind image file.
jes.brouillette
Posts: 4
Joined: Tue Aug 28, 2018 2:19 pm

Tue Aug 28, 2018 4:32 pm

Oleg(staff) wrote: Oh, I got it.
As far as I can see from your explanation, your system configuration does not meet the LSFS requirements. Please check this article.
You have an 8 TB LSFS device on 8.2 TB of underlying storage. If the LSFS device cannot find enough space to put a segment file on the disk, the device is put into read-only mode.
Please check performance with a thick-provisioned StarWind image file.
Is there a way to convert LSFS to thick via the PowerShell modules without destroying the data, as this is a production cluster?
Oleg(staff)
Staff
Posts: 568
Joined: Fri Nov 24, 2017 7:52 am

Tue Aug 28, 2018 4:56 pm

No, unfortunately it is not possible to convert an LSFS device to a thick-provisioned image.
You can create a thick-provisioned image and migrate the data from the LSFS device to it.
jes.brouillette
Posts: 4
Joined: Tue Aug 28, 2018 2:19 pm

Tue Aug 28, 2018 4:58 pm

Just to clarify one point: the LSFS disk is set to an 8 TB maximum but is currently only using 3.4 TB, with 4.9 TB of .spspx files on disk. So it is not actually utilizing 8 TB of space, which puts it within the 2x limit.
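For what it's worth, the space math here can be sanity-checked quickly. This is just an illustration using the figures from this post (3.4 TB logical data, 4.9 TB of .spspx segment files); the helper function is hypothetical and does not query StarWind itself:

```python
# Hypothetical helper to sanity-check LSFS space amplification.
# Figures are the ones quoted in this post; nothing here talks to StarWind.

def lsfs_amplification(spspx_on_disk_tb: float, logical_data_tb: float) -> float:
    """Ratio of .spspx segment-file footprint on disk to logical data stored."""
    return spspx_on_disk_tb / logical_data_tb

ratio = lsfs_amplification(spspx_on_disk_tb=4.9, logical_data_tb=3.4)
print(f"current amplification: {ratio:.2f}x")  # ~1.44x, under the 2x figure cited
```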
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Aug 28, 2018 6:25 pm

Regardless of how much data you have there currently, as the data grows you risk running into a situation where the whole disk becomes read-only unless you comply with the indicated specifications. This means that for an 8.2 TB physical disk, the maximum size of an LSFS disk hosted on it should not exceed 2.7 TB.
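The sizing rule described above can be sketched as a one-liner. The fixed factor of 3 is an assumption inferred from the 8.2 TB → 2.7 TB example in this thread, not an official StarWind formula:

```python
# Sizing sketch based on the ~3x overprovisioning figure quoted above
# (8.2 TB physical -> 2.7 TB max LSFS device). Assumption, not official guidance.

OVERPROVISION_FACTOR = 3.0  # inferred from the 8.2 TB / 2.7 TB example

def max_lsfs_size_tb(physical_tb: float) -> float:
    """Largest LSFS device a given physical disk can safely host."""
    return physical_tb / OVERPROVISION_FACTOR

print(f"max LSFS device on 8.2 TB: {max_lsfs_size_tb(8.2):.1f} TB")  # -> 2.7 TB
```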