
LSFS over-provisioning

Postby DWHITTRED » Mon Dec 02, 2019 12:01 am

Hi,

I have been using StarWind vSAN Free v8.0.13182 for the last few months. I am very impressed by the speed of the drive, and once I learned the correct PowerShell commands, administering it was fine as well.

I have followed the advice here and set up a ReFS mirrored array, then a deduplicated LSFS device on top of that. I have adequate RAM, and I created my LSFS device based on 200% over-provisioning.

Attached is a screenshot of the LSFS device showing just over 1 TB of user data taking up 7.5 TB of physical disk space. Is this expected behaviour? Is there some sort of manual garbage collection that I am meant to be running?
[Screenshot: LSFS Device]

How do I keep this disk usage under control?

Regards,
Daniel

Re: LSFS over-provisioning

Postby DWHITTRED » Mon Dec 02, 2019 6:01 am

As a troubleshooting step, I flushed the cache and then restarted the StarWind service (in the hope that it would force a defragmentation of that device). After 6 hours the device has remounted, but with the same size and the same amount of fragmentation (820%).

The end of the server log shows this:
Code: Select all
12/2 16:30:54.784133 22b0 error: Sp: --- CStarPackCoreNew::CStarPackFileAsync::SetFileSize(541065216) (line #1287) 112
12/2 16:30:54.784357 22b0 error: Sp: --- CStarPackCoreNew::CStarPackSpaceSegmentFile::OpenSuperPageFile([S:\vSAN_LSFS1\\lsfs1_15213.spspx], 4194304, 128, 0, _, 0, 0x000000D4F13FF470) (line #1699) 112
12/2 16:30:54.784381 22b0 error: Sp: *** CStarPackCoreNew::CStarPackSpaceSegmentFile::PrepareSegment (252) Failed to open file 'S:\vSAN_LSFS1\\lsfs1_15213.spspx' for segment '15213'! Status (112)
12/2 16:30:54.784399 22b0 error: Sp: --- CStarPackCoreNew::CStarPackSpaceSegmentFile::PrepareSegment(Segment index - 15213, Size in pages - 128, Page size - 4194304, 0x000001E0F25B94B0) (line #254) 112
12/2 16:30:54.784416 22b0 error: Sp: *** CStarPackCoreNew::CStarPackSpaceSegmented::PrepareSegment (2381) Failed to prepare segment '15213'! Status (112)
12/2 16:30:54.784432 22b0 error: Sp: *** CStarPackCoreNew::CStarPackSpaceSegmented::AccessSegmentOptimizerWorker (4193) Failed to prepare space segment '15213'! Status (112)
12/2 16:30:54.784443 22b0 General: MonitorStarPackSpace: Out of free space on storage. LSFS disk switched to the 'No space' IO mode. Device is read-only now.
12/2 16:30:55.383362 27fc General: *** DDDisk_Dispatch_WriteAll: Write  offset 0x3576B8 is rejected due to out of space!
12/2 16:30:55.383513 27fc General: *** DDDisk_Dispatch_WriteAll: Write  offset 0x357694 is rejected due to out of space!
12/2 16:30:55.383762 27fc General: *** DDDisk_Dispatch_WriteAll: Write  offset 0x357798 is rejected due to out of space!

Does that mean that, because the underlying storage is already full, it can't perform a defragmentation?
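
Answering part of my own question: status 112 looks like the standard Win32 ERROR_DISK_FULL code. A quick way to check the message from PowerShell (plain .NET, nothing StarWind-specific):
Code: Select all
# Translate Win32 status 112 into its system message.
# Prints: "There is not enough space on the disk."
[System.ComponentModel.Win32Exception]::new(112).Message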

Re: LSFS over-provisioning

Postby DWHITTRED » Mon Dec 02, 2019 11:48 pm

Hi,

Any thoughts on what to do? At the moment the device is in read-only mode and I cannot use it as is. I would like to continue my evaluation, but this is a bit of a show-stopper.

Thanks

Re: LSFS over-provisioning

Postby Boris (staff) » Tue Dec 03, 2019 1:00 pm

Do you have any snapshots for that device?

Re: LSFS over-provisioning

Postby DWHITTRED » Tue Dec 03, 2019 1:45 pm

Hi,
No. I checked using the snapshot-listing portion of the "removesnapshot.ps1" script:
Code: Select all
#
# get snapshots list
#
if ( $device.lsSnapshotsSupported )
{
  foreach ($snapshot in $device.Snapshots)
  {
    Write-Host "Snapshot Id : $($snapshot.Id) Name : $($snapshot.Name) Date : $($snapshot.Date)"
  }
}

Nothing was enumerated.
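
For completeness, here is a self-contained version of that check, adapted from the StarWindX sample scripts; the host, port, and credentials below are the sample defaults and may need adjusting for your environment:
Code: Select all
Import-Module StarWindX

# Connect to the StarWind management service (sample default credentials).
$server = New-SWServer -host 127.0.0.1 -port 3261 -user root -password starwind
$server.Connect()

try
{
    foreach ($device in $server.Devices)
    {
        # Only LSFS devices support snapshots.
        if ($device.lsSnapshotsSupported)
        {
            Write-Host "Device: $($device.Name)"
            foreach ($snapshot in $device.Snapshots)
            {
                Write-Host "  Snapshot Id : $($snapshot.Id) Name : $($snapshot.Name) Date : $($snapshot.Date)"
            }
        }
    }
}
finally
{
    $server.Disconnect()
}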

Re: LSFS over-provisioning

Postby Boris (staff) » Wed Dec 04, 2019 2:05 pm

In view of that, you will either need to increase the capacity of the current disk, if possible, or migrate the data to another location and then recreate the disk. Keep in mind that an image-file based device could be a better fit for your environment. If you prefer to keep using LSFS, you can create a disk of a smaller size, so that it meets the double storage over-provisioning requirement, which is currently not completely met with your 2.49 TB device.

Re: LSFS over-provisioning

Postby DWHITTRED » Thu Dec 05, 2019 6:01 am

Hi Boris,

My underlying storage is 7.48 TB in capacity, and the LSFS device is one-third of that at 2.49 TB, so I have left 200% additional capacity on the underlying storage.
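
To spell out the arithmetic (plain numbers, nothing StarWind-specific):
Code: Select all
$physical = 7.48   # TB, underlying ReFS mirror capacity
$device   = 2.49   # TB, LSFS device size
# Additional capacity left on the underlying storage, as a percentage:
"{0:N0}%" -f (($physical - $device) / $device * 100)   # ~200%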

The concern is that at the moment the LSFS device contains only 1.01 TB of data, yet it is consuming 7.39 TB of storage; by those numbers I would have to be over-provisioned by more than 600%! From what I can see, there has been none of the automatic defragmentation that is described on the https://knowledgebase.starwindsoftware.com/explanation/lsfs-container-technical-description/ page:
How Defragmentation works:
Defragmentation works continuously in the background. Each file segment will be defragmented when data capacity exceeds the allowed value. Maximum allowed junk rate before defragmentation process is initiated equals 60%. This value can be changed using the context menu of the device.

Data from the old fragmented file segment will be moved to another empty file segment and the old file will be deleted.

If the available disk space on the physical storage is low, the LSFS uses more aggressive defragmentation policy and slows down access speed for the end user. If there is no space on the physical drive, LSFS enters the “read-only” mode.
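
If I read that correctly, the junk rate is roughly the share of a segment's space occupied by stale data. A back-of-the-envelope check against my numbers (my own interpretation of the metric, not an official formula):
Code: Select all
$userData  = 1.01   # TB of live user data on the device
$allocated = 7.39   # TB of physical space the LSFS files consume
# Share of consumed space that would be stale under this reading:
"{0:N0}%" -f (($allocated - $userData) / $allocated * 100)   # ~86%, well above the 60% threshold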


My user data has stayed pretty consistent over the last 2 months (hovering around 1 TB), but the "Allocated Physical Space" kept growing until it filled the underlying storage, and the "Device Fragmented, %" value has stayed very high (currently 820%).

I can copy the data off the read-only LSFS device and recreate it, but if the device keeps growing (the LSFS device itself, not the user data contained within it), then in another two months I will be in the same predicament, and recreating the LSFS device every two months isn't a sustainable practice.

Regards,
Daniel

Re: LSFS over-provisioning

Postby Boris (staff) » Thu Dec 05, 2019 1:07 pm

Daniel,

The thing about LSFS is that it does grow over time as data operations are processed. Your device's size is not just 1 TB, it's 2.49 TB, and taking into account the 200% over-provisioning requirement for an LSFS device, you would need at least 7.47 TB of physical storage (3 x 2.49 TB). Your value of 7.48 TB is only just above that, so you may want to consider recreating the device with a smaller size. Otherwise, you may want to use flat image files for the whole capacity of your storage.
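
Putting that sizing rule into a quick calculation (a sketch based on the figures in this thread; treat the 3x factor as a rule of thumb rather than an official formula):
Code: Select all
# With 200% over-provisioning, physical capacity must be at least
# 3x the LSFS device size; solve for the largest device that fits.
$physical  = 7.48                            # TB of underlying storage
$maxDevice = [math]::Round($physical / 3, 2)
Write-Host "Largest LSFS device for $physical TB: $maxDevice TB"   # ~2.49 TB, right at the limit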

