Public beta (bugs, reports, suggestions, features and requests)
Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)
-
glecit
- Posts: 5
- Joined: Tue Jul 29, 2014 4:07 pm
Tue Jan 20, 2015 4:08 pm
anton (staff) wrote:1) Looks like snapshot-based sync (well, VERY close...), so the good news is we've re-worked this code path in an upcoming post-fix.
2) We'd be happy to take a look over a remote session, as I'm still not 100% sure about 1), so please drop a message to support with a link to this thread and credentials for your RDP (or TeamViewer?) account.
Thank you for your cooperation!
glecit wrote:It's stuck during a fast sync again after a clean reboot. I'm going to leave it like this for now in case you want to do a remote session while it's stuck.
It would be nice to hear if this is a known issue or not.
Sorry, after a few days I needed to get the lab cluster running again to test something else. I can provide logs if you're interested, but it's probably not worth the time to debug old code.
Looking forward to testing the new sync code. By "VERY close", did you mean close to a new beta build or to a production release?
Thanks, Anton.
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Tue Jan 20, 2015 8:56 pm
We're getting closer, so you're probably right that it's worth moving to a new build eventually.
It will be a beta, but released on a shorter cycle, since we did not change the core dramatically with the current update (there are tons of changes in the trunk for another release).
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
-
wgeddes
- Posts: 2
- Joined: Fri May 30, 2014 4:31 pm
Tue Feb 03, 2015 5:41 pm
I never did give an update, but here is one now...
I created 3 separate LSFS disks: 2 of them 50GB each (for Hyper-V OS), and 1 x 300GB (shared SQL data).
Actual space used inside the LSFS disks:
1st 50GB - 33GB used
2nd 50GB - 33GB used
300GB - 5GB used
Actual space the StarWind LSFS device files are taking on disk:
1st 50GB - 426.5GB
2nd 50GB - 426.38GB
300GB - 1.42TB
In total, StarWind is using 2.28TB of physical space for 400GB of LSFS drives.
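As a quick sanity check on those figures, here is a minimal sketch in plain Python; it is not StarWind tooling, and the numbers are simply copied from the post above.

# Recompute the totals and the expansion ratio from the figures quoted above.
# Values come straight from the post; nothing here queries StarWind itself.
nominal_gb = {"hyperv_os_1": 50.0, "hyperv_os_2": 50.0, "shared_sql": 300.0}
physical_gb = {"hyperv_os_1": 426.5, "hyperv_os_2": 426.38, "shared_sql": 1420.0}

total_nominal = sum(nominal_gb.values())    # 400 GB provisioned
total_physical = sum(physical_gb.values())  # ~2272.9 GB on disk

print(f"Provisioned: {total_nominal:.0f} GB")
print(f"On disk:     {total_physical:.2f} GB (~{total_physical / 1000:.2f} TB)")
print(f"Expansion:   {total_physical / total_nominal:.1f}x nominal")  # ~5.7x

That works out to roughly 5.7x the provisioned size, in line with the 2.28TB total quoted above.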
After creating the disks on the 1st node, I let StarWind create the replica on the 2nd without altering any options, and everything showed fine during setup. Now when logging in, the 2 x 50GB disks aren't even showing in the console for the 2nd node, so there are yellow triangles for both of the HA disks on the 1st node, and the shared SQL 300GB disk shows "partner not synchronized".
I also created a 1GB fixed disk for a cluster witness, and that disk is performing as it should.
Wes Geddes
Windows System Architect
HostGator | Endurance International Group
-
Anatoly (staff)
- Staff
- Posts: 1675
- Joined: Tue Mar 01, 2011 8:28 am
-
Contact:
Wed Feb 11, 2015 1:09 pm
wgeddes wrote:In total, StarWind is using 2.28TB of physical space for 400GB of LSFS drives.
So roughly 5 times more. Still not good, but already better. We are on the right track!
wgeddes wrote:After creating the disks on the 1st node, I let StarWind create the replica on the 2nd without altering any options, and everything showed fine during setup. Now when logging in, the 2 x 50GB disks aren't even showing in the console for the 2nd node, so there are yellow triangles for both of the HA disks on the 1st node, and the shared SQL 300GB disk shows "partner not synchronized".
Any chance you could upload the logs and share the link to download them with us?
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
-
rrbnc
- Posts: 26
- Joined: Mon Nov 24, 2014 3:06 am
Mon Mar 09, 2015 9:03 pm
Is this fixed in v8 build 7774?
https://www.starwindsoftware.com/release-notes-build
"LSFS
Performance optimizations.
Defragmentation issues fixed. In some cases, the defragmentation process could stop, and device files would take more disk space than required. During normal operation, device files can now take up to 3.5 times more disk space than the nominal device size. Snapshots increase disk space requirements by the size of unique data in the snapshot.
Deduplication option temporarily disabled for new devices."
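For context, here is a minimal sketch (plain Python, just illustrative arithmetic, not a StarWind tool) comparing the figures reported earlier in this thread against the 3.5x bound quoted in those release notes.

DOCUMENTED_MAX_RATIO = 3.5  # "up to 3.5 times more ... than the nominal device size"

# (device name, nominal GB, observed on-disk GB) -- numbers from wgeddes's post above
devices = [
    ("50GB Hyper-V OS #1", 50.0, 426.5),
    ("50GB Hyper-V OS #2", 50.0, 426.38),
    ("300GB shared SQL",   300.0, 1420.0),
]

for name, nominal, physical in devices:
    ratio = physical / nominal
    verdict = "within" if ratio <= DOCUMENTED_MAX_RATIO else "exceeds"
    print(f"{name}: {ratio:.1f}x nominal ({verdict} the documented 3.5x bound)")

All three pre-fix devices sit well above 3.5x (about 8.5x, 8.5x and 4.7x), which is consistent with the defragmentation issue the release notes describe as fixed in build 7774.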
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Tue Mar 24, 2015 11:09 am
Yes, that's the most recent build with the updates.
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software
-
jamesalan
- Posts: 2
- Joined: Tue Feb 20, 2024 4:54 pm
- Location: US
-
Contact:
Tue Feb 20, 2024 5:00 pm
Raising the priority of the defrag process to manage disk space more efficiently also sounds like a critical enhancement. Keen to see how this update impacts performance and reliability in real-world setups!
-
darklight
- Posts: 185
- Joined: Tue Jun 02, 2015 2:04 pm
Thu Feb 22, 2024 12:37 pm
jamesalan wrote: Tue Feb 20, 2024 5:00 pm
Raising the priority of the defrag process to manage disk space more efficiently also sounds like a critical enhancement. Keen to see how this update impacts performance and reliability in real-world setups!
Are you still running LSFS in your environment? Could you be so kind as to share details about your workload and experience?
-
anton (staff)
- Site Admin
- Posts: 4021
- Joined: Fri Jun 18, 2004 12:03 am
- Location: British Virgin Islands
-
Contact:
Mon Feb 26, 2024 8:16 pm
LSFS shouldn't be used in production anymore. We pulled it out a looooong time ago!
Regards,
Anton Kolomyeytsev
Chief Technology Officer & Chief Architect, StarWind Software