StarWind iSCSI SAN Version 8.0 Beta

StarWind VTL, VTL Free, VTL Appliance

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

lohelle
Posts: 141
Joined: Sun Aug 28, 2011 2:04 pm

Tue Sep 24, 2013 3:36 pm

Please remember to fix that too (UNMAP). Also, please send thin provisioning info to vSphere.
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Sep 24, 2013 3:46 pm

Sure, we will.
lohelle wrote:Remember to also fix the issue with the L2 cache always being stored in the same location as the main disk file. If I remember right, the main storage is also always saved in the storage pool location (even if you specify another location). I have only tried the image-file setting.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Sep 24, 2013 3:48 pm

Yes. LSFS is fully thin-provisioned and does support UNMAP. FLAT is thick-provisioned and obviously does not support UNMAP.
awedio wrote:Does this beta v8.0 support UNMAP?
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Tue Sep 24, 2013 3:49 pm

Already on the schedule. We may not fit everything into the first V8 release, however.
lohelle wrote:Please remember to fix that too (UNMAP). Also, please send thin provisioning info to vSphere.
lohelle
Posts: 141
Joined: Sun Aug 28, 2011 2:04 pm

Tue Sep 24, 2013 3:55 pm

I understand that sending info about free space etc. is some work, but you could at least REPORT the storage as thin-provisioned to the client.

It would also be great if you could make it possible to report the storage as SSD or not. This is important for vSphere at least.
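What lohelle is asking for corresponds to two standard SCSI VPD pages: Logical Block Provisioning (0xB2), whose LBPU bit advertises UNMAP support and whose provisioning-type field distinguishes thin from thick, and Block Device Characteristics (0xB1), whose medium rotation rate of 0001h marks a non-rotating (SSD) device. A minimal Python sketch of how an initiator could decode those pages; the byte layouts follow the SBC-3 draft, but the sample responses below are made up:

```python
def parse_lbp_vpd(page: bytes) -> dict:
    """Decode SCSI VPD page 0xB2 (Logical Block Provisioning, SBC-3)."""
    assert page[1] == 0xB2, "not an LBP VPD page"
    lbpu = bool(page[5] & 0x80)     # LBPU bit: device supports UNMAP
    ptype = page[6] & 0x07          # provisioning type: 1 = thick, 2 = thin
    return {"unmap": lbpu, "thin": ptype == 2}

def parse_bdc_vpd(page: bytes) -> dict:
    """Decode SCSI VPD page 0xB1 (Block Device Characteristics, SBC-3)."""
    assert page[1] == 0xB1, "not a BDC VPD page"
    rate = (page[4] << 8) | page[5]  # medium rotation rate; 0x0001 = SSD
    return {"ssd": rate == 0x0001}

# Made-up example responses a thin, UNMAP-capable SSD target might return:
lbp = bytes([0x00, 0xB2, 0x00, 0x04, 0x00, 0x80, 0x02, 0x00])
bdc = bytes([0x00, 0xB1, 0x00, 0x3C, 0x00, 0x01] + [0x00] * 58)
```

An initiator like vSphere reads these pages with INQUIRY (EVPD=1) and adjusts its behavior accordingly, which is why advertising them matters even before free-space reporting is implemented.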
fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Thu Oct 03, 2013 2:31 am

How do you tell v8 where to put the L2 cache?

My test setup failed on this config:
Windows 2012 R2, 1x RAID 1 (2x 146 GB SAS), 3x RAID 5 (4x 450 GB SAS each)
RAID 1 = 136 GB OS drive
each RAID 5 = 1267.5 GB

I then used Windows Storage Spaces to create a RAID 0 of all 3 disks, 3.68 TB total,
then created a volume from the pool (3680 GB),
then created a disk from that of the same size, with dedup and thin provisioning (the thin part was why it failed).
I deleted that disk and recreated it with dedup but thick this time; now I don't get any "mount failed" error.

With this setup I can lose 3 drives (one from each RAID 5 set) and still keep kicking.
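For reference, the capacity arithmetic behind that layout, as a minimal sketch (drive capacities as reported by the controller; the function names are just for illustration):

```python
def raid5_usable(drives: int, drive_gb: float) -> float:
    """RAID 5 spends one drive's worth of capacity on parity per set."""
    return (drives - 1) * drive_gb

def raid0_usable(set_sizes) -> float:
    """RAID 0 stripes its members and adds no redundancy of its own."""
    return sum(set_sizes)

# Each "450 GB" SAS drive shows up as roughly 422.5 GB usable:
per_set = raid5_usable(4, 422.5)     # 1267.5 GB, matching the figure above
pool = raid0_usable([per_set] * 3)   # 3802.5 GB striped by Storage Spaces
```

Each RAID 5 set tolerates exactly one drive failure, so the striped pool survives one failure per set (three in total, as noted) but loses everything if any single set loses two drives.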
starczek
Posts: 13
Joined: Mon Mar 05, 2012 1:30 pm

Thu Oct 03, 2013 9:54 am

Are there any plans to include in V8 an option to change the L1 cache type and/or size on the fly?
Ironwolf
Posts: 59
Joined: Fri Jul 13, 2012 4:20 pm

Thu Oct 03, 2013 4:34 pm

My Experience with LSFS so far

If the partition that the LSFS files sit on is larger than 2 TB, only a 64k block size can be used, regardless of the size of the virtual drive, whether 1 GB or 500 GB; anything but a 64k block size will fail. I initially thought this was related to the cluster size used on the NTFS partition the files sit on, but that doesn't seem to be the case, as I had used 64k clusters; it is more directly related to partition size.

If I create a partition of 2 TB or smaller for the LSFS files to sit on, then 4k block sizes will work.

The L2 cache also fails if the virtual drive's block size is larger than 4k.

This makes drive management a bit hectic when enabling both LSFS and L2 caching: I would now need to carve a large RAID 60 into small 2 TB partitions.
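One hedged way to read that 2 TB boundary: the number of blocks LSFS has to track is the partition size divided by the block size, so a fixed internal block-count limit would force larger block sizes on larger partitions. That cause is pure speculation, but the arithmetic is easy to check:

```python
TIB = 2**40

def block_count(partition_bytes: int, block_size: int) -> int:
    """How many blocks a partition of this size holds at a given block size."""
    return partition_bytes // block_size

# A 2 TiB partition needs 16x more block entries at 4 KiB than at 64 KiB:
small_blocks = block_count(2 * TIB, 4 * 1024)   # 536,870,912 (2**29)
large_blocks = block_count(2 * TIB, 64 * 1024)  # 33,554,432 (2**25)
```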

Also, I noticed that LSFS suffers from the same problem dedup LUNs did: it will only use one CPU core for the entire processing load. Will this be changed in the future so that different LSFS LUNs use different cores? As it is now, I can saturate a single CPU core and only reach about 2 gigabits of throughput, with the RAID 60 nearly idle and the remaining CPU cores idle.
fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Thu Oct 03, 2013 10:59 pm

I also did a few more tests:

With the same Storage Spaces setup from above, I tried to create a thin LSFS disk, but it gave "mount failed" on every try, so I created a thick one instead and that works. I don't know why thin is not working.
The Storage Spaces volume is 4K.

Is there a way to enable options for thick provisioning, so we can test the block size and other options like auto-defrag, etc.?

Thanks
fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Thu Oct 03, 2013 11:06 pm

Is there any way to edit the properties of an LSFS image once it has been created?

and

Can I change an LSFS disk from thin to thick, or the other way around?

BTW: my Storage Spaces pool is formatted with a 4096-byte block size.
fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Thu Oct 03, 2013 11:20 pm

OK, regarding the post by Ironwolf » Thu Oct 03, 2013 11:34 am ("anything but a 64k block size will fail"): this works for thin provisioning.

What is the disadvantage of using 64k over 4k?


Update:
I got 8K and 16K to work, but 4K always fails with the error message "mount failed".
My storage server has 12 GB of memory; is this not enough for 4 TB at a 4K block size?
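It plausibly is not. If LSFS keeps an in-memory map entry per block (the 8 bytes per entry below is a guess, not a documented figure), a 4 TB device at a 4 KB block size needs far more metadata RAM than at 64 KB:

```python
def map_ram_gib(device_bytes: int, block_size: int, bytes_per_entry: int = 8) -> float:
    """Rough RAM for a hypothetical per-block map; the entry size is an assumption."""
    return device_bytes // block_size * bytes_per_entry / 2**30

FOUR_TB = 4 * 2**40
ram_4k = map_ram_gib(FOUR_TB, 4 * 1024)    # 8.0 GiB for the map alone
ram_64k = map_ram_gib(FOUR_TB, 64 * 1024)  # 0.5 GiB
```

Under that assumption, the 4 KB map alone would consume two thirds of a 12 GB server before caches and the OS get anything, which would be consistent with the mount failing only at the smallest block size.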
RaymondTH
Posts: 3
Joined: Sat Oct 05, 2013 6:08 pm

Sat Oct 05, 2013 6:18 pm

Loving v8. One problem though: in the new replication manager I don't see any way to define the name of the resulting target.

For example, say I want to deploy a DHCP cluster. On StarWind server 1 I create the DHCP quorum and DHCP targets/disks; then, when I enable replication of these disks onto the second StarWind server, it creates a generic target name like lvdsreplica1 or something like that.

This is a pain when I'm connecting the nodes to the targets, and it quickly becomes confusing with multiple targets on the StarWind servers.

Is there any way to specify the replicated target name?
Ironwolf
Posts: 59
Joined: Fri Jul 13, 2012 4:20 pm

Sun Oct 06, 2013 3:43 pm

Thanks for the update, fbifido (the smaller the block size, the better the dedup).
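Ironwolf's parenthetical is the key trade-off: deduplication only matches whole blocks, so a coarser block size finds fewer duplicates. A toy hash-based illustration, not StarWind's actual implementation:

```python
import hashlib

def dedup_ratio(data: bytes, block_size: int) -> float:
    """Fraction of blocks eliminated by exact-match dedup at a given granularity."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return 1 - len(unique) / len(blocks)

# Two near-identical 64 KiB regions differing in a single byte:
region = bytes(64 * 1024)
changed = bytearray(region)
changed[0] = 1
data = region + bytes(changed)
```

At a 4 KiB granularity almost every block in this sample dedupes away; at 64 KiB the one-byte change makes the whole second region unique and nothing dedupes.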

I just confirmed it... LSFS thin with 4k blocks can only be used on partitions of 2 TB or less, and...

it seems the other sizes (8k, 16k, 32k...) depend on the underlying NTFS cluster size.

The drive that I had formatted with NTFS at 64k clusters was reformatted with 16k clusters, and now I can use 16k and up, but I cannot use 4k or 8k. My only reason to use a smaller size was that I was trying to use the L2 cache, thinking it required 4k or something, since none of the parameters for the L2 cache are settable. I have yet to get a single virtual drive to work with it, regardless of the parameters or locations I use; every attempt failed.
(Attachment: failed.PNG)
Alex (staff)
Staff
Posts: 177
Joined: Sat Jun 26, 2004 8:49 am

Mon Oct 07, 2013 8:56 am

Ironwolf,
thank you for the feedback!
There is an issue with the L2 cache settings; it will be fixed in the next update of the StarWind v8 beta.
Best regards,
Alexey.
Omnividente
Posts: 16
Joined: Tue Oct 01, 2013 11:40 am

Tue Oct 08, 2013 9:55 am

When will the new beta be released? And will loopback acceleration on Windows Server 2012 R2 be fixed?