StarWind iSCSI SAN Version 8.0 RC

Public beta (bugs, reports, suggestions, features and requests)


fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Mon Mar 03, 2014 2:57 pm

CyanPixie wrote: Is there any way for me to access the Beta-3 installer again? I have updated to the RC and cannot mount my LSFS device; I would like to get some data off it and then upgrade again.

Thanks,
Brad.
https://www.mediafire.com/?a97ggj15xbidie5 (v8.0.6383)

Not sure if this was the last Beta-3 build.
Please let me know if the recovery works!
Alex (staff)
Staff
Posts: 177
Joined: Sat Jun 26, 2004 8:49 am

Mon Mar 03, 2014 5:22 pm

Brad, sure.
We have uploaded the previous build to the web site:
http://starwindsoftware.com/tmplink/Sta ... 140206.exe
Best regards,
Alexey.
CyanPixie
Posts: 2
Joined: Mon Jan 13, 2014 1:15 am

Tue Mar 04, 2014 1:56 am

Thank you both. I have downloaded the installer and recovered my data. :D I am copying it over to a thick disk and will then upgrade to the RC.

Thanks for the awesome software and support, StarWind! 8)

Cheers,
Brad.
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Tue Mar 04, 2014 1:37 pm

Thank you for the kind feedback! :D

Please do not hesitate to contact us should you have any additional questions. It would be our pleasure to further assist you!
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Thu Mar 20, 2014 6:07 pm

lohelle wrote: A little follow-up

I created a 100GB LSFS LUN and created a VMFS datastore on it. I then created a 50GB thick-provisioned (lazy-zeroed) disk for a VM on that datastore.
This is not using any space on the SW-box, but it uses 50GB on the vSphere datastore (as expected).

I copied 40GB of data to the VM disk. I see many 1GB spspx files on the SW-box (as expected).

Now I deleted all the data and ran sdelete. I saw almost no data transfer (NIC) during sdelete (as it is only zero data), and all the spspx files went away, as I hoped they would.

So it seems that things work as they should.
Can you try with a 50GB thin-provisioned disk?
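For reference, a minimal sketch of the reclaim step described in the quote above, assuming Sysinternals sdelete is on the path and the guest data disk is E: (the drive letter is just an example, not taken from the thread):

# Zero out the free space on the guest disk after deleting the files;
# the LSFS device on the SW-box can then drop the now-all-zero blocks.
sdelete.exe -z E: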
fbifido
Posts: 125
Joined: Thu Sep 05, 2013 7:33 am

Thu Mar 20, 2014 6:14 pm

lohelle wrote: My observations testing the v8 RC

- UNMAP works! (vSphere 5.5). I cloned/migrated 10 VMs to LSFS volumes (with and without dedupe), and when deleting or migrating VMs away I was able to unmap using this command on the vSphere host:
esxcli storage vmfs unmap -l "datastore name"
After all data was deleted from the datastore + unmap, only a single 1GB spspx file remained on the SW-box (VMFS metadata).
Can you say how long the unmap took to complete?
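As a side note, here is how the unmap command quoted above is typically run on an ESXi 5.5 host; the datastore name and reclaim-unit value below are example values only, not taken from the thread:

# Reclaim unused VMFS blocks on the LSFS-backed datastore.
# -l is the datastore label; -n is the number of VMFS blocks
# reclaimed per iteration (defaults to 200 if omitted).
esxcli storage vmfs unmap -l "LSFS-datastore" -n 200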
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Thu Mar 20, 2014 6:19 pm

The unmap took only a few seconds if I remember right.
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Thu Mar 20, 2014 6:23 pm

fbifido wrote:
lohelle wrote: A little follow-up

I created a 100GB LSFS LUN and created a VMFS datastore on it. I then created a 50GB thick-provisioned (lazy-zeroed) disk for a VM on that datastore.
This is not using any space on the SW-box, but it uses 50GB on the vSphere datastore (as expected).

I copied 40GB of data to the VM disk. I see many 1GB spspx files on the SW-box (as expected).

Now I deleted all the data and ran sdelete. I saw almost no data transfer (NIC) during sdelete (as it is only zero data), and all the spspx files went away, as I hoped they would.

So it seems that things work as they should.
Can you try with a 50GB thin-provisioned disk?
I do not have this setup available right now, but I would expect the same results on the StarWind side. The VMDK would probably not shrink on the VMFS datastore automatically, but that is not StarWind's fault.
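For completeness, one way the thin VMDK itself can be shrunk after zeroing from inside the guest is the vmkfstools punch-zero option on the ESXi host; the path below is purely illustrative and the VM has to be powered off first:

# Deallocate all zeroed blocks from a thin-provisioned VMDK
# (run on the ESXi host while the VM is powered off).
vmkfstools -K /vmfs/volumes/datastore-name/testvm/testvm.vmdk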
tialen
Posts: 6
Joined: Thu Mar 20, 2014 8:57 pm

Thu Mar 20, 2014 9:04 pm

Hello, I have just installed the latest build of StarWind from your website. This is a new install that we are testing for a potential purchase.
3 servers, each with:
Dual-processor Xeon
Server 2012 R2
64GB RAM
Dual SSDs in RAID 1 for the OS
5 SSDs in RAID 0 for the StarWind HA server cluster (2.1TB)

My question is about how to provision the data array. Do I format the drive with NTFS, and if so, what is the suggested block size (allocation unit size) for use with LSFS volumes? Any particular recommendations on the disk setup (MBR, dynamic disk, NTFS, ReFS, allocation unit size, etc.)?

Thanks
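In the absence of an official answer in this thread, here is a minimal sketch of how such a data array might be prepared on Server 2012 R2; GPT, NTFS and a 64KB allocation unit size are my own assumptions for a large dedicated data volume, not a StarWind recommendation, and the disk number and drive letter are examples only:

# Identify the RAID 0 array (disk number 1 is an example).
Get-Disk

# Bring the disk online as GPT and create a single full-size partition.
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter S

# Format with NTFS; the 64KB allocation unit size is an assumption, not a vendor recommendation.
Format-Volume -DriveLetter S -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "StarWindData"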
tialen
Posts: 6
Joined: Thu Mar 20, 2014 8:57 pm

Thu Mar 20, 2014 9:50 pm

So, as my previous post indicated, I have just installed this software.

I set up a cluster and created a clustered device.
There are no LSFS options for clustered devices? (i.e. no HA for your new LSFS?)
There are no thin-provisioning options for clustered devices? (no thin provisioning when using an HA configuration?)
There are no dedupe options?

I guess I'm wondering what the point of v8 even is if you can't use any of the new features the way you would in a real production environment, i.e. with HA.

During my investigation the console crashed a couple of times; I have sent you a link to the crash-dump file.
tialen
Posts: 6
Joined: Thu Mar 20, 2014 8:57 pm

Thu Mar 20, 2014 10:33 pm

Hi, I'm also curious about the purpose of the .swdsk file that is created with an LSFS storage device. The file isn't locked, so it can be deleted (whereas the .spsp and .spspx files are locked). What are the ramifications if the .swdsk file were deleted from the disk subsystem?
Anatoly (staff)
Staff
Posts: 1675
Joined: Tue Mar 01, 2011 8:28 am

Mon Mar 24, 2014 11:58 am

We appreciate your interest in our product.

Thank you for the logs and dumps that you've sent. Our R&D team confirmed that a fix for your issue will be included in the next public build.

I'd also like to answer some of your questions, if you don't mind:
- No HA for your new LSFS?
The ability to create an LSFS-based HA device is present in the current public build: create a stand-alone LSFS device and then add the partner to it.
- There are no thin-provisioning options for clustered devices? (no thin provisioning when using an HA configuration?)
As in the previous answer, create a stand-alone LSFS device and then add a synchronous replica to it.
- There are no dedupe options?
For now you can only enable/disable deduplication. All the service features, such as defragmentation and UNMAP support, are enabled by default. The deduplication block size is currently set to 4KB.

I hope that helps.
Best regards,
Anatoly Vilchinsky
Global Engineering and Support Manager
www.starwind.com
av@starwind.com
tialen
Posts: 6
Joined: Thu Mar 20, 2014 8:57 pm

Mon Mar 24, 2014 4:36 pm

Thank you for the response; that is how I ended up creating the HA LSFS volume, though I wasn't sure whether all of the active/active performance features would be enabled in that case.

Speaking of performance, what I'm seeing so far is pretty disappointing.
I have two of the above-mentioned servers set up in a StarWind cluster (server1 and server2), using a 10G link for the back-end sync connection.

I have two 10G connections from server3 to server1.
I have two 10G connections from server3 to server2.

On server3 I have three drives set up:
Drive D:\ consists of the 5 internal SSDs in a RAID 0.
Drive E:\ is 80GB and is an LSFS volume on the StarWind cluster (created on one node, then the replication partner was added); MPIO is configured to use the 4 paths to the storage (two 10G paths per server).
Drive F:\ is 80GB and is a volume on the StarWind cluster created in the cluster configuration section, so it is a standard StarWind volume; MPIO is configured to use the 4 paths to the storage (two 10G paths per server).
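As an aside, a minimal sketch of how MPIO for iSCSI is commonly enabled on Server 2012 R2 with a round-robin default policy; this reflects a typical setup using the built-in Microsoft DSM and is my assumption about the configuration, not taken from the thread:

# Install the MPIO feature and let the Microsoft DSM claim iSCSI devices.
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Use round-robin as the default load-balance policy across all paths.
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Verify the claimed devices and their paths.
mpclaim -s -d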

Performance between the three volumes is quite different (FYI, the test file is 40GB):

PS C:\SQLIO> .\SQLIO.EXE -s10 -kR -frandom -b8 -t8 -o16 -LS -BN d:\TestFile.DAT
sqlio v1.5.SG
using system counter for latency timings, 2343747 counts per second
8 threads reading for 10 secs from file d:\TestFile.DAT
using 8KB random IOs
enabling multiple I/Os per thread with 16 outstanding
buffering set to not use file nor disk caches (as is SQL Server)
using current size: 40960 MB for file: d:\TestFile.DAT
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 125416.07
MBs/sec: 979.81
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 2
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 35 65 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0


PS C:\SQLIO> .\SQLIO.EXE -s10 -kR -frandom -b8 -t8 -o16 -LS -BN e:\TestFile.DAT
sqlio v1.5.SG
using system counter for latency timings, 2343747 counts per second
8 threads reading for 10 secs from file e:\TestFile.DAT
using 8KB random IOs
enabling multiple I/Os per thread with 16 outstanding
buffering set to not use file nor disk caches (as is SQL Server)
using current size: 40960 MB for file: e:\TestFile.DAT
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 18754.76
MBs/sec: 146.52
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 6
Max_Latency(ms): 17
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 50 0 0 0 0 0 0 0 0 0 0 0 25 17 5 2 0 0 0 0 0 0 0 0 0


PS C:\SQLIO> .\SQLIO.EXE -s10 -kR -frandom -b8 -t8 -o16 -LS -BN f:\TestFile.DAT
sqlio v1.5.SG
using system counter for latency timings, 2343747 counts per second
8 threads reading for 10 secs from file f:\TestFile.DAT
using 8KB random IOs
enabling multiple I/Os per thread with 16 outstanding
buffering set to not use file nor disk caches (as is SQL Server)
using current size: 40960 MB for file: f:\TestFile.DAT
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 44436.80
MBs/sec: 347.16
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 2
Max_Latency(ms): 8
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 28 14 13 12 12 14 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
PS C:\SQLIO>
Bohdan (staff)
Staff
Posts: 435
Joined: Wed May 23, 2007 12:58 pm

Mon Mar 24, 2014 5:24 pm

What is the performance, with the same test specification, of the underlying storage on server1 and server2? I mean the disks on which the LSFS volumes are stored.
tialen
Posts: 6
Joined: Thu Mar 20, 2014 8:57 pm

Mon Mar 24, 2014 8:50 pm

That would be example #1 above, the performance of the D: drive. (If you look a few posts up in the thread, you can see that all the servers are identical in spec.)

Thanks