StarWind iSCSI SAN
StarWind Native SAN for Hyper-V
 

StarWind iSCSI SAN V5.4 Build 20100621

Public beta (bugs, reports, suggestions, features and requests)

Moderators: art (staff), anton (staff), Anatoly (staff), Max (staff)

StarWind iSCSI SAN V5.4 Build 20100621

Postby Bohdan (staff) » Wed Jun 23, 2010 10:41 am

Dear Beta-Team members,

Currently we are working on the new StarWind V5.4.
The beta version is available for download from the following link:

http://www.starwindsoftware.com/beta/St ... 100621.exe

New features and improvements:

High Availability: Caching added. Write-Through and Write-Back modes are supported.

High Availability: Autosync functionality added. When an HA node becomes active, it is synchronized and brought online automatically if the autosync option has been set.

IBVolume: The maximum device size limit of 2 TB has been removed.

Installation: A Windows Firewall allow rule for the StarWind service is now added during installation.
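The difference between the two new cache modes can be sketched as follows. This is a toy model of the general Write-Through / Write-Back policies only, not StarWind's actual implementation:

```python
# Illustrative model of Write-Through vs. Write-Back caching.
# NOT StarWind's code -- just the two policies in miniature.

class CachedDisk:
    def __init__(self, write_back=False):
        self.backing = {}      # simulated on-disk blocks
        self.cache = {}        # in-memory cache
        self.dirty = set()     # blocks not yet flushed (write-back only)
        self.write_back = write_back

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            self.dirty.add(block)       # flushed later: faster, but at risk on power loss
        else:
            self.backing[block] = data  # write-through: hits the disk immediately

    def flush(self):
        for block in self.dirty:
            self.backing[block] = self.cache[block]
        self.dirty.clear()

wt = CachedDisk(write_back=False)
wt.write(0, b"abc")
print(0 in wt.backing)   # True: the data is on disk right away

wb = CachedDisk(write_back=True)
wb.write(0, b"abc")
print(0 in wb.backing)   # False: still only cached
wb.flush()
print(0 in wb.backing)   # True after the flush
```

The trade-off is the usual one: Write-Back acknowledges writes before they reach stable storage, so it is faster but depends on a flush happening before any failure.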

Fixed bugs:

Core: Responses to SendTargets=All can now be larger than 8 KB, up to the initiator's MaxRecvDataSegmentLength parameter.
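The effect of this fix can be illustrated: a long SendTargets=All target list is split across multiple text-response data segments, each no larger than the initiator's negotiated MaxRecvDataSegmentLength, instead of being cut off at 8 KB. A simplified sketch (not StarWind's code):

```python
def split_sendtargets_response(response: bytes, max_recv_data_segment_length: int):
    """Split an iSCSI text response into data segments no larger than
    the initiator's MaxRecvDataSegmentLength."""
    return [response[i:i + max_recv_data_segment_length]
            for i in range(0, len(response), max_recv_data_segment_length)]

# A target list well past the old 8 KB limit (hypothetical IQNs):
targets = b"".join(b"TargetName=iqn.2010-06.com.example:target%d\x00" % i
                   for i in range(500))
pdus = split_sendtargets_response(targets, 8192)
print(len(targets), len(pdus))  # the full response now spans several PDUs
```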


Known issues:

High Availability: Errors migrating a Hyper-V virtual machine on an HA node if the StarWind service restarts during the migration process.

IBVolume: Errors with snapshots for GPT disks.


We are very interested in your input on HA device and IBV (Snapshots and CDP) device test results.
Please do not hesitate to contact us should you have any questions.


Let us remind you that StarWind has started an open beta testing program, and the most active beta testers will win monthly prizes from StarWind: iPads and $100 Amazon.com coupons. The names of the first winners will be published on our beta forum page at the end of July.
Bohdan (staff)
Staff
 
Posts: 437
Joined: Wed May 23, 2007 12:58 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby sls » Thu Jul 01, 2010 5:59 pm

I just upgraded the software from 5.3 to the 5.4 beta. Here are a couple of issues that I see and would like to see improved.

1. The HA targets that I created on 5.3 show an exclamation mark and the status "not synced" after the upgrade. I have no idea how to upgrade the software without breaking the synchronization status on both nodes. I shut down the service on the partner node, upgraded the partner node first, and then did the same thing on the primary node. After both nodes were upgraded, all HA targets were out of sync again. When I try to use the fast sync option, it tells me the partner is not synced. What is the point of giving me a fast sync option then? Please note, this is a test environment and no initiators are connected to any of the StarWind targets yet. In other words, no data changed in the .img files during the upgrade process. Why do the targets go out of sync again and require a full sync? A full sync on 12 terabytes of storage took almost 2 weeks. :?

2. The speed of target synchronization needs to improve. Two weeks for a full sync of a 2 terabyte target is too slow. I can copy the .img file from one node to the other in a couple of hours. When I created the first HA target, I noticed the network utilization on the gigabit interface was only 1.5%. When I added the other 5 HA targets, the interface was only 14% utilized. It seems the StarWind HA sync process uses only about 1.5% of the interface bandwidth per target, and I think this is why the full sync takes so long. I was planning to add more network cards to load-balance the replication interface, but in that case adding more cards is not going to improve the sync speed. I've mentioned this issue in another post: starwind-f5/initial-full-sync-speed-t2134.html.

3. I can't change the cache mode on the HA targets that I created in 5.3. The 5.3 targets have no cache option, and after the upgrade I don't see an option to enable the cache on the existing targets. I can create a new target with the cache option. Please add the ability to adjust target parameters after a target has been created.

4. I'm still not seeing the LUN ID management feature in this version. This feature is very important in larger-scale deployments. I've asked about this in another post: starwind-f5/lun-management-t2135.html.
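The numbers in point 2 are internally consistent: at 1.5% utilization of a gigabit link, a full sync of a 2 TB image really does take about two weeks. A back-of-the-envelope check (my arithmetic, not StarWind's figures):

```python
link_bps = 1_000_000_000         # gigabit replication link
utilization = 0.015              # ~1.5% as observed in Task Manager
image_bytes = 2 * 1024**4        # one 2 TB .img file

bytes_per_sec = link_bps / 8 * utilization   # ~1.9 MB/s effective throughput
days = image_bytes / bytes_per_sec / 86400
print(f"full sync would take about {days:.1f} days")  # ~13.6 days
```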

sls
 
Posts: 15
Joined: Fri Jun 18, 2010 6:16 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby Bohdan (staff) » Fri Jul 02, 2010 9:15 am

1. The 5.4 beta contains a truncated Information section. We are sorry about that. The installation procedure is the same as for version 5.3.5.

Installation notes
Install: Previous versions can be updated by installing this version over existing installation.

If you are upgrading from version 5.3, HA devices can be upgraded without taking the storage offline:
- Update StarWind on the first HA node.
- Perform synchronization for the HA devices and wait for the synchronization process to finish.
- Update StarWind on the second HA node.
- Perform synchronization for the HA devices.

Warning: existing HA devices from version 5.2 and older cannot be upgraded to this version without taking the storage offline.
To update existing HA devices from a previous version, please follow the steps below:
- Take the storage offline (disconnect both HA nodes from the client).
- Create backups of the data files and the StarWind config file (starwind.cfg).
- Update StarWind on both HA nodes.
- Perform a full synchronization for the HA devices.
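The backup step above can be scripted. A generic sketch follows; the paths in the commented-out call are hypothetical examples, and this helper is not something StarWind ships:

```python
import shutil
from pathlib import Path

def backup_files(files, backup_dir):
    """Copy the listed files (HA .img data files, starwind.cfg) into backup_dir."""
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    # copy2 preserves timestamps along with the file contents
    return [shutil.copy2(f, dest / Path(f).name) for f in files]

# Hypothetical example paths -- adjust to your own installation:
# backup_files([r"D:\Storage\ha1.img",
#               r"C:\Program Files\StarWind\starwind.cfg"],
#              r"E:\starwind-backup")
```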

As for the synchronization requirement, you are not the first person to ask us about that... While the StarWind service is running, it locks the image files, so we know that no other service or application can change them. But if StarWind was stopped, we cannot guarantee that the image file data was not changed. In one of the next StarWind releases we are planning to implement an advanced option to change the device status and avoid synchronization.
For now, the only way to avoid synchronization, if you are sure about your data consistency, is to re-create the HA device with the "Do not synchronize virtual disks" initialization method.

2. We do not observe this in our test lab. What network adapters do you use? Please try installing the latest drivers and apply the TCP/IP stack settings optimized for iSCSI: starwind-f5/tcp-stack-optimized-iscsi-settings-t792.html

3. OK.

4. It is not ready for beta testing yet...
Bohdan (staff)
Staff
 
Posts: 437
Joined: Wed May 23, 2007 12:58 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby sls » Tue Jul 06, 2010 11:03 pm

Another recommendation:
When creating a target and selecting the size of the disk image file, can you add an option to let the user choose GB or TB? The current option is always MB. I think GB is more commonly used for storage sizes now. I have to do the MB-to-TB calculation every time I create a new target. :wink:
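Until a unit selector exists, the conversion the wizard forces you to do by hand is just a power-of-1024 multiplication. A trivial helper (mine, not part of StarWind):

```python
UNITS = {"MB": 1, "GB": 1024, "TB": 1024 * 1024}

def size_in_mb(value: float, unit: str) -> int:
    """Convert a GB/TB size to the MB figure the creation wizard expects."""
    return int(value * UNITS[unit.upper()])

print(size_in_mb(2, "TB"))    # 2097152 MB
print(size_in_mb(500, "GB"))  # 512000 MB
```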
sls
 
Posts: 15
Joined: Fri Jun 18, 2010 6:16 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby anton (staff) » Mon Jul 12, 2010 8:30 pm

That's a good suggestion! I think we'll add it... Thank you for pointing it out!

Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3922
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby 7121ipbl » Tue Aug 03, 2010 12:59 pm

I would also like to request the same feature: being able to use GB or TB when specifying a drive size.

Alex
7121ipbl
 
Posts: 2
Joined: Tue Mar 23, 2010 5:56 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby Bohdan (staff) » Tue Aug 03, 2010 1:27 pm

Thanks for your post.
We'll add it in one of the next StarWind releases.
Bohdan (staff)
Staff
 
Posts: 437
Joined: Wed May 23, 2007 12:58 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby jimbul » Tue Mar 01, 2011 10:48 am

Hey there,

We absolutely have to be able to recover from a full shutdown in a more graceful manner than is possible at present.

Please, please include some form of shutdown option or automation for this in the future.

We need HA, but at the single-site level, and we have no power protection beyond a UPS.

Looking forward to this being included - any idea if/when?

Thanks,

Jim
jimbul
 
Posts: 23
Joined: Tue Mar 01, 2011 10:35 am

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby anton (staff) » Tue Mar 01, 2011 12:29 pm

What would "recovering from a full shutdown in a more graceful manner" look like to you? I mean, what exactly would you like us to add / fix / replace? We'll release journaling with V5.7, so it will be clear which node went down last and whose data is more up to date. Also, the sync process should not take that many resources, so connecting to a single active HA node will not fail. Any other ideas on how to make our customers happy?

Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3922
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby kmax » Wed Mar 02, 2011 12:03 am

anton (staff) wrote:What would "recovering from a full shutdown in a more graceful manner" look like to you? I mean, what exactly would you like us to add / fix / replace? We'll release journaling with V5.7, so it will be clear which node went down last and whose data is more up to date. Also, the sync process should not take that many resources, so connecting to a single active HA node will not fail. Any other ideas on how to make our customers happy?


I'll give my thoughts. In a perfect world we would have a generator of some sort to keep both nodes up. But things aren't perfect so here are my thoughts...

1) It sounds like 5.7 offers improvements in throttling syncs.
2) It also sounds like you are doing journaling to determine which node failed last.

Granted, a lot would have to go wrong for things to become unusable, but I always think of the worst case when it comes to disaster recovery.

What I would like to see is the following:

1) It seems to me you would need a "master clock" for establishing timestamps when journal entries are recorded. So, in a 2-node HA environment, the slave's clock would get its time from the master. This is crucial in determining which node is in sync.

2) If we really know which node has the valid copy of the data (based on #1 above), then when a resync of targets occurs I would like all targets on the "master" node to be marked as synchronized so iSCSI initiators can connect right away. Meanwhile, a throttled resync can take place in the background to at least make a virtualized environment usable while it fast-syncs. Bringing up the various VMs in a cluster is crucial, and waiting for a sync is a bad thing.

3) I know this has been talked about in regard to multiple channels, but the sync channel is quite important, and it is imperative to have a backup link so we don't get a split-brain problem. Consider the case where I have a dual-port CX4 network card in each node: if it fails, I hope the node will stop listening on both the public iSCSI IP address and the sync channel. The ability to fall back to a built-in gigabit Ethernet interface supplied by the motherboard would be good. Sure, it would be slower, but preventing corruption of the image files is the most important thing.
kmax
 
Posts: 47
Joined: Thu Nov 04, 2010 3:37 pm

Re: StarWind iSCSI SAN V5.4 Build 20100621

Postby anton (staff) » Wed Mar 02, 2011 12:04 pm

1) You need to synchronize both HA node clocks with some external server (both Windows and Linux can do this if you have not turned the feature off). We cannot put this (or any other) service on an HA node itself, as it would break the whole "fully symmetric" idea.

What we *can* do, however, is make sure the clocks on both (or, better said, ALL) HA nodes are in sync, and fire some kind of event if they are not. I'll add this to the schedule if you don't mind.

2) This is already implemented in V5.7, in both automatic and manual modes. As long as at least one HA node is up and running, the whole cluster is 100% usable, with all housekeeping (sync, check, etc.) done in the background.

3) We have not had the split-brain issue since V5.5, I believe. V5.5 has a heartbeat cluster implemented, and V5.7 will support redundant cross-link channels for performance and flexibility.
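The drift check described in point 1 could look something like the following. This is purely a hypothetical sketch; StarWind's node API and event mechanism are not public:

```python
def check_clock_drift(node_times, max_drift_sec=5.0):
    """Compare timestamps reported by all HA nodes and return the nodes
    whose clocks drift too far ahead of the earliest one, so an alert
    event can be fired for them."""
    reference = min(node_times.values())
    return {node: t - reference
            for node, t in node_times.items()
            if t - reference > max_drift_sec}

# Example: node2's clock is 12 seconds ahead of node1's
drifted = check_clock_drift({"node1": 1000.0, "node2": 1012.0})
print(drifted)  # {'node2': 12.0}
```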

Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
 
Posts: 3922
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

