Survey: Are you on v6 or v8?

epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Wed Nov 05, 2014 7:32 pm

Hi all,
I'm still stuck on v6, as I've been pretty hesitant to upgrade to v8 due to the constant issues that seem to keep popping up in the posts here.

I'm hoping we can get a tally of who's on v6 vs v8, and if you are on v8, what current issues you are seeing.

StarWind - it would be great if we could get a central post or other location that summarizes the currently verified issues in the most recent build. Many companies include this in their release notes, but considering how often issues pop up, it would be great if it were somewhere updated more frequently. I think having a central location would really help customers like me who are holding out on v6.

Thanks,
Elvis
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Wed Nov 05, 2014 8:32 pm

We are using v8 in HA mode, but with image files only; no thin provisioning etc.
No problems there, but we had some issues when trying LSFS. We're not using L2 SSD cache, as we are running on SSD drives only (Intel 800GB S3700).

The primary reason we moved to v8 is actually the VAAI support.
epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Wed Nov 05, 2014 8:44 pm

Thanks - Regarding VAAI, have you noticed the expected performance benefits?
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Wed Nov 05, 2014 9:05 pm

Not so much after going to 10 gigabit networking, but it offloads the host as well, not just the network links.
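A rough back-of-the-envelope in Python of what that host offload buys you: with VAAI XCOPY the array performs a clone internally instead of the host reading every block and writing it back. The 100 GB VM size and single 10 GbE path below are assumed figures for illustration, not measurements.

# Rough model of what VAAI XCOPY offload saves on a VM clone.
# All figures are assumptions for illustration, not measurements.
vm_size_gb = 100              # assumed size of the VM being cloned
link_gbit = 10                # assumed single 10 GbE path

# Without VAAI the host reads every block from the array and writes
# it back, so the data crosses the network twice.
software_copy_gb = vm_size_gb * 2

link_gb_per_s = link_gbit / 8                    # ~1.25 GB/s raw
busy_seconds = software_copy_gb / link_gb_per_s

print(f"software copy moves {software_copy_gb} GB over the wire,")
print(f"keeping a {link_gbit} GbE link busy for ~{busy_seconds:.0f} s")
print("VAAI-offloaded copy moves ~0 GB; the array copies internally")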
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Wed Nov 05, 2014 10:07 pm

It depends on what you actually do: how many VMs you run and how many you provision on a routine daily basis. If the number of VMs is not high and you don't create many new ones, VAAI (just like ODX) will pretty much never kick in...
lohelle wrote:Not so much after going to 10 gigabit networking, but it offloads the host as well, not just the network links.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

kspare
Posts: 60
Joined: Sun Sep 01, 2013 2:23 pm

Thu Nov 06, 2014 10:00 pm

We are running v8, but we're on a special beta build to test thick-disk performance. So far we've had great success; hopefully it will be released soon.

We split our SAN into 2 LUNs for our iSCSI VMs. We are able to receive traffic from 4 servers to the 2 LUNs and hit about 800 MB/s. It works well!
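For scale, a quick conversion of that figure, assuming the 800 MB/s is the combined total across all four servers:

# Quick conversion of the reported aggregate throughput.
# Assumes 800 MB/s is the combined total across all four servers.
aggregate_mb_s = 800
servers = 4
luns = 2

print(f"{aggregate_mb_s} MB/s is ~{aggregate_mb_s * 8 / 1000:.1f} Gbit/s in total")
print(f"~{aggregate_mb_s // servers} MB/s per server and "
      f"~{aggregate_mb_s // luns} MB/s per LUN on average")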
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Thu Nov 06, 2014 10:10 pm

Looking forward to testing this version (or rather the non-beta release based on this code).

We have reached about 2 GB/s (16 Gbit/s) using round-robin (RR) and dual-port 10 GbE NICs (Emulex), but that was 100% reads. I do not remember the number for writes, but I think it was about 600 MB/s.

The image files were stored on Intel S3700 800GB drives (8 x 800GB in RAID10) on each node of the HA pair.
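Those reads line up with the network rather than the disks. A quick sanity check, assuming two active 10 GbE paths with round-robin spreading I/O evenly:

# Sanity check: 2 GB/s of reads against 2 x 10 GbE of raw capacity.
ports = 2
port_gbit = 10
observed_gb_s = 2.0                      # reported read throughput

raw_gb_s = ports * port_gbit / 8         # 2.5 GB/s before any overhead
utilization = observed_gb_s / raw_gb_s

print(f"raw link capacity: {raw_gb_s:.2f} GB/s")
print(f"observed: {observed_gb_s} GB/s ({utilization:.0%} of line rate)")
# ~80% of line rate is plausible once iSCSI/TCP overhead is counted,
# which supports the idea that the NIC ports, not the SSDs, are the cap.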
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Nov 06, 2014 10:12 pm

Is this number (600 MB/s) a limit of the network or of the underlying storage layer?
lohelle wrote:Looking forward to testing this version (or rather the non-beta release based on this code).

We have reached about 2 GB/s (16 Gbit/s) using round-robin (RR) and dual-port 10 GbE NICs (Emulex), but that was 100% reads. I do not remember the number for writes, but I think it was about 600 MB/s.

The image files were stored on Intel S3700 800GB drives (8 x 800GB in RAID10) on each node of the HA pair.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

epalombizio
Posts: 67
Joined: Wed Oct 27, 2010 3:40 pm

Thu Nov 06, 2014 10:13 pm

Sounds great, gents. I'm using a simple HA setup with IMG files as well, so I feel better knowing others are running v8 with a similar setup. Thanks for the responses so far.
lohelle
Posts: 144
Joined: Sun Aug 28, 2011 2:04 pm

Thu Nov 06, 2014 10:19 pm

anton (staff) wrote:Is this number (600 MB/s) a limit of the network or of the underlying storage layer?
For us, HA has always been slower for writes than non-HA, but the performance is more than enough for us, so we have not done much to try to improve it.
Our workloads are mostly reads.

No part of our setup should cap out at 600 MB/s. I think the ESXi iSCSI client might be part of the problem, but I'm not sure.
We are using 2 x 10 Gb/s sync channels (not MPIO, but 2x sync in StarWind), and all arrays should handle at least 2 GB/s.

For reads, I think the limit actually is the two 10 GbE ports on the host. If we add one more dual-port NIC and also run RR across them, I think we might be able to do 3 GB/s or more.
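A minimal model of the HA write path, with figures that loosely follow the setup described above (they are assumptions, not measurements): a write only completes once it is committed locally and replicated to the partner node, so the ceiling is the slowest stage in that pipeline.

# Minimal model of the HA write path. Figures are assumptions that
# loosely follow the setup described above, not measurements.
local_array_mb_s = 2000      # what the local RAID10 of SSDs can absorb
sync_links_mb_s = 2 * 1250   # 2 x 10 Gb/s sync channels, raw
frontend_mb_s = 1250         # single 10 GbE front-end path from ESXi

# A write completes only after the local commit AND the partner commit,
# so the pipeline runs at the pace of its slowest stage.
ceiling_mb_s = min(local_array_mb_s, sync_links_mb_s, frontend_mb_s)
print(f"modelled write ceiling: {ceiling_mb_s} MB/s")

# Observing ~600 MB/s, well under every modelled stage, points at
# per-session initiator or protocol limits rather than raw capacity.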
Last edited by lohelle on Thu Nov 06, 2014 10:24 pm, edited 1 time in total.
anton (staff)
Site Admin
Posts: 4010
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Thu Nov 06, 2014 10:20 pm

Writes are limited by the performance of the backbone (sync) network, which is something you never see with local deployments or in non-HA mode.
lohelle wrote:
anton (staff) wrote:Is this number (600 MB/s) a limit of the network or of the underlying storage layer?
For us, HA has always been slower for writes than non-HA, but the performance is more than enough for us, so we have not done much to try to improve it.
Our workloads are mostly reads.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
