Cache on hardware controller VS StarWind cache

Software-based VM-centric and flash-friendly VM storage + free version

Moderators: anton (staff), art (staff), Max (staff), Anatoly (staff)

anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands
Contact:

Mon Oct 27, 2014 5:15 pm

There can be many reasons you see performance degradation you should not really see. To start with: are you using FLAT or LSFS containers?

Also, please read about why file copy is not a good performance test. See:

http://blogs.technet.com/b/josebda/arch ... stead.aspx
adamrose045 wrote:We have some concerns, both of which relate to starwind write-back caching, on the disk performance penalty introduced by the SW SAN and shutdown scenarios requiring manual intervention.

1) With SW write-back cache enabled (on a 4-drive RAID 10 array on a Perc 6i with 512MB BBU), our large 24GB file copy write test to the SAN consistently starts out at 130 MB/sec but drops to under 50 MB/sec about halfway through (presumably when the write-back cache is flushed). If we perform the same large file copy write on the native disk, it maintains about 100 MB/sec all the way through. File copy read tests with or without the SAN are each about 100 MB/sec. So it seems like the StarWind SAN is introducing about a 50% write performance penalty vs. native disk - is this normal?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Kayakero
Posts: 6
Joined: Fri Mar 17, 2017 4:34 pm

Fri Mar 17, 2017 11:02 pm

anton (staff) wrote: Law is simple: One node -> Write-Thru, many nodes -> Write-Back.
Let me see if I get this write-back cache policy right.
Suppose a 2-node hyperconverged scenario. I'm somewhat worried about blackouts in combination with a write-back policy.
Are you saying that no matter whether write-back cache is enabled, or how much memory is allocated to it, there will be no problems whatsoever as long as one node stays up?
One server can be forcefully unplugged as long as the other one is up, and vice versa?
Does that mean that no matter which server or which path an application writes through (be it the local loopback or the opposite server's iSCSI IP), there is no ACK to the application's write command until ALL servers have that data in their local write-back memory?
That is, receiving the data, synchronizing it through the sync network, and receiving the opposite server's ACK?
Ivan (staff)
Staff
Posts: 172
Joined: Thu Mar 09, 2017 6:30 pm

Thu Mar 30, 2017 9:29 pm

Hello Kayakero,
Let's imagine that you have L1 WB cache configured on an HA device and one node goes down.
The other StarWind node will detect that its partner is gone, flush the cache to the underlying storage, and continue working without cache, by design. At the same time, the targets on the failed node become unavailable, so applications cannot write to it. This means that even if the second node goes down afterwards, the data in the cache will not be lost.
As for the ACK: it will come only from the remaining node, while in a synchronized state it is returned only after the data has been written to both nodes.
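The ACK rule described above can be illustrated with a small sketch. This is not StarWind's actual code, just a hedged model of the behavior stated in this thread: in a synchronized 2-node HA setup, a write is acknowledged only after every live partner holds the data in its write-back cache, and when a partner drops out, the survivor flushes its cache and writes through to disk.

```python
# Illustrative model only; class/function names are invented for this sketch.

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.cache = []   # in-memory write-back cache
        self.disk = []    # underlying storage

    def accept_write(self, block):
        self.cache.append(block)   # data lands in the WB cache first

    def flush(self):
        self.disk.extend(self.cache)   # destage cached data to disk
        self.cache.clear()

def ha_write(nodes, block):
    """Return True (ACK) only once every live node holds the block."""
    live = [n for n in nodes if n.alive]
    if len(live) < len(nodes):
        # Partner gone: survivor flushes its cache and runs write-through.
        for n in live:
            n.flush()
            n.disk.append(block)
        return bool(live)
    # Synchronized state: replicate to all nodes (local write plus the
    # copy sent over the sync channel) before acknowledging.
    for n in live:
        n.accept_write(block)
    return True   # ACK only after every node has the data

a, b = Node("node-a"), Node("node-b")
ha_write([a, b], "blk1")    # ACK after both caches hold blk1
b.alive = False
ha_write([a, b], "blk2")    # node-a flushes, then writes through
```

After the second call, node-a's cache is empty and both blocks are on its disk, matching the "flush and continue without cache" behavior described above.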
mrjaco
Posts: 1
Joined: Tue Oct 01, 2019 11:03 am

Tue Oct 01, 2019 11:04 am

Good day, everyone.

I use a two-node failover cluster with StarWind Native SAN for Hyper-V Free Edition. The servers have onboard SAS controllers (HP Smart Array P420) with 1GB of cache RAM, and no other tasks run on those servers.

With that configuration, do I need to enable StarWind cache, or is the HW cache enough?
Boris (staff)
Staff
Posts: 805
Joined: Fri Jul 28, 2017 8:18 am

Tue Oct 01, 2019 1:04 pm

It depends on your expectations of the system. If your question is whether it is possible to use both at the same time, the answer is "yes".