High Performance Storage

anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Wed Apr 18, 2012 7:57 am

You can modify the configuration file to enable caching for any type of device.
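For illustration only, the kind of per-device cache setting to look for would be something like the sketch below; the attribute names here are assumptions, not the exact StarWind.cfg syntax, so check the documentation for your build before editing anything:

    <!-- Hypothetical sketch: the real attribute names in StarWind.cfg may differ. -->
    <!-- The idea is a per-device cache mode (write-back) and a cache size in MB. -->
    <device name="imagefile1" file="My Computer\D\disk1.img"
            cache="wb" cacheSizeMB="1024"/>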

So I assume no cache, right?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

TFF_Travis
Posts: 2
Joined: Sun Apr 22, 2012 9:29 am

Sun Apr 22, 2012 9:42 am

I'm having the same issue: the StarWind RAM Disk software (non-iSCSI) is very fast at 2+ GB/s, but a RAM-based iSCSI target on localhost is very slow. This is on a clean 12-core Windows Server 2008 R2 machine with 128 GB RAM and no other activity.

StarWind RAM Disk software (non-iSCSI):
RamDiskSoftware.png

RAM-based iSCSI target on localhost:
RamDiskISCSILocalhost.png
Travis
TFF_Travis
Posts: 2
Joined: Sun Apr 22, 2012 9:29 am

Sun Apr 22, 2012 9:52 am

Also, Windows Server 2008 R2 will not allow write caching on either the StarWind RAM Disk software or the iSCSI RAM disk.

When I try to enable write caching and hit OK on the first dialog box shown, an alert comes up that says
"Windows could not change . . . ". The same thing happens if I also tick the second checkbox, "Turn off Windows write-cache buffer . . . "

It seems nothing I do gets this iSCSI RAM disk anywhere near the speed of the StarWind RAM Disk software.
writeCaching.png
Travis
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun Apr 22, 2012 11:43 am

1) Thank you for the confirmation! We're aware of this situation and are working on custom memory management for the cache layer and the built-in RAM disk emulator.

2) The inability to turn ON caching for RAM-based media is a FEATURE and not a BUG :) You don't want to add an extra path that only increases latency, do you?
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Klas
Posts: 13
Joined: Mon Aug 27, 2012 10:49 am

Fri Apr 12, 2013 1:06 pm

We are having the same problem with extremely slow performance on a RAM disk: about 200-300 MB/s in ATTO benchmark at 4-8 MB transfer sizes on localhost.

Our system is an old HP DL380 G5 with 22 GB of RAM, 10 GB of which is dedicated to the RAM disk, and 8 x 3 GHz cores, running Windows Server 2012 Core.

This old server is used as a testing rig; it started as a lab to test whether a huge write-back cache pays off.
Bohdan (staff) wrote: Did you check the "Large drive (AWE mode)" option?
How was the RAM disk connected to the computer? Locally (127.0.0.1)?
What is AWE mode? And how do we change that?
anton (staff) wrote: 1) Thank you for the confirmation! We're aware of this situation and are working on custom memory management for the cache layer and the built-in RAM disk emulator.
I don't fully understand what you wrote. Are you saying that you acknowledge these symptoms of slow performance on RAM drives, and if so, when will you fix them?
Bohdan (staff)
Staff
Posts: 435
Joined: Wed May 23, 2007 12:58 pm

Fri Apr 12, 2013 1:25 pm

Hi,
Please update your StarWind installation and make sure that the "loopback connection accelerator" component is selected.
Attachments
install.png
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Fri Sep 06, 2013 8:59 pm

Get the V8 beta. It now has LSFS and flash caching to play with.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Japie
Posts: 3
Joined: Sat Apr 25, 2015 3:50 pm

Wed Feb 03, 2016 4:03 pm

hiya,

We just installed StarWind Virtual SAN V8 to support a 2-node Hyper-V cluster; each of the two nodes is:

- Dell R630, dual Xeon E5-2630 v3, 192 GB DDR4 RAM
- StarWind storage fabric: ~600 GB of SSD (6 x 256 GB Samsung 850 Pro in RAID 10, 30% over-provisioned).
- The nodes are connected via ConnectX-3 cards (2 x 40 GbE; since our storage fabric is the limiting factor, that works out to some 5 Gbps per cable; see the rough arithmetic below).
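Rough arithmetic behind the "some 5 Gbps per cable" figure (the ~500 MB/s per-drive number is an assumed ballpark for a SATA SSD like the 850 Pro, not something we measured): six drives in RAID 10 form three mirrored pairs, so sustained writes top out around 3 x 500 MB/s = 1.5 GB/s, or roughly 12 Gbps; spread across the two 40 GbE links, that is on the order of 5-6 Gbps per cable, so the network is nowhere near saturated.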

So, it is clear that we don't need many VMs, but we do need performance, also for a virtualised SQL Server installation.
So we thought of using the StarWind V8 RAM disk for TempDB, which, upon checking, does seem to have acceptable (though not stellar) performance even as an iSCSI target disk (the performance figures for all storage tiers are attached; not ATTO, just CrystalDiskMark).

The resulting SQL disk layout is as follows:

- SQL Server DB files: StarWind storage fabric (synced)
- SQL Server logs: 256 GB Samsung SM951 M.2 drives (30% over-provisioned, so 200 GB or so available) (synced)
- SQL Server TempDB: 16 GB RAM disk, NTFS-formatted (not synced)

For now it seems to work with the RAM disk, but we are not yet fully done with all the SQL Server configuration.
I will post in case of errors of whatever kind... and if I don't post, please assume that the config worked and that the RAM disk continued to show acceptable performance.
Attachments
Ramdisk.png
SM951.png
samsung 850 pros.png
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sat Feb 06, 2016 4:29 pm

Make sure you allocate at least one CSV per physical node (that's really an MSFT requirement).

Also, I'd suggest using Intel Iometer, DiskSPD, or Oracle Vdbench (our choice, really) instead of CrystalXxx. Steady, repeatable numbers :mrgreen:
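For example, a representative DiskSPD run against the iSCSI-attached RAM disk could look like the line below (the drive letter, file name, and sizes are placeholders to adjust for your setup, not values from this thread):

    diskspd.exe -b64K -d60 -t8 -o32 -r -w30 -Sh -L -c4G X:\testfile.dat

Here -b64K sets the block size, -d60 runs for 60 seconds, -t8 uses 8 threads, -o32 keeps 32 outstanding I/Os per thread, -r issues random I/O, -w30 makes 30% of it writes, -Sh bypasses software and hardware write caching, -L records latency statistics, and -c4G creates a 4 GB test file.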
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Japie
Posts: 3
Joined: Sat Apr 25, 2015 3:50 pm

Sun Feb 07, 2016 4:15 pm

Hi Anton,

Yes, we created 2 local CSVs. However, when checking the performance of the RAM disk on the second node, it was markedly slower (say 70% slower) than on the node for which I posted the drive performance tests. We could not figure out why, as the two boxes are identical. We checked virtually everything: switched on the large drive (AWE) option in the config file, played around with MPIO settings, etc., but none of it made any difference. We think it is related to NUMA affinity, as on the 'fast' node we have 8 consecutive threads spiking up, while on the 'slow' node CPU activity is scattered across all 32 threads. We also get loads of "RAM: *** RamDisk_SscDispatch: SCSIOP_XXX (0x35/0x0) is not supported" errors, so it is likely related to that, but we don't know what the message means.

So, in the end, our problem does appear to be the same as what the people above described, i.e. when we install the standalone version of StarWind RAM Disk, there are no performance issues. However, when exposing it as a (non-synced) iSCSI target (either as a physical drive or as an image file on top of a previously created RAM disk), the performance drops by a factor of 3 or so (it is only because our nodes are so fast and use DDR4 memory that the performance was still acceptable on the first node).

This performance difference made us wonder, and we decided to check the performance of the storage on which the Hyper-V VMs run (we had only checked the storage performance before we created the image-file-based iSCSI targets on it). Unfortunately, there too the performance is about a third of what the underlying storage fabric is capable of (the hypervisor should not slow it down to this extent). Now, I doubt we have done anything wrong with the tuning, as we have built a standard 2-node cluster and all settings are as per the book (jumbo frames, GPT, well, everything...), and we are using the latest V8 build, which I believe includes the loopback accelerator. So maybe this is all related to your warning "not to use the free version for performance testing"... but the funny thing is that live migrations are actually very fast (over 2 x 40 GbE ConnectX-3; a new 250 GB image file is also synced in no time). So the node-to-node replication governed by StarWind appears solid. Maybe our drive performance loss is something related to iSCSI itself (we anticipated a bit of performance loss due to iSCSI, but not like this).

We will investigate a bit further and post if we resolve it, but for now we unfortunately also could not get the embedded StarWind RAM disk to work to an acceptable standard on both nodes.

Regards, JJ

PS. For your perusal, I attach the CrystalDiskMark output for the storage performance inside a VM that resides on the StarWind storage (please note there is just 1 VM on that machine, and we could not detect any network activity on the host whilst testing, so the outcome was likely not influenced by any networking bottlenecks). The performance of the underlying RAID 10 storage on which this VM resides is attached to my earlier post.
Attachments
Capture.PNG
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Sun Feb 07, 2016 4:57 pm

Of course a local kernel-mode RAM disk is going to be faster than a user-land distributed one accessed over iSCSI!

You need to understand that when you add extra replication partners your read performance increases (we take care of data locality but do off-load some reads to partners to speed up the process), but writes have to stay in sync, so write performance is going to be limited by the performance of your sync channel. That's why it's vital to make sure your 40 GbE links are fully working and every StarWind instance is assigned to its own NUMA node. We are automating this binding process in the upcoming version of StarWind (very soon), but for now you'll have to allow our engineers to jump in and check your config. Another issue we have is the ±100K IOPS limit per virtual LUN. If your underlying storage can do more locally, you'll have to layer more virtual LUNs on top of it to aggregate up to your peak performance. Again, give our engineers a chance ;)
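To put rough numbers on that last point (the 300K figure is only an assumed example, not a measurement of your array): if the local storage can sustain about 300K IOPS and each virtual LUN tops out around 100K IOPS, you would need roughly 300,000 / 100,000 = 3 virtual LUNs carved out of the same storage, with the initiator striping or load-balancing across them, before the aggregate can approach the local peak.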
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

Japie
Posts: 3
Joined: Sat Apr 25, 2015 3:50 pm

Sun Feb 07, 2016 6:47 pm

Hi Anton,

Thanks for your quick reply.

"Of course local kernel-mode RAM disk is going to be faster compared to user-land distributed one done over iSCSI! "
ok, understood.

"You need to understand when you're adding extra replication partners your read performance increases (we take care of data locality but do off-load some reads to partners to speed up the process) but writes has to be in sync so write performance is going to be limited with performance of your sync channel. That's why it's vital to make sure your 40 GbE links are fully working and all instances of StarWind are assigned to own NUMA node."
See, our network speed is huge, way faster than our storage, and when we do a sync operation we do indeed observe huge throughput through the sync channels. However, when we did the performance testing, we saw no throughput on any of our Ethernet connections. We did not understand that, but maybe it hints at the reason for the poor disk performance results (which were poor for both reads and writes). So we decided to check the other node as well (the node that saw decent RAM disk performance), and when we checked the disk performance inside a VM running on that node, we actually do get read performance close to that of the underlying storage, and lower write performance, as we did observe throughput over the iSCSI NICs. So node 2 (the fast node) functions exactly as you describe. Accordingly, we likely configured something wrong on node 1 (the one that shows poor performance), but we just don't know what. (We do think each StarWind instance is assigned to its own local NUMA node; we checked the log and believe the binding is correct, but we could be wrong here, and the difference in RAM disk performance does hint at this.)

"Another issue we have is ±100K IOPS limit per virtual LUN. If your underlying storage can do more locally - you' have to layer more virtual LUNs on top of it to get aggregation of your peak performance. "
This is hopeful; we did not know this, and indeed we don't know how to do it ourselves.

"Again, give our engineers a chance "
Ha, I would love to give your engineers a chance :), and it looks like we will have to, but we are a small start-up (on a tight budget) and StarWind kindly offers us the possibility to start with HA from the outset for free. We will grow into more nodes and, as is now becoming apparent, we will need your help configuring it correctly as well, but we can't afford that just yet. So we have to start like this (StarWind is very stable, it is just that the performance is slower than anticipated) and then grow into a paid license. For now, it is important for us to know that the problems we face are solvable, so that in a couple of months we can indeed have you configure it all correctly.

So thanks again for your quick reply, which does suggest that our performance problems are solvable, and which is keeping us with StarWind.
Attachments
Capture.PNG
anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Feb 08, 2016 9:02 pm

Even if StarWind gave you something free of charge, we're still responsible for the whole thing, so kick our engineers and ask them to make the software work the way it should. I'll talk to Max.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software

anton (staff)
Site Admin
Posts: 4008
Joined: Fri Jun 18, 2004 12:03 am
Location: British Virgin Islands

Mon Feb 08, 2016 11:28 pm

OK, I've dropped a message to you and to the head of our support, so please check your inbox.
Regards,
Anton Kolomyeytsev

Chief Technology Officer & Chief Architect, StarWind Software
