craggy wrote:I know there are many tips and recommendations for various previous versions of SW and Server 2008 etc., but with the release of V8 running on Server 2012 R2 I'm sure things have changed.
Just a few questions that we encountered when testing V8 on Server 2012 R2.
What are the most recent recommendations and best practices for running V8 on Server 2012 R2 bare metal?
What network-level tweaks are recommended in Windows and ESXi for optimum iSCSI performance?
What tweaks are recommended for those using 10GbE or InfiniBand rather than 1GbE? (We never seemed able to achieve more than 400 MB/s with multipathed 10GbE, for example.)
What block size should we be using in the SW console for virtual disks, 512 B or 4 KB?
What cluster size is recommended when formatting NTFS volumes in Server 2012: 4 KB, 8 KB... 64 KB?
What RAID block size is recommended for the underlying storage to optimise performance and capacity once the NTFS cluster size and the SW virtual disk block size are factored in? (See the alignment sketch after this post.)
What are the recommended iSCSI initiator settings and timings, and the matching tweaks to the iSCSI target?
I'm sure people have many more questions, but it would be great to have a central list of recommendations for people deploying V8, as a lot of these things are not easily changed once live data is deployed.
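To make the block-size questions above concrete, here is a rough Python sketch of the alignment arithmetic involved. The 4 KB sector, 64 KB cluster, and 64 KB stripe values are illustrative assumptions, not official StarWind recommendations:

# Alignment check across the layers in the questions above:
# NTFS cluster -> StarWind virtual-disk sector -> RAID stripe.
# All sizes below are illustrative, not official guidance.
KB = 1024

ntfs_cluster = 64 * KB   # NTFS allocation unit chosen at format time
vdisk_sector = 4 * KB    # 512 B or 4 KB chosen in the SW console
raid_stripe  = 64 * KB   # stripe (chunk) size of the underlying RAID

def aligned(upper, lower):
    # An upper layer is aligned when its unit is a whole multiple of
    # the unit beneath it, so one write never straddles two units.
    return upper % lower == 0

print("NTFS cluster / vdisk sector aligned:", aligned(ntfs_cluster, vdisk_sector))
print("RAID stripe / NTFS cluster aligned:", aligned(raid_stripe, ntfs_cluster))

The point of the exercise: if any layer's unit is not a multiple of the one beneath it, a single write can straddle two lower-level units and turn into a read-modify-write, which is exactly the performance loss these questions are trying to avoid.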
Klas wrote:Great initiative.
What L1 cache size is recommended?
What L2 cache size is recommended?
How many sync channels are optimal for X bandwidth? Example: 1 x 10 Gbit iSCSI needs 4 x 1 Gbit sync. (In my lab with V6 and the V8 beta I found that >3 x 1 Gbit sync channels gave better performance than 1 x 10 Gbit sync; see the bandwidth sketch after this post.)
How do L1 and L2 cache compare with the Microsoft Storage Spaces write-back cache?
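A back-of-the-envelope check of Klas's sync-channel example, as a Python sketch. The assumption that every front-end write must also cross the sync link is my reading of how synchronous mirroring behaves, not a quote from the StarWind docs:

# Rough model: with synchronous mirroring, sustained write throughput
# is capped by whichever is smaller, the front-end iSCSI bandwidth or
# the aggregate sync-channel bandwidth. Assumption, not official docs.
GBIT_BYTES = 1_000_000_000 / 8          # 1 Gbit/s expressed in bytes/s

front_end = 10 * GBIT_BYTES             # 1 x 10 Gbit iSCSI
sync_links = [1 * GBIT_BYTES] * 4       # 4 x 1 Gbit sync channels

write_cap = min(front_end, sum(sync_links))
print("Sustained writes capped near %.0f MB/s" % (write_cap / 1e6))
# ~500 MB/s: 4 x 1 Gbit of sync cannot feed a 10 Gbit front end
# at full rate for sustained writes.

Note that raw bandwidth alone does not explain the lab result above (several 1 Gbit sync links beating a single 10 Gbit one), which suggests per-connection limits or parallelism matter as much as link speed; the sketch only shows the bandwidth ceiling.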
ReFS in a Windows Hyper-V scenario could be at multiple layers: even using ReFS with integrity streams doesn't help here (there's no way to use ReFS data integrity with running VM images either way...).
Another Anton quote from another thread: http://www.starwindsoftware.com/forums/ ... tml#p20033
LSFS uses strong checksums, so ReFS is not recommended (a waste of CPU). Also, ReFS is not stable enough (at least yet).
barrysmoke wrote:I found some links talking about integrity streams and running VMs, and the article recommended not using ReFS on Hyper-V servers that store running VMs.
Does this extend to the vhdx container files StarWind uses on storage pools, or is it limited to VMs running on ReFS?
Another question that comes to mind is a VMware environment: could ReFS be used to house a StarWind storage pool, with iSCSI target vhdx files shared out to VMware and formatted VMFS?
anton (staff) wrote:L1 cache = 5-10% of a served capacity
L2 cache = 10-20% of a served capacity
DUOK wrote:Hi Anton, L1 cache 5-10%, L2 10-20%, are you sure?
Serving 4 TB would need 200-400 GB of RAM and 400-800 GB of SSD...
Can I revisit this one? These figures seem incredibly high, guys! 4 TB is actually a tiny amount of disk space these days. We've got 8 TB on our current primary storage. The StarWind default was 128 MB, but we increased it to 8 GB. So for a 20 TB disk system, you're saying you need a 5% minimum RAM cache for any kind of reasonable performance. Err, guys, who has 1 TB of RAM in their SANs?
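A worked version of the arithmetic being questioned, as a Python sketch of the quoted rule of thumb (L1 = 5-10% of served capacity, L2 = 10-20%); the rule itself is quoted from the thread, only the arithmetic is added here:

# Applying the quoted sizing rule to the 4 TB example and the
# 20 TB system mentioned above.
def cache_range(served_gb, low, high):
    # Returns the (low, high) cache size in GB for a served capacity.
    return served_gb * low, served_gb * high

served = 4 * 1024                        # 4 TB served, in GB
print("L1 (RAM): %.0f-%.0f GB" % cache_range(served, 0.05, 0.10))
print("L2 (SSD): %.0f-%.0f GB" % cache_range(served, 0.10, 0.20))
# -> roughly 205-410 GB RAM and 410-819 GB SSD for 4 TB served;
#    at 20 TB served the 5% floor alone is ~1 TB of RAM.

That is exactly the order of magnitude the posters above are balking at.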