Guidance on my setup
Posted: Sun Nov 08, 2015 6:11 am
I would like some assistance properly configuring three nodes: two nodes will be used as replicated vSANs, and the third node will be used as a vSAN providing backup storage for DPM 2012.
About my environment:
We need to provide around 65 TB usable on each active SAN. We're heavy read (75%+), and we will be using StarWind as storage for a Hyper-V CSV, iSCSI targets for Linux hosts, and network user shares.
Hardware/Software:
OS: Windows Server 2012 R2
StarWind: Latest stable v8 build
Backup software: DPM 2012
Chassis: 3x 36-drive-bay SMC chassis w/ SAS RAID controllers with 512 MB cache and BBU
RAM: 192 GB per node
HDD: HGST 7K4000 SAS 4 TB for the storage nodes; Seagate Constellation 3 TB SAS and Seagate 3 TB SATA in external storage attached to the backup node
SSD: 2x Intel DC P3500 2 TB PCIe NVMe SSDs (could add 2 more if needed)
Network: InfiniBand 40 Gb switch / ConnectX-2 dual-port cards in each host, plus 10 Gb Ethernet
Here is my Plan:
Each primary storage node will be set up with RAID 10 with cache enabled on the RAID card; the backup node will be RAID 6 (rough sizing math in the sketch after this list).
All RAM not used by the base OS will be dedicated to L1 cache in write-back mode
One PCIe SSD in each node as dedicated L2 cache in write-back mode
SMB 3.0 for the MS CSV
iSCSI for other storage presentation
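For reference, here is the rough sizing math behind the plan above. It assumes all 36 bays in each primary node are populated with the 4 TB HGST drives and that roughly 16 GB of RAM stays reserved for the OS; both of those figures are my own assumptions, not measured numbers.

```python
# Rough per-node sizing sketch (my assumptions: 36 bays fully populated
# with 4 TB drives, RAID 10, ~16 GB RAM reserved for the OS).
DRIVE_TB = 4
BAYS = 36
RAM_GB = 192
OS_RESERVE_GB = 16      # assumed, not measured
SSD_L2_TB = 2           # one Intel DC P3500 per node

raw_tb = DRIVE_TB * BAYS              # 144 TB raw per node
raid10_usable_tb = raw_tb / 2         # 72 TB usable, above the 65 TB target
l1_cache_gb = RAM_GB - OS_RESERVE_GB  # ~176 GB left for L1 write-back cache

print(f"RAID 10 usable: {raid10_usable_tb:.0f} TB (target: 65 TB)")
print(f"RAM left for L1 cache: ~{l1_cache_gb} GB")
print(f"L2 cache per node: {SSD_L2_TB} TB SSD")
```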
Some things I would like clarification on:
I've heard that L1 cache should be sized at a ratio of at least 1 GB per TB on each node. Is this correct? Does adding more than 1 GB per TB improve performance even further?
I've heard that L2 cache should be 10-15% of your total available drive space. Since I only have 2 TB of SSD per node, is it even worth using this feature? (My rough math is below.)
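To make sure I'm reading those rules of thumb correctly, this is the math I'm working from. The 1 GB/TB and 10-15% figures are just what I've heard, not numbers I've confirmed in StarWind's documentation, and the 16 GB OS reserve is an assumption.

```python
# Applying the cache rules of thumb I've heard to my ~65 TB usable per node.
usable_tb = 65          # usable capacity per primary node
ram_gb = 192
os_reserve_gb = 16      # assumed OS reserve, not measured
ssd_tb = 2              # one Intel DC P3500 per node

l1_rule_gb = usable_tb * 1                  # "1 GB per TB" -> ~65 GB of L1 cache
l1_available_gb = ram_gb - os_reserve_gb    # ~176 GB I could actually assign

l2_rule_tb = (0.10 * usable_tb, 0.15 * usable_tb)   # "10-15%" -> 6.5 to 9.75 TB
l2_coverage = ssd_tb / usable_tb                    # ~3% with the SSD I have

print(f"L1: rule of thumb {l1_rule_gb} GB, available ~{l1_available_gb} GB")
print(f"L2: rule of thumb {l2_rule_tb[0]:.1f}-{l2_rule_tb[1]:.2f} TB, "
      f"I have {ssd_tb} TB (~{l2_coverage:.0%} of usable)")
```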
What happens if an SSD (L2 cache) fails while the nodes are in production? Is it unsafe to run a single SSD for cache in each node?
How big is the performance difference between write-through and write-back?
Finally, if there is anything else you think needs to be discussed to optimize this setup, please pass it along. Thanks so much!