Hi Posap
So firstly, I'm using VMware ESXi 6.5 with vCenter 6.5. With Hyper-V it's a little easier to get access to the drives, but I prefer ESXi.
To allow RDM access to the drives you need either an HBA that can act as a JBOD device (I use Dell H200s in my Dell R710s, but a 9211-8i or similar flashed to IT mode would offer the same outcome) or local SATA. Essentially, these HBAs in IT mode don't do any RAID at the hardware level and simply present the drives to the OS / hypervisor as local storage.
RDM is easier to set up with HBAs, as there is a tick box in the advanced settings of ESXi to allow RDM passthrough (Host Config > Advanced System Settings > RdmFilter.HbaIsShared = false); once that's set you can simply add the drives as RDM devices directly to the VM. Local storage via SATA is harder: you either have to pass the entire SATA controller into the VM, which normally means ALL of the attached drives and therefore nowhere to store the base VM, or you need to go into the CLI (https://gist.github.com/Hengjie/1520114 ... 7af4b3a064) to generate an RDM 'stub' for the VM.
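If you do end up going the CLI route, the stub creation boils down to a vmkfstools command run over SSH on the host, roughly like the sketch below. The datastore name, VM folder and disk identifier here are placeholders for illustration, not my actual values:

# list the local disks to find the identifier of the one you want to map
ls /vmfs/devices/disks/
# create a physical-mode RDM pointer file in the VM's folder on the datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/MyVM/disk1-rdm.vmdk
# then add disk1-rdm.vmdk to the VM as an "existing hard disk"

(-z gives you a physical-mode mapping; -r would create a virtual-mode one instead.)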
Once you've got the drives into the Windows VM, I simply used the GUI to create everything, as I'm just using a simple space with a single drive. If I need more, I'll use PowerShell to add the new drive and set up striping (see the sketch just below)...
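For anyone who wants the PowerShell route, this is the general shape rather than my exact script; the pool and disk friendly names are made up for the example, assuming a pool called "Pool1" and a simple (striped) space:

# grab the newly added disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# create a pool from them (or use Add-PhysicalDisk to grow an existing pool)
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# carve out a simple (striped) virtual disk, then initialise, partition and format it
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "vSANDisk" -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "vSANDisk" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS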
VM config is Server 2016 with 8GB RAM, a single 100GB boot drive, and then the two vSAN drives.
Network is 10GbE direct connect, plus multiple 1GbE connections to support VM management and heartbeat.