Thank you for the reply, but this doesn't really answer my question. There is no such thing as off hours in my organization. So let me give you a scenario that I am considering moving to.
My present deployment is a 2-node StarWind HA setup running in a hyperconverged cluster for VDI. This is all operating well at the moment, but I find myself in the same situation, with Microsoft clustering being a major weak point, as well as introducing stupid issues such as Dynamic MAC mishandling, etc. I'm about done with Microsoft clustering, TBH. So, that issue aside, here is what I am considering and presently testing:
A 2-node StarWind HA pair, either mounting the disk locally or serving redundant disk out to additional nodes that host the VDI machines. The VDI machines will not be redundant, but they will sit on redundant disk. The disk will be presented to both machines as, say, the D: drive. There is no split-brain risk, because the VDI hypervisors will not be configured as a cluster. Should a server die for any reason, I will run a scripted redeployment of the static VDI guests on the surviving machine; or, by using UPD technology and pooled desktops, I may not have to do anything at all.
What I want to know is whether I can enable Dedup against the actual drive that hosts the StarWind flat files, thus keeping all of the overhead off any front-end boxes that host VDI (if I do end up using front-end boxes due to performance). To be honest, I have done this before in testing; I just want to know if there are any known issues with it.
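For reference, the sort of configuration I tested looks roughly like this (a sketch, not a recommendation; the E: drive letter is illustrative, and I'm assuming the StarWind image files live directly on that NTFS volume):

```powershell
# Install the Windows Server Data Deduplication feature
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the volume backing the StarWind flat files
# (-UsageType HyperV tunes it for open/running VHD-style files)
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Kick off an optimization job manually rather than waiting
# for the background schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Check savings and job progress
Get-DedupStatus -Volume "E:"
```

The point of doing it here is that the dedup work happens on the storage node underneath StarWind, so the hypervisors consuming the HA disk never see that CPU/IO overhead.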
Oleg (staff) wrote: Hello,
The best way would be to run Microsoft Dedup on CSVs. To avoid performance issues, you can schedule the deduplication task outside of working hours.