Looking for some general pointers/tips/suggestions on my planned build for my workflow.
TLDR: A bunch of NVMe drives mirrored with direct PCIe-CPU access would be preferred, but I don't hate my money.
(I've got backup covered, all redundancy for me is to save time/hassle, uptime/backup is not the issue)
For my storage solution I've read up on all kinds of options. I did consider SSD caching with MS Storage Spaces, but long story short I've concluded it's not worth my time and effort at this point. Beyond a basic Storage Spaces volume (no SSD cache, etc.), it is still badly documented, full of issues, and not officially supported on non-datacenter Windows. Things like TrueNAS and other ZFS-based solutions require pass-through setups, which I prefer to avoid at this point (KISS - Keep It Simple, Stupid).
That being said, I assume I can create a Storage Spaces mirror volume and use that together with PrimoCache?
In theory StableBit DrivePool looks interesting for my use case. However, it seems a bit exotic, and combining it with my setup could generate all kinds of weird issues.
I'm still reading up on PrimoCache, so sorry in advance if my use case is already covered in the documentation/FAQ.
My use case:
"Power workstation": Win10 running and labbing with VMs on Hyper-V (Win10), alternatively/also VMware Workstation. For me, the host OS is mostly just a management OS with GUI support, and easy/non-complicated GUI access to utils/programs/tools that would be a hassle to run on Hyper-V Server 2019 Core or Win Server 2019 with GUI.
What I want:
- Higher IO on non-SSD pools/volumes; everything else (transfer speed) is an appreciated bonus.
- non-exotic, low-maintenance, low-knowledge solution. (AK47>ICBM)
The host OS (Win 10 Pro) is used as a management OS for VMs/RDP/VNC. Multiple VMs/containers run things like Plex, torrenting, a daily-use "Office" VM, and so on. Meaning storage and disk access will mostly be on large to very large VM disk files (10 GB - 1 TB, typically VHDX); any smaller file access happens inside the VHDX files and is mostly program/OS related (generally very little workflow on small files).
As I understand it, PrimoCache works as a "man in the middle" on the virtual disk layer? Would running PrimoCache locally inside the guest VMs, on their VHDX volumes, perhaps be better?
Specs:
- Ryzen 9 5950X (16C/32T)
- 128 GB memory
- 500 GB SATA SSD (Crucial MX500, CT250MX500SSD1) - physical 4096 / logical 512
- 500 GB SATA SSD (Samsung 850 EVO) - physical 512 / logical 512
- 2x 18 TB SATA HDD (WD Ultrastar HC550, WDC WUH721818ALE6L4) - physical 4096 / logical 512
- 256 GB M.2 NVMe - I assume 4096 / 512, can/will find out later
- 1 TB M.2 NVMe - I assume 4096 / 512, can/will find out later
Scenario:
2x 500 GB SATA SSD in a striped Storage Spaces volume
2x 18 TB SATA HDD in a mirrored Storage Spaces volume
Use the resulting ~1 TB striped SSD volume as a PrimoCache cache for the mirrored 18 TB Storage Spaces volume?
Or
Use the 1 TB M.2 NVMe drive as the PrimoCache cache instead? I assume the direct CPU PCIe lanes make it the better option.
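To compare the two scenarios, here is a rough back-of-envelope model of a read-through cache in front of the HDD mirror. All latency numbers are ballpark assumptions I picked for illustration, not measurements of these specific drives:

```python
# Illustrative cache math: average read latency of an SSD/NVMe cache layer
# in front of an HDD mirror. Latency figures below are rough assumptions
# (typical ranges for 7200 rpm HDD, SATA SSD, and NVMe), not benchmarks.

HDD_LATENCY_MS = 8.0        # assumed random-read latency, 7200 rpm HDD
SATA_SSD_LATENCY_MS = 0.15  # assumed, SATA SSD
NVME_LATENCY_MS = 0.05      # assumed, NVMe SSD

def effective_latency_ms(hit_ratio: float, cache_ms: float, backing_ms: float) -> float:
    """Average latency of a read-through cache: hits are served from the
    cache device, misses pay the backing-store (HDD) cost."""
    return hit_ratio * cache_ms + (1.0 - hit_ratio) * backing_ms

for hit in (0.5, 0.8, 0.95):
    sata = effective_latency_ms(hit, SATA_SSD_LATENCY_MS, HDD_LATENCY_MS)
    nvme = effective_latency_ms(hit, NVME_LATENCY_MS, HDD_LATENCY_MS)
    print(f"hit {hit:.0%}: SATA cache {sata:.2f} ms, NVMe cache {nvme:.2f} ms")
```

Under these assumed numbers the HDD miss penalty dominates at realistic hit ratios, which suggests that cache size (and therefore hit ratio) matters more for random IO on the HDD-backed volume than whether the cache device is SATA or NVMe.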
I'm still getting my head around the 512n / 512e / 4Kn physical disk thing. Apparently SSD caching in Storage Spaces can cause a significant performance drop (Source: http://jeffgraves.me/2014/06/03/ssds-on ... rformance/). Any thoughts on whether something like this would affect a PrimoCache setup?
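The core of the 512e issue can be sketched in a few lines. A 512e drive (like the MX500 and HC550 above, with physical 4096 / logical 512) accepts IO in 512-byte units, but any write not aligned to a 4096-byte boundary straddles physical sectors and forces a read-modify-write. This is a general 512e property, not anything PrimoCache-specific:

```python
# Sketch of the 512e alignment issue: a drive with 4096-byte physical /
# 512-byte logical sectors accepts 512-byte-granular IO, but writes that
# do not start and end on a 4096-byte boundary touch extra physical
# sectors, forcing a read-modify-write. Keeping partition offsets and
# cache block sizes at multiples of 4096 bytes avoids this.

PHYSICAL_SECTOR = 4096

def is_4k_aligned(offset_bytes: int, length_bytes: int) -> bool:
    """True if an IO starts and ends on a physical-sector boundary."""
    return offset_bytes % PHYSICAL_SECTOR == 0 and length_bytes % PHYSICAL_SECTOR == 0

def physical_sectors_touched(offset_bytes: int, length_bytes: int) -> int:
    """Number of 4 KiB physical sectors an IO touches; a misaligned write
    touches extra sectors, each of which must be read, modified, rewritten."""
    first = offset_bytes // PHYSICAL_SECTOR
    last = (offset_bytes + length_bytes - 1) // PHYSICAL_SECTOR
    return last - first + 1

# 4 KiB write at a 4 KiB-aligned offset: exactly one physical sector.
print(is_4k_aligned(8192, 4096), physical_sectors_touched(8192, 4096))  # True 1
# Same-size write shifted by 512 bytes: straddles two physical sectors.
print(is_4k_aligned(8704, 4096), physical_sectors_touched(8704, 4096))  # False 2
```

This is why the Samsung 850 EVO reporting physical 512 sidesteps the problem entirely, while the 512e drives depend on everything above them (partitions, Storage Spaces, any cache layer) staying 4K-aligned.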
I might move over to Hyper-V Server 2019 Core (free) at a later stage. Does PrimoCache support installation on the mostly non-GUI Hyper-V Server 2019 Core?
All suggestions are appreciated.