RAID 0 with 4 drives for L2 Cache

FAQ, getting help, user experience about PrimoCache
bkostas
Level 3
Posts: 16
Joined: Mon Mar 02, 2020 2:09 pm

Re: RAID 0 with 4 drives for L2 Cache

Post by bkostas »

Thank you for all those tips.
I still think that a 20GB L1 cache on the data drive (50% read / 50% write) helps a lot when a patch needs to be installed; patches are usually around 10-20GB.
Also, are you sure that RAID 1 on NVMe drives has better read speeds? From a bit of googling it seems it does not, so I am not so sure.
Jaga
Contributor
Posts: 692
Joined: Sat Jan 25, 2014 1:11 am

Re: RAID 0 with 4 drives for L2 Cache

Post by Jaga »

My experience with RAID (mostly with spinners) has been that RAID 1 benefits from multiple simultaneous read requests, whereas RAID 0 has better single-request performance. For you it will depend on your delivery app (which requests data from the drive and sends it to client computers) and on the controller the NVMes are attached to (probably the motherboard in this case, though it could also be a riser card).

People benchmarking RAIDed NVMes do show that RAID 0 is faster in benchmark tests (sometimes by only a little, sometimes by a lot). But I'm not sure that will hold true in your environment, with the software you are using and with PrimoCache using the array as an L2. 10-20 simultaneous read requests (if your delivery app can run more than one read thread at a time) may actually benefit more from RAID 1.
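
If you want actual numbers instead of guesswork, a rough read benchmark along these lines can approximate the "many simultaneous readers" pattern before and after switching array types. This is only a minimal Python sketch, nothing PrimoCache-specific: the test file path, block size, and thread counts are placeholder assumptions for your setup, and the test file should be much larger than your L1/L2 and the OS file cache so the reads actually hit the array.

import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = r"D:\bench\testfile.bin"  # placeholder: a large file on the array under test
BLOCK_SIZE = 1024 * 1024              # 1 MiB per read
READS_PER_WORKER = 200                # reads issued by each simulated client

def worker(_):
    # Each worker opens the file unbuffered and performs random-offset
    # reads, returning the total number of bytes it read. File I/O
    # releases the GIL, so the threads genuinely overlap on the drive.
    size = os.path.getsize(TEST_FILE)
    total = 0
    with open(TEST_FILE, "rb", buffering=0) as f:
        for _ in range(READS_PER_WORKER):
            f.seek(random.randrange(0, size - BLOCK_SIZE))
            total += len(f.read(BLOCK_SIZE))
    return total

# Compare single-request throughput (RAID 0's strong point) against
# 10-20 simultaneous readers (where RAID 1 may pull ahead).
for workers in (1, 4, 10, 20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(worker, range(workers)))
    elapsed = time.perf_counter() - start
    print(f"{workers:>2} threads: {total / elapsed / 2**20:,.0f} MiB/s")

Run it off-hours against each array configuration and compare the lines; it's the relative difference between the two arrays at each thread count that matters, not the absolute figures.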

It would be fairly easy for you to test, since it's just an underlying array configuration change on an L2 that you can add/remove at any time (provided your clients aren't using the service at the time). The only real trouble is re-populating the L2 with data after changing the RAID array type so you can test game delivery.

I haven't personally RAIDed NVMes, though I have run some expensive Samsung SSDs in RAID 0 for a client who was concerned about speed but not about safety. His needs were mostly single-user, so a single thread reading from/writing to the drives made RAID 0 the most useful option in his case.

If you can get away with using a 20GB L1 cache on the data drives (in addition to the L2) without buying more RAM, that's great. You could even make it a write-only cache (I think you can do L1 write and L2 read) so that new data going to the drive flushes old data automatically. But I don't think I'd add more RAM to the server before adding the second NVMe drive, since whichever RAID type you go with will benefit you more than any amount of RAM.

So yeah, it comes down to testing the two RAID types off-hours in your environment and seeing which you prefer.