Fallback to L2 cache when L1 is full [Topic solved]

Suggestions around PrimoCache
cichy45
Level 4
Posts: 38
Joined: Sun Oct 14, 2018 3:34 pm

Fallback to L2 cache when L1 is full

Post by cichy45 »

I've made an observation about PrimoCache's behavior. I use it with DrivePool, so PrimoCache must be able to read from and write to many volumes at the same time.

Let's say I have a 30 GB folder compressed with Windows NTFS compression. Then I disable compression, so Windows has to read all the files and write them back again.

My settings are in the screenshots, so everything should be clear:
[Screenshots of the PrimoCache cache settings]

What I observed: when Windows writes the files back to the cached volumes and free space on L1 runs out, PrimoCache ignores the "IDLE FLUSH" setting and starts reading from and writing to the same volume at the same time, greatly reducing performance.

In my opinion, PrimoCache in my setup should fall back to the L2 cache when L1 is already full. That way it would respect the "IDLE FLUSH" setting (continuing to cache writes to L2 when L1 is full) and perform simultaneous reading/writing ONLY when both L1 and L2 are full and cannot store any more data.
The current behavior is correct only when purely WRITING data: when I write 10 GB to a volume, it falls back to L2 once the 4 GB of L1 is full. But when reading from and writing to the same volume, it acts as if only the 4 GB L1 exists, as though L2 were disabled.

Any ideas? I know this might be because the developers were concerned that an SSD used as L2 would not be fast enough to read and write to multiple volumes at the same time, and that is true for slow, old SSDs. But NVMe, or RAID 0 SATA, is fast enough to read from volume A and write to volume B at the same time. So maybe you could add one more switch, "Fallback to L2 when L1 is full", with a warning that a fast SSD is required. It would be very beneficial in my setup.
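To make the request concrete, here is a rough sketch (my own pseudocode, with made-up names, not anything from PrimoCache) of the write routing I'm proposing: a deferred write lands in L1 while it has room, spills over to L2 when L1 is full, and only goes straight to the underlying disk when both tiers are full.

```python
# Hypothetical model of the proposed L1 -> L2 -> Disk fallback.
# Capacities and sizes are in GB; none of this is real PrimoCache code.

class TieredWriteCache:
    def __init__(self, l1_capacity, l2_capacity):
        self.l1_capacity = l1_capacity
        self.l2_capacity = l2_capacity
        self.l1_used = 0
        self.l2_used = 0
        self.direct_to_disk = 0  # data that bypassed both cache tiers

    def write(self, size):
        """Route a deferred write to the first tier with free space."""
        if size <= self.l1_capacity - self.l1_used:
            self.l1_used += size
            return "L1"
        if size <= self.l2_capacity - self.l2_used:
            self.l2_used += size
            return "L2"
        # Only when BOTH tiers are full does the volume get hit directly.
        self.direct_to_disk += size
        return "disk"


# Example matching my setup: 4 GB L1, large L2, writing 10 GB in 1 GB chunks.
cache = TieredWriteCache(l1_capacity=4, l2_capacity=26)
routes = [cache.write(1) for _ in range(10)]
print(routes)  # first 4 GB land in L1, the remaining 6 GB spill to L2
```

With this routing, "IDLE FLUSH" would stay respected until L2 is also exhausted, which is exactly the behavior I'd like behind the suggested switch.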

If anything is not clear, ask and I will try to explain it better :)
cichy45
Level 4
Posts: 38
Joined: Sun Oct 14, 2018 3:34 pm

Re: Fallback to L2 cache when L1 is full

Post by cichy45 »

One more observation.

This "failure to respect IDLE FLUSH" with L1 cache is only when:

-reading from volume A & writing back to volume A, L1 runs out of space, PrimoCache start writing to volume A directly instead of using L2 for data that can not fit in L1.

BUT while reading/writing to volume A with a full L1 cache, if another volume B starts writing to the cache, it does use L2.

So, in conclusion, it seems that one volume cannot use L1 and L2 at the same time for caching writes while also reading from that volume. A volume can only use L1 OR L2, not both at once, so a volume cannot fall back to the L2 cache when L1 is full of data and cannot be emptied (when L1 runs out of space while writes are still coming in, they go directly to disk).
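The observed routing could be modeled like this (again just my guess at the behavior from the outside, with hypothetical names, not actual PrimoCache internals): a volume that is both reading and writing skips L2 once L1 is full, while a write-only volume still gets to use L2.

```python
# Hypothetical model of the CURRENT behavior I observed, per volume.

def route_write(volume_is_reading, l1_full, l2_full):
    """Where a deferred write appears to land today."""
    if not l1_full:
        return "L1"
    if volume_is_reading:
        # Observed: a volume doing simultaneous reads bypasses a
        # half-empty L2 and writes straight to the disk.
        return "disk"
    if not l2_full:
        # Observed: a write-only volume (volume B) still uses L2.
        return "L2"
    return "disk"


# Volume A (reading + writing, L1 full) vs volume B (write-only, L1 full):
print(route_write(volume_is_reading=True, l1_full=True, l2_full=False))   # disk
print(route_write(volume_is_reading=False, l1_full=True, l2_full=False))  # L2
```

This is the asymmetry I mean: the `volume_is_reading` branch is what I'd like the fallback switch to remove.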
Support
Support Team
Posts: 3623
Joined: Sun Dec 21, 2008 2:42 am

Re: Fallback to L2 cache when L1 is full

Post by Support »

Thank you very much for the detailed feedback!
The current design and behavior are as follows: when the L1 write space is full, L1 flushes a certain amount of deferred data to the underlying disks, and meanwhile the L2 write cache caches new incoming write data. This design aims for the best write performance, because L1, L2, and the underlying disks can each work independently without affecting one another. A mode like L1->L2->Disk would keep L2 busy processing L1 flush requests, new write requests, and flush requests to the disks, all of which may happen simultaneously.
However, we are considering other write-mode options, including support for the L1->L2->Disk behavior, since some users don't want data flushed to the underlying disks too early.
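The difference between the two modes described above can be sketched as follows (my own interpretation as a reader of this thread, with invented names, not the real implementation): in the current design, once L1 is full, new writes and L1's flush traffic go to different components, whereas in the chained L1->L2->Disk mode, L2 receives L1's evicted data and must also flush to disk itself.

```python
# Hypothetical sketch of where traffic goes once L1 is full, in each mode.

def flush_targets(mode, l1_full):
    """Return which component takes new writes and which takes L1's flush."""
    if not l1_full:
        return {"new_writes": "L1", "l1_flush": None}
    if mode == "current":
        # Current design: L1 flushes to the underlying disk while L2
        # caches new incoming writes, so the three work independently.
        return {"new_writes": "L2", "l1_flush": "disk"}
    # Chained L1 -> L2 -> Disk: L2 absorbs L1's evicted data AND must
    # later flush to disk itself, so it can become the bottleneck when
    # all of this happens at the same time.
    return {"new_writes": "L2", "l1_flush": "L2"}


print(flush_targets("current", l1_full=True))   # L1 flushes past L2, to disk
print(flush_targets("chained", l1_full=True))   # L1 evicts into L2
```

This makes the trade-off in the reply concrete: the chained mode keeps data cached longer, at the cost of concentrating load on L2.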
cichy45
Level 4
Posts: 38
Joined: Sun Oct 14, 2018 3:34 pm

Re: Fallback to L2 cache when L1 is full

Post by cichy45 »

Thank you for the response too! I think it's a good idea to add support for the L1->L2->Disk behavior, as SSDs are getting cheaper and building a high-capacity, fast L2 (like SATA3 RAID 0 or NVMe M.2) is affordable for more and more people. Especially as an additional option, so everyone could create a setup that matches their needs :)