Thanks for the info, Support. Could you elaborate in a little more detail? If it is an LFU (Least Frequently Used) based algorithm, that means the least-used cached data is evicted first, but under what condition? Is there a timer, say 10 hours, after which the least-used entries are kicked out? Or is data only evicted when the cache storage is full, or when the target source file has been deleted?

Support wrote: ↑Sun May 14, 2023 6:32 am
L2 caches data differently than L1. L2 caches data asynchronously: even if data has already been cached in L1, it can still be cached in L2. For read caching, we use a dedicated LFU-based algorithm for both L1 and L2. If a file is deleted, its corresponding cached data will also be removed from the cache.

SnowReborn wrote: ↑Thu May 11, 2023 9:11 pm
From my observation (correct me if I am wrong), PrimoCache will save anything it can record about disk activity (excluding read activity served by the Windows built-in cache, since it doesn't see it) into L1 as long as there is space. What I would like explained is: once L1 is full, how does the L2 cache store data, and what is its behavior (it seems different from L1)? And how does PrimoCache decide which cached data to evict first, for both L1 and L2? Is it FIFO, LIFO, or LFU? Under what condition: when L1/L2 are full, or is there a timer that evicts entries depending on how frequently they are used? If I am not mistaken, I have also seen PrimoCache release some cached content even though my L2 is not full (200 GB of space left). Is that because the cached target file was physically deleted from the disk, or because the algorithm decided it hadn't seen any read requests for that data in a long time, concluded it would never be used, and deleted it?
If the target (the original file on disk) was deleted, then the cached copy probably serves no purpose and should be removed; that's fine. However, if cached data was evicted because it was not being used, while there was still plenty of space in L1 or L2, that is not a behavior I want to see. I am again tempted to use ISLC to clear the Windows memory cache so that PrimoCache sees all the reads and stores the data properly, and I don't have to read from the hard disk when my RAM utilization is very high and Windows drops all of its cache.
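To make my question concrete, here is a tiny sketch of what I understand "capacity-driven LFU" to mean. This is purely my own assumption about the behavior, not PrimoCache's actual implementation: entries are evicted only when the cache is full, never on a timer, and deleting the source file removes its cached data.

```python
# Hypothetical sketch of capacity-driven LFU eviction (my assumption,
# not PrimoCache's actual code): entries are evicted only when the
# cache runs out of space, never by a timer.
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}    # key -> cached value
        self.counts = {}  # key -> access frequency

    def get(self, key):
        if key in self.data:
            self.counts[key] += 1  # every hit bumps the frequency
            return self.data[key]
        return None  # cache miss

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the least frequently used entry -- only now,
            # because we ran out of space, not because time passed.
            victim = min(self.counts, key=self.counts.get)
            del self.data[victim], self.counts[victim]
        self.data[key] = value
        self.counts.setdefault(key, 0)

    def delete(self, key):
        # Mirrors "if a file is deleted, its cached data is removed".
        self.data.pop(key, None)
        self.counts.pop(key, None)
```

Under this model, a frequently read block can sit in the cache indefinitely; it is only at risk when new data arrives and space is exhausted. That is the behavior I am hoping you can confirm or correct.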
Is there a way for PrimoCache to monitor and "register counts" for reads served by the Windows cache, so that data isn't "least used" from PrimoCache's perspective? Under my circumstances, should I start using ISLC to clear out the Windows cache so PrimoCache can properly track cache usage patterns?
I have another, separate question. I use deferred write on L1 a lot, in many cases with an indefinite flush timer. This is due to the nature of my large datasets: most write operations are temporary and the data is deleted afterwards, so there is no reason for it ever to be flushed to disk. However, in some cases the data does need to be flushed, and because PrimoCache's deferred write lays data out as sequentially as possible, it is a really nice way to "defragment small files" before they get scattered around the hard disk. I understand that when deferred write is set to infinite, a flush only happens when you flush manually or when the cache runs out of space.

My question: say we have a 20 GB defer-write cache (write cache ONLY), and it is full (20/20 GB used, 32 MB left). If I click manual flush, it copies the cached data to the target disk (the flush) while preserving the write cache contents (still full at 20/20 GB with 32 MB free). Does that mean that if a new 5 GB write request is issued, PrimoCache will instantly free 5 GB of least-used, already-flushed space for the new write? If so, does the same happen with an interval timer? Say instead of an infinite deferred-write timer we set 10 seconds, so every 10 seconds all "unflushed" data is flushed to disk; should all "flushed" cache then be ready to be released instantly for upcoming new writes?
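To illustrate what I am asking, here is a toy model of the behavior I am imagining. Again, this is my assumption, not PrimoCache internals: flushed (clean) blocks stay in the cache but can be reclaimed instantly when a new write needs space, while dirty (unflushed) blocks must be written to disk before their space can be reused.

```python
# Toy model (my assumption, not PrimoCache's actual logic) of a
# deferred-write cache: clean blocks are reclaimed instantly when a new
# write needs space; dirty blocks force a flush first.
class DeferWriteCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}  # block id -> (data, dirty flag)

    def write(self, block_id, data):
        while block_id not in self.blocks and len(self.blocks) >= self.capacity:
            # Prefer to reclaim an already-flushed (clean) block instantly.
            clean = [b for b, (_, dirty) in self.blocks.items() if not dirty]
            if clean:
                del self.blocks[clean[0]]
            else:
                # No clean blocks left: forced to flush before reclaiming.
                self.flush()
        self.blocks[block_id] = (data, True)  # new data starts out dirty

    def flush(self):
        # Write dirty blocks to the target disk, but keep them cached
        # (now marked clean and therefore instantly reclaimable).
        for b, (data, dirty) in self.blocks.items():
            if dirty:
                # disk.write(b, data)  # actual disk I/O would happen here
                self.blocks[b] = (data, False)
```

If this model matches reality, then with a 10-second timer every block would be clean within 10 seconds of being written, and any new write into a full cache could proceed without waiting on disk I/O. That is what I would like confirmed.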