Automatic allocation and deallocation of L1 cache
Posted: Sun Feb 02, 2014 11:35 pm
Hi,
I've been using Primo Ramdisk for a long time now (since it was called VSuite), and I'm absolutely loving it. I remember suggesting TRIM-style dynamic deallocation for it, which was implemented a year later and still works beautifully.
So I'm trying out PrimoCache to see whether it will be worth buying once it's out of beta. I've never liked large permanent memory allocations (which is why I love Primo Ramdisk's deallocation of deleted files), so I was wondering whether PrimoCache could also make use of dynamic allocation and deallocation.
Dynamic allocation: sparsely allocate new cache blocks in memory as data is read and written. Written data is of course retained in the cache in case it needs to be read soon after writing. So after boot-up, the cache grows gradually as applications access files.
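To illustrate what I mean, here's a minimal sketch of on-demand (sparse) allocation: blocks are only materialized in memory the first time they are read or written, so the cache grows gradually instead of reserving its full size up front. The names (`SparseCache`, `backing_read`) and the 4 KB block size are just assumptions for the example, not anything from PrimoCache itself.

```python
BLOCK_SIZE = 4096  # assumed cache block size for this sketch

class SparseCache:
    def __init__(self):
        # block index -> bytearray; entries are allocated lazily,
        # so an empty cache consumes (almost) no memory
        self.blocks = {}

    def write(self, index, data):
        # Written data stays cached so a read soon after hits memory.
        self.blocks[index] = bytearray(data)

    def read(self, index, backing_read):
        # Allocate a block only on first access (a cache miss);
        # backing_read stands in for the underlying disk read.
        if index not in self.blocks:
            self.blocks[index] = bytearray(backing_read(index))
        return bytes(self.blocks[index])

    def allocated_bytes(self):
        return len(self.blocks) * BLOCK_SIZE
```

The point is simply that memory use tracks the working set rather than the configured maximum cache size.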
Dynamic deallocation: when a user-configured low-memory threshold is crossed (for instance, I would set "less than 2 GB free memory" as the trigger condition), the least recently or least frequently used blocks (depending on the cache policy) are discarded from the read cache and freed until the low-memory situation is resolved. Let the user configure how much to deallocate at once (for instance, 512 MB or 1 GB). If there are not enough read-cache blocks available to free, but there are pending write blocks, skip the deferred-write timer, write them to disk immediately, and free them once they are committed.
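Again as a sketch only, the deallocation side could look something like this: when free memory drops below the trigger, evict least-recently-used read blocks first, and fall back to flushing pending writes immediately if the read cache alone can't cover the configured chunk. The threshold, chunk size, and all names here are hypothetical, just to make the proposal concrete.

```python
from collections import OrderedDict

BLOCK_SIZE = 4096                    # assumed cache block size
LOW_MEMORY_THRESHOLD = 2 * 1024**3   # e.g. "less than 2 GB free" trigger
DEALLOC_CHUNK = 512 * 1024**2        # how much to free per pass (e.g. 512 MB)

class EvictingCache:
    def __init__(self):
        self.read_blocks = OrderedDict()   # LRU order: oldest entries first
        self.write_blocks = {}             # dirty blocks pending deferred write

    def on_low_memory(self, free_bytes, flush_to_disk):
        """Free up to one configured chunk of cache when memory is tight."""
        if free_bytes >= LOW_MEMORY_THRESHOLD:
            return 0  # no memory pressure, keep everything cached
        freed = 0
        # First pass: discard least recently used read-cache blocks.
        while freed < DEALLOC_CHUNK and self.read_blocks:
            self.read_blocks.popitem(last=False)  # evict the LRU block
            freed += BLOCK_SIZE
        # Not enough read blocks? Skip the deferred-write timer:
        # commit pending writes immediately and free them once on disk.
        while freed < DEALLOC_CHUNK and self.write_blocks:
            index, data = self.write_blocks.popitem()
            flush_to_disk(index, data)
            freed += BLOCK_SIZE
        return freed
```

Read blocks are preferred for eviction because they are cheap to drop (the data is still on disk), while dirty write blocks must be committed before their memory can be reclaimed.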
What do you think?