Hi guys,
while I understand the basic idea behind the request for a dynamic RAM cache size, the problem may lie in how the RAM is being allocated.
I'd assume PrimoCache simply reserves a piece of RAM of the configured size, potentially in one contiguous piece. To make a dynamic RAM cache work, you'd essentially need to allocate RAM block by block: for each block you cache, you allocate the corresponding amount of RAM. Only then can you easily deallocate RAM again, e.g. via a least-used policy, when the system gets to a point of low RAM.
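Just to illustrate what I mean (a rough sketch in plain C with made-up names, not PrimoCache's actual internals): every cached block gets its own allocation and sits in an LRU list, and a trim routine frees the least-recently-used blocks one by one when memory gets tight.

    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE (128 * 1024)   /* assumed cache block size */

    /* One cached block: its own RAM allocation plus LRU links. */
    struct cache_block {
        struct cache_block *prev, *next;  /* LRU list: head = most recently used */
        unsigned long long lba;           /* which disk block it caches */
        unsigned char *data;              /* allocated per block */
    };

    static struct cache_block *lru_head, *lru_tail;
    static size_t cache_bytes;

    /* Allocate RAM for exactly one block at the moment it is cached. */
    struct cache_block *cache_add(unsigned long long lba, const void *src)
    {
        struct cache_block *b = malloc(sizeof *b);
        if (!b) return NULL;
        b->data = malloc(BLOCK_SIZE);
        if (!b->data) { free(b); return NULL; }
        memcpy(b->data, src, BLOCK_SIZE);
        b->lba = lba;
        b->prev = NULL;
        b->next = lru_head;
        if (lru_head) lru_head->prev = b; else lru_tail = b;
        lru_head = b;
        cache_bytes += BLOCK_SIZE;
        return b;
    }

    /* On memory pressure: free least-recently-used blocks
       until the cache is back under the given limit. */
    void cache_trim(size_t limit_bytes)
    {
        while (cache_bytes > limit_bytes && lru_tail) {
            struct cache_block *victim = lru_tail;
            lru_tail = victim->prev;
            if (lru_tail) lru_tail->next = NULL; else lru_head = NULL;
            free(victim->data);
            free(victim);
            cache_bytes -= BLOCK_SIZE;
        }
    }

The point being: shrinking the cache is only cheap if each block can be given back to the OS individually.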
Another point would be: will there still be a minimum and a maximum size, so the cache does not shrink below e.g. 10 GB but also not grow beyond 80 GB, and so on? Things to consider.
If you want to avoid allocating block by block, you'd be forced to allocate in bigger chunks, e.g. 1 GByte at a time. Then, when you decide to give some cache memory back, you may need to relocate the still-valid cached blocks within that 1 GB chunk to other chunks first, as otherwise the eviction would not reflect your caching strategy. That takes time and may slow down the system a bit, compared to "now".
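Again just a sketch of what that relocation would look like (plain C, hypothetical names, simplified linear search): before a 1 GB chunk can be returned to the OS, every slot still holding cached data has to be copied into free slots of other chunks.

    #include <stdlib.h>
    #include <string.h>
    #include <stdbool.h>

    #define CHUNK_BYTES (1024ULL * 1024 * 1024)       /* 1 GB chunk */
    #define SLOT_BYTES  (128 * 1024)                  /* cache block size */
    #define SLOTS_PER_CHUNK (CHUNK_BYTES / SLOT_BYTES)

    struct chunk {
        unsigned char *mem;                 /* one big 1 GB allocation */
        bool used[SLOTS_PER_CHUNK];         /* which slots hold cached data */
        size_t used_count;
    };

    /* Find a free slot in any other chunk (simplified linear search). */
    static unsigned char *find_free_slot(struct chunk *chunks, size_t n,
                                         struct chunk *skip)
    {
        for (size_t c = 0; c < n; c++) {
            if (&chunks[c] == skip || !chunks[c].mem) continue;
            for (size_t s = 0; s < SLOTS_PER_CHUNK; s++) {
                if (!chunks[c].used[s]) {
                    chunks[c].used[s] = true;
                    chunks[c].used_count++;
                    return chunks[c].mem + s * SLOT_BYTES;
                }
            }
        }
        return NULL;
    }

    /* To give a whole 1 GB chunk back to the OS, every still-valid cached
       block inside it first has to be copied somewhere else. */
    bool release_chunk(struct chunk *chunks, size_t n, struct chunk *victim)
    {
        for (size_t s = 0; s < SLOTS_PER_CHUNK; s++) {
            if (!victim->used[s]) continue;
            unsigned char *dst = find_free_slot(chunks, n, victim);
            if (!dst) return false;                     /* nowhere to move it */
            memcpy(dst, victim->mem + s * SLOT_BYTES, SLOT_BYTES);
            /* a real cache would also update its index to the new address */
            victim->used[s] = false;
            victim->used_count--;
        }
        free(victim->mem);
        victim->mem = NULL;
        return true;
    }

So every gigabyte you hand back can mean copying up to a gigabyte of cached data around first, plus updating the cache index.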
You may also quickly fragment RAM, leaving only small pieces available for 3rd party apps.
While I get the wish, I am not sure how easily it can be implemented so it works as desired without creating other issues. A cache that keeps copying cached data from RAM to RAM all the time, rather than doing its main job, may not be that efficient anymore.
Short-term solution: add RAM.
