Why is there still free cache?
Posted: Mon Apr 11, 2016 11:26 am
Hi Romex Software Team,
I wonder why we still see free L2 cache even after many days of running PrimoCache and reading and writing terabytes of data:


Does the cache algorithm automatically free up space after a given time?
I would have expected it to use all available cache blocks and only start overwriting the least-used blocks once the cache is completely full and new requests can't be handled any other way.
How can that be?
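To illustrate what I mean, here is a minimal sketch of the behaviour I would expect. It assumes plain LRU eviction, which is purely my assumption - I obviously don't know PrimoCache's real algorithm:

```python
from collections import OrderedDict

# Minimal sketch of the behaviour I would expect (plain LRU eviction).
# Purely my own assumption -- not PrimoCache's actual algorithm; it just
# illustrates "fill every free block first, evict only when full".
class ExpectedCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()              # block address -> cached data

    def access(self, addr, data=None):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)        # refresh: most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[addr] = data             # new block takes a free slot
        return self.capacity - len(self.blocks)  # free blocks only ever shrink
```

With that kind of behaviour the free counter should drop to zero and stay there, which is exactly what I'm not seeing.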
Only two possible causes come to my mind:
a) Even though we have written several TBytes of data and read even more, it seems we have only touched e.g. 600 GBytes of the disk in total, overwriting the same blocks over and over - so no more than e.g. 600 GBytes of cache is ever required, even though 2 TBytes of cache are available (see the small sketch below).
b) The cache algorithm drops blocks from the cache after a given time.
Given the stress we put on our project drive I can't really believe a), but b) also sounds strange from a programming perspective.
One of the reasons I don't really believe a) is that I run space defragmentation at least once a day on the entire volume, moving around 500+ GBytes/day on average. Defragmentation runs hell fast thanks to the SSD caching. I know it's causing extra wear on the SSD, but that's no problem for us.
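Still, just to show why a) would at least be possible in principle, here is a rough toy simulation. All numbers are made-up examples, not our real figures: if every access lands inside the same 600 GByte working set, a 2 TByte cache can never fill up, no matter how many terabytes of traffic pass through it.

```python
import random

# Toy simulation of hypothesis a), with made-up numbers:
# the cache can never hold more distinct blocks than the working set touches.
CACHE_BLOCKS       = 2000        # e.g. 2 TBytes of L2 cache, counted in 1-GByte units
WORKING_SET_BLOCKS = 600         # e.g. only 600 GBytes of the disk ever touched
TOTAL_ACCESSES     = 5_000_000   # terabytes of repeated reads and writes

cached = set()
for _ in range(TOTAL_ACCESSES):
    addr = random.randrange(WORKING_SET_BLOCKS)  # traffic stays inside the working set
    cached.add(addr)                             # block is cached on first touch

print(f"blocks in cache: {len(cached)} of {CACHE_BLOCKS}")
# -> at most 600 of 2000; the other 1400 blocks would stay "free" forever
```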
Any other explanation?
And why can the "Total Write (Done)" counter be bigger than "Total Write (Req)"?
I've seen this a bunch of times and wonder why...?