Automatic allocation and deallocation of L1 cache

Suggestions around PrimoCache
muf
Level 3
Posts: 12
Joined: Thu Apr 08, 2010 5:36 pm

Automatic allocation and deallocation of L1 cache

Post by muf »

Hi,

I've been using Primo Ramdisk for a long time now (since it was called VSuite), and I'm absolutely loving it. I remember suggesting TRIM-style dynamic deallocation for it, which was implemented a year later and still works beautifully.

So I'm trying out PrimoCache to see whether it will be worth buying once it's out of beta. I never liked big permanent memory allocations (which is why I love the deallocation of deleted files in Primo Ramdisk), so I was wondering whether PrimoCache could also make use of dynamic allocation and deallocation.

Dynamic allocation: sparsely allocate new cache blocks in memory as data gets read and written. Written data is obviously retained in cache in case it needs to be read soon after writing. So after boot-up, the cache gradually grows and grows as applications are accessing files.

Dynamic deallocation: when a user-configured critical low memory situation arises (for instance, I would configure "less than 2GB free memory" as trigger condition), least recently/frequently used (depending on cache policy) blocks are discarded from the read cache and freed, until the low free memory situation is resolved. Let the user configure how much to deallocate at once (for instance 512MB or 1GB). If there are not enough read cache blocks available to be freed, but there are pending write blocks, skip the deferred write timer, write them immediately and free them when they are committed to disk.
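To make the idea concrete, here is a rough Python sketch of the policy I'm describing. Everything in it is hypothetical: the block granularity, the `flush_to_disk` callback, and the caller that watches free memory and decides how much to shrink are stand-ins for illustration, not PrimoCache internals.

```python
from collections import OrderedDict

class DynamicCache:
    """Sketch of the suggested policy: grow on access, shrink on a
    low-memory trigger. Block contents and the flush target are stubs."""

    def __init__(self, flush_to_disk):
        self.blocks = OrderedDict()   # block_id -> (data, dirty); order = LRU
        self.flush_to_disk = flush_to_disk

    def access(self, block_id, data=None, write=False):
        # Dynamic allocation: a miss simply allocates a new cache block.
        if block_id in self.blocks:
            old_data, dirty = self.blocks.pop(block_id)
            data = data if write else old_data
            dirty = dirty or write
        else:
            dirty = write
        self.blocks[block_id] = (data, dirty)   # most recently used at end
        return data

    def shrink(self, blocks_to_free):
        """Dynamic deallocation: release least recently used blocks.
        Clean read-cache blocks go first; if that isn't enough, pending
        writes are committed immediately (skipping the defer timer) and
        then freed."""
        freed = 0
        # Pass 1: free clean (read-cache) blocks, oldest first.
        for bid in list(self.blocks):
            if freed >= blocks_to_free:
                return freed
            data, dirty = self.blocks[bid]
            if not dirty:
                del self.blocks[bid]
                freed += 1
        # Pass 2: not enough clean blocks, so commit pending writes now.
        for bid in list(self.blocks):
            if freed >= blocks_to_free:
                break
            data, dirty = self.blocks[bid]
            self.flush_to_disk(bid, data)
            del self.blocks[bid]
            freed += 1
        return freed
```

A monitoring loop would call `shrink()` whenever free system memory drops below the configured threshold (say, 2GB), releasing the configured chunk (say, 512MB worth of blocks) per pass.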

What do you think?
Bjameson
Level 6
Posts: 62
Joined: Mon Nov 08, 2010 12:00 pm

Re: Automatic allocation and deallocation of L1 cache

Post by Bjameson »

I can't speak for Romex, but I think the idea itself is sound. However, the default setting for the Windows page file is to grow and shrink dynamically. If Primo constantly allocates and releases RAM, Windows might respond by constantly adjusting the size of the page file, especially under low memory conditions. This could result in extra writes to the page file volume; on a spinning disk that means extra seeks and thus a slowdown.

This needs a lot of investigation. How will each version of Windows respond, and what trigger size prevents Windows from swapping? Will Primo act fast enough, that is, can it release enough memory before Windows sees the low memory condition? Having many tuning options is great, but it also gives users the opportunity to set things so wrong that they start blaming PrimoCache for slowing down their system.
Neglacio
Level 4
Posts: 32
Joined: Tue Jan 21, 2014 11:28 pm

Re: Automatic allocation and deallocation of L1 cache

Post by Neglacio »

I certainly miss an "advanced" options menu, like in uTorrent, Firefox's about:config, etc. An option like this could live there: if you're informed enough, you could enable or disable features that might harm or benefit your setup.
For example, I have my page file disabled because I have RAM in abundance. So I could use this RAM more efficiently by setting a minimum and maximum for the allocated memory.
I would like it to behave like this: it should always strive towards the minimum amount of RAM you set, but when you need it, like in a game, it could expand towards the maximum, or even all available memory, when it notices a lot of I/O.
muf
Level 3
Posts: 12
Joined: Thu Apr 08, 2010 5:36 pm

Re: Automatic allocation and deallocation of L1 cache

Post by muf »

Bjameson wrote:I can't speak for Romex, but I think the idea itself is sound. However, the default setting for the Windows page file is to grow and shrink dynamically. If Primo constantly allocates and releases RAM, Windows might respond by constantly adjusting the size of the page file, especially under low memory conditions. This could result in extra writes to the page file volume; on a spinning disk that means extra seeks and thus a slowdown.
Page file growth happens on demand; it is not anticipated ahead of time. Meaning: if a virtual memory allocation would fail because the page file is too small, the request is denied with the error "The paging file is too small for this operation to complete" (0x800705AF). Meanwhile, the page file is grown in the background, and a few moments later the same allocation will succeed. If you are riding that close to the limits of your commit charge, you shouldn't be using PrimoCache in the first place; you should buy more physical memory. Any idiot can slow their system to a crawl by using system tools incorrectly, and I don't think it is up to Romex to "safeguard" users from themselves.

That said, I think most users of this feature will be people like Neglacio and me; people who most of the time have more than 50% of their total installed physical memory free and would like to put it to good use. "Most of the time" is key here, as if I could guarantee that I would have 16GB free at all times, I would tell PrimoCache to use that (problem arising: I'm also using a similarly sized ramdisk). During heavy video editing, number crunching, you name it; this memory will be used and it will need to be made available to applications.
CrypEd
Level 6
Posts: 71
Joined: Mon Nov 11, 2013 11:04 am

Re: Automatic allocation and deallocation of L1 cache

Post by CrypEd »

muf wrote:Written data is obviously retained in cache in case it needs to be read soon after writing. So after boot-up, the cache gradually grows and grows as applications are accessing files.

Dynamic deallocation: when a user-configured critical low memory situation arises (for instance, I would configure "less than 2GB free memory" as trigger condition),

I'm a little confused about the "low memory trigger". What are you talking about: being low on memory inside PrimoCache, or low on system-wide RAM? I don't think it would be wise to let the cache size dynamically increase or decrease.
muf wrote:least recently/frequently used (depending on cache policy) blocks are discarded from the read cache and freed, until the low free memory situation is resolved.
What do you think actually happens when a cache that has grown and grown hits its full state? Exactly that: the least recently/frequently used blocks are discarded.

You already said it yourself: the read cache only grows larger the more space is available, which is why it scales up toward the open end of available RAM; only write tasks free up cache space. So basically, what I see you asking for is to make the cache consume more and more memory until it takes ALL system memory: have it detect available system memory and grow up to a certain offset below that physical limit (512MB or 1GB).

That said, I don't see much sense in it, because you can simply calculate that yourself and set up a fixed-size cache accordingly, at least until you upgrade your memory. What do you need an automatic mechanism for?

Problems with this "useful" automatic:
What about multiple caches? How would they compete for the same resources?
What about apps that need more RAM? Should my system report "not enough free memory", or start swapping to disk instead, just so data can be read into the cache?

IMHO, not a good suggestion. Improving the already existing algorithms that discard least frequently/recently used data when the cache is full, with the user maintaining a proper size for his/her cache tasks, is a more effective approach and runs into fewer problems. I don't see it being necessary: if you want the cache to use 1 GiB less than your system memory, then take your system memory, subtract 1 GiB, and fire it up. On the other hand, such dynamic/automatic cache-size allocation would set PrimoCache far back in stability, so why bother?

BTW: in case you didn't notice, there is no longer a cache policy setting (LRU/LFU-R); it has been replaced by an algorithm that takes both into account, and you cannot control it anymore. The only things left configurable are the strategy (read, write, or read/write), the size, and the defer timeout.
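For the curious: one textbook way to respect both recency and frequency at once is an LRFU-style decayed counter, sketched below in Python. This is purely illustrative; the class name and the decay scheme are my own, and PrimoCache's actual replacement algorithm is not public.

```python
class LRFUish:
    """Illustrative blend of recency and frequency scoring.
    Not PrimoCache's actual algorithm, which is not public."""

    def __init__(self, decay=0.5):
        self.decay = decay   # near 1.0 behaves like LFU, near 0 like LRU
        self.clock = 0       # logical time, advanced on every access
        self.score = {}      # block_id -> (value, last_access_tick)

    def touch(self, bid):
        # On each access, decay the old score toward zero, then add 1.
        self.clock += 1
        value, last = self.score.get(bid, (0.0, self.clock))
        value *= self.decay ** (self.clock - last)
        self.score[bid] = (value + 1.0, self.clock)

    def victim(self):
        # Evict the block whose decayed score is currently lowest.
        def current(item):
            bid, (value, last) = item
            return value * self.decay ** (self.clock - last)
        return min(self.score.items(), key=current)[0]
```

With `decay` near 1.0 the score approaches a pure access count (LFU); near 0, only the most recent access matters (LRU).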

muf wrote:That said, I think most users of this feature will be people like Neglacio and me; people who most of the time have more than 50% of their total installed physical memory free and would like to put it to good use. "Most of the time" is key here, as if I could guarantee that I would have 16GB free at all times, I would tell PrimoCache to use that (problem arising: I'm also using a similarly sized ramdisk). During heavy video editing, number crunching, you name it; this memory will be used and it will need to be made available to applications.
That's exactly the problem. If 512MB, 1GiB, or even 2GiB were enough headroom for "any" task, it would make sense. But Primo is equally unable to predict how much memory an application will use in the future, and when the application actually demands it, the memory already has to be free, otherwise the application cannot start. Windows can swap RAM to disk, sure, but such "automatic allocation" would grab that freed-up memory for the cache instantly, before any human could retry starting their application. The result would be a system that gets slower and slower over time, with Windows swapping heavily to disk to feed the greedy cache. And how should Primo trigger a size decrease for "any" app that demands RAM? Some people still don't recognize that this is block-layer software, not filesystem-layer software and not a system monitor; it cannot do that. Also, permanently re-allocating the whole L1 cache dynamically is a thoroughly risky task: any problem or crash there and the cache stops, which means the data is gone!

If you know that 1 GiB, 2 GiB, or 4 GiB of free RAM is enough for you, or if you think you'd better keep 8 of 16 GiB free, that's all your choice; nothing stops you, because the SIZE is freely configurable. And that's good!

Another good thing is that there is no problem running a 15GB cache on 16GB of available memory; just reduce the swap file to a small fixed size. If a special app needs more available RAM and fails to start, you are free to stop the cache, which gives you 15GB more available memory, and restart it later on. Sure, the cache contents are gone, but there's nothing we can do about that!

There's a golden IO-performance rule: thou shalt not swap! ^^
muf
Level 3
Posts: 12
Joined: Thu Apr 08, 2010 5:36 pm

Re: Automatic allocation and deallocation of L1 cache

Post by muf »

I don't get why people keep bringing up page files in this discussion. Yes, I want it to use all available memory, and no, I don't want to manually disable the cache when I want to use a memory-hungry application. My page file is 300MB, the minimum required for a kernel minidump. I only use physical RAM; if I run out, I run out. So, I need low priority caching to make way for high priority applications! It's not rocket science, guys! The user-configured threshold of course depends on what sort of applications the user needs to run. If the user has applications that allocate 1GB at once, the threshold would need to be at least 1GB. Again, not rocket science. If the user only runs applications that make small allocations at a time, a smaller threshold will suffice. I don't want to do manually what can just as easily be implemented in code!
InquiringMind
Level SS
Posts: 471
Joined: Wed Oct 06, 2010 11:10 pm

Re: Automatic allocation and deallocation of L1 cache

Post by InquiringMind »

muf wrote:...So, I need low priority caching to make way for high priority applications! It's not rocket science, guys!
Windows' built-in file cache does this pretty well, which suggests that your best option would be to remove PrimoCache.
muf
Level 3
Posts: 12
Joined: Thu Apr 08, 2010 5:36 pm

Re: Automatic allocation and deallocation of L1 cache

Post by muf »

InquiringMind wrote:
muf wrote:...So, I need low priority caching to make way for high priority applications! It's not rocket science, guys!
Windows' built-in file cache does this pretty well, which suggests that your best option would be to remove PrimoCache.
If Windows' file cache worked so well, then I'm sure I'd be able to capture 1080p RGB24 video in real time to my hard drive, which does 500MB/s sequential write. The main problem is that Windows caching is pretty broken, and PrimoCache is the only tool that seems able to get the job done, if I'm willing to permanently throw a fixed block of memory at it, which I'm not. And so we're back to what this thread was originally about: a helpful suggestion to the developers to implement useful functionality.