Is there any way to force L2 cache a specific directory?

Jaga
Contributor
Posts: 694
Joined: Sat Jan 25, 2014 1:11 am

Re: Is there any way to force L2 cache a specific directory?

Post by Jaga »

RobF99 wrote: Fri Jun 25, 2021 3:53 pm Ultimately, when you pre-cache an entire folder, you are wasting resources, because even a major, complex game or program with tens of thousands of files really only uses a small percentage of them. So you might be reading all the files of a large program for nothing.
And then again (playing devil's advocate): if you know your cache can handle the directory you want to force into it, but you can't do it because the feature was never implemented, what then?

I sometimes push my cache up to 52 GB in size on a system with 64 GB of RAM installed: large amounts of data that need a smaller runtime footprint. I'd like to be able to force a directory in, but the 7-Zip hashing trick doesn't always do the job, and PrimoCache doesn't have the ability.
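(For anyone who hasn't seen the hashing trick: computing checksums forces every file to be read in full, which pulls its blocks into the cache. Assuming a stock 7-Zip install on the PATH, and with the path below as a placeholder, something like this should do it:

    7z h -r "D:\Games\BigGame\*" > NUL

It works, but it burns CPU on the hashing itself, and as noted it doesn't always do the job.)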

Many people have requested this feature over the years, for various reasons. I see no good reason now to dismiss it because it "might not be effective for others".
TomB
Level 5
Posts: 47
Joined: Wed Jul 29, 2020 11:15 pm

Re: Is there any way to force L2 cache a specific directory?

Post by TomB »

Thanks for your input, Jaga. Your insights and suggestions regarding cache block sizes have helped me many times and I appreciate it.

In this case, however, I think the reason given by RobF99, and the main point with which I was agreeing, was this:
RobF99 wrote: Fri Jun 25, 2021 3:53 pm I feel that if we have that capability in the program and mess with it too much, we will find that the program does not perform the way we want it to.
And not so much:
Jaga wrote: Sat Jun 26, 2021 9:17 am Many people have requested this feature over the years, for various reasons. I see no good reason now to dismiss it because it "might not be effective for others".
Now, this may or may not be a valid concern, and I'm sure that Romex has a better idea than I do of whether or not it could be a problem, but I just wanted to at least be clear about what my concern is.

Tom
RobF99
Level 8
Posts: 130
Joined: Fri Sep 19, 2014 5:14 am

Re: Is there any way to force L2 cache a specific directory?

Post by RobF99 »

Jaga wrote: Sat Jun 26, 2021 9:17 am I sometimes push my cache up to 52 GB in size on a system with 64 GB of RAM installed: large amounts of data that need a smaller runtime footprint. I'd like to be able to force a directory in, but the 7-Zip hashing trick doesn't always do the job, and PrimoCache doesn't have the ability.
You make good points, but I believe the program works best as an agnostic, block-based cache. It lets the access statistics decide what stays cached, which they do better than a human hand-picking files would. :D I am just saying they shouldn't change the nature of the program too much. We might find that people overdo their precaching and evict data that legitimately should be cached.

FYI, if the zip hashing doesn't do the job for you, I made an earlier post here linking to a utility called readfile. I use it to do any precaching I want. It reads every file in full and doesn't seem to skip a beat, except for files that are in use.
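(If you would rather not use a third-party binary, a plain cmd one-liner does the same single-threaded, read-only pass. Run it from the folder you want to precache; files that are in use will just produce an error and be skipped:

    for /r %f in (*) do @type "%f" > NUL

Double the percent signs, %%f, if you put it in a batch file.)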

Incidentally, my system has 32 GB and I dedicate 24 GB of it to PrimoCache L1. Even though I sometimes do some pretty heavy lifting, I rarely need more than 8 GB. If I do, then since my pagefile is also L1-cached, I get the benefit of all the paging going through the L1 cache anyway. A warning on this one, though: I have sometimes had blue screens doing it. Incidentally, I can also vastly speed up Photoshop by allocating only 4 GB of RAM to it and ensuring that the scratch-file drive is L1 write-cached. Photoshop doesn't take long to use up even a 16 GB allocation, at which point it falls back to the scratch file anyway, so it runs faster with the configuration I use.

L1 is definitely a force to be reckoned with.
Jaga
Contributor
Posts: 694
Joined: Sat Jan 25, 2014 1:11 am

Re: Is there any way to force L2 cache a specific directory?

Post by Jaga »

RobF99 wrote: Sat Jul 10, 2021 2:56 pm FYI, if the zip hashing doesn't do the job for you, I made an earlier post here linking to a utility called readfile. I use it to do any precaching I want. It reads every file in full and doesn't seem to skip a beat, except for files that are in use.
I'll give it a look, thanks for mentioning it.

Edit: it seems to work fairly well, certainly more reliable than hashing directories. It took quite a while to read ~18 GB of data from an NVMe, though; it would be really effective if it were multi-threaded. But it's good as a stop-gap solution.
RobF99
Level 8
Posts: 130
Joined: Fri Sep 19, 2014 5:14 am

Re: Is there any way to force L2 cache a specific directory?

Post by RobF99 »

Jaga wrote: Edit: it seems to work fairly well, certainly more reliable than hashing directories. It took quite a while to read ~18 GB of data from an NVMe, though; it would be really effective if it were multi-threaded. But it's good as a stop-gap solution.
It has multi-threading. Run readfile with no options and it will show you switches for quite a few interesting options: multi-threading, overlapped I/O, and it even does hashing.

If you code at all, it is open source, and you can modify the code yourself. www.winimage.com/readfile.htm
RobF99
Level 8
Posts: 130
Joined: Fri Sep 19, 2014 5:14 am

Re: Is there any way to force L2 cache a specific directory?

Post by RobF99 »

Another useful tool, which I am the brain behind (though I don't code it), is UltimateDefrag. It lets you sort files by size, so you can group your small files (the main source of random I/O) together on your spinner and benefit from a larger block size in PrimoCache, which will then cache adjacent small files within each block. PrimoCache also makes any defragging blaze along very fast, since it all happens in L1, with deferred writes flushing according to the latency you have set.

I created a document here to show how synergistic these two programs are: https://www.disktrix.com/ultimatedefrag ... cache.html

You could also use Robocopy in a batch file to copy only files below a certain size to another drive, to bring them into L1 and L2. Even on a system drive with a lot of files, it only takes about 3 to 5 minutes for all files < 32 KB. Be aware that deleting the copies from the target drive doesn't come easy, since the nested Application Data folders (junction points kept by Windows for compatibility with some legacy programs) often create paths too long to delete normally. There is a way to delete them with WinRAR; I will find the instructions and post them here.
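(A minimal sketch of that batch approach; the destination and the 32 KB cutoff are placeholders for whatever suits your setup. /XJ is worth adding, since skipping junction points avoids the endlessly nested Application Data folders in the first place:

    robocopy C:\ X:\CacheWarm /S /MAX:32768 /XJ /R:0 /W:0 /NFL /NDL

/MAX:32768 copies only files of 32 KB or less, /R:0 /W:0 skips locked files instead of retrying, and /NFL /NDL just quiet down the output.)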
GoldenSun3DS
Level 3
Posts: 15
Joined: Wed Mar 09, 2022 9:52 pm

Re: Is there any way to force L2 cache a specific directory?

Post by GoldenSun3DS »

RobF99 wrote: Fri Jun 25, 2021 3:53 pm Even though I like to cache specific directories into L2 cache, I do not feel that Romex should change the nature of the program too much to support specific directories or programs.

The program works best because it is agnostic to the software that is running and caches blocks based purely on usage. I believe this feature would start to complicate the program and its programming, and move it away from its ultimate core nature: being agnostic to specific data and file structures, except maybe boot files.

There are ways for us to cache directories, such as the readfile and 7-Zip tips below. Ultimately, when you pre-cache an entire folder, you are wasting resources, because even a major, complex game or program with tens of thousands of files really only uses a small percentage of them. So you might be reading all the files of a large program for nothing.

E.g. my Windows folder holds 39 GB and 127,000 files. If I precached all of that, it would be a big waste of L2 SSD writes. C:\Windows\Prefetch\Layout.ini will show you approximately which Windows files are used the most, and you will see it might only be around 3,000 files; we can also see that at boot-up PrimoCache has usually read at most around 1 to 1.2 GB by the time the system is fully booted. There is no need to precache 39 GB of Windows directory. The same principle applies to most programs you run.

I feel that if we have that capability in the program and mess with it too much, we will find that the program does not perform the way we want it to.

That is just my 2 cents worth.
Agreed, although one exception is when you first set up the cache: it would be nice to brute-force caching of files you know will need to be read, instead of having to wait through the slow loads first. An example is a big Steam game that has long load times on an HDD. You don't want to have to sit through those load times just to make them faster the next time.

You want the load times to be fast from the start, and some inefficiency from loading files that don't need to be fast, like cutscene videos, is fine, since they'd get pushed off the cache eventually.

But definitely, beyond that scenario, you should let it run on its own building the cache based on usage.
barzattacks
Level 1
Posts: 4
Joined: Thu Dec 01, 2022 8:59 am

Re: Is there any way to force L2 cache a specific directory?

Post by barzattacks »

I agree with this. I found this thread because I have an 8 TB drive with a 1 TB NVMe as L2 and about 10 GB currently set as L1. I have 32 GB of RAM in my system and I play a lot of games. I want to make sure I don't starve my games of memory, but I also want the best L1 cache. Should I assume I only need to load something slow once, and that afterwards it will be faster since it will be pulled from L2? I currently have my L1 defer-write set to 60 seconds; I debated Infinite, but then wondered whether it would still write anything to L2.
Axel Mertes
Level 9
Posts: 184
Joined: Thu Feb 03, 2011 3:22 pm

Re: Is there any way to force L2 cache a specific directory?

Post by Axel Mertes »

RobF99 wrote: Fri Mar 19, 2021 10:20 am Here is a file you can download called readfile.exe. It just reads files and is used for testing disk speeds. The original build of this program from http://www.winimage.com/readfile.htm has a bug when recursing subdirectories. I had a programmer fix it for me, since the program is open source.

You can get my modified build from here: https://www.dropbox.com/s/ysopra7ria9qp ... e.zip?dl=0

Just put it in the folder you want to precache and run readfile.exe *.* /s. I use it all the time to precache certain folders.
Simple, but effective!
Axel Mertes
Level 9
Posts: 184
Joined: Thu Feb 03, 2011 3:22 pm

Re: Is there any way to force L2 cache a specific directory?

Post by Axel Mertes »

RobF99 wrote: Sat Jul 10, 2021 9:43 pm Another useful tool, which I am the brain behind (though I don't code it), is UltimateDefrag. It lets you sort files by size, so you can group your small files (the main source of random I/O) together on your spinner and benefit from a larger block size in PrimoCache, which will then cache adjacent small files within each block. PrimoCache also makes any defragging blaze along very fast, since it all happens in L1, with deferred writes flushing according to the latency you have set.

I created a document here to show how synergistic these two programs are: https://www.disktrix.com/ultimatedefrag ... cache.html

You could also use Robocopy in a batch file to copy only files below a certain size to another drive, to bring them into L1 and L2. Even on a system drive with a lot of files, it only takes about 3 to 5 minutes for all files < 32 KB. Be aware that deleting the copies from the target drive doesn't come easy, since the nested Application Data folders (junction points kept by Windows for compatibility with some legacy programs) often create paths too long to delete normally. There is a way to delete them with WinRAR; I will find the instructions and post them here.
Very good to know.

I was advocating defragging large volumes before FancyCache/PrimoCache even existed, and I have used it all the time alongside various defrag tools. I can guarantee that this combination is a game changer in terms of defrag speed and overall system performance.

I will definitely have a look at this one in particular; knowing the developers is always the first choice! Keep going!

What many people are not aware of is that disaster recovery on a defragmented drive or RAID is far easier, more reliable, and safer than on a heavily fragmented one. I think I have explained this here in the forum every now and then, but I can only keep reminding people of it.

If your files are written to a heavily fragmented disk, it becomes very slow and inefficient, but even worse, it becomes dangerous, as a file may essentially be spread over almost the entire disk in the worst cases. If you, for instance, accidentally reformat a drive, overwriting the first 3% might be enough to destroy almost all files on the disk "at once". Furthermore, trying to recover such a disk will turn up many thousands of old, unused file entries from files that existed earlier. You might easily end up needing 10 TB of disk space to recover from a 1 TB disk. And then you have to manually sort out the garbage, only to find that just 50 GB was safely rescued and the rest is junk.

Using regular defragmentation will keep your drives clean, sequential and performant.

In that context I also recommend the use of Condusiv Undelete / Undelete Server, which moves "deleted" files to a special folder before finally deleting them. This way you do not open up fragmentation holes when deleting files. You can then schedule something like a weekly run that actually frees files deleted more than, e.g., 7 days ago, releasing the disk space (and the fragmentation) all at once. Do a defrag right after that run and you'll be very fine and safe. Maybe it would be possible to recreate such functionality at lower cost or as open source. It's kind of a missing link.
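(The scheduled purge half of that is easy to sketch with built-in tools. Assuming deleted files first get moved into a holding folder, with X:\Held as a placeholder, a weekly task could run:

    forfiles /P "X:\Held" /S /D -7 /C "cmd /c del @path"

forfiles selects files last modified 7 or more days ago and deletes them. The hard part, intercepting the deletes in the first place, is what the commercial product does, presumably with a filter driver.)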

Just saying.

My recommendation of having at least twice as much SSD cache as the amount of data you typically transact daily will keep you working from fast SSD almost all the time, while, when you really do need to access the RAID or HDD, you'll get full performance thanks to defragmented sequential reads. Win-win-win. I have easily outperformed high-end storage systems this way, at comparatively no cost and with higher safety.