What the people are asking for + a few others.

Report bugs or suggestions around FancyCache
Level 3
Posts: 10
Joined: Wed Jul 27, 2011 11:51 pm

Re: What the people are asking for + a few others.

Post by laferrierejc »

Just bought an OCZ Synapse 128GB... too bad FancyCache didn't come out with their persistent cache first; I could have saved some money and just bought a solution from them (I've been waiting for over a year). The market is there...

I had a theory that an SSD should hold all small files (say <=128KB), while large files stay on the slower storage medium. (I had a batch file that did just this using symlinks.) Small files are what eat up most of the seek time on a hard drive.
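That small-file/symlink trick can be sketched in a few lines of Python; this is a toy version of the batch file described above, with the 128KB threshold and the directory names as placeholders, not a real migration tool:

```python
import os
import shutil

SMALL_FILE_LIMIT = 128 * 1024  # the <=128KB threshold from the post

def migrate_small_files(src_root, ssd_root, limit=SMALL_FILE_LIMIT):
    """Move every file <= limit bytes to the SSD and leave a symlink behind,
    so reads through the original path hit the SSD copy."""
    moved = []
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) or os.path.getsize(path) > limit:
                continue  # skip already-migrated links and large files
            rel = os.path.relpath(path, src_root)
            target = os.path.join(ssd_root, rel)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.move(path, target)
            os.symlink(target, path)  # original path now points at the SSD copy
            moved.append(rel)
    return moved
```

On Windows, creating symlinks additionally requires administrator rights or developer mode; the batch-file equivalent would use `mklink`.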

After reading that bcache doesn't cache sequential reads...

I was thinking caching could take it a step further: for larger files (over 128KB), store the first 128KB on the cache, so the hard drive can spin up and grab the remainder of the file in the meantime (i.e., hide the ~7ms access time).

I'm not sure how much of a performance improvement that would make, and the 128KB size might actually need to be higher; it might need to store up to the first 1MB to bridge the ~7ms gap between NAND and HDD access times.
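A toy model of that "first chunk on the SSD" read path might look like this. The class and the 128KB head size are illustrative only, not how FancyCache (or bcache) actually works:

```python
HEAD_SIZE = 128 * 1024  # leading bytes kept on fast storage; might need ~1MB

class HeadCache:
    """Toy model: keep only the first HEAD_SIZE bytes of each file on fast
    storage, so the head is served while the platter spins up and seeks."""
    def __init__(self):
        self.heads = {}  # path -> leading bytes

    def store(self, path, data):
        self.heads[path] = data[:HEAD_SIZE]

    def read(self, path, slow_read):
        head = self.heads.get(path)
        if head is None:
            return slow_read(path, 0)   # full miss: everything from the HDD
        # the head comes back instantly; the HDD supplies the remainder
        tail = slow_read(path, len(head))
        return head + tail
```

In a real implementation the `slow_read` for the tail would be issued concurrently with serving the head, which is exactly how the ~7ms would get hidden.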

Some stats...

On my PC, the Windows folder holds 28.8GB in about 80,000 files, which comes out to an average file size of roughly 360KB.

Doing a simple *.* <=128KB search turns up 4,028 files. I didn't select them all to get a total size, but let's say they average 64KB... hell, even at 128KB each, that's only:

64KB average = 251MB cached for 28.8GB

128KB average = 503MB cached for 28.8GB.
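A quick sanity check of the arithmetic above (decimal KB/GB for the average, binary KiB/MiB for the cache-size estimates, to match the figures in the post):

```python
# Average file size in the Windows folder.
avg = 28.8e9 / 80_000                   # bytes per file
kb_avg = avg / 1e3                      # ~360 KB average

# How much SSD the <=128KB files would need, for two size guesses.
KiB, MiB = 1024, 1024 ** 2
small_files = 4028
mb_64 = small_files * 64 * KiB / MiB    # ~252 MiB if they average 64 KiB
mb_128 = small_files * 128 * KiB / MiB  # ~503 MiB even at 128 KiB each
print(kb_avg, mb_64, mb_128)
```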

So... there's a lot to play with here. Keep the small files on the SSD and the large files on the platter: use the SSD mainly for the small reads/writes it excels at, and let the platter do all the large reads/writes.

I know there's a lot more involved in caching technology that I'm not aware of, such as FIFO policies, queues, and all sorts of other things, but this just seemed like a simple approach that could be built into an SSD caching scheme that still wants the HDD to do some of the work for maximum throughput.

Pre-emptive caching
Cache small files written during write-back caching.

For example: on a fresh install, a bunch of files are written for the first time; they've never been read, just written.

Cache the small ones (without evicting current cache entries that have higher priority), just in case a file is re-read later, so there's no seek-time penalty.
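A minimal sketch of that "pin small write-back files unless higher-priority data is cached" idea. The sizes, the priority scheme, and the class itself are made up for illustration:

```python
SMALL = 128 * 1024  # only files up to this size are worth pinning

class WriteCache:
    """Toy write-back pinning: keep small newly written files in cache so a
    later re-read skips the seek, but never evict higher-priority entries."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.entries = {}  # path -> (size, priority)

    def on_write(self, path, size, priority=0):
        if size > SMALL:
            return False                      # large files go straight to disk
        while self.used + size > self.capacity:
            # evict only entries at the same or lower priority
            victim = min((p for p in self.entries
                          if self.entries[p][1] <= priority),
                         key=lambda p: self.entries[p][1], default=None)
            if victim is None:
                return False                  # only higher-priority data cached
            self.used -= self.entries.pop(victim)[0]
        self.entries[path] = (size, priority)
        self.used += size
        return True
```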
Level 1
Posts: 2
Joined: Sat Dec 29, 2012 7:41 pm

Re: What the people are asking for + a few others.

Post by mememe13 »

Hi all, I'm new to the forum; just made this account. That said, I've been using the FC/V beta and have gotten quite good results on a couple of PCs.
I'm now trying it on my PC with a 192MB cache for the boot partition. I know it's a small cache, but 1.5GB of DDR1 is my current option. It does get a >30% hit rate, though.
I would also like to show my support for these specific FRs:
Mradr wrote: 3) Pooled dynamic priority/percent-based L1 - Some people would like to pool their RAM so multiple drives can share the L1 without wasting needless RAM per drive/volume. There are two levels of dynamic caching going on here: one at the pool level and one at the drive level. The pool level has very little user control, as the system/FC handles that one; you just set its max size and the system takes care of how it grows or shrinks as needed. At the drive/volume level you would be able to set the dynamic min and max use of the pool. This would allow drives/volumes to take on more RAM when needed, while still guaranteeing each its minimum share of the pooled cache RAM. For example:
L1 = 2048MB (2GB) - global RAM pool
Drive C: L1 = Min: 50%, Max: 100% Priority 1 (OS)
Drive F: L1 = Min: 0%, Max: 50% Priority 2 (Data)

8) Some of the advanced features should be hidden in a sub-area. These are features that could cause major problems when turned on. By giving them their own place, with a warning the user has to agree to, the user is at least made aware that the following features are risky and could cause system stability issues such as data loss, data corruption, and so on.

10) It would be nice to have an option to release the RAM cache once a program has been loaded into RAM. That way memory can be freed after the load, leaving more RAM for the program itself. Another option is to let that freed space cache another program. Once the program closes, the cache RAM is returned for faster reload times.

11) Some would like to save their settings as a profile so they can load them on another PC. This could be as simple as creating a load/save setup: save the settings out to a file (I'm guessing you pretty much do this anyway) and on load just read the file back in.
Point 3 matters to me because I have split my Windows install across different partitions (Program Files on C:, docs on D:, swap on E:, etc.), and fixed per-volume cache sizes wouldn't make sense in that case.
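The min/max pooling scheme from point 3 could be sketched like this. It's a simplified one-shot split (FC's real allocator would presumably grow and shrink shares dynamically), with the tuple layout invented for the example:

```python
def allocate_pool(pool_mb, volumes):
    """Hand out a shared L1 pool: every volume first gets its guaranteed
    minimum, then the remaining RAM goes out in priority order up to each max.
    `volumes` is a list of (name, min_pct, max_pct, priority) tuples."""
    grants = {name: pool_mb * mn // 100 for name, mn, _mx, _p in volumes}
    free = pool_mb - sum(grants.values())
    for name, _mn, mx, _p in sorted(volumes, key=lambda v: v[3]):
        cap = pool_mb * mx // 100
        extra = min(free, cap - grants[name])
        grants[name] += extra
        free -= extra
    return grants
```

With the quoted example (C: min 50%, max 100%, priority 1; F: min 0%, max 50%, priority 2), drive C: would take the whole 2048MB pool; F: only gets RAM once C:'s max is below 100%.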

In addition to that, some settings similar to CacheSet or O&O CleverCache could prove useful, because of the possible data duplication between the system cache and FC. Admittedly, reducing the system cache only hides the problem (a more integrated solution would do, but... M$).
Also, my only question so far: does FC do any prospective loading algorithm? Similar to a CPU cache, for example, where it loads a whole cache line (or in this case maybe part of the file, or some other technique). That should really push its performance boost. This is also related to the preload-into-cache FR I saw in another thread.
And speaking of that, maybe add a list of locations to be preloaded at startup? Or via the directory context menu.
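The cache-line-style prefetch idea could look something like this toy read-ahead cache. The 64KB line size, the class, and the `backing` interface are all assumptions for illustration:

```python
LINE = 64 * 1024  # prefetch granularity, like a (very large) cache line

class ReadAheadCache:
    """On a miss, pull in the whole aligned LINE-sized block around the
    requested offset, so nearby sequential reads hit in cache."""
    def __init__(self, backing):
        self.backing = backing      # bytes-like stand-in for the disk
        self.lines = {}             # line index -> cached bytes

    def read(self, offset, length):
        out = bytearray()
        pos = offset
        while pos < offset + length:
            idx = pos // LINE
            if idx not in self.lines:               # miss: fetch the whole line
                start = idx * LINE
                self.lines[idx] = self.backing[start:start + LINE]
            line = self.lines[idx]
            lo = pos - idx * LINE
            take = min(LINE - lo, offset + length - pos)
            out += line[lo:lo + take]
            pos += take
        return bytes(out)
```

A request that touches any byte of a line brings in the full 64KB, so a follow-up read of the adjacent bytes is served from cache instead of the disk.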

// not a native English speaker, sorry for any mistakes
Thanks for your time.

L.E.: the graphs under `performance statistics` could use more detailed information; right now they mostly show just a line. The timescale could be changed (as someone else said in another thread) or removed altogether.