Hangup with FC 0.5 Volumes

Report bugs or suggestions around FancyCache
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Hangup with FC 0.5 Volumes

Post by Axel Mertes »

Hi!

While doing some intense tests today I've already seen a total system hangup three times. No mouse, no task manager, no disk activity. Leaving the machine alone for more than 20 minutes did not change anything.

I used FC 0.5 beta for Volumes and cached only the C: volume. Sometimes I used only the first-level cache on a system with 8 GB RAM installed (Win 7 x64), with 4 GB of RAM used as first-level cache. Sometimes I also used a second-level cache on a 32 GB Intel SLC SSD. As it happened both with and without the second-level cache, the problem is probably in the first-level cache.

I also played with very short write-defer times (1 second) as well as long ones (1000 seconds), but saw no difference in the crash behavior.

Is there a neat way to document a crash like this?
I don't actually see a memory dump being written, though...

I was rendering DPX sequences in a compositing app for film production: rather large files (8 to 12 MB each) and many of them (thousands).
The point is that we noticed the software writes these files line by line, which means a huge number of I/Os carrying very little data each (around 11 KB per write). Using the Write Defer option actually increased performance drastically, until the cache filled up.
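
Just to illustrate the write pattern (a rough sketch in Python, not the actual application code; the sizes and names are made up for illustration):

# Rough sketch of the write pattern we see (illustrative only, not the app's code).
LINE_BYTES  = 11 * 1024           # ~11 KB per write, as seen in the I/O statistics
CHUNK_BYTES = 4 * 1024 * 1024     # example size for a coalesced write

def write_linewise(path, lines):
    # What the compositing app appears to do: one tiny I/O per image line,
    # so a ~10 MB DPX frame becomes roughly a thousand small writes.
    with open(path, "wb") as f:
        for line in lines:
            f.write(line)

def write_deferred(path, lines):
    # What a deferred-write cache effectively does for us: collect the small
    # writes in RAM and flush them as a few large sequential writes.
    buf = bytearray()
    with open(path, "wb") as f:
        for line in lines:
            buf += line
            if len(buf) >= CHUNK_BYTES:
                f.write(bytes(buf))
                buf.clear()
        if buf:
            f.write(bytes(buf))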

Another question that arose in that context:
When the cache is full and we use deferred writes, what will happen?
Will it then flush the cache in large chunks, with the oldest data being sent out first and the new data taking its place in the cache?
Or will it bypass the cache in such situations?

We saw a drastic slowdown once the cache was full, and the total processing time actually ended up longer than when writing directly to disk.
One example:
Writing directly to disk, we get something like 200 MB/s.
With FC and deferred writes enabled we get up to 500 MB/s, but then drop to about 10 MB/s once the cache is full. The RAID can sustain something like 600 to 800 MB/s... It's the app's massive I/O count that causes the slow writes in the first place, so writing to RAM instead really makes sense, as the initial performance shows. It's just the way the data is offloaded from the RAM cache that seems to be an issue.
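
To put rough numbers on it (a back-of-envelope estimate from the figures above, assuming the full 4 GB first-level cache is available for deferred writes):

# Back-of-envelope estimate from the numbers above (assuming the full
# 4 GB first-level cache is available as write buffer).
cache_mb   = 4 * 1024   # first-level cache size
ingest_mbs = 500        # rate while the cache absorbs the writes
drain_mbs  = 10         # rate actually reaching the RAID with the tiny I/Os

fill_seconds = cache_mb / (ingest_mbs - drain_mbs)
print(f"cache full after ~{fill_seconds:.0f} s")  # roughly 8 seconds

# After that, the sustained rate can only be the drain rate (~10 MB/s),
# which is why the whole job ends up slower than writing directly to disk.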

One thought here:
Could we trigger the deferred writes not by "seconds" but based on the cache fill level instead?
For example, once a larger chunk has accumulated in the write cache, it would start writing that off to the disk.
In a perfect world it would find out the best combination of average transfer rate and block size for the target drive. Those values should then be reflected in the RAM cache settings, so the data is offloaded "in time" and "at the highest possible sustained block transfer rate" to the target drive.
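
Something along these lines, as a rough sketch of the policy I have in mind (the names and thresholds are made up for illustration, not FancyCache internals):

# Rough sketch of a fill-level-triggered flush policy (illustrative only;
# the names and thresholds are made up, not FancyCache internals).
FLUSH_HIGH_WATERMARK = 0.5               # start flushing at 50% dirty
FLUSH_CHUNK_BYTES    = 64 * 1024 * 1024  # flush in large sequential chunks

def maybe_flush(write_cache, target_disk):
    # Trigger on how full the write cache is, not on a fixed defer time.
    if write_cache.dirty_bytes / write_cache.capacity_bytes < FLUSH_HIGH_WATERMARK:
        return
    while write_cache.dirty_bytes > 0:
        # Take the oldest dirty data first and merge it into one big write
        # that matches the target drive's best sustained block size.
        chunk = write_cache.take_oldest_dirty(FLUSH_CHUNK_BYTES)
        target_disk.write(chunk)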

Honestly, we are not so much after increasing the lifetime of SSDs in this scenario; there aren't even any SSDs involved. Instead it's meant to accelerate disk I/O as much as possible, even from non-optimized apps. We can see that it works. It just occasionally hangs, and the cache algorithm may need a different approach, as suggested above. It's simply a new way of using it :)

Regards,
Axel
Support
Support Team
Posts: 3628
Joined: Sun Dec 21, 2008 2:42 am

Re: Hangup with FC 0.5 Volumes

Post by Support »

Regarding the slowdown issue, can you give us the details below?
1. CPU & motherboard
2. RAID type, size & other settings (e.g. stripe size)
3. RAID driver & version
4. the model/type of the disks the RAID consists of
5. the capacity of volume C:
6. the cluster size of volume C:
7. FancyCache settings (block size, caching strategy, etc.)

thanks.