
Re: What the people are asking for + a few others.

Posted: Wed May 30, 2012 9:50 pm
by JimF
mabellon wrote: I don't think I would trust deferred writes.
I am not sure whether you are making a general comment, or referring to a specific case. But do you think that in general a Ramdisk is safer than a deferred write cache?

Re: What the people are asking for + a few others.

Posted: Thu May 31, 2012 2:19 am
by Support
@manus: Thank you. Yes, we've already read your suggestions. But it is not related to the "persistent cache", so I didn't mention it in my last post. The approach you suggested mostly affects the caching algorithm. Of course, with this approach, we can preload the frequently used data in the background, as Windows Prefetch does. :)

Re: What the people are asking for + a few others.

Posted: Thu May 31, 2012 9:53 am
by mabellon
JimF wrote:I am not sure whether you are making a general comment, or referring to a specific case. But do you think that in general a Ramdisk is safer than a deferred write cache?
I'm generally referring to using write-back caching (deferred writes), but specifically when using RAM/L1. No, it's no worse than a RAM disk that can be saved/flushed to disk on shutdown. In the event of power loss, your volatile RAM is obviously lost, so battery backup is essential. However, that alone is not enough. Since the caching is implemented in software, a bug in FC, or anywhere in the Windows kernel, could lead to disk corruption. If you BSOD, the pending writes are never flushed to the disk. It's no different than if Windows crashed before you saved your RAM disk.

This is different from the write caching done in hardware, such as on large RAID controllers. You still need battery backup there, but once Windows writes to the disk, the hardware guarantees it's safe. If you believe your system is 100% stable, then there's nothing to worry about with deferred writes. Personally, I haven't found a use case where write performance was so important that I would risk data integrity.
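To make the failure mode concrete, here's a minimal Python sketch of a deferred-write (write-back) cache; the class and names are purely illustrative, not FC's actual implementation:

```python
# Toy write-back (deferred write) cache: writes land in RAM first and only
# reach the backing store on flush. Illustrative only, not FC's real design.
class WriteBackCache:
    def __init__(self, backing: dict):
        self.backing = backing      # stands in for the physical disk
        self.dirty = {}             # deferred writes, held only in volatile RAM

    def write(self, key, value):
        self.dirty[key] = value     # fast: nothing touches the disk yet

    def flush(self):
        self.backing.update(self.dirty)   # the risky window ends here
        self.dirty.clear()

disk = {}
cache = WriteBackCache(disk)
cache.write("a", 1)
# A BSOD or power loss at this point discards cache.dirty; "a" never hits disk.
assert "a" not in disk
cache.flush()
assert disk["a"] == 1
```

The window between `write()` and `flush()` is exactly where a crash loses data, which is why a software bug anywhere in the kernel is as dangerous as a power cut.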

In theory, if an SSD/L2 is used as the write cache, its non-volatile nature would much improve the situation. In the event of a power failure or crash, after reboot the system could notice writes in L2 that had not yet been flushed to disk. However, given that L2 isn't persistent in FancyCache yet, I highly doubt anything like this is done for writes. I could certainly be wrong.
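A rough sketch of what such crash recovery could look like: pending writes are journaled durably on the SSD-backed log and replayed after an unclean reboot. The log file name, record format, and function names here are all invented for illustration.

```python
import json
import os
import tempfile

# Hypothetical stand-in for a log kept on the non-volatile L2 SSD.
L2_LOG = os.path.join(tempfile.gettempdir(), "l2_pending.json")
if os.path.exists(L2_LOG):
    os.remove(L2_LOG)               # start from a clean slate for the demo

def _load() -> dict:
    try:
        with open(L2_LOG) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def defer_write(block: int, data: str) -> None:
    """Record a not-yet-flushed write durably in the SSD-backed log."""
    pending = _load()
    pending[str(block)] = data
    with open(L2_LOG, "w") as f:
        json.dump(pending, f)

def replay_after_boot(disk: dict) -> int:
    """On startup, flush any writes that never reached the disk."""
    pending = _load()
    for block, data in pending.items():
        disk[int(block)] = data
    if pending:
        os.remove(L2_LOG)           # pending writes are now on disk
    return len(pending)

defer_write(7, "payload")
disk = {}                           # simulate state after an unclean reboot
assert replay_after_boot(disk) == 1
assert disk[7] == "payload"
```

Because the log survives power loss, the deferred write is recoverable; with an L1/RAM cache the equivalent state simply vanishes.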

Re: What the people are asking for + a few others.

Posted: Thu May 31, 2012 6:41 pm
by Mradr
mabellon wrote:In theory, if an SSD/L2 is used as the write cache, its non-volatile nature would much improve the situation. In the event of a power failure or crash, after reboot the system could notice writes in L2 that had not yet been flushed to disk. However, given that L2 isn't persistent in FancyCache yet, I highly doubt anything like this is done for writes. I could certainly be wrong.
You would be right. That is why I was pushing for the ability to have it on L2, as long as it's being used on a non-volatile drive. It would allow for "safer, faster" write speeds, but without the high risk you take with L1 or a RAM disk.

Even on L1, a bad error on the read cycle could also cause damage if it is written back out to the disk, if you really think about it. So no matter how you look at it, there is always going to be some risk in using FC; one way just carries more risk than the other.


Also, I've been looking into how to program against the NTFS journal system. I can't find much on it, and what little I did find asks for permissions at the system level (which is really scary if you think about it). We might be running into a programming issue now xD

Re: What the people are asking for + a few others.

Posted: Sun Jun 03, 2012 12:19 am
by kalua
mabellon wrote:In theory, if an SSD/L2 is used as the write cache, its non-volatile nature would much improve the situation. In the event of a power failure or crash, after reboot the system could notice writes in L2 that had not yet been flushed to disk.
Dangerous, since the boot process before FC is loaded could change the disk, and FC flushing the stale buffers from L2 could then corrupt it.

Re: What the people are asking for + a few others.

Posted: Mon Jun 04, 2012 5:22 pm
by Mradr
kalua wrote:
In theory, if an SSD/L2 is used as the write cache, its non-volatile nature would much improve the situation. In the event of a power failure or crash, after reboot the system could notice writes in L2 that had not yet been flushed to disk.
Dangerous, since the boot process before FC is loaded could change the disk, and FC flushing the stale buffers from L2 could then corrupt it.
In theory you would know if the boot process changed something, because of the NTFS journals. Using that system, the only real danger comes from write collisions (writes that happen at the same time).
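The detection idea can be sketched like this; `safe_to_replay` is a hypothetical helper, and in practice the journal position would come from the NTFS change journal (`FSCTL_QUERY_USN_JOURNAL` / `FSCTL_READ_USN_JOURNAL` via `DeviceIoControl`), which is where the system-level permissions come in:

```python
# Sketch only: record the NTFS change-journal position (next USN) when the
# cache shuts down cleanly; on startup, any journal records past that position
# mean the volume changed while FC was not loaded, so the stale L2 buffers
# must be discarded instead of flushed.

def safe_to_replay(saved_next_usn: int, current_next_usn: int) -> bool:
    # No journal records since our checkpoint: nothing on the volume moved
    # behind our back, so the pending L2 writes can be flushed safely.
    return current_next_usn == saved_next_usn

assert safe_to_replay(1000, 1000) is True    # volume untouched: replay OK
assert safe_to_replay(1000, 1042) is False   # boot-time writes: discard cache
```

This sidesteps kalua's corruption scenario at the cost of throwing the cache away whenever anything wrote to the volume before FC loaded.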

I posted a quote somewhere around here about it from wiki.

How often does that happen? I don't know. You would have to run tests, or look up how bad the issue is from a study.

Re: What the people are asking for + a few others.

Posted: Thu Jul 19, 2012 10:27 pm
by fmartin
Hi,

I'm happy to see there's feedback from the company. I've been using FC since 7.2 and hoped that cache persistence would be introduced sooner or later; but since we're now at 8.0 and it seems to me things are going in another direction, I thought I'd throw in a few cents.

I think most of us started using FC because of SSD Caching. Unfortunately, as we found out, it is not persistent.
I also think many of us are in the same boat: we have a smaller SSD that we could use for read-caching an existing system on a HDD. Because of the SSD's size, we can't or don't want to move all the files to the SSD; it would be much more convenient if only the frequently used files were on the SSD as a cache.

In theory, the solution looks simple: a piece of software monitors file use, and if a given file is accessed frequently, it copies it to the SSD cache; the next time the file is accessed, it is opened from the SSD. Files could be assigned 'point values' depending on usage frequency and last-use time; when the SSD's capacity is reached, the ones with the lowest point value are evicted to make room for new files.
Being 'quasi-read-only' (write-around) would make it safer compared to write-back and write-through, and especially deferred writes. The software's file-access database could also solve the removed-cache-drive issue: if the drive is removed, then the next time it is installed the cache is rebuilt according to the database (which can monitor file use even when a cache drive is not present, or caching is turned off).
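A minimal sketch of the point-value idea, with an invented scoring formula (access count divided by age) and invented capacity accounting; a real implementation would track this at the block or file-extent level:

```python
import time

# Sketch: score files by access frequency and recency; when the cache is
# full, evict the lowest-scoring file. The formula below is illustrative.
class PointValueCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.entries = {}   # path -> (size, hits, last_access)

    def score(self, path: str) -> float:
        _, hits, last = self.entries[path]
        age = time.time() - last
        return hits / (1.0 + age)          # frequent + recent => high score

    def touch(self, path: str, size: int) -> None:
        _, hits, _ = self.entries.get(path, (size, 0, 0.0))
        self.entries[path] = (size, hits + 1, time.time())
        # Evict lowest-scoring files until we fit again.
        while self._used() > self.capacity and len(self.entries) > 1:
            victim = min(self.entries, key=self.score)
            del self.entries[victim]

    def _used(self) -> int:
        return sum(size for size, _, _ in self.entries.values())

cache = PointValueCache(capacity_bytes=100)
for _ in range(5):
    cache.touch("hot.dll", 60)             # frequently accessed file
cache.touch("cold.tmp", 60)                # one-off access, overflows capacity
assert "hot.dll" in cache.entries          # high score survives
assert "cold.tmp" not in cache.entries     # low score is evicted first
```

Note that a one-off access never displaces hot data, which is exactly the write-around behaviour described above: files earn their way into the cache.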

Please let me know what you think
Thanks

Re: What the people are asking for + a few others.

Posted: Sat Jul 21, 2012 11:05 am
by Mradr
New update to V#: [0.8.0].[5]

Added 2 new requests:
10) Release cache once the cache has been loaded into normal RAM. (RAM use only)
11) Profile-based loading for settings.

Changed:
5) Keep-Alive Performance Monitor with auto start and save options (n6666661, JimF, mabellon).
- for the save option

Note: I will not always be here, so if your request doesn't show up, it means I didn't see it or I haven't been on yet. ^^ Take it easy everyone.
fmartin wrote:Hi,

I'm happy to see there's feedback from the company. I've been using FC since 7.2 and hoped that cache persistence would be introduced sooner or later; but since we're now at 8.0 and it seems to me things are going in another direction, I thought I'd throw in a few cents.

I think most of us started using FC because of SSD Caching. Unfortunately, as we found out, it is not persistent.
I also think many of us are in the same boat: we have a smaller SSD that we could use for read-caching an existing system on a HDD. Because of the SSD's size, we can't or don't want to move all the files to the SSD; it would be much more convenient if only the frequently used files were on the SSD as a cache.

In theory, the solution looks simple: a piece of software monitors file use, and if a given file is accessed frequently, it copies it to the SSD cache; the next time the file is accessed, it is opened from the SSD. Files could be assigned 'point values' depending on usage frequency and last-use time; when the SSD's capacity is reached, the ones with the lowest point value are evicted to make room for new files.
Being 'quasi-read-only' (write-around) would make it safer compared to write-back and write-through, and especially deferred writes. The software's file-access database could also solve the removed-cache-drive issue: if the drive is removed, then the next time it is installed the cache is rebuilt according to the database (which can monitor file use even when a cache drive is not present, or caching is turned off).

Please let me know what you think
Thanks
They pretty much already have that; well, besides the SSD persistent caching, they already do the "point system" for frequently used items. Atm, I feel the community doesn't want to use the SSD as a write cache device by any means, so that will also be sort of shot down ^^;

Re: What the people are asking for + a few others.

Posted: Sun Jul 22, 2012 5:02 pm
by fmartin
Mradr wrote:New update to V#: [0.8.0].[5]
They pretty much already have that; well, besides the SSD persistent caching, they already do the "point system" for frequently used items. Atm, I feel the community doesn't want to use the SSD as a write cache device by any means, so that will also be sort of shot down ^^;
Hi,

Well, this would be anything but a write cache :) To the contrary, it would be a 'quasi-read-only' cache, where data is only written once a new file reaches the 'to be cached' point value, and it's only read from the SSD afterwards (until replaced by another file). It would perfectly complement an SSD's characteristics (fast access times and read speeds, low write cycles).

Re: What the people are asking for + a few others.

Posted: Sun Sep 16, 2012 7:45 am
by Teodosio
I am looking for a way to cache my HDD to my SSD... so yes, I guess I am looking forward to persistent L2 too :)