Violator wrote: With a lazy cache in use, only a battery on the controller and a UPS will save you from power loss or system failure.
If disk caching stability is a requirement, you're sitting with a server that includes a proper I/O card to handle the caching.
There is no way you can achieve the same thing programmatically.
A proper I/O card preserves the entire cache, including unwritten data, but that is not what I'm talking about. I'm talking about preserving disk integrity and stability: preventing corruption of the OS and of the data already on the system. A caching program can be designed so that moving files within a hard drive, or from one hard drive to another, results in no data loss. These are transactions that normally pass through RAM anyway.
Let me try another example of how this would remove *some* (not all) of the instability introduced by RAM caching:
A set of files in Directory A is copied to Directory B. After each file's data is written, the OS updates the file's metadata to mark it complete. If the copy is interrupted mid-file, one can tell that the metadata was never updated, and that the file is likely corrupt.
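The "data first, then metadata" pattern above can be sketched in a few lines. This is a hedged illustration, not FancyCache's or Windows' actual copy routine: `safe_copy` and the `.partial` naming convention are my own hypothetical choices, standing in for whatever "mark the file finished" mechanism the OS uses.

```python
import os

def safe_copy(src_path, dst_path):
    """Copy src to dst so an interrupted copy is detectable:
    the data is written and flushed to disk first, and only then
    does an atomic rename 'commit' the file under its final name.
    A file left behind with the .partial suffix is known-incomplete."""
    tmp_path = dst_path + ".partial"   # hypothetical marker for an unfinished file
    with open(src_path, "rb") as src, open(tmp_path, "wb") as dst:
        dst.write(src.read())
        dst.flush()
        os.fsync(dst.fileno())         # push the data to disk before the commit step
    os.rename(tmp_path, dst_path)      # the metadata step: file now appears complete
```

If a crash happens before the rename, the destination name never exists and the `.partial` file is obviously unfinished; the detection only works if the rename really reaches disk *after* the data, which is exactly the ordering a write-optimizing cache can break.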
Currently, FancyCache reorders each file's metadata updates for write optimization, so if the copy is interrupted during the write of a file, the metadata may already be in its final state even though the file is corrupt.
Whenever a file's metadata (or part of a file) is designed to be fault tolerant through write ordering, writing the data out of order defeats that fault tolerance. The whole objective is that the metadata says the new data has been written only after it actually has been.
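To make the ordering argument concrete, here is a minimal sketch of a record update guarded by a validity flag. The record layout, the `update_record` function, and the flag bytes are all hypothetical; the point is only where the barriers sit. Each `os.fsync` acts as the ordering barrier the fault tolerance depends on, and a cache that silently reorders the writes behind the application's back removes that guarantee.

```python
import os

def update_record(f, data_offset, new_data, flag_offset):
    """Overwrite a fixed-size record in place, fault-tolerantly:
    1. mark the record invalid, 2. write the new data,
    3. mark it valid again. The fsync calls between steps force
    each write to disk before the next one starts."""
    f.seek(flag_offset)
    f.write(b"\x00")                  # 1. invalidate the record
    f.flush(); os.fsync(f.fileno())   # barrier: invalidation hits disk first
    f.seek(data_offset)
    f.write(new_data)                 # 2. write the new data
    f.flush(); os.fsync(f.fileno())   # barrier: data on disk before re-validating
    f.seek(flag_offset)
    f.write(b"\x01")                  # 3. mark the record valid again
    f.flush(); os.fsync(f.fileno())
```

If a crash lands between any two steps, recovery code reading a `\x00` flag knows the record may be half-written. If the cache flushes step 3 before step 2, the flag lies and the scheme is worthless.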
Honoring that write order is what would allow for "FancyCache compatible" programs that can be tested not to corrupt data.
As a programmer, I often think of ways to make my code tolerant of a power failure. However, most of them depend on updating metadata before and after possible points of failure so that corruption can be detected and recovered from.
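The "metadata before and after the point of failure" idea is essentially a tiny journal. Below is a hedged sketch under my own assumptions (the `journaled_move` function and the JSON journal format are invented for illustration): record the intent before acting, remove the record afterwards, so a surviving journal entry after a crash tells recovery code which operation may be half-done.

```python
import json
import os

def journaled_move(src, dst, journal_path):
    """Move src to dst with crash recovery in mind:
    1. persist the intended operation (metadata BEFORE the risky step),
    2. perform the move itself,
    3. delete the journal entry (metadata AFTER it succeeded).
    As with the other examples, this only works if the journal
    write genuinely reaches disk before the move does."""
    with open(journal_path, "w") as j:
        json.dump({"op": "move", "src": src, "dst": dst}, j)
        j.flush(); os.fsync(j.fileno())   # intent is on disk before we act
    os.replace(src, dst)                  # the operation that might be interrupted
    os.remove(journal_path)               # intent fulfilled: clear the record
```

On restart, a recovery pass that finds `journal_path` still present knows the move may have been interrupted and can check and redo or roll back; without it, the operation is known to have completed.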