Any use for write caching without defer writes?
Posted: Tue Dec 29, 2015 2:33 pm
by dwright1542
I'm already pleased with the performance gains on reads. Is there much benefit to write caching without deferred writes? I can't enable deferred writes and risk data loss, so I'm wondering whether it would be better to go full read-cache mode and give all the memory to reads, if there is little to gain on writes.
Re: Any use for write caching without defer writes?
Posted: Tue Jan 12, 2016 4:08 am
by Support
I think you're referring to cache strategy and defer-write. Here is a link that may help.
http://www.romexsoftware.com/en-us/prim ... q.html#q11 (Q11)
Re: Any use for write caching without defer writes?
Posted: Fri Feb 05, 2016 12:20 pm
by InquiringMind
Write caching (without deferred writes) will provide a performance benefit in cases where data written to disk is read back again soon afterwards. However, one common case where this happens is write verification (where software confirms that data has been saved to disk successfully), and it seems possible that PrimoCache's write caching could effectively defeat it: the program would check the cached copy and could miss data discrepancies caused by disk problems.
So if data security is your prime concern, then it would seem better to use Read caching only with PrimoCache.
Re: Any use for write caching without defer writes?
Posted: Thu Mar 24, 2016 8:28 pm
by erich56
My idea in installing and running PrimoCache is to avoid the huge amounts of data written to my SSD during BOINC grid computing. Projects like Atlas@Home write up to 100 GB per day to my Samsung 850 Pro (256 GB, rated TBW: 150 TB). I guess this would only make sense and be of use if "Write Cache" ("Enable Defer Write") is enabled. Please correct me if my assumption is wrong.
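Just to put the figure above in perspective, here is a quick back-of-the-envelope calculation (assuming a steady 100 GB/day write load and the drive's rated 150 TB TBW; real workloads will vary):

```python
# Rough SSD endurance estimate from the figures quoted above.
# Assumes a constant 100 GB/day write rate and a 150 TB TBW rating.
tbw_tb = 150      # rated endurance, terabytes written
daily_gb = 100    # observed write load, GB per day

days = tbw_tb * 1000 / daily_gb   # 1 TB = 1000 GB, matching vendor TBW maths
print(f"Rated endurance reached in ~{days:.0f} days (~{days / 365:.1f} years)")
# → Rated endurance reached in ~1500 days (~4.1 years)
```

So at that write rate the drive's rated endurance would be used up in roughly four years, which is why cutting the physical writes matters here.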
I am somewhat reluctant to run PrimoCache with "Enable Defer Write" switched on, since some of these grid computing projects do not run 100% smoothly all the time and once in a while cause a system freeze. In some cases the only thing I can do is turn the PC off and back on again by hand.
From what I read in your various instructions, in such a case the whole system could get corrupted, right (similar to a power outage)? That would mean not only that current grid computing data would be lost (which would be less of a problem) but also that the whole operating system could be damaged in a way that it no longer starts and functions. Is my assumption correct?
Please let me know so that I can decide what to do.
Re: Any use for write caching without defer writes?
Posted: Fri Mar 25, 2016 1:37 am
by InquiringMind
Sorry, but I don't think PrimoCache's write caching alone would help much, since any data written to the cache still gets written to the SSD. Enabling Defer Write would probably reduce writes (data overwritten multiple times during the defer period should result in only the most recent write reaching disk), but with a risk of data loss.
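To illustrate the coalescing effect described above, here is a minimal sketch. This is not PrimoCache's actual algorithm, just the general idea of a write-back cache that flushes each dirty block once per defer interval, using a made-up workload:

```python
# Sketch of how deferred writes coalesce repeated writes to the same block.
# Hypothetical workload: 1000 logical writes spread over 10 distinct blocks.
writes = [(i % 10, f"data-{i}") for i in range(1000)]  # (block, payload)

# Write-through: every logical write hits the disk.
write_through_io = len(writes)

# Deferred (write-back): dirty blocks are flushed once per defer interval,
# so only the latest payload per block reaches the disk.
dirty = {}
for block, payload in writes:
    dirty[block] = payload          # later writes overwrite earlier ones
deferred_io = len(dirty)            # one physical write per dirty block

print(write_through_io, "vs", deferred_io)  # → 1000 vs 10 physical writes
```

The longer the defer interval, the more rewrites of the same block collapse into a single physical write - which is exactly the trade-off: fewer SSD writes, but more unflushed data at risk if the system goes down.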
It might be worth checking what files/data are written by BOINC - if they are temporary files, then using a ramdisk may be a better option (any project worth its code should be able to handle the loss of temp data). Romex's Primo Ramdisk has a very useful File-Disk option where a ramdisk can "overflow" onto another disk - very handy for temp files which *occasionally* use a lot of space.
For what it's worth, I've had dozens of system freezes (almost all due to Nvidia driver issues) with PrimoCache's Defer Write enabled (10 seconds) but have never encountered an OS-level problem afterwards (system-critical files tend not to be written to unless you are installing new software or hardware), so the only real concern should be loss of application data. File versioning or synchronising software backing up such data to another drive may be worth investigating (the Direct Write thread mentions several options).
Re: Any use for write caching without defer writes?
Posted: Sat Mar 26, 2016 10:33 pm
by JimF
InquiringMind wrote:Sorry, but I don't think PrimoCache's write caching would help much since any data written to cache gets written to the SSD also - enabling Defer Write would probably reduce writes (since data written over multiple times during the Defer period should only have the most recent write applied) but with a risk of data loss.
The way that the BOINC projects work, new data overwrites old data as the calculations progress. And when a given work unit completes, after a few hours or days depending on the project, its results are uploaded to the project servers and that work unit is deleted. A new one is then downloaded. So the data is bounded and does not grow arbitrarily. I use both PrimoCache (write cache only, with infinite write-delay) and Primo Ramdisk, and can go for months without writing anything to my SSD, even though the BOINC project has written many terabytes.
They are quite effective at saving the SSDs on my various machines.
Re: Any use for write caching without defer writes?
Posted: Sun Mar 27, 2016 7:40 am
by erich56
I installed PrimoCache yesterday, with Write Cache only and Defer Write set to "infinite" - and within about 18 hours, BOINC has written some 90 GB to the cache but only some 2 GB to the SSD. This is exactly what I wanted to achieve.
So, even after this short time, I can say that PrimoCache is a very useful tool for anybody who wants to take care of his/her SSD.

Re: Any use for write caching without defer writes?
Posted: Sun Mar 27, 2016 9:05 am
by erich56
One thing I am unsure about is the Memory Overhead, which indicates a value of 347 MB (which seems high).
I simulated a change of the Block Size, which I had originally set to 4 KB (the cluster size of the SSD is also 4 KB), by entering various higher values without saving them - and what I saw was that the bigger the Block Size (max. 512 KB), the smaller the Memory Overhead (down to only some 2.4 MB with a 512 KB Block Size).
In its Help, PrimoCache says "Memory Overhead: Additional physical memory cost of running cache. If you see the memory overhead is large, you may cut it down by using a bigger block size."
On the other hand, they say "A smaller block size brings more available blocks for the same amount of cache space and usually higher performance. However, it will need larger memory overhead and may cause heavy CPU overload. To reach the best performance, a value equal to or less than the cluster size of the file system is recommended."
So, at least to my understanding, these two statements somewhat contradict each other.
Any recommendations for me? Should I leave the block size at 4 KB, or should I increase it?
Re: Any use for write caching without defer writes?
Posted: Sun Mar 27, 2016 10:01 am
by InquiringMind
The "memory overhead" is from the index used to manage blocks and depends on the *number* of blocks - so lots of small blocks will need a larger index than fewer large blocks.
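A rough back-of-the-envelope sketch of that proportionality (the per-block index entry size here is my assumption, not a published PrimoCache figure - the point is only that overhead scales with the *number* of blocks):

```python
# Index overhead scales with the number of cache blocks, not the cache size.
# Hypothetical figure: assume roughly 16 bytes of index per block; the real
# per-entry cost is internal to PrimoCache and may differ.
def index_overhead_mb(cache_gb: float, block_kb: int, bytes_per_entry: int = 16) -> float:
    blocks = cache_gb * 1024 * 1024 / block_kb      # number of blocks in the cache
    return blocks * bytes_per_entry / (1024 * 1024) # index size in MB

# Same cache, two block sizes: the 4 KB index is 128x the 512 KB one.
small = index_overhead_mb(8, 4)     # 8 GB cache, 4 KB blocks  → 32.0 MB
large = index_overhead_mb(8, 512)   # 8 GB cache, 512 KB blocks → 0.25 MB
print(small, large, small / large)  # ratio is 512 / 4 = 128
```

Whatever the true per-entry cost, the 128x ratio between 4 KB and 512 KB blocks matches what you observed (347 MB down to ~2.4 MB).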
As for performance, it really depends on your average file size. If you are working on lots of very small files (<1K) then the minimum 4K size should be best. But larger block sizes will give better performance with larger files - use a quick and simple benchmark like CrystalDiskMark to compare the sequential and 4K speeds with different block sizes to see how your system copes.
And I would suggest caution with the "infinite defer writes" which would (if described accurately) only write data out when space needs to be freed up in the cache (effectively turning the cached volume into a ramdisk until the cache fills). If you use a ramdisk instead, you can redirect data to it on a folder-by-folder basis (using NTFS junctions) and can use an image file to reduce the scope of potential data loss (timed backup to a non-SSD disk being a good option).