Hey guys,
I started this thread because there are lots of questions regarding the defer-write timeout.
Some people don't know the difference between a short and a long defer-write timeout.
Because this matter is so hard to describe in words, I'm going to start with some synthetic benchmarks.
I know that this kind of synthetic benchmark is not comparable to a real-world benchmark, but it is good enough to make one thing very clear.
What we see here is the same cache drive with only the "Defer-Write Timeout" being modified.
1. Shows the result with a 10-second defer-write timeout.
2. Shows the result with a 60-second defer-write timeout.
3. Shows the result with a 360-second defer-write timeout.
You might notice at first that the RAM itself (AM3+) performs nearly the same in all three runs. To verify that these minor deviations are not real dis-/advantages, I ran all of these benchmarks at least twice, which gave me a deviation of at most 60 MiB/s per value... so the differences are only the result of measurement inaccuracy.
Now let's get serious and take a look at the "Normal Writes". These are writes that actually hit the disk.
1. Shows the 10-second defer-write timeout writing 1.76 GiB to disk.
2. Shows the 60-second defer-write timeout writing 901.6 MiB to disk.
3. Shows the 360-second defer-write timeout writing 2.1 MiB (it was 0 when the benchmark stopped, but I pressed the Print Screen key a moment later).
You might think this doesn't tell us that much, because in these synthetic benchmarks the files are deleted afterwards, so PrimoCache takes tremendous advantage of TRIMming those blocks. But just imagine that this benchmark were your actual heavy read/write real-world application and that these amounts were not TRIMmable.
Now let's imagine you want to start your next workload right afterwards, and it contains 6 GiB of writes.
1. After working with the 10-second defer-write timeout, 1.76 GiB in total have already been flushed to disk; the cache is ready to free up space for the next workload and no write is waiting for a flush.
2. After working with the 60-second defer-write timeout, 901.6 MiB in total have already been flushed to disk; the cache is ready to free up space for the next workload, but ~860 MiB are still waiting for a flush.
3. After working with the 360-second defer-write timeout, no deferred writes from the last task have been flushed to disk, and 1.76 GiB are waiting for a flush.
The Result:
1. The cache frees all read/write data instantly and pulls the whole 6 GiB into the cache at full RAM speed.
2. The cache frees all already-flushed data instantly and starts pulling in the 6 GiB at full RAM speed, but when it's full, after ~5.1 GiB, it has to flush the remaining ~860 MiB to disk first, which knocks you down to disk speed for the last ~15%.
3. The cache frees all already-flushed data instantly and starts pulling in the 6 GiB at full RAM speed, but when it's full, after ~4.24 GiB, it has to flush the remaining 1.76 GiB to disk first, which knocks you down to disk speed for the last ~30%.
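To make the arithmetic above easy to check, here is a tiny illustrative model (my own simplification, not PrimoCache internals): space still occupied by unflushed deferred writes can't be freed instantly, so only the already-flushed part of the cache absorbs the next workload at RAM speed.

```python
def ram_speed_fraction(cache_gib, pending_flush_gib, workload_gib):
    """Fraction of the next workload absorbed at full RAM speed.

    Space still held by unflushed deferred writes can't be freed
    instantly, so only (cache - pending) GiB of the new workload
    lands in RAM before the cache falls back to disk speed.
    """
    free_gib = cache_gib - pending_flush_gib
    absorbed = min(workload_gib, free_gib)
    return absorbed / workload_gib

# The three timeout scenarios above, for the next 6 GiB workload
# (pending amounts taken from the benchmark numbers in this post):
for timeout_s, pending_gib in [(10, 0.0), (60, 0.86), (360, 1.76)]:
    frac = ram_speed_fraction(6.0, pending_gib, 6.0)
    print(f"{timeout_s:>3}s timeout: {frac:6.1%} of workload at RAM speed")
```

The exact speeds don't matter for the point; what matters is how much of the workload never has to wait on a disk flush.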
Conclusion:
A) A longer defer-write timeout results in more TRIM and fewer disk accesses, which increases your SSD's and HDD's lifetime, while it does not seem to increase performance at all.
B) A longer defer-write timeout results in less data security: on a crash you lose all changes since the last flush.
C) A shorter defer-write timeout results in a low risk of getting disk-bottlenecked.
So it's absolutely ridiculous to believe that extreme timeouts like one day, one week, one month, or even a year will be of any advantage. You gain a little drive longevity over time (who plans to use their current SSD/HDD for decades?), but on the other hand it ultimately destroys the best advantage of cache software: the highest possible read/write speed.
My request: put the "Timeout" into the QuickInfo and tell people that extreme timeouts are harmful!
Oh, and my recommendation is 10 seconds... like the default.
Thanks for your attention.
Defer-Timeout... what's the matter?
- Level 6
- Posts: 65
- Joined: Fri May 31, 2013 3:03 pm
- Attachments
- defer-timeout.jpg (419.14 KiB)
Last edited by Incriminated on Tue Oct 29, 2013 7:00 pm, edited 1 time in total.
Re: Defer-Timeout... what's the matter?
Oops, run no. 2 actually used only a 30-second timeout, not 60... sorry.
Here is another little illustration: with a 1-week timeout, when you copy 6 GiB through a 6 GiB read/write cache today, once your cache hits full you are limited to disk write speed for the rest of your week. See, when you try the same work with a different 6 GiB on the next day... the cache first has to flush 6 GiB at disk speed to free space for the next 6 GiB... a constant bottleneck!
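A toy steady-state sketch of that "constant bottleneck" (the throughput numbers are my own illustrative guesses, not measurements): a 6 GiB job every day through a 6 GiB cache.

```python
def effective_speed(day, timeout_days, ram_gib_s=6.0, disk_gib_s=0.15):
    """Effective write speed of a daily 6 GiB job through a 6 GiB cache.

    Day 1 starts with an empty cache and runs at RAM speed. On every
    later day, whenever the timeout is longer than the one-day gap
    between jobs, the previous job's data is still deferred, so the
    cache must flush it at disk speed before absorbing new writes.
    """
    if day == 1 or timeout_days < 1:
        return ram_gib_s
    return disk_gib_s

for day in (1, 2, 3):
    print(f"1-week timeout, day {day}: ~{effective_speed(day, 7):.2f} GiB/s")
```

With a short timeout (well under a day), yesterday's data is long flushed and every daily job runs at RAM speed; with an extreme timeout, only the very first job does.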
Re: Defer-Timeout... what's the matter?
Incriminated wrote:
On a 1-week timeout, when you copy 6 GiB using a 6 GiB R/W cache today, after your cache hits full you are limited to disk write speed for the rest of your week. See, when you try the same work with a different 6 GiB on the next day... it begins flushing 6 GiB at disk speed to free space for the next 6 GiB... constant bottleneck!

Thanks, Incriminated. We're also developing another scheduling mechanism to avoid this issue, so users won't feel slow even when the cache is full of deferred write data.