I believe I finally understand. Thank you so much for bearing with me and helping me get through this. Now I can make adjustments to my system with a correct understanding of what I am doing and why I am adjusting it, instead of just doing something because of advice.
The last two posts of yours helped me to see my misunderstanding. This one:
... and especially this one:
These made me finally understand there is a difference between "new write-data being cached by L2" and "L1 cached write-data being transferred to L2."

Support wrote: ↑Mon Aug 03, 2020 10:31 am
When L1 becomes full, L1 Urgent write will be triggered. During L1 urgent writes, new write-data will be cached by L2.
And with "Flush L1 Cache to L2 Cache" enabled, L1 cached write-data is transferred to L2.
I had to think about it for a day, but when at last I understood what you were saying, then I realized that my entire idea of the PrimoCache L1 / L2 model was incorrect.
Somehow, I had been thinking that the general data flow was: L1 -> L2 -> Disk
But now I realize that this model is unnecessary and would also be inefficient.
I see that your model still lets me set up an L2 write cache as a deep write buffer (much bigger than the L1 write cache in RAM), but it only uses this deep buffer when necessary, instead of forcing PrimoCache to write everything through the L2 buffer when that is not needed.
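To check my new understanding, I sketched it as a toy model. This is purely my own illustration under my own assumptions (class names, capacities, and the drain policy are all invented by me), not PrimoCache's actual code:

```python
# Toy model of my corrected understanding of the L1/L2 write path.
# Assumption: new writes land in L1 while it has room; only when L1 is
# full does an "urgent write" send NEW data to L2. Separately, the
# "Flush L1 Cache to L2 Cache" option decides where L1's cached
# write-data goes when it is drained.

class TwoTierWriteCache:
    def __init__(self, l1_capacity, flush_l1_to_l2=False):
        self.l1 = []                      # fast RAM write cache
        self.l2 = []                      # larger SSD write cache
        self.disk = []                    # final destination
        self.l1_capacity = l1_capacity
        self.flush_l1_to_l2 = flush_l1_to_l2

    def write(self, block):
        if len(self.l1) < self.l1_capacity:
            self.l1.append(block)         # normal case: new data goes to L1
        else:
            # L1 full -> urgent write: NEW write-data is cached by L2,
            # not forced through L1 first.
            self.l2.append(block)

    def drain_l1(self):
        # L1's cached write-data is transferred to L2 only when the
        # flush option is enabled; otherwise it goes straight to disk.
        target = self.l2 if self.flush_l1_to_l2 else self.disk
        target.extend(self.l1)
        self.l1.clear()

cache = TwoTierWriteCache(l1_capacity=2)
for blk in ["a", "b", "c"]:
    cache.write(blk)
print(cache.l1)    # ['a', 'b'] -- early writes fit in L1
print(cache.l2)    # ['c']      -- overflow write cached by L2
cache.drain_l1()
print(cache.disk)  # ['a', 'b'] -- without the flush option, L1 drains to disk
```

So the deep L2 buffer is used only for overflow (or when the flush option is on), rather than every write passing through L1 -> L2 -> Disk.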
Your model is much better. It makes much more sense and seems both more efficient and better-performing than the idea I had been imagining.
Please, thank you one more time for the excellent technical support and for explaining complex concepts to an old man!
And once again: an absolutely great program, with great design and an excellent implementation!
Thanks again very much!