Something VERY different about buffered write mode
Posted: Tue Jul 04, 2023 6:42 pm
Yesterday and today I had to transfer about 4 TB of data, around 2 million files, from one USB3 drive to another. I did so using robocopy with only 1 thread (/MT:1). It was only an 8 GB system, so I set up two cache tasks with 2 GB of L1 read/write cache on each one.
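For anyone wanting to reproduce this, the command was something along these lines (the drive letters are just examples and /E simply copies the full tree; the part that matters here is /MT:1 forcing a single copy thread):

robocopy E:\ F:\ /E /MT:1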
I have done a lot of copying to USB drives over the years and have always found the performance of PrimoCache's write modes to be no better than copying without PrimoCache. In fact, the USB drive being written to would often go into a strange state where it would sit constantly at 100% usage with very long queue times, flushing data at just a couple of megabytes every few seconds. Even when this problem did not occur and everything flushed okay, I found that the write rate would fluctuate quite a lot. That made sense, since L1 was receiving data while also flushing. I was generally using Native write mode with a latency of about 60 seconds.
But today I discovered something very different about Buffer write mode compared to all of the other write modes: the performance is remarkable! Both the reading and the writing drives' transfer rates were very consistent, varying between 90 MB/s and 120 MB/s, in other words at their maximum performance. It was the writing performance that got my attention (Buffer / 60 seconds / Ignore Busy / Idle). It was smooth and at maximum performance, between 90 and 120 MB/s, across a variety of file sizes. I watched the counters and they behaved exactly as per the specification for the Buffer method: the cache would fill to 40%, then flush back down to 20%, and since the data was arriving quickly there were never any pauses in the writing. The disk write queue was smooth and constant, between 1.00 and 2.00.
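To make the fill/flush behaviour concrete, here is a tiny simulation of the hysteresis I was watching. The 40%/20% watermarks and the 90-120 MB/s rates are what my counters showed; everything else is just my guess at how the mechanism works, not PrimoCache's actual algorithm:

# Tiny simulation of the fill/flush hysteresis I observed.
# Watermarks and transfer rates come from my counters; the rest is
# an assumption, not PrimoCache's actual code.

CACHE_MB = 2048            # 2 GB L1 write cache
HIGH = 0.40 * CACHE_MB     # flushing kicks in at 40% dirty
LOW  = 0.20 * CACHE_MB     # flushing stops again at 20% dirty
INFLOW = 100               # source drive delivering ~90-120 MB/s
FLUSH  = 110               # target drive draining ~90-120 MB/s

dirty, flushing = 0.0, False
for t in range(120):
    dirty += INFLOW                      # incoming writes land in L1
    if dirty >= HIGH:
        flushing = True
    if flushing:
        dirty = max(dirty - FLUSH, 0.0)  # drain to the target drive
        if dirty <= LOW:
            flushing = False
    print(f"t={t:3d}s  dirty={dirty:6.0f} MB  flushing={flushing}")

# Dirty data just oscillates between the two watermarks, so there is
# always free cache to absorb incoming writes and the copy never stalls.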
So my question is: why is Buffer so much better than all of the other write methods in such a scenario? Could it be because there was always some unused cache, since cache usage never goes above 40%? If that is the reason, maybe PrimoCache should offer an option to always leave some amount of L1 free. Can you shed some light on why the performance is so much better, and if it really is down to the amount left free on L1, could a future build add an option to always keep X% of the L1 cache free? Either way, I am changing the write method to Buffer on all my systems going forward.