A suggestion to prevent this scenario from happening

Posted: Fri Oct 25, 2019 5:55 pm
by RobF99
I know this might be a low-priority item, but it is a situation that may come up for some users. It does come up for me on one of my systems.

I have a 400 GB L2 and I update the L2 on idle. Sometimes I process 1 TB of images in Photoshop, and when that finishes and the L2 update runs, it seems to read the entire 1 TB: it needlessly fills the L2 with the first 400 GB, then keeps evicting as it works through the next 600 GB, so that in the end only the last 400 GB remains in the L2. It would be nice if the program could detect this scenario and, instead of processing the entire 1 TB, handle only the last 400 GB.
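To put rough numbers on it, here is a toy simulation of that behavior (purely illustrative, with a simple oldest-first eviction model and 1 GB blocks; this is not how the cache is actually implemented):

```python
from collections import OrderedDict

CACHE_SIZE = 400   # L2 capacity in GB (illustrative units)
NEW_DATA   = 1000  # freshly written data in GB

def naive_update(cache_size, new_data):
    """Stream every block through the cache, evicting oldest-first."""
    cache = OrderedDict()
    written = 0
    for block in range(new_data):      # one 1 GB block at a time
        if len(cache) >= cache_size:
            cache.popitem(last=False)  # evict the oldest block
        cache[block] = True
        written += 1                   # every block hits the cache
    return written, min(cache), max(cache)

written, lo, hi = naive_update(CACHE_SIZE, NEW_DATA)
print(f"wrote {written} GB to keep blocks {lo}..{hi}")
# -> wrote 1000 GB to keep blocks 600..999
```

With the suggested optimization, the update would write just blocks 600..999: 400 GB instead of 1000 GB.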

I know you can't cover every situation, but this one probably occurs occasionally for some users. It would avoid pointlessly rewriting the entire L2 roughly two and a half times over (1 TB streamed through a 400 GB cache) just to end up with the last 400 GB that is actually needed.

Re: A suggestion to prevent this scenario from happening

Posted: Tue Oct 29, 2019 4:01 pm
by Support
We have also noticed this issue and are working on a better solution.
Thank you very much!

Re: A suggestion to prevent this scenario from happening

Posted: Thu Oct 31, 2019 7:04 am
by RobF99
Yes, you could check whether the data to be written to L2 exceeds the size of L2; if it does, iterate through the pending data in memory and write only the part that will actually remain, roughly as in the sketch below. I am sure the check itself is a quick calculation, but the bookkeeping is probably slightly complex.
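A minimal sketch of that idea, assuming the pending data is an ordered list of blocks and the cache evicts oldest-first; the names here (pending_writes, l2_capacity, write_block) are hypothetical, not the program's actual internals:

```python
def flush_to_l2(pending_writes, l2_capacity, write_block):
    """Flush pending blocks to the L2, skipping any block that would
    only be evicted again before the flush completes.

    pending_writes: list of (block_id, data) tuples, oldest first
    l2_capacity:    L2 cache size in bytes
    write_block:    callback that stores one block in the L2
    """
    total = sum(len(data) for _, data in pending_writes)
    skip = max(0, total - l2_capacity)  # bytes doomed to eviction

    for block_id, data in pending_writes:
        if skip >= len(data):
            skip -= len(data)  # whole block would be evicted: skip it
            continue
        skip = 0               # partial overlap: keep the whole block
        write_block(block_id, data)
```

One complication this glosses over, which may be what Support is alluding to: if the same blocks were written more than once during the 1 TB job, a simple byte count over-estimates how much can be skipped, so the real calculation would have to deduplicate by block first.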

Re: A suggestion to prevent this scenario from happening

Posted: Fri Nov 01, 2019 3:49 am
by Support
Not as easy as expected :) Actually it's a little bit complicated, but we'll try to work it out.