[2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

First-hand news related to PrimoCache
Support
Support Team
Posts: 3622
Joined: Sun Dec 21, 2008 2:42 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Support »

Babel17 wrote: Thu Jan 02, 2020 4:15 am I'm trying out the Alpha, and it wasn't until I uninstalled it, rebooted, tried the new regular release, and then did a fresh install that PrimoCache was able to "see" some of my drives as available for use as an L2 cache.
A little bit weird... We didn't change anything in this aspect in the 4.0 alpha...
Babel17 wrote: Thu Jan 02, 2020 4:15 am Btw, it would be cool to be able to set the time delay for deferred writes on a drive-by-drive basis, and not just per cache task. I can guess why that might not be possible though, lol.
This is possible ;)
Support
Support Team
Posts: 3622
Joined: Sun Dec 21, 2008 2:42 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Support »

minhgi wrote: Fri Jan 03, 2020 12:03 pm Can we get an option to flush normal deferred and excess writes from L1 to L2 first, before they hit the disk? Any write that doesn't reach L2 isn't cached unless it's already in L1. With an extremely large L1 cache, that data never gets written to L2, so a potential read hit is missed if the data is requested again after it has already been flushed to disk. There's a potential read/write performance gain being missed here.
I'm afraid we may not support this feature. Below are the reasons.
1) For read caching, once data are read from disk by Windows/apps, they will be cached into L2. Because the L2 cache is persistent, a miss in the current session won't make much difference in long-term usage. Besides, after being flushed, the data are still in the L1 cache for reading back if the option "Free Cache on Written" is not checked.
2) Considering the L2 SSD's write lifespan, we generally want to reduce writes to L2. It would be wasteful to transfer all L1 write data to L2.
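As a rough illustration of the behavior described in these two points, here is a minimal sketch: reads populate the persistent L2, while deferred writes are buffered in L1 and flushed straight to disk, bypassing L2. All names are hypothetical; this is not PrimoCache's actual internals.

```python
# Hypothetical sketch of the read/write paths described above.
class CacheSketch:
    def __init__(self, free_cache_on_written=False):
        self.l1 = {}        # volatile RAM cache
        self.l2 = {}        # persistent SSD cache (survives reboots)
        self.disk = {}
        self.dirty = set()  # blocks with deferred writes pending
        self.free_cache_on_written = free_cache_on_written

    def read(self, block):
        if block in self.l1:
            return self.l1[block]
        if block in self.l2:
            data = self.l2[block]
        else:
            data = self.disk.get(block)
            self.l2[block] = data      # data read from disk is cached into L2
        self.l1[block] = data
        return data

    def write(self, block, data):
        self.l1[block] = data          # deferred write is buffered in L1 only
        self.dirty.add(block)

    def flush(self):
        for block in sorted(self.dirty):
            self.disk[block] = self.l1[block]   # flush bypasses L2
        if self.free_cache_on_written:
            for block in self.dirty:
                self.l1.pop(block, None)  # else data stays readable in L1
        self.dirty.clear()
```

Note that in this sketch a flushed write never lands in L2, which is exactly the behavior being discussed: it only becomes L2-cached if it is later read back from disk.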
minhgi
Level 10
Posts: 255
Joined: Tue May 17, 2011 3:52 pm

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by minhgi »

Thanks Support.

Just an idea to squeeze more performance out of PrimoCache's already excellent functionality. The behavior is already there when no L1 cache is used: all writes get written to L2 and then to disk at the set deferred interval. This could be made optional, with a warning about quicker wear on the SSD. Besides, SSDs are cheap these days, and endurance is much better than in the early generations. I would most likely upgrade the SSD before it reaches 50% of its lifespan. Please make it an option, either via a command line switch or a checkbox with a huge warning beside it.
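The write path being requested here can be sketched in a few lines: every deferred write is staged on the L2 SSD first, then flushed to the backing disk once the configured interval elapses. Class and method names are illustrative only, not PrimoCache's API, and a real implementation would have to bound the staging area and handle crashes mid-flush.

```python
import time

# Hypothetical sketch of the requested write-back-through-L2 path.
class L2WriteBackSketch:
    def __init__(self, flush_interval_s=60.0):
        self.l2 = {}          # SSD staging area (this is the extra SSD wear)
        self.disk = {}
        self.pending = set()  # blocks staged on L2 but not yet on disk
        self.flush_interval_s = flush_interval_s
        self.last_flush = time.monotonic()

    def write(self, block, data):
        self.l2[block] = data     # write lands on the SSD immediately
        self.pending.add(block)

    def tick(self):
        # Called periodically; flush once the deferred interval has elapsed.
        now = time.monotonic()
        if now - self.last_flush >= self.flush_interval_s:
            for block in sorted(self.pending):
                self.disk[block] = self.l2[block]
            self.pending.clear()
            self.last_flush = now
```

The upside is visible in the sketch: after the flush, the data remains in `self.l2`, so a later read of the same block would be an SSD hit instead of a disk read.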
Support
Support Team
Posts: 3622
Joined: Sun Dec 21, 2008 2:42 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Support »

Hi minhgi, I forgot to mention another reason. The feature conflicts with the current design of the communication mechanism between L1 and L2, so it is not as easy to implement as you might expect. Besides, it might affect L1 cache performance, and even overall system responsiveness, while transferring data from L1 to L2. In consideration of the above, sorry, but we are unlikely to support this feature.
ml70
Level 2
Posts: 9
Joined: Sat Dec 07, 2019 6:31 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by ml70 »

support wrote: Fri Jan 03, 2020 1:33 pm 2) Considering the L2 SSD's write lifespan, we generally want to reduce writes to L2. It would be wasteful to transfer all L1 write data to L2.
There are use cases where this would actually be absolutely preferable: when doing heavy caching, the cache SSD is an expendable component with a finite lifespan, and it simply gets replaced when it has done its duty.
More RAM is not an alternative because of cost and size limits, whereas 2 TB NVMe drives are readily available.

Of course, you'd have to warn people about the consequences of writing everything through L2.
hang10z
Level 2
Posts: 6
Joined: Tue Jun 20, 2017 3:12 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by hang10z »

Will try the alpha out soon. Very excited to see improvements in this software; I love it.

One feature request I would like, though I'm not sure if it's possible, is to "pin" certain folders in the cache. What I mean by that is to keep certain folders cached and to prioritize accelerating their reads/writes over other operations. This would come in handy especially for running VMs on slower disks, especially VDI. I deploy enterprise SANs that have this feature, for instance Nimble.
Babel17
Level 5
Posts: 52
Joined: Tue Nov 03, 2015 3:41 pm

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Babel17 »

Is it the case that, while using the 4.0 Alpha, the L2 cache can't be shared between two cache tasks? I use two cache tasks because I prefer a very high deferred-write delay on one of my drives. I get a bit of "urgent" write activity on that drive, and I'm hoping an L2 cache would handle it. It's not a big deal, but I figured it was worth asking about. I guess I could create another L2 cache, as I do have two older SSDs that should have tons of lifespan left. They're MLC and have mostly been used for cold storage. I bought them nearly four years ago, back when they were still fairly expensive, and I got into the habit of using them that way. I use a newer SSD with lots of free space for my current L2 cache.

P.S. So far so good with the Alpha. It seems to smooth out my already very smooth setup even more.
Support
Support Team
Posts: 3622
Joined: Sun Dec 21, 2008 2:42 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Support »

hang10z wrote: Wed Jan 08, 2020 11:39 pm One feature request I would like, though I'm not sure if it's possible, is to "pin" certain folders in the cache. What I mean by that is to keep certain folders cached and to prioritize accelerating their reads/writes over other operations. This would come in handy especially for running VMs on slower disks, especially VDI. I deploy enterprise SANs that have this feature, for instance Nimble.
Thanks. We're working on file-based caching rules.
Support
Support Team
Posts: 3622
Joined: Sun Dec 21, 2008 2:42 am

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Support »

Babel17 wrote: Thu Jan 09, 2020 1:02 am Is it the case that, while using the 4.0 Alpha, the L2 cache can't be shared between two cache tasks?
No, the 4.0 alpha doesn't change this. An L2 storage volume can be used by up to 16 cache tasks. You may assign part of the L2 storage volume to one cache task (by setting the L2 cache size), and PrimoCache will automatically allocate that space from the whole L2 storage volume as an L2 partition for that cache task.
Of course, each cache task has its own independent L2 cache space and cannot access other cache tasks' L2 caches.
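The allocation rule described here amounts to simple slicing of one volume among tasks; a minimal sketch follows. The function name and the MB units are hypothetical, but the two constraints (at most 16 tasks per volume, and the slices cannot exceed the volume) come from the reply above.

```python
# Hypothetical sketch of carving one L2 storage volume into per-task partitions.
MAX_TASKS_PER_VOLUME = 16

def allocate_l2_partitions(volume_mb, requested_sizes_mb):
    """Return each task's L2 partition size, or raise if over-committed."""
    if len(requested_sizes_mb) > MAX_TASKS_PER_VOLUME:
        raise ValueError("an L2 volume serves at most 16 cache tasks")
    if sum(requested_sizes_mb) > volume_mb:
        raise ValueError("requested L2 sizes exceed the storage volume")
    # Each task gets its own independent slice; slices are never shared.
    return list(requested_sizes_mb)
```

This also shows why sizing one task's L2 cache to the whole volume leaves nothing to assign to a second task on the same volume.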
Babel17
Level 5
Posts: 52
Joined: Tue Nov 03, 2015 3:41 pm

Re: [2019-12-31] PrimoCache 3.2.0 and 4.0.0 alpha released!

Post by Babel17 »

support wrote: Thu Jan 09, 2020 9:23 am
Babel17 wrote: Thu Jan 09, 2020 1:02 am Is it the case that, while using the 4.0 Alpha, the L2 cache can't be shared between two cache tasks?
No, the 4.0 alpha doesn't change this. An L2 storage volume can be used by up to 16 cache tasks. You may assign part of the L2 storage volume to one cache task (by setting the L2 cache size), and PrimoCache will automatically allocate that space from the whole L2 storage volume as an L2 partition for that cache task.
Of course, each cache task has its own independent L2 cache space and cannot access other cache tasks' L2 caches.
Thank you, that pointed me in the right direction. I just had to lower my L2 storage size under my main cache task from MAX to one that left room for my other cache task. Once I did that, the L2 storage options for the other cache task were no longer greyed out. I somehow missed seeing that. I have it up and running now. :)