How to pre-load the cache on a high-end PC? [Topic solved]
-
- Level 3
- Posts: 16
- Joined: Tue Jan 10, 2017 12:34 am
How to pre-load the cache on a high-end PC?
I work with large data sets composed of 1GB and larger files which I then do random i/o against.
I've got a large desktop PC with good specs:
- PrimoCache 2.7.0
- Windows 10
- 128 GB RAM
- 2 TB of NVMe SSD to use as L2 cache (Samsung 960 Pro 2TB newly purchased)
- AsRock x99 MB with M.2 slot
- the 2TB NVMe SSD Benchmarks at 3.5 GB/sec read, 2.1 GB/sec write (per Samsung Magician test application)
- 10+ TB of USB-3 external media
I'm trying PrimoCache for the first time.
I set up:
L1 - 32GB
L2 - 2TB NVMe SSD
In general I know what data files I'm going to work with, so I'd like to preload L1/L2 via linear reads. I just tried a test where, immediately after a reboot, I started PrimoCache, then I copied 500 GB from a volume (which I have set to cache) to a network share (not cached).
During the first 32GB of the copy I saw my L1 cache's free space go from 32GB down to near zero - good news.
But after that, I did not see my L2 cache being loaded. It stayed at 1907 GB free. I let the whole 500 GB copy finish and the L2 cache still showed 1907 free.
I'm confused by that. Is the display wrong? Is there something else I need to do to cause the L2 cache to be populated?
-
- Level 3
- Posts: 16
- Joined: Tue Jan 10, 2017 12:34 am
Re: How to pre-load the cache on a high-end PC?
Hmm...
The first time I did this I used a non-standard tool to do the copy.
I just re-did the test using a simple drag and drop (again after a fresh reboot). The second time the L2 cache appeared to work as expected.
Except it really seemed slow to load the L2 cache.
There was no L2 Storage write activity showing up for the first several minutes. Then the "Total read" activity stopped increasing and the L2 Storage writes started advancing. They advanced pretty slowly. I didn't time it, but it was nowhere near the 2 GB/second that the L2 cache SSD benchmarked at.
3 questions:
- How can I best pre-load the L2 cache without having to make an extra copy of my data?
- What speed should I expect to see reading/writing the L2 cache with my hardware?
- Are there any write-ups that explain what type of performance behavior I should expect?
Thanks
Re: How to pre-load the cache on a high-end PC?
PrimoCache populates the L2 cache when it detects that the system is idle, so as not to affect other applications' tasks. If you feel the L2 population is too slow, you may set the "Gather Interval" to "FASTEST" in the Advanced Level-2 Cache Options.
gregfreemyer wrote:How can I best pre-load the L2 cache without having to make an extra copy of my data?
Sorry, so far this is not available. However, we may support it in future versions.
-
- Level 3
- Posts: 16
- Joined: Tue Jan 10, 2017 12:34 am
Re: How to pre-load the cache on a high-end PC?
RE: set the "Gather Interval" to "FASTEST" in the Advanced Level-2 Cache Options
That makes a meaningful difference. With the default it seemed that the L2 cache was actually getting in the way of large data copies.
Specifically, today I have made several 500GB file copies with my normal tools and I see no degradation on the first copy. After that the L2 cache does indeed provide benefit. I haven't done any real benchmarking and I'm only 24 hours into working with PrimoCache with this setup, but so far I have high expectations.
-
- Level 7
- Posts: 88
- Joined: Wed Jan 11, 2017 12:57 am
Re: How to pre-load the cache on a high-end PC?
So... I'm not the only one who thought of this possibility with NVMe M.2's. How big were the caching volumes?
I was only experimenting with it, while using Primo on four different machines including a laptop.
I merely bought the 960 EVO 250GB with the lesser performance specs to see how I might accelerate both SATA SSDs and HDDs.
But -- as explained in a thread I created -- I can't understand why I get merely the (slow) source device's expected results when caching through the NVMe without a RAM cache. Still, the performance increase is obviously "there", as I explained.
-
- Level 3
- Posts: 16
- Joined: Tue Jan 10, 2017 12:34 am
Re: How to pre-load the cache on a high-end PC?
Bonzai,
I subscribed to your thread.
I had 6 or 7 USB-3 drives hooked up to my PC today: a 5TB, a 3TB, a 2TB, and the rest 1TB. But I was working with 3 different 500 GB data sets, each on a different drive.
I had L1 set to 32GB of ram and had the full 2TB NVMe as L2 cache. I too am mostly doing qualitative tests.
-
- Level 7
- Posts: 88
- Joined: Wed Jan 11, 2017 12:57 am
Re: How to pre-load the cache on a high-end PC?
Yo, Greg!
I'm trying to wrap my brain around what you're trying to do.
Is this a database system? Is your data stored in relational tables and files?
How big is the largest file that gets loaded from these spinner drives? I could ask why so many USB3 connections, or why you haven't built some sort of NAS, but it's peripheral.
The choice of an L1 should probably anticipate the largest file size loaded at one time. I wouldn't attempt to use an entire 1TB NVMe drive as SSD cache unless you could actually do what you asked with your thread.
For workstations, when they released the Z68 chipset or possibly others, or if you had one of several Marvell chipsets in a storage controller, you could use Intel ISRT or Marvell's Hyper-Duo for SSD-caching, but Primo is hardware and storage-mode agnostic. The original solutions of some 5 years ago limited you to an SSD cache-drive size of about 60 GB.
The first thought one would have about this is to accelerate the boot-system disk. If it was an SSD, you couldn't do much until we got these Sammy NVMe M.2 SSDs (cards), or some other PCIe NVMe. Now you can accelerate a SATA SSD to an NVMe M.2, and cache it to RAM as well.
So that's my first priority. If it were a matter of a database, it would be on my home-server and accelerated there, limited only by my Gigabit Ethernet connection to that server. That would be second priority. But I can't imagine using more than -- say -- 100 GB caching volumes on an NVMe. I've discovered that I can cache my SATA SSD boot-disk with only about 40GB of a 100GB caching volume, and I've probably "over-cached" my HDDs with the remaining 60GB.
Primo doesn't quite work the way you might want it -- as with your question. But it works nevertheless in a stealthy way to cache to SSD or NVMe SSD.
I'm just clueless about data-sets the size you describe, or how they could all be a single file.
For me, I'm testing out my dual-boot [Win7/10] by splitting disk resources so they don't get mixed up during any given OS user session. My boot-system is a cheap ADATA 480GB SSD. I have a 2TB Seagate Barracuda spinner with an extension to the Program Files and other things that would be specific to a given OS -- divided in 1TB parts for each respective OS. Then there's a 1TB media drive which isn't cached at all. RAM wouldn't help it; and I'd have to split it between the OS's in different volumes to cache it any other way. It doesn't need to be cached.
But both OS'es have to be able to access the same data, change and append to it, without screwing up a cache. That data would either be on an uncached (or with RAM-only) NVMe volume, or an HDD volume. The problem of using the media drive for those files as well -- I don't want to cache a drive containing 10GB HD movies and DVR captures.
I did a maintenance check on a 60GB caching-SSD after two years, before I flushed the cache and recreated the volume during the maintenance. It had filled up caching the OS-boot drive -- only a few GB free space. But the TBW racked up by the drive was less than 5TB.
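For anyone who wants to do the same kind of wear check, one way to read the counters is smartmontools - a rough sketch only, not necessarily what I used; the device paths are examples and the exact attribute names and units vary by vendor:
# SATA SSD: look at Total_LBAs_Written (usually multiply by the 512-byte sector size for bytes written)
smartctl -A /dev/sda
# NVMe SSD: look at "Data Units Written" (each unit is 1000 x 512 bytes)
smartctl -A /dev/nvme0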
-
- Level 9
- Posts: 184
- Joined: Thu Feb 03, 2011 3:22 pm
Re: How to pre-load the cache on a high-end PC?
Hi!
A few questions about your setup:
Presumably you use NTFS, right?
Which block size did you use to format your drives and cache?
The default has always been 4 KB most of the time for NTFS. However, a block size that small forces PrimoCache to use a large amount of overhead for indexing the source disk and the L1/L2 cache itself. As you are dealing mostly with really big data files, small block sizes simply make no sense and only waste RAM and overhead. In turn, if your cache is too large, you need to use larger block sizes, otherwise PrimoCache may not be able to utilize it due to RAM constraints. I had that scenario in my setup, and as I have to deal with large files often too (film & video post production...), we have formatted everything using 64KB blocks. That reduced memory overhead by 64/4 = 16 times...
I currently use a 2044 GByte SSD RAID0 as cache and am considering swapping to NVMe RAID0 for higher bandwidth. 2044 GByte is the physical maximum cache size PrimoCache 2.x can address. We use only a read cache, for safety reasons, as a power outage/crash could leave you with corrupt disks. The cache disk is used to cache several volumes (roughly 150 TByte in total). The cache size is about 1.5 to 2 times the size of the data we touch every day, i.e. it's very likely that all users will receive data from the cache once they have first touched it - for the rest of the day and usually in the following days too. Unfortunately, freshly written data is not automatically kept in the cache, so it only gets loaded into the cache after a read operation. That should be changed in PrimoCache 3.x and it would be a huge timesaver for us, as about 90% of our writes (rendered image data) will be read immediately afterwards...
I have high hopes that PrimoCache 3 will introduce a lot of often-requested features and may even be able to recover a write cache after a crash/reboot/power outage. In that context it may hopefully be able to re-use a filled/populated SSD/NVMe cache without reloading/repopulating the data. That would make systems a lot faster after a reboot compared to now.
Maybe it's worth looking into your block size settings and trying to optimize them. With the file sizes you have, you should always use a large block size. While PrimoCache itself can use up to 512 KByte, NTFS clusters can only be 64 KB max and ReFS uses 64 KB as default. So 64 KB seems to be the sweet spot for now.
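If you want to check or change the cluster size, the standard Windows tools are enough - a rough sketch, run from an elevated cygwin shell (these are plain Windows commands, so an admin cmd prompt works too - just drop the # comment lines there); the drive letter is only an example, and reformatting of course erases the volume:
# Show the current cluster size - look for "Bytes Per Cluster"
fsutil fsinfo ntfsinfo D:
# Reformat a data volume with 64 KB clusters (destroys everything on D:)
format D: /FS:NTFS /A:64K /Q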
-
- Level 3
- Posts: 16
- Joined: Tue Jan 10, 2017 12:34 am
Re: How to pre-load the cache on a high-end PC?
BonzaiDuck wrote: Yo, Greg!
I'm trying to wrap my brain around what you're trying to do.
Is this a database system? Is your data stored in relational tables and files?
I do computer forensics. My datasets are copies (images) of entire disk drives with all the sectors physically on the drive. In general, it is considered unstructured data.
BonzaiDuck wrote: How big is the largest file that gets loaded from these spinner drives? I could ask why so many USB3 connections, or why you haven't built some sort of NAS, but it's peripheral.
I'm processing a 3.7TB drive right now. I made the copy of it over the weekend, so this data is now connected to my PC for the first time.
I segmented it down to 8GB per segment file, but it is misleading to think of it as 8GB. It is really 3.7TB. Obviously that is too large to pre-load the L2 cache, so I'm not doing that. To leverage the L2 cache, the first thing I did was "hash" all the PSTs on the drive. There was 500GB of PSTs on the drive. I hashed those first thing this morning, which caused the PSTs to all be in L2 cache after the hashing was done. After hashing those, I compared the hashes and ignored any PST copies that had identical hashes to others on the drive. That got me down to 235 GB of unique PSTs.
Within that 3.7TB, the largest files I care about are those PST files. To repeat, there's 235GB of unique PST files on the drive image. The biggest single PST is 40GB. I put all of those PSTs in L2 cache as my first action for the day.
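For what it's worth, the hashing pass is essentially just a linear read of every PST, so the same cache-warming effect could be had with a plain cygwin pass - rough sketch only, the case path is made up and md5sum stands in for whatever hash tool you prefer:
# Linear-read and hash every PST; the read itself is what pulls them into L1/L2
find /cygdrive/e/case01 -iname '*.pst' -print0 | xargs -0 md5sum | sort > pst_hashes.txt
# Hash values that appear more than once mark duplicate PST copies that can be skipped
cut -d' ' -f1 pst_hashes.txt | uniq -d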
I just finished parsing those PSTs now. That means going through each PST and pulling out every email. It took right at 2 hours. That is crazy fast. 5 years ago, with a decent PC and rotating drives, my benchmark was 1 GB of PST data processed per hour. This is going at closer to 120 GB per hour.
Note that PSTs are not read linearly when parsing them. You do a ton of random i/o. That's why it takes about 1GB/hour on rotating drives with no decent caching mechanism.
Now that the PSTs are parsed, I have to run a keyword search against the entire drive. This time I'm only searching existing files (non-deleted), so it's 2.8 TB of data I need to search. I'm going to "pause" my cache for the 3.7TB image because I don't want the PSTs/emails to be dropped.
Since only 10% of the search will be in L2 cache, I imagine it will take overnight to run. It's only 10:30 AM here, so I'll find out tomorrow AM if it got done that fast or not.
BonzaiDuck wrote: The choice of an L1 should probably anticipate the largest file size loaded at one time. I wouldn't attempt to use an entire 1TB NVMe drive as SSD cache unless you could actually do what you asked with your thread.
Last week I saw my 2TB L2 Cache down to 512GB free. I've got my L1 cache set at 32GB, but maybe I should go to 64GB based on what you said.
BonzaiDuck wrote: For workstations, when they released the Z68 chipset or possibly others, or if you had one of several Marvell chipsets in a storage controller, you could use Intel ISRT or Marvell's Hyper-Duo for SSD-caching, but Primo is hardware and storage-mode agnostic. The original solutions of some 5 years ago limited you to an SSD cache-drive size of about 60 GB.
Thanks
BonzaiDuck wrote: The first thought one would have about this is to accelerate the boot-system disk. If it was an SSD, you couldn't do much until we got these Sammy NVMe M.2 SSDs (cards), or some other PCIe NVMe. Now you can accelerate a SATA SSD to an NVMe M.2, and cache it to RAM as well.
It's an SSD, but a SATA one. I've added it to my L2 cache setup. I don't know how much it helps, but it doesn't seem to hurt.
BonzaiDuck wrote: So that's my first priority. If it were a matter of a database, it would be on my home-server and accelerated there, limited only by my Gigabit Ethernet connection to that server. That would be second priority. But I can't imagine using more than -- say -- 100 GB caching volumes on an NVMe. I've discovered that I can cache my SATA SSD boot-disk with only about 40GB of a 100GB caching volume, and I've probably "over-cached" my HDDs with the remaining 60GB.
BonzaiDuck wrote: Primo doesn't quite work the way you might want it -- as with your question. But it works nevertheless in a stealthy way to cache to SSD or NVMe SSD.
I've found I can preload it pretty well using cygwin (a Linux compatibility layer that is open source/free). I start a cygwin bash shell, then:
cd <image_dir>; cat * | dd of=/dev/null bs=1M status=progress
With the gather interval set at "1", that seems to do an excellent job of pre-loading the cache with that image prior to me starting to do my analysis.
Last week I was working with a set of 8 images spread across 6 USB-3 drives. I simultaneously started up a cygwin pre-load command on each drive. PrimoCache seemed to do an excellent job of putting all of those images into L2 cache simultaneously. That's when I saw my free L2 cache drop to 500 GB or so.
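In case it helps anyone, kicking off one pre-load per drive can be scripted from a single cygwin shell - a rough sketch, with made-up mount points you would swap for your own image directories:
# One linear-read pre-load per USB-3 drive, all running in parallel
for d in /cygdrive/e/images /cygdrive/f/images /cygdrive/g/images; do
    ( cd "$d" && cat * | dd of=/dev/null bs=1M status=progress ) &
done
wait  # PrimoCache picks the blocks up into L1/L2 as they stream past
The status=progress lines from the different drives interleave on screen, but the reads themselves run fine in parallel.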
BonzaiDuck wrote: I'm just clueless about data-sets the size you describe, or how they could all be a single file.
For me, I'm testing out my dual-boot [Win7/10] by splitting disk resources so they don't get mixed up during any given OS user session. My boot-system is a cheap ADATA 480GB SSD. I have a 2TB Seagate Barracuda spinner with an extension to the Program Files and other things that would be specific to a given OS -- divided in 1TB parts for each respective OS. Then there's a 1TB media drive which isn't cached at all. RAM wouldn't help it; and I'd have to split it between the OS's in different volumes to cache it any other way. It doesn't need to be cached.
But both OS'es have to be able to access the same data, change and append to it, without screwing up a cache. That data would either be on an uncached (or with RAM-only) NVMe volume, or an HDD volume. The problem of using the media drive for those files as well -- I don't want to cache a drive containing 10GB HD movies and DVR captures.
Thanks
BonzaiDuck wrote: I did a maintenance check on a 60GB caching-SSD after two years, before I flushed the cache and recreated the volume during the maintenance. It had filled up caching the OS-boot drive -- only a few GB free space. But the TBW racked up by the drive was less than 5TB.
My 2TB L2 Cache drive is at 9.1 TB written and I've only had it 3 weeks. Say 0.5 TB/day. 1200 TBW means 2400 days before it dies of use. That's 6 1/2 years. That seems fine.
-
- Level 9
- Posts: 184
- Joined: Thu Feb 03, 2011 3:22 pm
Re: How to pre-load the cache on a high-end PC?
Hi Greg,
After your description I would consider using a 4 TByte SSD storage stripe, preferably made of, say, 2 * 2 TByte or 4 * 1 TByte M.2 NVMe with a fitting PCIe adapter card (or onboard, if your mainboard has the right slots).
Then you would just copy the data from the source disk to the M.2 array in total and analyze from there. An array of two M.2 NVMe like the Samsung 960 Pro likely runs at 6+ GByte/s, and over 10 GByte/s with four M.2 NVMe cards installed, which could potentially cut down your processing time by *magnitudes*.
Here is a fitting adapter card, but I have not yet tested it myself:
http://amfeltec.com/products/pci-expres ... d-modules/
Similar cards like the HP Z-Drive Turbo Quad or the one from Dell run at these speeds, usually based on Samsung NVMe modules like the 950/951/960.
If the reads are so random (possibly spread over the full 3.73 TByte you named), then the caching will not help as much as a true SSD storage approach, where effectively EVERY search is surely fed from the brutally fast SSD.
In this case I'd use PrimoCache either with this as a Level 2 cache (unfortunately PrimoCache can only address 2 TB right now) or, better, only with an L1 cache and the SSD stripe as a native disk. Once a new version of PrimoCache allows addressing more than 2 TB, you can just enable the L2 cache and get a bit more flexible, as you won't have to copy anything - just begin working and the system does the caching transparently...
Of course, this is a budget question.
Just a thought.
Last edited by Axel Mertes on Mon Jan 23, 2017 4:10 pm, edited 2 times in total.