The half-baked idea is to read some data from the L2 cache drive and some from the base storage drive(s) at the same time, in a kind of RAID 0 implementation, to beat the read speed of the L2 cache device alone.
I realise that SSDs are way faster than HDDs etc., but if the read head happens to be in the vicinity of some requested, larger, sequential files...
(and proximity is not a seek-time issue at all if the base storage is a slower SSD...)
At the moment, using an L2 cache (the trial, as the full version is unaffordable to me) is a bit slower than a pure SSD installation.
That means the moneyed hardware enthusiasts and tweakers just use SSDs and ignore PrimoCache.
If you can manage to beat the performance of an SSD on its own, with the accompanying capacity increase, those enthusiasts will sit up, take notice and buy..!!
I have no idea how this might be achieved.
Perhaps by simply sending a filtered read request (for larger, sequential data only) to the base drive, simultaneously with the L2 request?
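Just to illustrate what I mean, here's a toy sketch of the split-read idea in Python: serve the first part of a large sequential request from the "L2 cache" copy and the rest from the "base drive" copy in parallel, RAID 0 style. The file paths, the fixed 50/50 split and the function names are all hypothetical; a real implementation would live in the driver and split proportionally to each device's speed.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_range(path, offset, length):
    """Read `length` bytes starting at `offset` from one device (file)."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def split_read(cache_path, base_path, offset, length):
    """Split one sequential read across two devices and run both halves
    concurrently (hypothetical 50/50 split for illustration)."""
    half = length // 2
    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(read_range, cache_path, offset, half)  # from L2 cache
        b = pool.submit(read_range, base_path, offset + half,
                        length - half)                          # from base drive
        return a.result() + b.result()

# Demo: two identical temp files stand in for the two devices
# holding the same cached data.
data = os.urandom(1 << 20)  # 1 MiB of test data
paths = []
for _ in range(2):
    fd, p = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    paths.append(p)

result = split_read(paths[0], paths[1], 0, len(data))
assert result == data  # the stitched-together read matches the original

for p in paths:
    os.remove(p)
```

Of course in practice the tricky part is keeping both copies coherent and deciding when the base drive is actually worth asking; this only shows the request splitting itself.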
Please note how important it is, both to the above idea and to your ...thinking clientele, to be able to run a defragmenter without upsetting the L2 cache.
(MyDefrag (free) is 15% faster than the standard Windows defragmenter...)
The same goes for TRIM support on L2, L3 etc. SSDs.
Suggestions around PrimoCache