Works in Windows Server 2012 (with Storage Spaces), but I have some questions.
Posted: Sun Sep 02, 2012 3:09 pm
Hi,
I just wanted to let you guys know I did some testing of running FancyCache in Windows, and it seems to work with no problems other than the ones mentioned here already. I was able to use FancyCache in combination with Storage Spaces with no problems.
Here is my test machine:
Core i7 970
24 GB RAM
2xIBM m1015 SAS controller
8xOCZ Vertex 3 120GB SSD
8xWestern Digital Enterprise Class (RAID) 1 TB HDD
Setup:
I created 2 storage spaces, one for each drive type: one for the 8 HDDs and one for the 8 SSDs.
An 8xHDD storage pool was created with a thin-provisioned 1 TB drive, using Parity (RAID5-like, striped with parity).
An 8xSSD storage pool was created with a thin-provisioned 1 TB drive, using Simple (RAID0-like, striped).
Obviously the product is still in beta, so I'm sure some of my results may be due to unfinished or unoptimized code, but I figured I'd give my feedback to help move you guys along.
I'm using the ATTO disk benchmarking tool to generate sequential reads and writes. It doesn't provide accurate real-world results, but I figured it should show best-case performance. I followed up with Anvil Pro to simulate real(ish)-world performance, and it seems to confirm the results.
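For anyone who wants to see roughly what ATTO is doing, here's a minimal Python sketch of the same idea: write a fixed amount of data sequentially at each block size and report throughput. This is just my own toy approximation (the file name, sizes, and block-size range are my choices, not ATTO's internals, and it goes through the OS page cache rather than doing direct I/O like ATTO can):

```python
import os
import time

def sequential_write_throughput(path, block_size, total_bytes=16 * 1024 * 1024):
    """Write total_bytes sequentially in block_size chunks; return MB/s."""
    buf = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block_size
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits the device
    elapsed = time.perf_counter() - start
    os.remove(path)
    return (written / (1024 * 1024)) / elapsed

# Sweep block sizes from 4 KB to 1 MB (illustrative range, not ATTO's exact one)
for exp in range(12, 21):
    bs = 2 ** exp
    mbps = sequential_write_throughput("bench.tmp", bs)
    print(f"{bs // 1024:6d} KB blocks: {mbps:8.1f} MB/s")
```

On a cached volume you'd expect the small-block numbers to be where a RAM or SSD cache helps the most.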
Level-2 cache (MBU or LBW) is only used when I exceed the Level-1 cache. So for my setup, if I don't have deferred write enabled, write speed is terrible because writes are being passed directly to the HDD storage space, which has that RAID5-like write penalty. Write speed only becomes usable when I turn deferred write on. There are some workloads where I can use this with no problem, but I can't see using it with a virtual machine or a database, as they could get corrupted if there is a power loss.
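For context, that RAID5-like write penalty comes from the read-modify-write cycle: a small write costs four physical I/Os (read old data, read old parity, write new data, write new parity). A quick back-of-the-envelope sketch (the per-disk IOPS figure is an assumed round number, not a measurement of my drives):

```python
def raid5_small_write_iops(per_disk_iops, disks):
    """Small random writes on RAID5: each logical write costs 4 physical I/Os
    (read old data, read old parity, write new data, write new parity)."""
    return per_disk_iops * disks / 4

def raid0_write_iops(per_disk_iops, disks):
    """Simple striping: every physical I/O is useful work."""
    return per_disk_iops * disks

hdd_iops = 150  # assumed ballpark for a 7200 rpm enterprise HDD
print(raid5_small_write_iops(hdd_iops, 8))  # 8-disk parity space -> 300.0
print(raid0_write_iops(hdd_iops, 8))        # 8-disk simple space -> 1200
```

So the same eight spindles deliver roughly a quarter of the small-write IOPS in parity mode, which matches why deferred writes (which coalesce and absorb those small writes) make such a difference here.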
What I had hoped to do was:
- Use the RAM as a fast read cache, using it as a write cache only for transient data that won't cause any issues if lost (such as a pagefile or a temporary database).
- Use the SSDs to cache frequently accessed reads (level 1 & 2) that are not popular enough to be held in RAM, and use them as a write cache for all writes: pass all writes directly through to the SSD stripe set, which would then be synchronized to the HDDs (at their slower transfer rate) for slow storage and parity.
- Use the HDDs for high-capacity, slow storage. The HDDs are where I can provide redundancy most economically; if the SSDs are intercepting the writes first, then I don't have to be concerned with the extra overhead that parity or 3x mirroring adds to write speed.
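The read side of the tiering I'm describing can be sketched as a two-level LRU: reads promote blocks into the RAM tier, blocks evicted from RAM get demoted into the SSD tier, and only SSD evictions fall all the way back to the HDDs. This is just a toy model of how I imagine the levels would interact, not FancyCache's actual design:

```python
from collections import OrderedDict

class TwoTierCache:
    """Toy two-level read cache: L1 = RAM, L2 = SSD. LRU eviction at each
    level; blocks evicted from L1 are demoted into L2 instead of dropped."""

    def __init__(self, l1_blocks, l2_blocks):
        self.l1 = OrderedDict()  # block_id -> data (RAM tier)
        self.l2 = OrderedDict()  # block_id -> data (SSD tier)
        self.l1_blocks = l1_blocks
        self.l2_blocks = l2_blocks

    def read(self, block_id, backing_read):
        if block_id in self.l1:            # L1 hit: refresh recency
            self.l1.move_to_end(block_id)
            return self.l1[block_id]
        if block_id in self.l2:            # L2 hit: promote to L1
            data = self.l2.pop(block_id)
        else:                              # miss: fetch from the HDD space
            data = backing_read(block_id)
        self._insert_l1(block_id, data)
        return data

    def _insert_l1(self, block_id, data):
        self.l1[block_id] = data
        self.l1.move_to_end(block_id)
        if len(self.l1) > self.l1_blocks:  # demote coldest L1 block to L2
            old_id, old_data = self.l1.popitem(last=False)
            self.l2[old_id] = old_data
            if len(self.l2) > self.l2_blocks:
                self.l2.popitem(last=False)  # coldest L2 block falls to HDD
```

In this model the SSD tier only ever sees blocks that were hot enough to reach RAM once, which is the "not popular enough to be held in RAM" behavior I was hoping for.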
I've also made some observations on how RAM write caching performs, and I had hoped to see a more linear scale-up on RAM writes. The read speeds seem to scale nicely, but the write speeds take a hit that worsens the smaller the block size I use. I was able to reproduce this scaling behavior on both my older and my newest machines (i7-970 and i7-3930K). When I run the same benchmark on just the SSDs, everything scales up nicely until read speed falls off a cliff at 8192 KB reads.
I'm going to include some of my results so you can visualize what ATTO is reporting. It looks like the forum software only allows one uploaded image at a time.
8xSSDs Simple (raid 0)