Re: Using hidden RAM in Windows Server 2008
Posted: Mon Jun 16, 2014 8:35 am
I was looking at the LSI controller (or possibly its successor) tested here:
http://www.tomshardware.de/RAID_Control ... 98-11.html
It reaches a performance level that fully meets our goals. Since we will probably use "standard" SATA SSDs instead of the more expensive server-grade SAS ones, we plan to use RAID 5 or 6 to cover the foreseeable event of an SSD failure. I would think that in the long run this may turn out cheaper. Of the roughly two dozen SSDs we have run so far, we have had just one failure, and that was a mechanical connector issue. So however hard we have used them (as caching drives), they have held up for years now. I would assume that by the time the warranty runs out, the next generation will be around the corner anyway, so we may swap drives or the controller at some point. But again, to feed a 20 GBit Ethernet link, a 2+ GByte/s SSD RAID should be a good fit. Latency is very low, and using RAM as a first-level cache would let the server deliver data at speeds we have not seen yet. Response times should be very small compared to what we experience now (I hope).
As disks we are aiming at, for example, the Samsung 840 EVO 1000 GByte or the Crucial M550 1000 GByte. Given that each of them is rated at around 500 MByte/s, even allowing for RAID parity and controller overhead we should be fairly safe using 8 of them to reach around 2 GByte/s total.
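To put the numbers next to each other, here is a quick back-of-the-envelope calculation in Python; the 0.7 efficiency factor is my own assumption for controller/stripe overhead, not vendor or benchmark data:

[code]
# Rough sizing sketch: can an 8-drive SATA SSD RAID 5/6 feed a
# 20 GBit Ethernet link (~2.5 GByte/s)? All numbers are assumptions.

MB = 1000**2

def raid_throughput(drives, per_drive_mb_s, parity, efficiency=0.7):
    """Very rough estimate. Reads: every drive contributes.
    Writes: only the data drives (drives - parity) carry payload.
    'efficiency' is a guessed factor for controller/stripe overhead."""
    read = drives * per_drive_mb_s * MB * efficiency
    write = (drives - parity) * per_drive_mb_s * MB * efficiency
    return read, write

link_demand = 20e9 / 8  # 20 GBit/s in bytes/s, roughly 2.5 GByte/s

for parity, name in ((1, "RAID 5"), (2, "RAID 6")):
    read, write = raid_throughput(8, 500, parity)
    print(f"{name}: read ~{read/1e9:.1f} GByte/s, write ~{write/1e9:.1f} GByte/s "
          f"(link needs ~{link_demand/1e9:.1f} GByte/s)")
[/code]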
When writing data, I would expect the system to first use local RAM as a write cache to collect larger writes, then write them through to normal HDD storage for safety (so nothing is lost in the case of a power failure of any kind). From the RAM cache, data should then be written (deferred) to the SSD cache so it stays on a fast cache medium for later re-use (again, rendering & review make up most of the workload). These deferred writes should also minimize SSD write cycles.
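Roughly what I have in mind for the write path, as a purely illustrative Python sketch; the tier objects and method names are invented to show the intended flow, not any real Windows Server or controller API:

[code]
# Illustrative write-path sketch only; all interfaces are made up.

class TieredWritePath:
    def __init__(self, ram_cache, hdd_store, ssd_cache, flush_threshold=64 * 1024**2):
        self.ram_cache = ram_cache          # fast volatile collection buffer
        self.hdd_store = hdd_store          # durable primary storage
        self.ssd_cache = ssd_cache          # fast re-read tier
        self.flush_threshold = flush_threshold
        self.pending = []                   # blocks collected in RAM

    def write(self, block):
        # 1. Collect small writes in RAM to form bigger sequential writes.
        self.pending.append(block)
        self.ram_cache.put(block.key, block.data)
        if sum(len(b.data) for b in self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # 2. Write through to HDD so a power failure cannot lose data.
        self.hdd_store.write_many(self.pending)
        # 3. Deferred, batched copy into the SSD cache for fast later re-reads
        #    (rendering & review), which keeps SSD write cycles down.
        self.ssd_cache.write_many(self.pending)
        self.pending.clear()
[/code]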
This is kind of a tiered storage model: data is served from the caches first, then from the HDDs, and it works transparently on the server. My wish would be to implement this architecture on every client as well (so on both server and client side). However, block storage and SMB shares are not the same game, and sharing a SAN requires global cache control (in terms of invalidating cached data blocks that have been changed by another machine).
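And the read side plus the invalidation problem I mean, again only as an invented sketch (nothing here corresponds to a real SMB or SAN API):

[code]
# Illustrative read path with the kind of invalidation hook that shared
# (SAN-style) access would need; all interfaces are made up.

class TieredReadPath:
    def __init__(self, ram_cache, ssd_cache, hdd_store):
        self.ram_cache = ram_cache
        self.ssd_cache = ssd_cache
        self.hdd_store = hdd_store

    def read(self, key):
        # Serve from the fastest tier that has the block.
        data = self.ram_cache.get(key)
        if data is None:
            data = self.ssd_cache.get(key)
            if data is None:
                data = self.hdd_store.read(key)
                self.ssd_cache.put(key, data)   # promote for later re-use
            self.ram_cache.put(key, data)
        return data

    def invalidate(self, key):
        # When another machine changes a block, every cache holding a
        # copy must drop it, otherwise clients read stale data.
        self.ram_cache.evict(key)
        self.ssd_cache.evict(key)
[/code]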
What do you think?