Using hidden RAM in Windows Server 2008

FAQ, getting help, user experience about PrimoCache
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Using hidden RAM in Windows Server 2008

Post by Axel Mertes »

Hi Romexsoftware Team!

This may have been asked before (I may even have asked it myself), but I don't know whether anything has changed in the release build:

Is it possible to use the hidden RAM in our Windows Server (currently Server 2008 R2 x64) as cache RAM?
The point is that the Standard edition of Windows Server 2008 R2 x64 supports only 32 GBytes of RAM, while the motherboard (because of the CPUs' triple-channel memory) requires populating it with either 48 GBytes or e.g. 96 GBytes (up to 192 GBytes is possible).
So even in the minimum config that reaches 32 GBytes, I end up with at least 16 GBytes hidden and unusable for me. If it were possible to use that memory for PrimoCache, it would be perfect. Then I would even consider upgrading to 96 or even 192 GBytes at some point to increase the RAM cache.
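For what it's worth, here is a minimal Python sketch that estimates the gap between installed and OS-visible RAM, i.e. roughly the memory that would be "hidden". It assumes Windows; both kernel32 calls are standard Win32 APIs, though how PrimoCache itself detects invisible memory is its own business:

```python
import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_uint64),
        ("ullAvailPhys", ctypes.c_uint64),
        ("ullTotalPageFile", ctypes.c_uint64),
        ("ullAvailPageFile", ctypes.c_uint64),
        ("ullTotalVirtual", ctypes.c_uint64),
        ("ullAvailVirtual", ctypes.c_uint64),
        ("ullAvailExtendedVirtual", ctypes.c_uint64),
    ]

kernel32 = ctypes.windll.kernel32

# RAM physically installed in the machine, reported in kilobytes (SMBIOS).
installed_kb = ctypes.c_uint64()
kernel32.GetPhysicallyInstalledSystemMemory(ctypes.byref(installed_kb))

# RAM the OS actually manages, capped by the edition's license limit.
status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

installed = installed_kb.value * 1024
visible = status.ullTotalPhys
print(f"Installed:  {installed / 2**30:.1f} GiB")
print(f"OS-visible: {visible / 2**30:.1f} GiB")
print(f"Hidden:     {(installed - visible) / 2**30:.1f} GiB")
```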

We are considering building a 10 GBit Ethernet setup as follows:

External FC SATA disk RAIDs striped as RAID60, direct-attached to the server. An internal SSD RAID (probably RAID5) with 6 or 8 * 1 TByte SSDs as a cache for the FC RAIDs. Dual 10 GBit connections to the switch.
Having lots of RAM cache would help a lot here to protect the SSD RAID from unnecessary wear and tear (writes deferred to the SSD).

Given the above approach, I have one further question:
If I use RAM as the 1st level cache and the SSD RAID as the 2nd level cache, is there a cache mode that writes through to the disks while doing deferred writes to the SSD cache?
That way data is safe, being written directly to the target drives, but the RAM cache contents are written to the SSD cache "deferred", to protect the SSD cache RAID from wear and tear.
If that is not yet implemented, I think it would be a useful addition!
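To make sure I'm describing the mode clearly, here is a toy sketch in Python (my own illustration; certainly not how PrimoCache works internally). Writes hit the target disk immediately, and the SSD copy is only populated later from a queue:

```python
import queue, threading

class WriteThroughDeferredSsdCache:
    """Write-through to disk; SSD cache is populated lazily to limit wear."""

    def __init__(self, disk, ssd_cache):
        self.disk = disk              # authoritative storage (FC RAID)
        self.ssd = ssd_cache          # dict-like block store (SSD RAID)
        self.pending = queue.Queue()  # blocks awaiting SSD population
        threading.Thread(target=self._lazy_populate, daemon=True).start()

    def write(self, block_no, data):
        self.disk.write(block_no, data)     # data is safe immediately
        self.pending.put((block_no, data))  # the SSD copy can wait

    def read(self, block_no):
        if block_no in self.ssd:
            return self.ssd[block_no]       # hot path: SSD hit
        return self.disk.read(block_no)     # miss: fall back to disk

    def _lazy_populate(self):
        # Drain the queue in the background; a real implementation would
        # batch and coalesce these writes to reduce SSD wear further.
        while True:
            block_no, data = self.pending.get()
            self.ssd[block_no] = data
```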

Any thoughts?

Thanks,
Axel
Davey126
Level 7
Posts: 99
Joined: Sun Mar 23, 2014 3:40 pm

Re: Using hidden RAM in Windows Server 2008

Post by Davey126 »

Axel Mertes wrote:Is it possible to use the hidden RAM in our Windows Server (currently Server 2008 R2 x64) as cache RAM?
Romex offers a generous trial period for PrimoCache. Assuming your server can safely be taken offline for testing, it would be a simple matter to install PrimoCache and see whether it detects the 'hidden' memory in your configuration.
Support
Support Team
Posts: 3623
Joined: Sun Dec 21, 2008 2:42 am

Re: Using hidden RAM in Windows Server 2008

Post by Support »

Yes, it is possible to use invisible memory on Server 2008 x64 as the level-1 cache. However, because the invisible-memory feature depends on the hardware configuration, you should try it first on your own system.
Axel Mertes wrote:If I use RAM as 1st level cache and SSD RAID as 2nd level cache, is there a cache mode that enables write through to the disks, while doing deferred writes to the SSD cache?
Well, PrimoCache has an algorithm to avoid unnecessary write wear on SSDs. However, since the SSD is used as a cache, speed is usually the primary consideration. PrimoCache stores cached data to the SSD when the system is idle, and only read data is stored there. So there are no "deferred writes" to the SSD cache as such.
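For illustration only (a rough sketch of the policy just described, not PrimoCache's actual code): read data is collected and flushed to the SSD level only while the system is idle.

```python
import collections, time

read_log = collections.deque()  # (block_no, data) appended on each disk read
ssd_cache = {}                  # stand-in for the level-2 SSD store

def system_is_idle():
    return True  # placeholder; a real check would look at current I/O load

def idle_populate(batch=256):
    # Only read data is copied, and only while idle, so cache population
    # never competes with foreground I/O or adds write load under pressure.
    while read_log:
        if not system_is_idle():
            time.sleep(1)
            continue
        for _ in range(min(batch, len(read_log))):
            block_no, data = read_log.popleft()
            ssd_cache[block_no] = data
```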
Violator
Level 5
Posts: 48
Joined: Mon Jan 16, 2012 11:13 pm

Re: Using hidden RAM in Windows Server 2008

Post by Violator »

What do you intend to use the server for?
It might be more beneficial to apply an Enterprise license (which raises the RAM limit) rather than to use the hidden RAM as cache.

As for SSD wear and tear... are you talking about something home-built?
Normally you have one or more controllers that take care of read and deferred-write caching.

Not trying to be an arse here, I just want to make sure you're not saving in the wrong places. I have seen an entire company go down because the owner didn't want to listen to advice regarding a SAN.
Instead of going for recommended brands with a good reputation, he went for a low-price no-name system. The result was so many disk crashes within 3 months that he was forced to close his entire company and outsource all customers to avoid going deep into the red.
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: Using hidden RAM in Windows Server 2008

Post by Axel Mertes »

Hi!

I will try using hidden RAM as the 1st level cache as soon as I get the SSDs for the server. Unfortunately the RevoDrive 3x2 cards I have plenty of lying around don't work under the Server 2008 OS, only Win7/8 etc. :(

The idea is to use either the onboard SAS/SATA RAID controller or a dedicated one (LSI or Adaptec) to build a large SSD cache RAID5 with e.g. 6-8 SSDs, to get into the 2+ GByte/s R/W speed range. We are considering e.g. 500 GByte or 1 TByte SSDs for this, so it will be a really large cache RAID.
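A quick back-of-the-envelope check that the target is plausible (the per-drive figure is my assumption for SATA III SSDs; controller and parity overhead will eat into it):

```python
drives = 8
per_drive_read = 0.5    # GByte/s, optimistic SATA III ceiling (assumption)
per_drive_capacity = 1  # TByte

aggregate_read = drives * per_drive_read             # ~4 GByte/s ideal case
usable_capacity = (drives - 1) * per_drive_capacity  # one drive goes to parity

print(f"~{aggregate_read:.1f} GByte/s aggregate read, {usable_capacity} TByte usable")
```

So even with real-world losses, 2+ GByte/s looks reachable on paper.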

I see no reason why you would not want to store write data in the cache. Wouldn't it make a lot of sense to also keep data written to the disks in the cache?
Looking at our workflow: we usually render image sequences, and after rendering (= writing) them we usually play them back. If that playback came from the cache, it would be very beneficial. If I have to wait until the sequences are loaded from disk again (READ-only caching), it takes considerably longer and the caching is much less effective.

Can't we do write caching at all?

I think my tests with an earlier beta did show a write cache...

Axel
Violator
Level 5
Posts: 48
Joined: Mon Jan 16, 2012 11:13 pm

Re: Using hidden RAM in Windows Server 2008

Post by Violator »

Axel Mertes wrote: I will try using hidden RAM as the 1st level cache as soon as I get the SSDs for the server. […] The idea is to use either the onboard SAS/SATA RAID controller or a dedicated one (LSI or Adaptec) to build a large SSD cache RAID5 with e.g. 6-8 SSDs […]
You would not want to use an onboard RAID controller if you want maximum IOPS and redundancy.
And you would want a UPS as well if you plan to use deferred writes to RAM, besides having redundant PSUs in the server.

What are you going to render: 2D, 3D, video, and in real time? Keep in mind that RAID 5 may not be ideal for real-time rendering.
Does the rendering application not come with its own memory-caching support, and is it going to run server-side?
Usually rendering needs a lot of CPU/GPU power, a good amount of RAM/VRAM, and scratch/temp/work disks in RAID 0 if it is real time.

You can get at the hidden 16 GB with quite a few RAM-disk applications too, btw.

Do you plan to have many users render on the server from their desktop systems, or what is the exact idea?
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: Using hidden RAM in Windows Server 2008

Post by Axel Mertes »

We have all our FC RAIDs (5 * 16-bay RAID6, 3 * 12-bay RAID5), server, switches etc. backed by a redundant 40 kW UPS, feeding 5 racks constantly from battery power. No issues there.

The onboard Supermicro motherboard RAID would just be a quick first try. There are some decent SSD RAID tests, e.g. by Tom's Hardware, showing that the desired transfer rates are easily reached in both RAID5 and RAID6 configs with some recent LSI and Adaptec controllers. I think we will go for RAID5, because rebuild times should be fairly short with SSDs (compared to HDDs, where I strictly prefer RAID6 for safety reasons). The aggregate bandwidth is big enough, but a render farm with 160 cores will kill any HDD-based RAID system (we even tried DataDirect Networks S2A controllers and killed their performance). What definitely keeps up is SSD, but building our storage entirely from SSDs is unaffordable. So an SSD cache is the obvious solution.

We do all kinds of 2D and 3D; lots of 4K 60p now, and we used to do lots of 4K 24p/25p before.
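For context, the playback bandwidth that implies, assuming roughly 4K 10-bit RGB DPX frames (the frame size is my assumption; adjust for the actual format):

```python
width, height = 4096, 2160
bits_per_pixel = 3 * 10  # RGB, 10 bits per channel (assumed format)
frame_bytes = width * height * bits_per_pixel // 8  # ~32 MiBytes per frame

fps = 60
bandwidth = frame_bytes * fps / 2**30  # sustained read rate for playback

print(f"Frame ~{frame_bytes / 2**20:.0f} MiBytes; "
      f"4K {fps}p playback needs ~{bandwidth:.1f} GiByte/s")
```

Sustained rates like that are exactly what we want served from cache instead of from the HDD RAIDs.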

My idea is to expand the server to 96 or possibly 192 GBytes and use the hidden RAM for 1st level caching. If that works, it will be great in rendering situations where all cores scream for the same image and texture files etc.
Local caching in the workstations etc. was the aim, but block-level caching is impossible over LAN connections, and over SAN connections as well. Either your SAN software has to implement it (I have been talking to them for more than 2 years without much success), or you pay 7,000 to 12,000 Euros per single FC+SSD cache card for QLogic FabricCache, or you have to develop software that caches Ethernet SMB shares. In theory Microsoft's BranchCache would do exactly that, but it is probably too slow and has too much latency to really work in a local high-speed network environment. The best approach of all so far is QLogic FabricCache, but it is so expensive. My idea was to add local SSD caching similar to PrimoCache inside SAN/LAN software. IMHO it would be easy to implement: all it takes is for the MDC to negotiate with all caching clients whenever a block has been written, so that the block gets invalidated in their caches. That's about it. If we were only reading, then PrimoCache would do, but that's not the reality. We need to read and write files...
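To sketch what I mean by the MDC negotiating with the caching clients (entirely hypothetical message format and function names, just to show how little is involved):

```python
import json, socket

INVALIDATE = "invalidate"

def mdc_on_write(clients, device_id, block_no):
    # Called on the metadata controller after a client commits a write:
    # broadcast the (device, block) pair so stale cached copies are dropped.
    msg = json.dumps({"op": INVALIDATE,
                      "device": device_id,
                      "block": block_no}).encode()
    for addr in clients:  # every machine holding a local block cache
        with socket.create_connection(addr, timeout=1) as s:
            s.sendall(msg)

def client_on_message(local_cache, raw):
    # Runs on each client; evicts the stale block from its local SSD cache.
    msg = json.loads(raw)
    if msg["op"] == INVALIDATE:
        local_cache.pop((msg["device"], msg["block"]), None)
```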

We see transfers become 10 to 100 times more efficient (freed-up network and FC lines) when caching locally on SSDs. QLogic claims similar values for their FabricCache system, so these are not out-of-this-world numbers.

The simple rule of thumb:
Transfer data only as often as absolutely necessary.

In a SAN/LAN environment it's best if all servers and clients do local caching. In our case, we want to use PrimoCache on the server (no SAN sharing then) to build a huge, fast, extremely responsive cache for its local 150+ TBytes of storage. Bridge that cache via quad 10 GBit Ethernet into the LAN and we are already above SAN performance. During normal operation, most of the daily-used data will be in that cache and will stay there. Accessing it will no longer hit the HDDs, improving the remaining HDD performance dramatically. Writes should go straight through, or write-back, but that would be fast, as the total I/O is reduced significantly and the RAIDs are fast at writing. Unfortunately this does not yet solve the LAN traffic issues. To reduce those, we need caching in the LAN, but then it is no longer block-based, it is file-based (see BranchCache).

What we need is something like PrimoCache as a transparent local SMB-to-SSD cache. Or, alternatively, a mechanism that identifies a block AND its disk, so that in the end you have a block-level cache pool spread across your network. As with FabricCache, machines could send data from one cache to another... OK, that's far in the future, but it IS the future. A couple of years from now we will see storage controllers like QLogic's FabricCache at affordable prices. This is the biggest improvement in networking since networking was invented: stay with the old storage network, but get 10 times or more performance out of it. A very green idea too. And beyond that point there is not much left to improve at all. It would be the "perfect", most efficient network thinkable. See the rule of thumb above.

And we do render on the server and render farm all the time, very efficiently. But I am only satisfied when it is perfectly efficient (see above, the future...).

Cheers,
Axel
Violator
Level 5
Posts: 48
Joined: Mon Jan 16, 2012 11:13 pm

Re: Using hidden RAM in Windows Server 2008

Post by Violator »

And buying I/O accelerators is not an option?
I know they cost quite a lot, but so do larger bricks of fast ECC memory.
As for BranchCache performance: if you use 2012 R2, you can and should run it over SMB 3.0 to get maximum performance.
Axel Mertes
Level 9
Posts: 180
Joined: Thu Feb 03, 2011 3:22 pm

Re: Using hidden RAM in Windows Server 2008

Post by Axel Mertes »

Well, FusionIO cards are one option, but the most expensive one. They are no longer alone IOPS-wise, and other cards may surpass their performance. If we build an SSD RAID, it will actually be several times larger and potentially faster than a FusionIO card, at a significantly lower price. We can build an 8 TByte SSD RAID for around 3,800 Euros including the controller... A 1.2 TByte FusionIO card costs the same... and is potentially slower. OK, some of their cards use SLC, but the ones around 4,000 Euros are MLC as far as I know.
User avatar
Violator
Level 5
Posts: 48
Joined: Mon Jan 16, 2012 11:13 pm

Re: Using hidden RAM in Windows Server 2008

Post by Violator »

Axel Mertes wrote: Well, FusionIO cards are one option, but the most expensive one. […] We can build an 8 TByte SSD RAID for around 3,800 Euros including the controller... […]
8 TB with which controller and disks?

Do you still want deferred writes to an SSD cache?
I think your only option for a full SSD cache that fits your specifications would be to have an application tailor-made, or to buy hardware that has it built in, since PrimoCache wouldn't cover exactly what you are asking for unless you can get it customized.