Dev question. Memory bandwidth vs caching performance.

FAQ, getting help, user experience about FancyCache
dustyny
Level 8
Posts: 118
Joined: Sun Sep 02, 2012 12:54 am

Dev question. Memory bandwidth vs caching performance.

Post by dustyny »

Hi,

My machine has 24GB of RAM dedicated to RAM caching (8GB for the OS), and I get about a 3.5GB/s transfer rate when I use a benchmarking tool (ATTO, IOMeter). SiSoft Sandra reports that the RAM has around a 9GB/s transfer rate. I expect some overhead for SATA protocols, block size, etc., but losing two-thirds to overhead seems a bit high. I've noticed similar performance with other caching software, so my guess is it's not a FancyCache issue, but I am curious as to why I'm being limited.

Of course 9GB/s is way more than I need, but I'd love to see my 4K transfer speed increased.
TechRaven
Level 1
Posts: 2
Joined: Thu Feb 23, 2012 12:35 pm

Re: Dev question. Memory bandwidth vs caching performance.

Post by TechRaven »

SiSoft is doing raw memory transfer rate measurements, therefore there is absolutely no logic involved; the code is simply commanding the CPU to fetch memory in a tight loop.
In comparison, you are benchmarking the Windows filesystem, FancyCache, and your RAM together, so you have multiple layers of function calls, sanity checks, and potentially multiple kernel context switches occurring.

The Windows filesystem is not optimized for such high transfer rates with near-zero latency, and even if it were, there is a huge difference between synthetic memory transfer benchmarks and what you'll get in any real-world situation. At some point you'll also hit a threshold where the benchmarking software itself isn't tuned well enough to show you the maximum performance.
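To make the layering concrete, here's a rough sketch (mine, not anything official) that times the same bytes moved two ways: a plain memory-to-memory copy, and a write through the OS file API. Even when the file data never hits disk (it lands in the page cache), every write crosses the user/kernel boundary and runs filesystem code, which is exactly the overhead described above. Absolute numbers will vary wildly by machine; the point is only the gap between the two paths.

```python
import os
import tempfile
import time

def throughput_gbs(nbytes, seconds):
    # Convert measured bytes/second into GB/s (decimal units).
    return nbytes / seconds / 1e9

# "Synthetic" benchmark: one large memory-to-memory copy, no I/O stack.
buf = bytearray(64 * 1024 * 1024)  # 64 MB
t0 = time.perf_counter()
copy = bytes(buf)
mem_rate = throughput_gbs(len(buf), time.perf_counter() - t0)

# "Filesystem" benchmark: the same bytes pushed through the OS file API.
# Even with the data only reaching the page cache, each write() involves
# a kernel transition plus filesystem bookkeeping.
t0 = time.perf_counter()
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(copy)
    path = f.name
file_rate = throughput_gbs(len(copy), time.perf_counter() - t0)
os.unlink(path)

print(f"memory copy: {mem_rate:.1f} GB/s, file write: {file_rate:.1f} GB/s")
```

On most machines the memory copy comes out well ahead, for the reasons above, though exact ratios depend on the OS and hardware.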

Anyway, 3.5GB/s is a pretty massive transfer rate, one you're unlikely to see in most situations. What are you trying to achieve that requires such throughput, and have you actually verified that your largest bottleneck is what you're targeting?

- Not a FancyCache dev.
dustyny
Level 8
Posts: 118
Joined: Sun Sep 02, 2012 12:54 am

Re: Dev question. Memory bandwidth vs caching performance.

Post by dustyny »

Thanks for the response TechRaven but I was actually hoping for a little more technical explanation. :)

I understand overhead costs, and that synthetic transfer rates aren't an accurate way to measure real-world performance. I'm a sysadmin, so I'm always on the lookout for information that will help me understand the underlying technology. If I can better understand where bottlenecks occur, why they do, and how (or whether) they can be mitigated, I can build better systems. As a side note, the latest version of NTFS in Win2012 is optimized for low-latency, high-speed transfers. Storage Spaces isn't as good as RAID just yet, but it's way more flexible and worth using.

I'm using FancyCache with Windows Server 2012, Storage Spaces, SMB over RDMA, and 40Gb/s InfiniBand cards. I am using this to eliminate the slow, expensive SAN for my Hyper-V cluster. I've hit 7Gb/s on the IB link, but I only had a few machines running; hopefully I'll see 20-30Gb/s at max capacity. The server has dual ports, so theoretically I can saturate the 3.5GB/s.
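For what it's worth, those numbers can be sanity-checked with some back-of-the-envelope arithmetic (a sketch only; real IB links also lose capacity to encoding and protocol overhead, which is ignored here):

```python
# Rough bandwidth budget for the setup described above.
# All figures are decimal units, ignoring IB encoding/protocol overhead.

ib_link_gbit = 40           # one 40Gb/s InfiniBand port
ports = 2                   # dual-port card
cache_gbyte_s = 3.5         # measured FancyCache transfer rate, GB/s

# Aggregate wire capacity in GB/s (8 bits per byte)
wire_gbyte_s = ib_link_gbit * ports / 8   # 10.0 GB/s

# Fraction of the dual-port wire capacity the cache can actually feed
utilization = cache_gbyte_s / wire_gbyte_s  # 0.35

print(f"wire: {wire_gbyte_s} GB/s, cache covers {utilization:.0%} of it")
```

So on raw wire numbers, both ports together could consume well more than the 3.5GB/s the cache delivers, which is consistent with the cache (not the network) being the ceiling here.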

Really what I'm after at this point is a best case scenario so I can make more accurate performance estimates.

No complaints though; as is, my $3k SMB file server outperforms my last $50k SAN.. =D
gabrielmorrow
Level 4
Posts: 25
Joined: Thu Dec 01, 2011 9:29 pm

Re: Dev question. Memory bandwidth vs caching performance.

Post by gabrielmorrow »

dustyny wrote:Thanks for the response TechRaven but I was actually hoping for a little more technical explanation. :)

I understand overhead costs, and that synthetic transfer rates aren't an accurate way to measure real-world performance. I'm a sysadmin, so I'm always on the lookout for information that will help me understand the underlying technology. If I can better understand where bottlenecks occur, why they do, and how (or whether) they can be mitigated, I can build better systems. As a side note, the latest version of NTFS in Win2012 is optimized for low-latency, high-speed transfers. Storage Spaces isn't as good as RAID just yet, but it's way more flexible and worth using.

I'm using FancyCache with Windows Server 2012, Storage Spaces, SMB over RDMA, and 40Gb/s InfiniBand cards. I am using this to eliminate the slow, expensive SAN for my Hyper-V cluster. I've hit 7Gb/s on the IB link, but I only had a few machines running; hopefully I'll see 20-30Gb/s at max capacity. The server has dual ports, so theoretically I can saturate the 3.5GB/s.

Really what I'm after at this point is a best case scenario so I can make more accurate performance estimates.

No complaints though; as is, my $3k SMB file server outperforms my last $50k SAN.. =D
Have you tried the new filesystem that's going to replace NTFS soon? http://en.wikipedia.org/wiki/Windows_Server_2012#ReFS It's faster and less prone to errors.
dustyny
Level 8
Posts: 118
Joined: Sun Sep 02, 2012 12:54 am

Re: Dev question. Memory bandwidth vs caching performance.

Post by dustyny »

I've done pretty extensive testing of ReFS; it's got potential, but it's not ready for primetime just yet. My testing showed that it was a bit slower than NTFS, but the real issue for me was that it didn't support deduplication. I save 30-50% disk space when using dedupe with Hyper-V images. It also doesn't support compression. The other issue is that there are no utilities available yet, so if you run into an issue there are no 3rd-party tools to fall back on.
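The dedupe win with VM images comes from content-addressed blocks: images cloned from the same OS template share most of their blocks, so the store only keeps one copy of each. A minimal illustration of the idea (hypothetical block sizes and names, not how Windows dedupe is actually implemented):

```python
import hashlib

def dedupe_ratio(blocks):
    # Fraction of storage saved by keeping only one copy of each
    # identical block, identified by its SHA-256 digest.
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return 1 - len(unique) / len(blocks)

# Two VM images built from the same OS template: they share all the
# template blocks and differ only in their own data.
base = [b"os-block-%d" % i for i in range(100)]
vm_a = base + [b"vm-a-data"]
vm_b = base + [b"vm-b-data"]

ratio = dedupe_ratio(vm_a + vm_b)
print(f"{ratio:.0%} of blocks are duplicates")  # ~50% for two near-identical images
```

With more images cloned from the same template, the duplicate fraction climbs further, which is roughly why Hyper-V image stores dedupe so well.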

I'm sure it will be worthwhile once a major update comes along, and hopefully they'll make it more ZFS-like as it matures.