Re: Preload currently accessed files -> improve Audio/Video/Film
Posted: Mon Dec 07, 2015 9:18 am
Hi there,
What I'd suggest doing here is running perfmon.msc and having a look at the physical disk performance counters. The interesting figures are: Split I/O, Bytes/Read, and Bytes/s.
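If it helps, the relationship between those counters is just arithmetic: the average read size your software issues is bytes per second divided by reads per second. A minimal sketch with made-up sample values (Python, just because that's what I have handy):

# Derive the effective read size from two sampled PhysicalDisk values
# (the numbers below are hypothetical samples, not measurements).
disk_bytes_per_sec = 31_457_280   # e.g. ~30 MB/s of sustained reads
disk_reads_per_sec = 480          # read operations/s over the same interval

avg_bytes_per_read = disk_bytes_per_sec / disk_reads_per_sec
print(f"Average read size: {avg_bytes_per_read / 1024:.0f} kB")  # ~64 kB here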
Then adjust your RAID array configuration accordingly. First of all, set the stripe size to match your workload. If you require e.g. 30 MB/s, find out either the I/Os per second or the Bytes/Read so you can maximize the number of drives involved in each operation by adjusting the stripe size. If your software reads data in 64 kB blocks, set the stripe to 64 kB so that each 64 kB read is evenly distributed over your RAID drives. Consult your controller documentation on what its stripe size actually means (I have come across controllers that try to confuse you with segment size, etc.).
Assuming you have RAID6 with 8 drives and no hot spare (leaving 6 drives actually holding data), setting the stripe size to 512 kB results in a segment size of roughly 85 kB per disk (512/6). The operating system partition should be formatted with a block size matching the stripe size (NTFS or exFAT), so that one read involves all drives at once and does not produce thrashing (reading 64 kB at the RAID level when only 16 kB was needed at the filesystem level) or split I/O (a 64 kB read in software requiring the filesystem to perform two 32 kB reads and/or the RAID to read four 16 kB stripes).
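To make that arithmetic concrete, here is a small sketch following the same example (8 drives in RAID6, so 6 data-bearing drives per stripe; the numbers are just the ones from the paragraph above):

# Stripe/segment arithmetic for the RAID6 example above.
drives, parity = 8, 2
data_drives = drives - parity        # 6 drives carry data in each stripe

stripe_kb = 512                      # full-stripe size set on the controller
print(f"Per-disk segment: {stripe_kb / data_drives:.0f} kB")   # ~85 kB (512/6)

sw_read_kb = 64                      # block size your software reads in
print(f"Per-disk share of a {sw_read_kb} kB stripe: "
      f"{sw_read_kb / data_drives:.1f} kB")                    # ~10.7 kB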
Check your product guide for what exactly the A/V streaming setting does to your cache settings. I would generally set the controller cache ratio to 80/20 in favour of reads, maybe even more. A cache pre-read (read-ahead) should also be turned on, if such a setting exists.
If your disks are connected to a local FC controller, have a look at partition alignment. Improper alignment results in split I/O (the filesystem reads a contiguous 64 kB block but the underlying RAID has to perform two reads because the FS block does not start on a stripe boundary). Usually this is a matter of fiddling and benchmarking, as RAID controllers rarely expose their real geometry to the OS and emulate 512 B/4 kB sector behaviour for compatibility reasons.
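Once you know the partition's starting offset (your partitioning tool will show it), the alignment check itself is trivial; a minimal sketch with a hypothetical offset:

# The partition should start on a multiple of the stripe size,
# otherwise FS blocks straddle two stripes (split I/O).
stripe_bytes = 512 * 1024          # 512 kB stripe from the controller
partition_offset = 1_048_576       # hypothetical starting offset (1 MiB)

misalignment = partition_offset % stripe_bytes
if misalignment == 0:
    print("Partition start is aligned to the stripe size")
else:
    print(f"Misaligned by {misalignment} bytes -> expect split I/O")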
If you're really hunting down the best performance, you should consider RAID 10. If it is properly implemented in the controller you can read from all drives (even the mirrored ones), so during reads you get 100% performance (all 8 disks reading in parallel) and during writes 50%, which can still be mitigated by the controller's write cache settings.
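As a back-of-the-envelope comparison (following the same model as above, i.e. 6 of the 8 RAID6 drives carrying data per stripe; the 150 MB/s per-disk figure is just an assumption, not a spec of your drives):

# Rough throughput comparison of 8 drives in RAID6 vs. RAID 10.
disks = 8
per_disk_mb_s = 150                          # assumed streaming rate per drive

raid6_read   = (disks - 2) * per_disk_mb_s   # 6 data drives per stripe
raid10_read  = disks * per_disk_mb_s         # reads served by both mirror halves
raid10_write = (disks // 2) * per_disk_mb_s  # writes hit half the spindles

print(f"RAID6  read : {raid6_read} MB/s")
print(f"RAID10 read : {raid10_read} MB/s, write: {raid10_write} MB/s")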
You can change the Windows file caching behaviour with "fsutil behavior set MemoryUsage". Consult the Microsoft documentation for the value you want (1 is the default; 2 raises the paged-pool limit used for file-system caching).
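If you prefer to script it, the same fsutil calls can be wrapped like this (run from an elevated prompt; the "set" line is commented out so nothing changes until you have read the docs and picked a value):

import subprocess

# Show the current file-cache memory usage setting (1 = default, 2 = larger paged pool).
subprocess.run(["fsutil", "behavior", "query", "memoryusage"], check=True)

# To raise the limit, uncomment after consulting the Microsoft documentation:
# subprocess.run(["fsutil", "behavior", "set", "memoryusage", "2"], check=True)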
Make sure you're not using SATA or NL-SAS drives, as these deliver very low IOPS. Use enterprise SAS 10k or 15k drives in your RAID.
What I assume is happening:
1) Your controller is set to cache writes heavily. It is impossible to achieve higher write than read performance on RAID5/6, as both impose a write penalty (by design).
2) Your SW issues parallel reads, overloading your controller and disks. What you should do is use e.g. sqlio.exe, CrystalDiskMark or similar and tune your storage for parallel performance (QD32 or QD16, depending on your SW design); see the rough sketch after this list.
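If you want a quick feel for what queue depth does on your array before firing up the real tools, a very rough probe like this works (hypothetical test-file path; it just issues concurrent random 64 kB reads against a large pre-created file, in the spirit of a QD1/QD16/QD32 run):

import os, random, time
from concurrent.futures import ThreadPoolExecutor

# Rough parallel-read probe: concurrent random 64 kB reads against a large
# existing file. Note: the Windows file cache will flatter these numbers;
# sqlio/CrystalDiskMark bypass it, so treat this as a relative comparison only.
TEST_FILE = r"D:\bench\testfile.bin"     # hypothetical path on the RAID volume
BLOCK = 64 * 1024
READS_PER_WORKER = 1000

def worker(_):
    size = os.path.getsize(TEST_FILE)
    with open(TEST_FILE, "rb", buffering=0) as f:
        for _ in range(READS_PER_WORKER):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)

for queue_depth in (1, 16, 32):          # shallow vs. deep queues
    start = time.time()
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        list(pool.map(worker, range(queue_depth)))
    mb = queue_depth * READS_PER_WORKER * BLOCK / 1e6
    print(f"QD{queue_depth}: {mb / (time.time() - start):.0f} MB/s")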
Best Regards,
Slavius