RobF99 wrote: ↑Sat Jul 10, 2021 9:43 pm
Another useful tool, which I am the brain behind (but I don't code it), is UltimateDefrag. It lets you sort files by size so you can keep your small files (the cause of random I/O) together on your spinner, where they benefit from a larger PrimoCache block size, since adjacent small files get cached together in one block. PrimoCache also makes any defragging blaze away very fast, since it all happens in L1 with deferred writes doing their thing according to the latency you have set.
I created a document here to show how synergistic these two programs are:
https://www.disktrix.com/ultimatedefrag ... cache.html
You could also use Robocopy in a batch file to copy only files below a certain size to another drive to bring them into L1 and L2. Even on a system drive with a lot of files it only takes about 3 to 5 minutes for all files < 32 KB. Be aware that deleting them from the target drive afterwards isn't easy, since there are often nested Application Data folders creating very long paths, due to a compatibility requirement of Windows for some legacy programs. There is a way to delete them with WinRAR. I will find the instructions and post them here.
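As a minimal sketch of that Robocopy idea (assuming a hypothetical scratch folder X:\SmallFileWarmup on another drive and a 32 KB cap, i.e. /MAX:32768), something along these lines should do:

rem Sketch only: read every file under 32 KB once so it gets pulled into the PrimoCache L1/L2 cache.
rem /S recurses, /MAX:32768 skips files larger than 32768 bytes, /XJ skips junction points
rem (avoids the nested Application Data loops mentioned above), /R:0 /W:0 skips locked files instead of retrying.
robocopy C:\ X:\SmallFileWarmup /S /MAX:32768 /XJ /R:0 /W:0

The copy on the target drive is just a by-product; it is the read pass over the source volume that warms the cache.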
Very good to know.
I have been advocating defragmentation of large volumes since before FancyCache/PrimoCache even existed, and I have been using it all the time alongside various defrag tools. I can guarantee that this is a game changer in terms of defrag speed and overall system performance.
I will definitely have a look at this one in particular; knowing the developers is always the first choice! Keep going!
What many people are not aware of is that disaster recovery on a defragmented drive or RAID is by far easier, more reliable and safer than on a heavily fragmented one. I think I have explained this every now and then here in the forum, but I can only keep reminding people of it.
If your files are written to a heavily fragmented disk, everything becomes very slow and inefficient, but even worse, it becomes dangerous, as a file is essentially spread over almost the entire disk in the worst cases. If you, for instance, accidentally reformat a drive, overwriting the first 3% might be enough to destroy almost all files on the disk "at once". Further, trying to recover such a disk will turn up many thousands of old, unused file entries from files that existed earlier. You might easily end up needing 10 TB of disk space to recover a 1 TB disk, and then you have to manually sort out the garbage, only to find that just 50 GB was safely rescued and the rest is junk.
Using regular defragmentation will keep your drives clean, sequential and performant.
In that context I also recommend the use of Condusiv Undelete/Undelete Server, which moves "deleted" files to a special folder before finally deleting them. This way you do not open up fragmentation holes when deleting files. Then you can schedule a time, say once a week, to actually purge files deleted more than e.g. 7 days ago and free up the disk space (which fragments it) all at once. Run a defrag right after that and you'll be fine and safe. Maybe it would be possible to recreate such functionality at lower cost or as open source; it's kind of a missing link.
Just saying.
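The weekly purge step, at least, is easy to script with built-in Windows tools. A minimal sketch, assuming a hypothetical holding folder D:\DeletedHolding and a 7-day retention (schedule it via Task Scheduler and run the defrag right afterwards, as described above):

rem Sketch only: permanently deletes files in the hypothetical holding folder that are older than 7 days.
rem /S recurses into subfolders, /D -7 selects items last modified more than 7 days ago.
forfiles /P "D:\DeletedHolding" /S /D -7 /C "cmd /c if @isdir==FALSE del @path"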
My recommendation of having at least twice as much SSD cache as the amount of data you typically transact per day will keep you working from fast SSD almost all the time, while - when you really do need to access the RAID or HDD - you'll get full performance thanks to defragmented, sequential reads. Win-win-win. I have easily outperformed high-end storage systems this way at comparatively no cost, and with higher safety.
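To put hypothetical numbers on the 2x rule: if you read and write roughly 150 GB on a typical working day, provision at least 300 GB of L2 SSD cache; both yesterday's and today's working sets then fit in the cache, and the RAID or HDD is only touched for cold data.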