The files I put on the Ramdisk and keep between reboots total a constant ~300MB. When I create the Ramdisk from scratch and reboot, the VDF image starts out at around 500MB; I am guessing the extra ~200MB comes from the NTFS MFT reserved space. Then with usage, the image grows to 1GB, 2GB, and so on. This is probably because NTFS does not actually zero out a deleted temp file's data blocks; it just removes the file's entry from the MFT, so the stale data stays in the image. With a large VDF file, startup times become long.
I had encountered this same problem with VirtualBox VDI images, but there it was solved with Sysinternals "sdelete", which zeroes out all the stale sectors on a drive:
http://vl4rl.com/blog/2011/11/compactin ... ual-disks/
http://www.maketecheasier.com/shrink-yo ... 2009/04/06
http://www.kaibader.de/compact-virtualbox-vdi-images/
But why does the sdelete.exe solution not work on Primo Ramdisk's VDF image format? I tried "sdelete -z" on the Ramdisk, and the VDF image file ballooned to the full 4GB. The same happened with "sdelete -c", and even with running JkDefrag in compact mode: every method results in the image file becoming the full 4GB. But sdelete is supposed to zero out the sectors, so why is the Primo Ramdisk imager writing the full 4GB after sdelete has run? Could you elaborate on the algorithm used by the Primo Ramdisk Smart Imager, so I can figure out a way to zero out stale disk blocks with some kind of secure-delete utility? Or could you make it compatible with the method sdelete uses?
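To make the question concrete: one way an imager could stay small is to scan the disk contents and skip all-zero blocks, which is what I assumed was happening. Here is a minimal Python sketch of that purely content-based approach (this is my guess at how such an imager *could* work, not Primo's actual Smart Imager algorithm; the function name and 4KB block size are my own choices). If the Smart Imager worked like this, "sdelete -z" should shrink the image; the 4GB result makes me suspect the driver instead remembers which sectors have ever been written.

```python
import os

BLOCK = 4096          # assumed block size for the scan
ZERO = bytes(BLOCK)   # one block of zeros to compare against

def write_sparse_image(src_path, img_path, block=BLOCK):
    """Copy src to img, seeking over all-zero blocks so they stay
    unallocated (holes) in the image file. Returns the number of
    blocks that actually contained data."""
    data_blocks = 0
    with open(src_path, "rb") as src, open(img_path, "wb") as img:
        while True:
            chunk = src.read(block)
            if not chunk:
                break
            if chunk == ZERO[:len(chunk)]:
                # all-zero block: skip forward without writing anything
                img.seek(len(chunk), os.SEEK_CUR)
            else:
                img.write(chunk)
                data_blocks += 1
        # if the source ended in a hole, fix up the image's final size
        img.truncate()
    return data_blocks
```

Under this scheme, zeroed sectors cost nothing in the image regardless of whether they were ever allocated, which is why I expected sdelete to help.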
The method that does work to shrink/reset my VDF image is to copy all the files off the Ramdisk, quick-format the drive, and then copy all the files back on. The VDF image file then gets created at its minimal size again. But if sdelete worked, it would be much faster and easier.
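Since I have to do this round-trip regularly, I script it. A minimal Python sketch of the flow is below; the staging directory is my own choice, and on the real Ramdisk step 2 would be a quick format of the drive rather than a directory recreate, which is just a portable stand-in here to illustrate the sequence:

```python
import os
import shutil

def reset_via_copy(drive_root, staging_dir):
    """Copy everything off drive_root, wipe it, and copy everything back.
    On a real ramdisk the 'wipe' step would be a quick format; here it
    just recreates the directory, which is enough to show the flow."""
    shutil.copytree(drive_root, staging_dir)   # 1. copy files off
    shutil.rmtree(drive_root)                  # 2. stand-in for quick format
    os.makedirs(drive_root)
    for name in os.listdir(staging_dir):       # 3. copy files back on
        src = os.path.join(staging_dir, name)
        dst = os.path.join(drive_root, name)
        if os.path.isdir(src):
            shutil.copytree(src, dst)
        else:
            shutil.copy2(src, dst)
    shutil.rmtree(staging_dir)                 # remove the staging copy
```

After the copy-back, the next image save starts from a freshly formatted filesystem, so the VDF comes out at its minimal size.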
From sdelete documentation:
Cleaning free space presents another challenge. Since FAT and NTFS provide no means for an application to directly address free space, SDelete has one of two options. The first is that it can, like it does for compressed, sparse and encrypted files, open the disk for raw access and overwrite the free space. This approach suffers from a big problem: even if SDelete were coded to be fully capable of calculating the free space portions of NTFS and FAT drives (something that's not trivial), it would run the risk of collision with active file operations taking place on the system. For example, say SDelete determines that a cluster is free, and just at that moment the file system driver (FAT, NTFS) decides to allocate the cluster for a file that another application is modifying. The file system driver writes the new data to the cluster, and then SDelete comes along and overwrites the freshly written data: the file's new data is gone. The problem is even worse if the cluster is allocated for file system metadata since SDelete will corrupt the file system's on-disk structures.
The second approach, and the one SDelete takes, is to indirectly overwrite free space. First, SDelete allocates the largest file it can. SDelete does this using non-cached file I/O so that the contents of the NT file system cache will not be thrown out and replaced with useless data associated with SDelete's space-hogging file. Because non-cached file I/O must be sector (512-byte) aligned, there might be some left over space that isn't allocated for the SDelete file even when SDelete cannot further grow the file. To grab any remaining space SDelete next allocates the largest cached file it can. For both of these files SDelete performs a secure overwrite, ensuring that all the disk space that was previously free becomes securely cleansed.