As the subject line declares, this project is nearly ready to be delivered as something I can confidently submit, even without the B-tree and other such optimizations. For now, it is a stable sequential binary merge procedure. There are hooks for a possible independent divide-and-conquer merge of the datasets, along the lines of launching multiple instances; that will require semaphores so that instances working on different segments of the original problem can be signaled when the processes they depend on have completed.

It works correctly as a single monolithic merge, running in O(N log N) time as expected, and I changed to BINARY file I/O because INPUT # is not at all well suited to an I/O-heavy process like this. The difference between INPUT # and GET/PUT is dramatic in this case.

Thank you for your patience. I want it to be CORRECT first. Then I will kindly ask for input on speeding things along, with sort stability as the only property of this algorithm I am unwilling to give up. The reported time is for merging 11 files of approximately 1 MB each, produced by segmenting a 10 MB file into pieces no larger than 1 MB. Yes, it is not fast, because disk I/O is appreciably slower than RAM on non-SSD, non-RAM media.
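To make the core idea concrete, here is a minimal sketch of a stable two-way merge of two sorted runs using BINARY GET/PUT. The file names, record layout, and key comparison are placeholders for illustration, not the actual program:

```basic
' Sketch: stable two-way merge of two sorted runs over BINARY files.
' Record layout, file names, and the key field are assumptions.

TYPE Rec
    k AS STRING * 16       ' fixed-width sort key
    payload AS STRING * 48 ' rest of the record
END TYPE

DIM a AS Rec, b AS Rec
DIM haveA AS INTEGER, haveB AS INTEGER

OPEN "runA.dat" FOR BINARY AS #1
OPEN "runB.dat" FOR BINARY AS #2
OPEN "merged.dat" FOR BINARY AS #3

haveA = ReadRec%(1, a)
haveB = ReadRec%(2, b)

DO WHILE haveA AND haveB
    ' "<=" takes the record from the first run on equal keys,
    ' which is exactly what preserves sort stability.
    IF a.k <= b.k THEN
        PUT #3, , a
        haveA = ReadRec%(1, a)
    ELSE
        PUT #3, , b
        haveB = ReadRec%(2, b)
    END IF
LOOP

' Drain whichever run still has records left.
DO WHILE haveA
    PUT #3, , a
    haveA = ReadRec%(1, a)
LOOP
DO WHILE haveB
    PUT #3, , b
    haveB = ReadRec%(2, b)
LOOP

CLOSE
END

FUNCTION ReadRec% (fnum AS INTEGER, r AS Rec)
    ' Fills r with the next record and returns -1, or returns 0 at end of file.
    IF LOC(fnum) >= LOF(fnum) THEN
        ReadRec% = 0
    ELSE
        GET #fnum, , r
        ReadRec% = -1
    END IF
END FUNCTION
```

Each GET/PUT here moves a whole fixed-length record in a single call, with no token-by-token parsing, which is where the dramatic speedup over INPUT # comes from; merging the 11 segment files is just this pairwise pass repeated until one file remains.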