My contribution to the future of QB64 will be small but significant. I am working on a stable Polyphase Merge, aka External Merge, used heavily in Big Data applications. It involves MergeInsert(), a TreeSort() that allows multiple same-key entries, and a file parser (CR, CRLF, or other character(s) delimited), and the output will be STABLY sorted, exactly like the Big Boys of Big Data do it, within a user-definable maximum memory partition.

I have tested it down to M = 8 KBytes, enough for 682 LongInt values and the 341 required by the n/2 version of merge in my sorting library. Yes, this creates THOUSANDS of files, but they are cleared as the Polyphase Merge progresses. Yes, it is dramatically slower than standard MergeSort, but it can handle as much data as the storage medium can hold. Grab your coffee: this process, while O(N log N), is limited by whichever of the input and output media is slower, and by the efficiency of TreeSort(), used here as a B-tree. Those are the dominant factors.

I am roughly at the actual merge process, which is a bit different because of the priming reads a classic disk-based merge requires. Remember, this algorithm is designed for very tight high-speed memory constraints, exactly like the widely used version employed nearly everywhere Big Data is a regular issue. First, make it work correctly, then make it fast.

The attached picture illustrates the merge path for odd and even numbers of files to be merged. If there is an odd number of files, the leftover file is merged with the most recent file created by a binary (two-file) merge. This procedure is repeated until one file containing all the data remains. Using a different approach would make this FAR slower, unusable garbage for extremely large datasets. Rough sketches of the three stages (run generation, binary merge, merge driver) follow below.
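To make the run-generation pass concrete, here is a minimal QB64 sketch under some assumptions of mine: the input is a flat binary file of LongInt keys (the real code would read delimited text through the parser), the file names are placeholders, and a simple stable insertion sort stands in for the MergeInsert()/TreeSort() pair described above.

' Run generation sketch: fill a fixed 341-key partition, sort it
' stably in memory, write it out as its own temporary run file.
CONST MAXKEYS = 341 ' keys per partition, per the 8 KB test above

DIM buf(1 TO MAXKEYS) AS LONG
DIM v AS LONG, n AS LONG, runCount AS LONG
DIM i AS LONG, k AS LONG, total AS LONG

OPEN "input.dat" FOR BINARY AS #1 ' hypothetical input file name
total = LOF(1) \ 4 ' LONG keys are 4 bytes each
FOR k = 1 TO total
    GET #1, , v
    n = n + 1: buf(n) = v
    IF n = MAXKEYS OR k = total THEN ' partition full, or input done
        InsertionSort buf(), n ' stand-in for TreeSort()/MergeInsert()
        runCount = runCount + 1
        OPEN "run" + LTRIM$(STR$(runCount)) + ".tmp" FOR BINARY AS #2
        FOR i = 1 TO n: PUT #2, , buf(i): NEXT
        CLOSE #2
        n = 0
    END IF
NEXT
CLOSE #1
PRINT runCount; "sorted runs written"

SUB InsertionSort (a() AS LONG, n AS LONG)
    ' Stable and plenty fast for a 341-element partition
    DIM i AS LONG, j AS LONG, key AS LONG
    FOR i = 2 TO n
        key = a(i): j = i - 1
        DO WHILE j >= 1
            IF a(j) <= key THEN EXIT DO
            a(j + 1) = a(j): j = j - 1
        LOOP
        a(j + 1) = key
    NEXT
END SUB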
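The binary merge itself, with its priming reads, might look roughly like this. The <= comparison is what preserves stability (on equal keys, the earlier run wins). MergeTwo and its file-name arguments are my placeholder names, not the library's; the driver sketch further below calls it.

SUB MergeTwo (fa AS STRING, fb AS STRING, fout AS STRING)
    DIM a AS LONG, b AS LONG, na AS LONG, nb AS LONG
    DIM ia AS LONG, ib AS LONG
    OPEN fa FOR BINARY AS #1: na = LOF(1) \ 4
    OPEN fb FOR BINARY AS #2: nb = LOF(2) \ 4
    OPEN fout FOR BINARY AS #3
    ia = 1: ib = 1
    IF na >= 1 THEN GET #1, , a ' priming reads: buffer one key per input
    IF nb >= 1 THEN GET #2, , b
    DO WHILE ia <= na AND ib <= nb
        IF a <= b THEN ' <= keeps the merge stable: ties go to file A
            PUT #3, , a
            ia = ia + 1: IF ia <= na THEN GET #1, , a
        ELSE
            PUT #3, , b
            ib = ib + 1: IF ib <= nb THEN GET #2, , b
        END IF
    LOOP
    DO WHILE ia <= na ' drain whichever input still has keys
        PUT #3, , a: ia = ia + 1: IF ia <= na THEN GET #1, , a
    LOOP
    DO WHILE ib <= nb
        PUT #3, , b: ib = ib + 1: IF ib <= nb THEN GET #2, , b
    LOOP
    CLOSE #3: CLOSE #2: CLOSE #1
    KILL fa: KILL fb ' clear merged inputs as the merge progresses
END SUB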
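And here is one possible driver for the merge path the picture illustrates: pair the files two at a time, and on an odd count fold the leftover into the most recent binary-merge output, repeating until one file remains. This is only my reading of the described scheme; it assumes the run files and the MergeTwo SUB from the sketches above, and the names are again placeholders.

' Hypothetical merge driver: repeated binary merges down to one file.
DIM fnames(1 TO 10000) AS STRING ' the current pass's file names
DIM cnt AS LONG, outCnt AS LONG, p AS LONG, gen AS LONG
DIM f AS STRING

' ... fill fnames(1 TO cnt) with the run files from the first pass ...

DO WHILE cnt > 1
    outCnt = 0: gen = gen + 1
    FOR p = 1 TO cnt - 1 STEP 2 ' binary (two-file) merges, in pairs
        outCnt = outCnt + 1
        f = "m" + LTRIM$(STR$(gen)) + "_" + LTRIM$(STR$(outCnt)) + ".tmp"
        MergeTwo fnames(p), fnames(p + 1), f
        fnames(outCnt) = f
    NEXT
    IF cnt MOD 2 = 1 THEN ' odd count: leftover joins the latest output
        f = fnames(outCnt) + ".x"
        MergeTwo fnames(outCnt), fnames(cnt), f
        fnames(outCnt) = f
    END IF
    cnt = outCnt ' each pass roughly halves the file count
LOOP
' fnames(1) now names the single, fully merged, stably sorted file

Passing the earlier file as the first argument to MergeTwo is deliberate: since its <= comparison favors file A on ties, keys keep their original relative order through every pass.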