Send the data X bytes at a time until finished. For example, you might send a 10,000 byte file as 10 sequential packets of 1000 bytes. It's how I've always done it, without any issues.
That's essentially what I'm doing.
I'm using various files for testing, ranging from 3 MB to 500 MB.
The server accepts a connection and exchanges data back and forth to get a command. If the command is to send a file, it receives the file name and then creates a transfer process. (It has a queue of 1,024 tasks but only runs 32 at once.)
In the transfer process, it reads a chunk of bytes and sends it; once that chunk is received on the other end, the receiving computer sends a control character back, and then the next chunk is sent.
For example, I can set the packet size to 1,000 bytes, and it will send 1,000 bytes, wait for the acknowledgment, then send the next 1,000, until the file is finished. It works flawlessly that way, and it can handle dozens of transfers at the same time.
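For reference, here's roughly what I mean, as a sketch in Python rather than my actual code (assuming a plain TCP socket; `CHUNK_SIZE`, `ACK`, and the function names are just placeholders):

```python
import socket

CHUNK_SIZE = 1000   # the "packet size" being discussed
ACK = b"\x06"       # placeholder control character sent back after each chunk

def send_file(sock: socket.socket, path: str) -> None:
    """Stop-and-wait sender: send one chunk, block until the receiver acknowledges it."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break                       # end of file
            sock.sendall(chunk)             # send the chunk
            if sock.recv(1) != ACK:         # wait for the control character
                raise ConnectionError("transfer aborted")

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Keep reading until exactly n bytes arrive (a TCP read can return less than one 'packet')."""
    buf = b""
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise ConnectionError("connection closed early")
        buf += part
    return buf

def receive_file(sock: socket.socket, path: str, total_size: int) -> None:
    """Stop-and-wait receiver: write each chunk, then acknowledge it so the next one is sent."""
    received = 0
    with open(path, "wb") as f:
        while received < total_size:
            want = min(CHUNK_SIZE, total_size - received)
            chunk = recv_exact(sock, want)
            f.write(chunk)
            received += len(chunk)
            sock.sendall(ACK)               # tell the sender to send the next chunk
```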
However, the issue is that I can raise the packet size to anything up to about 64 kilobytes and it still works... beyond that, data loss. The sending computer sends all the data and its progress reaches 100%, but the receiving computer may only receive 98% of the data, or less with a bigger packet size.
Whatever is going on (the port buffer or something else), the limit seems to be 64 KB shared across all the connections: for two transfers to work properly, each can only use 32 KB packets; four transfers would need 16 KB packets, and so on.
This works OK for LAN connections where latency is low, but over the actual internet (the end goal) it would be extremely slow to send a few KB, wait for a receive signal, then send a few more KB.
Now I *could* add control data to the end of each packet, like data$=data$+"END-OF-PACKET", and require the receiving computer to see that marker before adding the packet to the file being saved. That would technically work, but it does nothing to solve the issue other than retrying until it works, which still limits speed drastically.
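Something like this is what I have in mind for the marker, again as a rough Python sketch (the marker string and the buffer handling are illustrative only):

```python
MARKER = b"END-OF-PACKET"   # end-of-packet marker appended by the sender

def receive_marked_packets(sock, outfile):
    """Buffer incoming bytes and only write a packet once its end marker has arrived."""
    buf = b""
    while True:
        data = sock.recv(65536)
        if not data:
            break                                # sender closed the connection
        buf += data
        while MARKER in buf:
            packet, buf = buf.split(MARKER, 1)   # everything before the marker is one packet
            outfile.write(packet)
```

(A length prefix instead of a text marker would avoid the marker ever colliding with file data, but that's the general idea.)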
If the round trip is 100 ms, this apparent 64 KB buffer limit caps throughput at 64 KB per round trip times 10 round trips per second, i.e. 640 KB/sec.
I haven't used the program over the internet yet, just the local network. *IF* internet routers and switches provide additional buffer memory for packets, maybe there can be more than 64 KB "in the pipe", since the data crosses multiple switches instead of just one. I'm not entirely sure, and a ton of reading leaves me dumber than when I started.
If it is the case that switches and routers add buffering, I would still need to know what tricks are used to sense the combined buffer size of the route. Maybe increase the packet size until one is dropped, then decrease it?
Or make it fluid, so it keeps creeping the packet size up until a packet is lost, then backs off and slowly creeps up again?
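That "creep up, back off" idea is basically additive increase / multiplicative decrease, similar in spirit to what TCP's own congestion control does. A rough sketch of what I mean (the numbers are arbitrary):

```python
MIN_SIZE = 1_000     # bytes; floor so it never stalls completely
MAX_SIZE = 64_000    # bytes; the ceiling I'm currently hitting

class AdaptivePacketSize:
    """Creep the packet size up on success, back off sharply when a packet is lost."""
    def __init__(self) -> None:
        self.size = MIN_SIZE

    def on_ack(self) -> None:
        # Chunk confirmed: additive increase.
        self.size = min(self.size + 1_000, MAX_SIZE)

    def on_loss(self) -> None:
        # Chunk lost or timed out: multiplicative decrease, then start creeping again.
        self.size = max(self.size // 2, MIN_SIZE)
```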
Maybe I'm overthinking it, but it seems that if sent data can be lost when it's sent too fast, the only way to accommodate that is to adjust the send rate on the fly.

But THEN, :facepalm:, I was trying to use a string where received packets are marked as characters, so the receiver can handle out-of-order packets or packets from multiple sources. For example, if the file is 10,000 bytes, the string would be 10 bytes, with each character representing one received 1,000-byte packet. If the packet size is fluid, that no longer works. I can't check the received file for empty spots either, because "GET"ing a byte from the received file returns a 0 whether it's an actual 0 in the file or nothing has been received yet, so I would need a 500 MB file to track the progress of a 500 MB file. I wonder how LimeWire or other programs track file data that way.
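One way around the fixed-size-character bookkeeping (just a sketch of the idea; I don't actually know how LimeWire does it) would be to track received byte *ranges* and merge them as they arrive, so the bookkeeping stays tiny no matter how large the file is or how the packet size changes:

```python
class RangeTracker:
    """Track which byte ranges of a file have arrived, independent of packet size or order."""
    def __init__(self, file_size: int) -> None:
        self.file_size = file_size
        self.ranges = []    # sorted, non-overlapping (start, end) pairs, end exclusive

    def mark_received(self, start: int, length: int) -> None:
        """Record a received block and merge it with any overlapping or touching ranges."""
        end = start + length
        merged = []
        for s, e in self.ranges:
            if e < start or s > end:              # completely separate: keep as-is
                merged.append((s, e))
            else:                                 # overlaps or touches: fold it in
                start, end = min(s, start), max(e, end)
        merged.append((start, end))
        self.ranges = sorted(merged)

    def is_complete(self) -> bool:
        return self.ranges == [(0, self.file_size)]
```

For a 10,000-byte file received as two 5,000-byte blocks, this collapses to a single (0, 10000) range instead of needing one flag per fixed-size packet.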