Author Topic: TCP/IP data limit size in buffer etc  (Read 4309 times)


Offline Parkland

  • Newbie
  • Posts: 51
TCP/IP data limit size in buffer etc
« on: April 02, 2021, 01:37:47 am »
Hello everyone!
I have a question that I ran into before and couldn't figure out, but I'm hoping I might find help now that I understand it somewhat better.

While experimenting with GET/PUT and transferring data, I found that data can sometimes be lost.
The computer sending the data would send it too fast, and the receiving computer wouldn't get all of it.
So I fooled around and found a few solutions:
I tried simply slowing the send rate to match the connection speed. Seems stupid.

Then, the more elegant solution was to send data in packets and keep track of how many are received vs. how many are sent, to ensure that no more than 60 kilobytes are in flight between the sending and receiving computers. This solution works, but when multiple connections are running, there no longer seems to be 60 kilobytes of buffer space for each connection.

I have read that 64 KB figure for TCP/IP connections, but I'm not 100% sure how to apply what I'm reading.
Does anyone have an idea where to find this information? I've already read everything I can in the QB64 literature and searched the internet.

I'm just wondering how programs generally do this. Send data and increase rate until loss, then reduce speed?
Let it go full speed and limit data in the buffers?

How does one even know how much buffer space is available? I've read that different operating systems have different socket buffer sizes, and switches and other network gear have buffers too, so I'm wondering what logic network programs use to account for network speeds without losing data.
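For what it's worth, at the OS level each socket has its own kernel send and receive buffers whose sizes can be queried (and usually raised) through getsockopt/setsockopt. QB64 doesn't expose this directly as far as I know, but here is a Python sketch of what the query looks like:

```python
import socket

# Query the OS defaults for this socket's kernel buffers. The exact
# numbers vary by operating system and kernel settings; calling
# setsockopt before connecting can usually raise them.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("default send buffer:", snd, "bytes")
print("default recv buffer:", rcv, "bytes")
s.close()
```

Note these are per-socket buffers on each end host; buffering inside switches and routers along the path is separate and not visible through this API.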

Offline SpriggsySpriggs

  • Forum Resident
  • Posts: 1145
  • Larger than life
Re: TCP/IP data limit size in buffer etc
« Reply #1 on: April 02, 2021, 01:41:02 am »
I would simply have the client reply back each time it receives a packet. If it didn't reply for the particular packet then resend it. Rinse and repeat.
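A minimal sketch of that idea (in Python for illustration, not QB64): the receiver acknowledges each chunk before the sender releases the next one, so at most one chunk is ever in flight. The chunk size, length header, and ACK byte below are arbitrary choices, not anything from the thread.

```python
import socket
import threading

CHUNK = 1000
ACK = b"\x06"   # the per-packet acknowledgement byte

def recv_exact(conn, n):
    """Read exactly n bytes (recv may return fewer than asked)."""
    buf = b""
    while len(buf) < n:
        part = conn.recv(n - len(buf))
        if not part:
            raise ConnectionError("peer closed early")
        buf += part
    return buf

srv = socket.socket()
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
received = []

def receiver():
    conn, _ = srv.accept()
    while True:
        n = int.from_bytes(recv_exact(conn, 4), "big")  # 4-byte length header
        if n == 0:                    # sender's "all done" marker
            break
        received.append(recv_exact(conn, n))
        conn.sendall(ACK)             # tell the sender to release the next chunk
    conn.close()

t = threading.Thread(target=receiver)
t.start()

data = b"x" * 10500
s = socket.socket()
s.connect(("127.0.0.1", port))
for i in range(0, len(data), CHUNK):
    chunk = data[i:i + CHUNK]
    s.sendall(len(chunk).to_bytes(4, "big") + chunk)
    recv_exact(s, 1)                  # stop and wait for the ACK
s.sendall((0).to_bytes(4, "big"))
s.close()
t.join()
srv.close()
assert b"".join(received) == data
```

The cost of this stop-and-wait scheme, as the rest of the thread discusses, is that throughput is capped at one chunk per round trip.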
Shuwatch!

Offline Parkland

  • Newbie
  • Posts: 51
Re: TCP/IP data limit size in buffer etc
« Reply #2 on: April 02, 2021, 02:46:22 am »
I would simply have the client reply back each time it receives a packet. If it didn't reply for the particular packet then resend it. Rinse and repeat.

That is exactly what I'm now doing. The sending computer sends a chunk of data followed by a control character, and once the receiving computer sees the control character, it sends a character back to request more data. It works, but it seems there can only be ~64 kilobytes of data in flight between the sending and receiving computers or data loss happens.
Also, that ~64 KB seems to be split among connections on the same port, so if two computers are receiving data from one sending computer, losses occur unless the data chunk size is reduced to ~32 KB per chunk.

I don't think this is a QB64 problem, as the networking seems to work fine until data overwhelms the buffers.
I'm just wondering if anyone knows the industry-standard approach for dealing with this. There must be some algorithm or process for the sending computer to calibrate itself so it doesn't push more data than the port buffer can hold?

Ideally I need an EOF function in reverse, so data is only sent as fast as it's leaving the buffer onto the network and toward the other computer.

If I can only send 64 KB at a time and wait for the "clear to send" character, this will be extremely slow on anything but my home Ethernet network, I think.
« Last Edit: April 02, 2021, 02:47:39 am by Parkland »

Offline SMcNeill

  • QB64 Developer
  • Forum Resident
  • Posts: 3972
Re: TCP/IP data limit size in buffer etc
« Reply #3 on: April 02, 2021, 05:36:15 am »
Send the data X bytes at a time until finished.  For example, you might send a 10,000 byte file as 10 sequential packets of 1000 bytes.  It’s how I’ve always done it, without any issues.
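That chunking in miniature (Python here just for illustration): the 10,000-byte example splits into exactly 10 packets of 1,000 bytes, and the last packet of an uneven file is simply shorter.

```python
def chunks(data, size):
    """Split a byte string into size-byte packets (the last may be shorter)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

packets = chunks(b"A" * 10000, 1000)
print(len(packets), "packets,", len(packets[-1]), "bytes in the last one")
# Reassembly on the receiving end is just concatenation in order:
assert b"".join(packets) == b"A" * 10000
```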
https://github.com/SteveMcNeill/Steve64 — A github collection of all things Steve!

Offline luke

  • Administrator
  • Seasoned Forum Regular
  • Posts: 324
Re: TCP/IP data limit size in buffer etc
« Reply #4 on: April 02, 2021, 07:13:08 am »
I have a hypothesis that this is due to the non-blocking nature of the socket. Could you please try to see if the value of _CONNECTED(s), where s is the handle used for PUT, returns 0 once data starts being dropped? This is on the sending side.

Offline Parkland

  • Newbie
  • Posts: 51
Re: TCP/IP data limit size in buffer etc
« Reply #5 on: April 03, 2021, 10:05:56 pm »
Send the data X bytes at a time until finished.  For example, you might send a 10,000 byte file as 10 sequential packets of 1000 bytes.  It’s how I’ve always done it, without any issues.

That's essentially what I'm doing.
I'm using various files for testing, ranging from 3 MB to 500 MB.

The server accepts a connection and exchanges data to get a command; if the command is to send a file, it receives the file name and then creates a transfer process. (It has a queue of 1024 tasks but only runs 32 at once.)
In the transfer process, it reads so many bytes and sends them; once they're received on the other end, the receiving computer sends a control character back, and then another chunk of data is sent.
For example, I can set the packet size to 1000, and it will send 1000 bytes, wait for them to be received, then send 1000 more, until it is finished. And it works flawlessly that way.  It can handle dozens of transfers at the same time and it works.

However, the issue is that I can change the packet size to anything up to about 64 kilobytes and it works.... then, data loss. The sending computer sends all the data and progress reaches 100%, but the receiving computer may only receive 98% of the data, or less with a bigger packet size.

Whatever is going on (the port buffer or something else), it seems to be 64 KB shared across all the connections, because, for example, for 2 transfers to work properly, the 2 can only use 32 KB packets; 4 would need 16 KB packets, etc.
This works OK for LAN connections where latency is low, but over the actual internet (the end intention) it would be extremely slow to send a few KB and then wait for a receive signal before sending a few more.

Now I *could* add control data to the end of each packet, like data$ = data$ + "END-OF-PACKET", and require the receiving computer to see that before adding the packet to the file being saved. That would technically work, but it does nothing to solve the issue other than retrying until it works, still limiting speed drastically.

If the connection latency is 100 ms, the maximum throughput is 640 KB/sec with this apparent limit of a 64 KB buffer.
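That ceiling follows directly from window/RTT arithmetic (the bandwidth-delay product argument): with at most one 64 KB window in flight per 100 ms round trip, no link speed can push past 640 KB/s. A quick check:

```python
def max_throughput(window_bytes, rtt_seconds):
    # Stop-and-wait allows one window in flight per round trip, so
    # throughput is capped at window / RTT no matter how fast the link is.
    return window_bytes / rtt_seconds

KB = 1024
print(max_throughput(64 * KB, 0.100) / KB, "KB/s")   # 64 KB window, 100 ms RTT
```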

I haven't used the program over the internet yet, just the local network. *IF* internet routers and switches provide additional buffer memory for packets, maybe there can be more than 64 KB "in the pipe", since the data is crossing multiple switches instead of just one. I'm not entirely sure, and a ton of reading leaves me dumber than when I started.

If switches and routers do add buffering, I would still have to know what tricks are used to sense the combined buffer size of the route. Maybe I could increase the packet size until one is dropped, then decrease it?
Or make it fluid, so it always creeps the packet size up until a packet is lost, then backs off and creeps up slowly again?

Maybe I'm overthinking it, but it seems that if data can be lost when it's sent too fast, the only way to accommodate this is to adjust the send rate on the fly.

But THEN, :facepalm:, I was trying to use a string where received packets were marked as characters, so the receiver can handle out-of-order packets or packets from multiple sources. For example, if the file is 10000 bytes, the string would be 10 bytes, with each character representing one received 1000-byte packet. If the packet size is fluid, that no longer works.

I can't scan the received file for empty spots either, as GETting a byte from it returns a 0 whether it's an actual 0 in the file or nothing has been received there yet. I would need a 500 MB file to track the progress of a 500 MB file. I wonder how LimeWire and other programs track file data that way.
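One common way around the "I'd need a 500 MB file to track a 500 MB file" problem is to keep the chunk size fixed for bookkeeping (varying only how many chunks are in flight at once) and track arrivals with one bit per chunk; BitTorrent-style clients do essentially this with a "bitfield". A hypothetical Python sketch, where the class name and sizes are made up for illustration:

```python
class ChunkTracker:
    """Track which fixed-size chunks of a file have arrived, one bit per
    chunk, so a 500 MB file at 64 KB chunks needs only ~1 KB of state."""

    def __init__(self, file_size, chunk_size):
        self.chunk_size = chunk_size
        self.total = -(-file_size // chunk_size)    # ceiling division
        self.bits = bytearray((self.total + 7) // 8)
        self.received = 0

    def mark(self, offset):
        """Record the chunk starting at `offset`; duplicates are ignored."""
        i = offset // self.chunk_size
        mask = 1 << (i % 8)
        if not self.bits[i // 8] & mask:
            self.bits[i // 8] |= mask
            self.received += 1

    def complete(self):
        return self.received == self.total

t = ChunkTracker(file_size=500 * 1024 * 1024, chunk_size=64 * 1024)
t.mark(0)
t.mark(64 * 1024)                    # chunks can arrive out of order
print(t.received, "of", t.total, "chunks;", len(t.bits), "bytes of state")
```

Because chunks are identified by their fixed offset, out-of-order arrival and multiple senders both work, and the in-flight window can still be tuned independently.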

Offline Parkland

  • Newbie
  • Posts: 51
Re: TCP/IP data limit size in buffer etc
« Reply #6 on: April 04, 2021, 10:37:18 am »
I have a hypothesis that this is due to the non-blocking nature of the socket. Could you please try to see if the value of _CONNECTED(s), where s is the handle used for PUT, returns 0 once data starts being dropped? This is on the sending side.

I want to try this, but I just had to build a new computer for testing, and QB64 won't run on it yet, so I have to figure out why.
It would be super handy if the value of _CONNECTED could be used to determine whether more data should be sent. That would definitely solve the issue easily.

If that doesn't work:
https://tools.ietf.org/html/rfc5681

I found that article, and it describes TCP's slow-start routine. If a similar routine were implemented in the QB64 program, the data rate could more closely match what the TCP protocol expects, so packet loss could be minimal instead of erratic like it is currently.
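The core of that RFC 5681 behaviour can be caricatured in a few lines (Python, heavily simplified, ignoring fast recovery and per-ACK timing): grow the window quickly below a threshold, slowly above it, and back off hard on loss. The 64 KB "drop point" in the simulation below is just a stand-in for whatever the path actually tolerates.

```python
def adjust_window(cwnd, ssthresh, lost, mss=1000):
    """One simplified RFC 5681-style step: slow start below ssthresh,
    additive increase above it, multiplicative decrease on loss."""
    if lost:
        ssthresh = max(cwnd // 2, 2 * mss)   # remember half the window
        cwnd = mss                           # and start over small
    elif cwnd < ssthresh:
        cwnd += mss                          # slow start (fast growth)
    else:
        cwnd += mss * mss // cwnd            # congestion avoidance (slow growth)
    return cwnd, ssthresh

# Simulate a path that silently drops anything beyond 64 KB in flight.
cwnd, ssthresh = 1000, 64 * 1024
trace = []
for _ in range(200):
    lost = cwnd > 64 * 1024
    cwnd, ssthresh = adjust_window(cwnd, ssthresh, lost)
    trace.append(cwnd)
print("probed up to", max(trace), "bytes; settled at", trace[-1], "bytes")
```

The point is the shape, not the numbers: the sender probes past the limit once, loses a packet, then oscillates safely below it without ever needing to know the path's buffer size in advance.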