I have an application that needs to transfer data between a server and a client. Both ends are behind corporate firewalls, but they still need to communicate securely, so I have written a TCP relay server that can establish a connection between the two applications.
My issue is that the throughput of the TCP stream is now drastically reduced, and I would like to find out why. I have the TCP receive and send buffer sizes set to 10 MB on the server, the client, and the relay. The problem is most noticeable with larger RTTs; my current RTT is 60 ms. Once the initial handshake is made, the relay pipes the raw TCP streams between the server and the client with no additional framing.
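For context, the piping stage of my relay is roughly equivalent to the sketch below (the function names and buffer handling here are illustrative, not my exact code): each direction is copied byte-for-byte with no framing, and the 10 MB socket buffers are requested via `setsockopt` (note the OS may clamp these to its own limits).

```python
import socket
import threading

BUF = 10 * 1024 * 1024  # 10 MB, matching the server/client settings

def pipe(src, dst):
    """Copy raw bytes from src to dst until src signals EOF (no framing)."""
    try:
        while True:
            data = src.recv(65536)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other peer
        except OSError:
            pass

def relay(a, b):
    """Pipe two already-connected sockets into each other, full duplex."""
    for s in (a, b):
        # Request large buffers; the kernel may cap these (e.g. net.core.rmem_max).
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
    t = threading.Thread(target=pipe, args=(a, b), daemon=True)
    t.start()
    pipe(b, a)
    t.join()
```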
I have checked that the TCP window scaling option is properly negotiated. I also filtered for tcp.analysis.flags in Wireshark to see whether the receive window ever fills up, but no window-full or zero-window warning was ever generated.
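As a sanity check that the window size itself should not be the bottleneck, the bandwidth-delay product gives the maximum throughput a single TCP stream can reach with a given window and RTT (assuming the window can actually open fully; the 10 MB and 60 ms figures are from my setup above):

```python
# Bandwidth-delay product: an upper bound on single-stream TCP throughput.
window_bytes = 10 * 1024 * 1024   # 10 MB send/receive buffers
rtt_s = 0.060                     # 60 ms round-trip time

max_throughput = window_bytes / rtt_s     # bytes per second
print(f"{max_throughput / 1e6:.0f} MB/s") # ~175 MB/s
```

So with these settings the window would only cap throughput far above anything my links can carry, which is why I ruled it out.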
What can I do to figure out why the performance drops like this? Thanks in advance!
Here is some data I was able to gather using Wireshark:
Capture showing the point when the speed drops (Yellow=downloading peer, Cyan=uploading peer)
It turns out this was caused by the VPN connection I was using to simulate long-distance links. Once I was able to test the system with real people on the other side of the world, the problem went away, and instead of spikes I got nice wave patterns.