I'm building a port forward in C# using TCP sockets. It should not have to worry about the application protocol that's being used.
However, depending on the application protocol, the amount of data I forward can matter. For example, when authenticating with FTP, trailing 00 bytes caused errors while sending USER and PASSWORD through the port forward. So I decided to remove the trailing bytes by stripping the last run of 00s.
But then the SSH protocol started to get messed up: by removing the trailing bytes, I end up producing malformed packets.
The solution I came up with was to get the actual size of the data when it first reaches the port forward, and then forward exactly that many bytes to the endpoint.
My question is: how do I get the actual size of the data when it arrives at the socket? For example, when I use socket.Receive to read the bytes sent to the port forward tool (by a simple ssh blabla@someIP), how do I know which byte is the last byte of the actual data? The last read will usually return fewer bytes than the buffer size, but I have no idea how to tell the unused buffer space apart from actual data when both look like 00s.
Note: I'm using the Socket class with its Receive and Send methods.
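To make it concrete, here is a simplified sketch of the kind of loop I have now (all names are made up for illustration; the zero-trimming is the part that breaks SSH):

```csharp
using System.Net.Sockets;

// Simplified version of my current forwarding loop (illustrative names only).
// "client" and "server" are already-connected sockets.
void Forward(Socket client, Socket server)
{
    byte[] buffer = new byte[4096];
    while (true)
    {
        client.Receive(buffer); // no idea how much of "buffer" is real data at this point

        // Workaround: strip the trailing 00s before forwarding.
        // This fixed the FTP USER/PASS errors but corrupts SSH packets.
        int end = buffer.Length;
        while (end > 0 && buffer[end - 1] == 0)
        {
            end--;
        }

        server.Send(buffer, 0, end, SocketFlags.None);
    }
}
```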
Forget about zeros; you cannot tell anything about the data by looking for zeros, and indeed the garbage may not even be zeros (buffer recycling etc.). When receiving data, you must look at the return value from `Receive`, which tells you how many bytes were actually read. When sending data, it is your job to tell the socket how much of the buffer is actually data; you can use a buffer segment for that: there are overloads of `Socket.Send` that take `byte[] buffer, int offset, int size`, and other overloads that take a list of `ArraySegment<byte>` fragments (for discontiguous buffers).
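For example, a minimal relay sketch (assuming `source` and `destination` are already-connected sockets; all names here are illustrative, not from your code) looks something like this:

```csharp
using System.Net.Sockets;

// Relay loop: forward exactly the bytes that were received, nothing more.
// "source" and "destination" are assumed to be already-connected sockets.
void Relay(Socket source, Socket destination)
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    // Receive returns the number of bytes actually read;
    // 0 means the remote side has closed the connection.
    while ((bytesRead = source.Receive(buffer)) > 0)
    {
        int offset = 0;
        while (offset < bytesRead)
        {
            // Forward only the bytes we read; Send returns how many bytes
            // it accepted, so loop until the whole chunk has gone out.
            offset += destination.Send(buffer, offset, bytesRead - offset, SocketFlags.None);
        }
    }
}
```

The `ArraySegment<byte>` overloads work the same way, but take a list of segments: `Socket.Send(IList<ArraySegment<byte>>)` lets you forward several non-contiguous pieces of a buffer in a single call.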