I have the following method, which reads and deserializes packets from a NetworkStream asynchronously. Everything works, but CPU profiling shows that the very last line, in which I am awaiting an asynchronous read, is where the majority of my CPU usage comes from. Have I implemented this badly/inefficiently, or is there something inherently wrong with NetworkStream's async implementation?
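For reference, the state the method works against lives in a few instance fields. They look roughly like this (the buffer size below is just a placeholder, not necessarily the one I use):

private readonly NetworkStream Stream;           // the stream being read from
private readonly byte[] buffer = new byte[8192]; // receive buffer (size here is arbitrary)
private int start;                               // scan position when searching for Packet.STX
private int end;                                 // scan position when searching for Packet.ETX
private int length;                              // number of valid bytes currently in buffer

And the method itself: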
public async Task<Packet> ReadAsync(CancellationToken cancellationToken)
{
    while (true)
    {
        // Read through the available bytes until we find the start of a packet
        while (start < length && buffer[start] != Packet.STX)
            start++;

        // Align the packet (and all successive bytes) with the beginning of the buffer
        if (start > 0)
        {
            if (start < length)
                Array.Copy(buffer, start, buffer, 0, length - start);
            length -= start;
            start = 0;
        }

        // Read through the available bytes until we find the end of the packet
        while (end < length && buffer[end] != Packet.ETX)
            end++;

        // If we have a whole packet in the buffer, deserialize and return it
        if (end < length)
        {
            byte[] data = new byte[end + 1];
            Array.Copy(buffer, data, end + 1);

            byte[] decoded = null;
            Packet packet = null;

            try
            {
                decoded = Packet.Decode(data);
            }
            catch (Exception ex)
            {
                throw new IOException("Could not decode packet", ex);
            }

            if (decoded != null)
            {
                try
                {
                    packet = Packet.Deserialize(decoded);
                }
                catch (Exception ex)
                {
                    throw new IOException("Could not deserialize packet", ex);
                }
            }

            Array.Copy(buffer, end + 1, buffer, 0, length - (end + 1));
            length -= end + 1;
            end = 0;

            if (packet != null)
                return packet;
        }

        // If we read all available bytes while looking for the end of a packet
        if (end == length)
        {
            if (length == buffer.Length)
                throw new InsufficientMemoryException();

            length += await Stream.ReadAsync(buffer, length, buffer.Length - length, cancellationToken);
        }
    }
}
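For what it's worth, I consume the method from a plain receive loop, roughly like this (the reader and HandlePacket names below are placeholders rather than my actual code):

while (!cancellationToken.IsCancellationRequested)
{
    // Await the next complete packet and hand it off for processing
    Packet packet = await reader.ReadAsync(cancellationToken);
    HandlePacket(packet);
}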
Update: I have updated the code to sleep between each call to ReadAsync, for roughly the amount of time the last read took:
var stopwatch = new Stopwatch();
var iteration = 0;

while (true)
{
    // ...

    var delay = stopwatch.Elapsed;
    stopwatch.Restart();

    if (iteration % 10 != 0)
        await Task.Delay(delay);

    length += await Stream.ReadAsync(buffer, length, buffer.Length - length, cancellationToken);

    stopwatch.Stop();
    iteration += 1;
}
This has drastically reduced the CPU usage. It is definitely a work-around, since it does not address the underlying issue, but it works. I would still love to hear anyone else's answers or opinions on this.