Tags: network-programming, tcp, udp, distributed-computing, distributed-system

How to estimate the total time to complete a request in UDP and TCP (Distributed Systems)


I have stumbled upon a question and I can't figure out how the answer was derived. I will post the question and answer below.

Consider a distributed system that has the following characteristics:

  • Latency per packet (local or remote, incurred on both send and receive): 5 ms
  • Connection setup time (TCP only): 5 ms
  • Data transfer rate: 10 Mbps
  • MTU: 1000 bytes
  • Server request processing time: 2 ms

Assume that the network is lightly loaded. A client sends a 200-byte request message to a service, which produces a response containing 5000 bytes. Estimate the total time to complete the request in each of the following cases, with the performance assumptions listed above:

1) Using connectionless (datagram) communication (for example, UDP);

Answer: UDP: 5 + 2000/10000 + 2 + 5(5 + 10000/10000) = 37.2 milliseconds

We were not given any formula, so I am having trouble working out what the numbers in the above calculation actually mean.

  • 2000/10000 - I think 10000 has to come from 10 Mbps * 1000; I just don't know what 2000 means.

  • (5 + 10000/10000) - I know this has to be multiplied by 5 because the MTU is 1000 bytes, but I just don't know what the numbers mean.

Thank you, looking forward to your ideas.


Solution

  • For 2000/10000: I guess that 2000 is the request message size in bits. Strictly, the request message size should be 1600 bits, since 200 bytes = 200 * 8 bits; I guess the answer rounds 1600 up to 2000 for simplicity.

  • For 5(5 + 10000/10000): first, MTU is short for Maximum Transmission Unit, the largest packet size that can be transmitted over the network. The response message is 5000 bytes while the MTU is 1000 bytes, so the response is divided into 5 packets of 1000 bytes each.

    Since this is connectionless communication, there is no pipelining: only one packet is in the link at a time. Thus, for each packet, the time to send it back is 5 + 10000/10000 (strictly it should be 8000/10000, since the MTU is 1000 * 8 bits; again, I guess this is rounded up to 10000 for simplicity). So sending back all 5 packets takes 5(5 + 10000/10000) in total. The whole calculation is sketched below.
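To make the arithmetic explicit, here is a minimal Python sketch of the same estimate. It assumes the rounded figures used in the quoted answer (2000 bits for the request, 10000 bits per response packet, and 10 Mbps treated as 10000 bits per millisecond); the variable names are just illustrative.

```python
# Rough breakdown of the UDP estimate, using the rounded figures from the answer.

LATENCY_MS = 5             # latency per packet (one way), in ms
RATE_BITS_PER_MS = 10_000  # 10 Mbps treated as 10,000 bits per millisecond
PROCESSING_MS = 2          # server request processing time, in ms

REQUEST_BITS = 2_000       # 200-byte request, rounded up from 1600 bits
PACKET_BITS = 10_000       # 1000-byte MTU, rounded up from 8000 bits
RESPONSE_PACKETS = 5       # 5000-byte response / 1000-byte MTU

# Request: a single packet, so one latency plus its transmission time.
request_ms = LATENCY_MS + REQUEST_BITS / RATE_BITS_PER_MS  # 5.2 ms

# Response: 5 packets sent one after another (no pipelining), each paying
# its own latency plus transmission time.
response_ms = RESPONSE_PACKETS * (LATENCY_MS + PACKET_BITS / RATE_BITS_PER_MS)  # 30 ms

total_ms = request_ms + PROCESSING_MS + response_ms
print(total_ms)  # 37.2
```

If you instead plug in the exact bit counts (1600 bits for the request and 8000 bits per packet), the same structure gives 5 + 0.16 + 2 + 5(5 + 0.8) = 36.16 ms, which is why the 37.2 ms figure should be read as a rounded estimate rather than an exact value.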