I am struggling to understand this and need confirmation. I am calculating the frame drop rate per second, which is affected by the processing capability of the device and by the network, for a distributed deep neural network application consisting of a client and a server.
I have the following pseudocode for calculating the frame drop rate of my application.
Client
import time

TIME_DELTA = 1                      # measurement window in seconds
before_sent = frame_requests        # running count of frames sent so far
time.sleep(TIME_DELTA)
after_sent = frame_requests         # count again after the window
client_side = (after_sent - before_sent) / TIME_DELTA   # frames sent per second
Server
import time

TIME_DELTA = 1                          # measurement window in seconds
before_received = frame_requests        # running count of frames received so far
time.sleep(TIME_DELTA)
after_received = frame_requests         # count again after the window
server_side = (after_received - before_received) / TIME_DELTA   # frames received per second
Frame drop rate
frame_drop_rate = client_side - server_side
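To make the intent concrete, here is a minimal runnable sketch of the client-side measurement; frame_requests stands for a shared counter that a separate sender thread increments for every frame it sends, and the sender loop below is only a placeholder for my real capture/send code:

import threading
import time

frame_requests = 0                  # shared counter: frames handed to the network so far
TIME_DELTA = 1                      # measurement window in seconds

def sender():
    # Placeholder for the real capture/encode/send loop.
    global frame_requests
    while True:
        frame_requests += 1         # one more frame sent
        time.sleep(0.01)            # stand-in for per-frame processing time

threading.Thread(target=sender, daemon=True).start()

before_sent = frame_requests
time.sleep(TIME_DELTA)
after_sent = frame_requests
client_side = (after_sent - before_sent) / TIME_DELTA   # frames sent per second
print(client_side)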
I need to confirm whether this is the right way to calculate it. Your suggestions and thoughts to guide me in the right direction are highly appreciated.
frame_drop_rate = client_side - server_side
would be the number of frames dropped per second.
frame_drop_rate = 1 - (server_side / client_side)
would be the fraction of frames dropped each second (multiply by 100 for a percentage).
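For example, with made-up numbers, say the client sends 30 frames per second and the server receives 24:

client_side = 30.0   # frames sent per second (example value)
server_side = 24.0   # frames received per second (example value)

frames_dropped_per_second = client_side - server_side    # 6.0 frames lost every second
drop_fraction = 1 - (server_side / client_side)          # 0.2, i.e. 20% of sent frames are dropped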
The term "rate per second" refers to the change in dropped frames over time. So wheter it is getting worse or better but not how good or bad it is.
Also: use underscores instead of hyphens in variable names (a hyphen is parsed as subtraction), and keep the spelling consistent: server_side everywhere, not serverside in one place and server-side in another.