I'm planning to write a small web app where two players can play a chess game with blitz time controls. Here's a small description of how the process of making a move works:

1. Player A thinks about their move and plays it.
2. Player A's client sends the move to the server.
3. The server receives and validates the move.
4. The server updates the game state and clocks.
5. The server forwards the move to player B's client.
6. Player B's client receives and displays the move.
7. Player B thinks about their reply.
It makes sense that step 1 should run on player A's allotted time and step 7 on player B's. What can I do about the other steps? Is there any way to measure how much time they took and credit that time back to both players' clocks?
This can't be done without trusting the client.
The basic problem is that a client with low latency can pretend to have high latency in order to gain bonus time. A client with a 100ms round-trip time (RTT) can claim a 3s RTT and use this deception to turn a 6.9s move into a 4s move: the server observes 7.0s between sending the position and receiving the move (6.9s of thinking plus 0.1s of real transit), subtracts the claimed 3s, and charges only 4.0s. There is no practical way for a protocol to detect a client pretending to have high latency, unless the client makes an obvious mistake (e.g. claiming a 3s delay waiver on a move the server saw complete in 1s).
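To make the accounting problem concrete, here is a minimal sketch (in TypeScript, with function and parameter names of my own invention) of why the server cannot defend itself: it only ever sees one combined duration.

```typescript
// The server can only observe the wall-clock time between sending the
// position and receiving the move; thinking time and network transit
// are indistinguishable inside that single number.
function chargedTimeMs(observedMs: number, claimedRttMs: number): number {
  return Math.max(0, observedMs - claimedRttMs);
}

// Honest client: 6900ms of thinking + 100ms of real RTT, claims 100ms.
chargedTimeMs(6900 + 100, 100); // => 6900 (correct)

// Cheating client: identical 6900ms of thinking, but claims a 3s RTT.
chargedTimeMs(6900 + 100, 3000); // => 4000 (2900ms stolen)
```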
On the other hand, if you do trust the client, you can estimate its RTT by pinging it periodically and deduct that estimate from each measured move time. If you really trust the client, have it measure the elapsed thinking time itself and report it along with the move data.
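A minimal sketch of the periodic-ping idea, assuming a Node server using the ws package (the interval, baseline, and smoothing factor are arbitrary choices of mine):

```typescript
import WebSocket from "ws";

// Tracks a smoothed RTT estimate for one connection and returns a getter.
function trackRtt(socket: WebSocket): () => number {
  let rttEstimateMs = 50; // assumed baseline until real samples arrive
  let pingSentAt = 0;

  const timer = setInterval(() => {
    pingSentAt = Date.now();
    socket.ping(); // browsers answer protocol-level pings automatically
  }, 5000);

  socket.on("pong", () => {
    const sample = Date.now() - pingSentAt;
    // Exponential moving average smooths out jitter.
    rttEstimateMs = 0.8 * rttEstimateMs + 0.2 * sample;
  });

  socket.on("close", () => clearInterval(timer));

  return () => rttEstimateMs;
}
```

The current estimate can then be subtracted from each measured move time when charging the mover's clock.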
There are a lot of ways to manage this trust/latency-penalty tradeoff. You can assume a minimum RTT (50ms?) and deduct it from every move. You can put bonus time on the clocks. You can offer the user-facing option of a game being 'secure' vs 'forgiving'.
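One way to express those choices is as a per-game policy object. This is only a sketch: apart from the 'secure'/'forgiving' labels, which come from the paragraph above, every name and number here is hypothetical.

```typescript
// Knobs for the trust/latency-penalty tradeoff.
interface LagPolicy {
  assumedMinRttMs: number;    // deducted from every move unconditionally
  trustClientRtt: boolean;    // honor the measured RTT beyond that minimum?
  maxCompensationMs: number;  // cap on how much lag a client can claim
  incrementMsPerMove: number; // flat bonus added back after each move
}

const secure: LagPolicy = {
  assumedMinRttMs: 50,
  trustClientRtt: false,
  maxCompensationMs: 50,
  incrementMsPerMove: 0,
};

const forgiving: LagPolicy = {
  assumedMinRttMs: 50,
  trustClientRtt: true,
  maxCompensationMs: 2000,
  incrementMsPerMove: 500,
};

// Net time charged to the mover's clock for one move.
function chargeMoveMs(observedMs: number, rttEstimateMs: number, p: LagPolicy): number {
  const deduction = p.trustClientRtt
    ? Math.min(Math.max(rttEstimateMs, p.assumedMinRttMs), p.maxCompensationMs)
    : p.assumedMinRttMs;
  // A negative result just means the mover gained time on this move.
  return Math.max(0, observedMs - deduction) - p.incrementMsPerMove;
}
```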