Tags: video, animation, audio, game-physics

How to synchronize physics model, audio, game rendering, frame display and input


What is a good processing sequence and/or threading model to give the user the impression of a well-synchronised physics model, audio, video and input in an application, assuming the application prepares no "predictive" frames or sound?

EDIT

My question assumes no "networked game" concept, just a standalone machine.


Solution

  • Broad question.

    I'm assuming a game context. What seems to be done more or less universally is to synchronize on frame rendering. Here's roughly what happens (a minimal code sketch follows the list):

    • Inputs are grabbed and evaluated, responses (AI and such) computed. This may set new physics processes in motion.
    • If an event starts that is accompanied by a sound, that sound is started. From that point on it runs more or less autonomously until it completes, independently of frame processing (which is where we are).
    • The physics model is updated. In most cases this is something fairly simple, like calculating a new position from the previous position and velocity. The amount to extrapolate by depends on the time that has passed since the last frame (though this may be averaged rather than re-computed for every frame).
    • From the updated physics model, the visual model is updated.
    • The graphics engine gets to display a new scene (frame) from the updated model.
    • Repeat as soon as done.
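
    To make the sequence concrete, here is a minimal sketch of such a frame-synchronised loop in C++. Everything engine-specific (readInput, startSound, stepPhysics, renderFrame, and the Input struct) is a hypothetical stub standing in for whatever input, audio, physics and graphics APIs the application actually uses; only the timing code is plain standard C++.

    ```cpp
    #include <chrono>
    #include <cstdio>

    // Hypothetical stand-ins for the application's real input, audio,
    // physics and graphics APIs; each body here is just a stub.
    struct Input { bool fired = false; };

    Input readInput()                     { return {}; }   // grab pending input
    void  evaluateResponses(const Input&) {}               // AI / game logic, may start physics processes
    void  startSound(const char* name)    { std::printf("sound: %s\n", name); } // fire-and-forget
    void  stepPhysics(double dt)          { (void)dt; }    // e.g. pos += vel * dt
    void  updateVisualModel()             {}               // physics state -> scene graph
    void  renderFrame()                   {}               // draw + present (typically blocks on vsync)

    int main() {
        using clock = std::chrono::steady_clock;
        auto previous = clock::now();

        for (int frame = 0; frame < 3; ++frame) {   // a real game loops until quit
            // 1. Inputs are grabbed and evaluated; responses may set physics in motion.
            Input in = readInput();
            evaluateResponses(in);

            // 2. A sound is started once and then runs independently of frame processing.
            if (in.fired)
                startSound("explosion");

            // 3. Physics is extrapolated by the time elapsed since the last frame.
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;
            stepPhysics(dt);

            // 4. The visual model is updated from the physics model.
            updateVisualModel();

            // 5. The graphics engine displays the new frame, then we repeat.
            renderFrame();
        }
    }
    ```

    A common refinement, as noted in the list, is to smooth or fix `dt` (for example, accumulating elapsed time and stepping the physics in fixed increments) so the simulation stays stable when frame times fluctuate.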