c++, debugging, time, release

Time differences between Release and Debug builds


My program is supposed to trigger an event every 180 seconds.

mFoodSpawnTime += dt;
if (mFoodSpawnTime > mFoodSpawnCycleLength)
{
    // ... spawn food, reset the timer, etc.
}

Here mFoodSpawnCycleLength = 180.0f, and mFoodSpawnTime is another float that accumulates dt every loop iteration.

My problem is that on a release build, if mFoodSpawnCycleLength is around 180.0f, the event never seems to arrive; it can take up to 10 minutes for mFoodSpawnTime to exceed mFoodSpawnCycleLength! I have timed the debug build against a stopwatch, and it executes the block right at the 180 seconds. Back to the release build: as long as mFoodSpawnCycleLength isn't near 180.0f, the code does execute, though not always on time. I once set it to 120.0f, and when it fired the stopwatch read 2 min 30 sec. There is no #ifdef DEBUG code that could be causing this. So what I'm saying is: the closer mFoodSpawnCycleLength gets to 180.0f, the less accurate the timing becomes, but only on the release build!

I just printed out mFoodSpawnTime: when my clock read 3 min, its value was only around 160 s, and I found that as the timer approaches 150 seconds, the increments slow to a halt. I tracked dt in each loop and it doesn't look any different from the beginning.

Could this all be caused by an aggressive compiler optimization? A 32-bit floating-point precision error? I'll continue to look into this, but any help is appreciated.
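To test the float theory in isolation, a quick standalone sketch like the one below can be run outside the DirectX code (the 5 µs dt is just a stand-in for a fast release-build frame time, not a measured value):

#include <cstdio>

int main()
{
    float accum = 0.0f;
    const float dt = 0.000005f; // assumed ~5 microseconds per loop

    // 60 million adds * 5 us = 300 "seconds" if the math were exact.
    for (long long i = 0; i < 60000000LL; ++i)
        accum += dt;

    // With IEEE-754 round-to-nearest this prints roughly 128, not 300:
    // once accum reaches 128, dt falls below half an ulp and rounds away.
    std::printf("expected ~300, got %f\n", accum);
    return 0;
}

If the sum stalls the same way my in-game timer does, that would point at the accumulator rather than dt.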

I'm still learning, so I'm working from a DirectX book. I use the timing code from a demo the book provides:

int D3DApp::run()
{
    MSG msg;
    msg.message = WM_NULL;

    // The counter frequency is fixed at boot; query it once.
    __int64 cntsPerSec = 0;
    QueryPerformanceFrequency((LARGE_INTEGER*)&cntsPerSec);
    float secsPerCnt = 1.0f / (float)cntsPerSec;

    __int64 prevTimeStamp = 0;
    QueryPerformanceCounter((LARGE_INTEGER*)&prevTimeStamp);

    while (msg.message != WM_QUIT)
    {
        // If there are Window messages then process them.
        if (PeekMessage(&msg, 0, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        // Otherwise, do animation/game stuff.
        else
        {
            if (mTimeReset)
            {
                QueryPerformanceCounter((LARGE_INTEGER*)&prevTimeStamp);
                mTimeReset = false;
            }

            if (!isDeviceLost())
            {
                static float frameLimit = 0.0f;
                __int64 currTimeStamp = 0;
                QueryPerformanceCounter((LARGE_INTEGER*)&currTimeStamp);
                float dt = (currTimeStamp - prevTimeStamp) * secsPerCnt;

                // Guard against huge gaps (e.g. after a debugger break).
                if (dt > 2.0f) dt = 0.0f;

                frameLimit += dt;

                updateScene(dt);

                // Cap drawing at roughly 60 fps.
                if (frameLimit > 0.0167f)
                {
                    drawScene();
                    frameLimit = 0.0f;
                }

                // Prepare for the next iteration: the current time stamp
                // becomes the previous time stamp.
                prevTimeStamp = currTimeStamp;
            }
        }
    }
    return (int)msg.wParam;
}

Solution

  • You might want to try using double rather than float here. The performance difference in this case will be minimal, and I suspect you are hitting a problem where you add a very small number to a big number, leaving the floating-point value of the big number unchanged.

    Floats only give you six to seven significant decimal digits of precision. If the loop spins fast enough that dt is down in the microsecond range (quite plausible in a release build that updates every iteration but only draws every 16.7 ms), each increment falls below half an ulp of the accumulator once it reaches the low hundreds of seconds and is simply rounded away, which matches your timer stalling near 150.

    Debug mode tends to run slower, producing larger per-loop increments, which may be why you don't see the problem there.
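    A minimal sketch of that change, reusing your variable names as stand-ins (the standalone main() and the bare spawn loop are just for illustration, not the real D3DApp loop):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        __int64 cntsPerSec = 0;
        QueryPerformanceFrequency((LARGE_INTEGER*)&cntsPerSec);
        double secsPerCnt = 1.0 / (double)cntsPerSec;   // double keeps ~15-16 digits

        __int64 prevTimeStamp = 0;
        QueryPerformanceCounter((LARGE_INTEGER*)&prevTimeStamp);

        double mFoodSpawnTime = 0.0;                    // accumulator in double
        const double mFoodSpawnCycleLength = 180.0;

        for (;;)
        {
            __int64 currTimeStamp = 0;
            QueryPerformanceCounter((LARGE_INTEGER*)&currTimeStamp);

            // Tick deltas stay exact in __int64; only the product is rounded.
            double dt = (currTimeStamp - prevTimeStamp) * secsPerCnt;
            prevTimeStamp = currTimeStamp;

            mFoodSpawnTime += dt;                       // microsecond dt no longer vanishes
            if (mFoodSpawnTime > mFoodSpawnCycleLength)
            {
                std::printf("spawn after %.3f s\n", mFoodSpawnTime);
                break;                                  // demo: stop after the first spawn
            }
        }
        return 0;
    }

    An alternative that sidesteps rounding entirely: accumulate raw __int64 tick counts and convert to seconds only when you compare; integer addition never loses small increments.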