I'm trying my hand at coding an Objective-C(++) app from scratch, and I am completely stumped as to why testing a float
within a while loop appears to cause an infinite loop.
Firstly, the file: test.mm
#include <stdio.h>
#include <mach/mach_time.h>
int main(int argc, const char* argv[])
{
#pragma unused(argc)
#pragma unused(argv)
    // --- LOOP ---
    float timer = 2.0f;
    float debugMarker = 2.0f;
    uint64_t lastLoopStart = mach_absolute_time();
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    while(timer > 0.0f)
    {
        uint64_t now = mach_absolute_time();
        uint64_t elapsed = now - lastLoopStart;
        uint64_t nanos = elapsed * timebase.numer / timebase.denom;
        float deltaTime = static_cast<float>(static_cast<double>(nanos) * 1.0E-9);
        timer -= deltaTime;
        lastLoopStart = now;
        // Including this line avoids the bug
        // timer -= 0.1f;
        // This does not cause the bug
        // if(0.0f < timer)
        // This causes the bug
        // if(debugMarker > 0.0f)
        // This causes the bug
        if(debugMarker >= timer)
        {
            printf("timer: %f\n", static_cast<double>(timer));
            debugMarker -= 1.0f;
        }
    }
    printf("DONE\n");
    return (0);
}
Compiled with: clang -g -Weverything test.mm
Running the resulting program prints the timer value once, and then the loop appears to run forever.
Using if(debugMarker > 0.0f) instead makes it print the timer value twice.
I'm at a complete loss as to what could be happening here. Any help would be appreciated!
Do you really have to use float? Is there any reason to prefer it?
My advice is to forget about float: there is no real advantage to using it here, and it does not have good precision, not even for sin/cos work. In your loop, once timer is near 2.0, a per-iteration delta of a few tens of nanoseconds is smaller than the spacing between adjacent float values around 2.0 (about 1.2E-7), so timer -= deltaTime rounds back to the same value and the loop never advances.
As a general rule I suggest choosing double as the default floating-point type, in any language from C to Swift.
Your code itself is fine! I converted it to double, removed the casts, and adjusted a few variable values.
I commented out the printf line, compiled it, and ran it.
It looped for 2.011 seconds and exited:
> $ time ./a.out
DONE
./a.out 1.97s user 0.02s system 98% cpu 2.011 total
Then I re-enabled the printf call and ran it again.
It looped for 2.014 seconds and printed more than 5 million lines, counting down.
(partial listing... total was 5168689 printed lines)
timer: 0.000060
timer: 0.000057
timer: 0.000054
timer: 0.000051
timer: 0.000048
timer: 0.000045
timer: 0.000041
timer: 0.000039
timer: 0.000036
timer: 0.000033
timer: 0.000029
timer: 0.000026
timer: 0.000023
timer: 0.000020
timer: 0.000017
timer: 0.000014
timer: 0.000011
timer: 0.000008
timer: 0.000005
timer: 0.000002
timer: -0.000001
DONE
./a.out 0.24s user 0.25s system 23% cpu 2.014 total
Here it is, converted to use double precision:
#include <stdio.h>
#include <mach/mach_time.h>
int main(int argc, const char* argv[])
{
#pragma unused(argc)
#pragma unused(argv)
    // --- LOOP ---
    double timer = 2;
    double debugMarker = 2;
    uint64_t lastLoopStart = mach_absolute_time();
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    while(timer > 0)
    {
        uint64_t now = mach_absolute_time();
        uint64_t elapsed = now - lastLoopStart;
        uint64_t nanos = elapsed * timebase.numer / timebase.denom;
        double deltaTime = nanos * 1.0E-9;
        timer -= deltaTime;
        lastLoopStart = now;
        // Including this line avoids the bug
        // timer -= 0.1f;
        // This does not cause the bug
        // if(0.0f < timer)
        // This causes the bug
        // if(debugMarker > 0)
        // This causes the bug
        //if(debugMarker < timer)
        //{
        printf("timer: %F\n", timer);
        // debugMarker -= 1;
        //}
    }
    printf("DONE\n");
    return (0);
}