I am very new to C programming and I am working on a firmware application for my MCU. This function was working fine when I was using the Keil compiler (big-endian), but when I switched to the SDCC compiler (little-endian) it stopped working properly. Can someone please explain what I am doing wrong?
The target device is a Silicon Labs C8051F320 which is based on the 8051 architecture.
unsigned int MotorSteps = 0; // "Global" variables
unsigned int MotorSpeed = 0;

bit RampUp()
{
    float t = 0;

    t = MotorSteps;

    if (t < 51)
    {
        t = (1 - ((50 - t) / 50)) * 15;
        t = (t * t);
        MotorSpeed = 100 + t;
        return 0;
    }
    else
        return 1;
}
ADDED: First, I have now changed MotorSteps and MotorSpeed to unsigned ints. In my debugger, for some reason, if I set a breakpoint at the if-statement line, on the first entry into this function MotorSteps = 0, so t should also be assigned 0, but the debugger shows t = 0.031497 (decimal). If I switch the debugger to display hex, t = 0x3D010300. It's as if t is never getting assigned...
If MotorSteps = 49 then
(50 - 49) / 50 = 0.02
next
(1 - 0.02) = 0.98
and
0.98 * 15 = 14.7
Squaring this value would set t as
t = 14.7 * 14.7 = 216.09
Finally, the addition 100 + t is carried out in float arithmetic (100 + 216.09 = 316.09), and the implicit conversion of that result back to an unsigned char overflows the MotorSpeed variable:
MotorSpeed = 100 + 216.09; // the float result 316.09 is truncated and narrowed to unsigned char
A value of 316, of course, does not fit in an unsigned char, so it wraps modulo 256 and you end up with 316 - 256 = 60.
This is probably unwanted behavior regardless of the compiler.