Tags: linux, gcc, ffmpeg, ogg-theora

Reason for 1-bit differences between the output of Linux-gcc compiled C code and MS-VS2008 compiled output?


I have a Theora video decoder library and application compiled with VS-2008 on Windows (Intel x86 architecture). I use this setup to decode Theora bitstreams (*.ogg files). The source code for this decoder library is taken from the FFMPEG v0.5 source package, with some modifications to make it compile under the Windows/VS-2008 combination.

Now when I decode the same Theora bitstream using the ffmpeg (v0.5) application on Linux (Intel x86 architecture), built with gcc, the decoded YUV output file has 1-bit differences from the output obtained with the Windows-VS2008 setup, and only in a few bytes of the file, not all of them. I expected the two outputs to be bit-exact.

I suspect the following factors:

a.) Some data type mismatch between the two compilers, gcc and MS-VS2008?

b.) I have verified that the code does not use any run-time math library functions like log, pow, exp, cos, etc., but it still has operations like (a+b+c)/3. Could this be an issue?

The implementation of this "divide by three" (or by any other number) could differ between the two setups; see the sketch after this list.

c.) Some kind of rounding/truncation effect happening differently?

d.) Could I be missing a macro that is defined via a makefile/configure option on Linux but absent from the Windows setup?
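
Regarding point b.), here is a minimal sketch of the kind of operation I mean (the values are arbitrary, just for illustration); as far as I understand, the integer form is fully defined by the C standard, while a floating-point variant need not be bit-identical across compilers:

    #include <stdio.h>

    int main(void)
    {
        int a = 7, b = 8, c = 10;

        /* Integer division truncates toward zero (C99 6.5.5),
         * so this result is bit-identical on every conforming compiler. */
        int avg_int = (a + b + c) / 3;

        /* A floating-point average is NOT guaranteed to be bit-identical:
         * x87 code may keep the intermediate sum in an 80-bit register,
         * while SSE2 code rounds every step to double width. */
        double avg_flt = (a + b + c) / 3.0;

        printf("int: %d  double: %.17g\n", avg_int, avg_flt);
        return 0;
    }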

But I am not able to narrow down the problem or find a fix for it.

1.) Are my doubts above valid, or could there be other issues causing these 1-bit differences in the output produced by the two setups?

2.) How do I debug and fix this?
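
For reference, this is roughly how I locate the differing bytes between the two output files (a minimal sketch; the file names are placeholders for the two setups' outputs):

    #include <stdio.h>

    int main(void)
    {
        FILE *f1 = fopen("out_linux.yuv", "rb");
        FILE *f2 = fopen("out_win.yuv", "rb");
        long off = 0;
        int c1, c2;

        if (!f1 || !f2) { perror("fopen"); return 1; }

        /* Walk both files in lockstep and report every mismatch. */
        while ((c1 = fgetc(f1)) != EOF && (c2 = fgetc(f2)) != EOF) {
            if (c1 != c2)
                printf("offset %ld: %02x vs %02x (xor %02x)\n",
                       off, c1, c2, c1 ^ c2);
            off++;
        }
        fclose(f1);
        fclose(f2);
        return 0;
    }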

I guess this scenario of output differences between a Linux-gcc setup and the Windows MS compilers could hold for any generic code, not just my specific case of a video decoder application.

Any pointers regarding this would be helpful.

thanks,

-AD


Solution

  • I think such behavior may come from x87/SSE2 math. What version of gcc do you use? Do you use float (32-bit) or double (64-bit)? Math on the x87 unit carries more precision bits internally (80-bit registers) than can be stored in memory as float or double.

    Try these gcc flags: -ffloat-store, or -msse2 -mfpmath=sse

    Flags for MSVC: /fp:fast /arch:SSE2
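
    To make the x87-vs-SSE2 effect concrete, here is a small double-rounding demo (my own sketch, not taken from the decoder; the exact result depends on compiler flags and optimization level):

        #include <stdio.h>

        int main(void)
        {
            /* volatile defeats compile-time constant folding */
            volatile double x = 1e16;
            volatile double y = 2.9999999999999996; /* the double just below 3 */
            double z = x + y;

            /* x87:  the sum is formed in an 80-bit register and rounded
             *       again when stored -> may print 10000000000000004
             * SSE2: one rounding straight to double
             *       -> prints 10000000000000002 */
            printf("%.17g\n", z);
            return 0;
        }

    Compiling this once with -mfpmath=387 and once with -msse2 -mfpmath=sse and comparing the printed values should show whether this effect is the source of your mismatch.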