Tags: gcc, floating-point, powerpc

undefined reference to `__floatundisf' when using hard float (PowerPC)


I'm building code for PowerPC with hard float and have suddenly started getting this link error.

I understand that this symbol belongs to gcc's soft-float library. What I don't understand is why it's trying to use that at all, despite my efforts to tell it to use hard float.

make flags:

CFLAGS += -mcpu=750 -mhard-float -ffast-math -fno-math-errno -fsingle-precision-constant -shared -fpic -fno-exceptions -fno-asynchronous-unwind-tables -mrelocatable -fno-builtin -G0 -O3 -I$(GCBASE) -Iinclude -Iinclude/gc -I$(BUILDDIR)
ASFLAGS += -I include -mbroadway -mregnames -mrelocatable --fatal-warnings
LDFLAGS += -nostdlib -mhard-float $(LINKSCRIPTS) -Wl,--nmagic -Wl,--just-symbols=$(GLOBALSYMS)

Code in question:

static void checkTime() {
    u64 ticks = __OSGetSystemTime();
    //note timestamp here is seconds since 2000-01-01
    float secs = ticks / 81000000.0f; //everything says this should be 162m / 4,
        //but I only seem to get anything sensible with 162m / 2.
    int days  = secs / 86400.0f; //non-leap days
    int years = secs / 31556908.8f; //approximate average
    int yDay = days % 365;
    debugPrintf("Y %d D %d", years, yDay);
}

What more do I need to stop gcc trying to use soft float? Why has it suddenly decided to do that?


Solution

  • Looking at the GCC docs, __floatundisf converts an unsigned 64-bit integer (DImode — your u64 on this 32-bit target) to a single-precision float. If we compile your code* with -O1 and run objdump, we can see that the __floatundisf call indeed comes from dividing your u64 by a float:

        u64 ticks = __OSGetSystemTime();
      20:   48 00 00 01     bl      20 <checkTime+0x20> # Call __OSGetSystemTime
                20: R_PPC_PLTREL24  __OSGetSystemTime
        //note timestamp here is seconds since 2000-01-01
        float secs = ticks / 81000000.0f; //everything says this should be 162m / 4,
      24:   48 00 00 01     bl      24 <checkTime+0x24> # Call __floatundisf
                24: R_PPC_PLTREL24  __floatundisf
      28:   81 3e 00 00     lwz     r9,0(r30)
                2a: R_PPC_GOT16 .LC0
      2c:   c0 09 00 00     lfs     f0,0(r9)   # load the constant 1/81000000
      30:   ec 21 00 32     fmuls   f1,f1,f0   # do the multiplication ticks * 1/81000000
    

    So you're getting it for a u64 / float calculation.

    If you convert the u64 to a u32 first, the call also goes away.
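
    For example, here's a minimal pair (my own sketch, not taken from your code) that you can compile with the same flags and inspect with objdump:

        /* u64 operand: gcc must call __floatundisf to convert x to float */
        float with_u64(unsigned long long x) { return x / 81000000.0f; }

        /* u32 operand: gcc inlines the conversion; no libgcc call needed */
        float with_u32(unsigned int x) { return x / 81000000.0f; }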

    Why is it generated? Looking at the manual for the 750CL, which I'm hoping is largely equivalent to your chip, there is no instruction that loads an 8-byte integer from memory and converts it to a float. (It looks like there isn't one for directly converting a 32-bit integer to a float either: gcc instead inlines a confusing sequence of integer and floating-point manipulation instructions, as sketched below.)
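
    That inline sequence is essentially the classic exponent-bias trick. Here is a C sketch of the idea (an illustration of the technique, not the literal code gcc emits): construct the double 2^52 + x by bit manipulation, then subtract 2^52.

        #include <stdint.h>
        #include <string.h>

        /* How a 32-bit unsigned integer can be converted to float on hardware
           with no dedicated int-to-float instruction. */
        static float u32_to_float(uint32_t x) {
            /* 0x4330000000000000 is the double 2^52. OR-ing x into the low 32
               mantissa bits produces the double 2^52 + x exactly. */
            uint64_t bits = 0x4330000000000000ULL | x;
            double d;
            memcpy(&d, &bits, sizeof d);            /* reinterpret the bits as a double */
            return (float)(d - 4503599627370496.0); /* subtract 2^52, leaving x */
        }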

    I don't know what the units of __OSGetSystemTime are, but if you can reduce the value to a 32-bit integer, either by throwing away some lower bits or by doing some tricks with common divisors, you could get rid of the call, as sketched below.
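
    For instance, if the tick rate really is 81MHz as your code assumes, shifting off the low 25 bits keeps the count within 32 bits for roughly 56 years while losing only about 0.4s of resolution, which is plenty for day/year math. A sketch (assuming your u32/u64 typedefs and that tick rate):

        static void checkTime() {
            u64 ticks = __OSGetSystemTime();
            /* Drop the low 25 bits so the count fits in a u32; gcc can then
               inline the integer-to-float conversion instead of calling
               __floatundisf. */
            u32 t = (u32)(ticks >> 25);
            float secs = t * (33554432.0f / 81000000.0f); /* 2^25 / tick rate */
            int days  = secs / 86400.0f;     /* non-leap days */
            int years = secs / 31556908.8f;  /* approximate average year */
            int yDay  = days % 365;
            debugPrintf("Y %d D %d", years, yDay);
        }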

    *: Lightly modified to compile on my system.