I'm trying to convert non-integers to and from hexadecimal and decimal representations. This is trivial for integers, but I can't get it to work the way I want for non-integers.
I'm looking for the hex representation of the number, not a hex representation of its binary encoding.
I've gotten this far:
```python
float.fromhex('2a.0')  # returns 42.0 (a float), fine
float(42.0).hex()      # returns '0x1.5000000000000p+5' (a string), not fine
```
float.hex() returns a string holding the number in exponential representation. Is there a formatter that will convert it to a non-exponential representation? I want '0x2a.0'. Is this possible?
I guess what you actually want is to get 0x2a.0 back instead of 0x1.5000000000000p+5?
Unfortunately, the built-in format specifiers don't support hexadecimal formatting of floating-point numbers. The hex method in your example doesn't accept any arguments to, e.g., suppress the scientific notation. I also couldn't find any library that solves this problem.
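To illustrate the gap (a quick check, assuming CPython's standard behavior): the "x" format code works for ints but is rejected for floats, and float.hex has no knob to change its output form:

```python
# The "x" format code formats ints as hex, but floats reject it.
print(f"{42:x}")  # -> 2a

try:
    f"{42.0:x}"
except ValueError:
    print("floats don't support the 'x' format code")

# float.hex always uses the exponential 0x1.xxxp+e form and takes no arguments.
print((42.0).hex())  # -> 0x1.5000000000000p+5
```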
If you are concerned about performance, you may want to implement this in C. In that case, the C implementation of float.hex may be a good starting point. Interestingly, it seems (to me) like they do extra work to perform the exponential shift but don't provide a way to bypass it.
I played around a bit, and it's actually much easier to work with the floating-point number directly than to mess around with the strings from float.hex (which was my original idea). Here is what I came up with:
```python
def float_to_hex(number: float, precision: int = 6) -> str:
    """Format a float as non-exponential hex, e.g. 42.0 -> '0x2a'."""
    if precision == 0:
        return hex(int(number))
    sign = "-" if number < 0 else ""
    number = abs(number)
    # Scale so the fractional part becomes the low `precision` hex digits.
    hex_int = hex(int(number * 16 ** precision))[2:]
    # Left-pad so at least one digit remains before the point.
    if len(hex_int) <= precision:
        hex_int = "0" * (precision - len(hex_int) + 1) + hex_int
    # Strip trailing zeros, then a trailing point. (A single rstrip("0.")
    # would also eat significant zeros before the point, e.g. 16.0 -> '0x1'.)
    hex_float = f"{hex_int[:-precision]}.{hex_int[-precision:]}".rstrip("0").rstrip(".")
    return f"{sign}0x{hex_float}"
```
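For the reverse direction no custom code is needed: float.fromhex already accepts the non-exponential notation (as the question's first snippet shows), as well as the exponential form, so output in the '0x2a.0' style parses right back:

```python
# float.fromhex parses both non-exponential and exponential hex notation.
print(float.fromhex("0x2a.0"))                # -> 42.0
print(float.fromhex("-0x2.4"))                # -> -2.25
print(float.fromhex("0x1.5000000000000p+5"))  # -> 42.0
```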