I want a 128 bit integer because I want to store results of multiplication of two 64 bit numbers. Is there any such thing in gcc 4.4 and above?
Apart from C23 `_BitInt`, a primitive 128-bit integer type in GCC is only ever available on 64-bit targets, so you need to check for availability even if you have already detected a recent GCC version. In theory gcc could support TImode integers on machines where it would take 4x 32-bit registers to hold one, but I don't think there are any cases where it does.
In C++, consider a library such as `boost::multiprecision::int128_t`. It doesn't use compiler-specific wide types, though: it's always sign/magnitude (a 128-bit magnitude plus a separate sign bit, so a wider value-range than 128-bit 2's complement). You might want an `#ifdef` to typedef GCC's `__int128` or Clang's `_BitInt(128)` on systems that have them, and only typedef to Boost as a fallback, if your use-case is ok with the wider value-range. See also @phuclv's answer on another question.
ISO C23 will let you `typedef unsigned _BitInt(128) u128`, modeled on Clang's feature originally called `_ExtInt()`, which works even on 32-bit machines; see a brief intro to it. GCC13 `-std=gnu2x` doesn't even support that syntax yet, but current nightly GCC14 trunk does support it with `-std=gnu23` (Godbolt).
GCC 4.6 and later have `__int128` / `unsigned __int128` defined as a built-in type. Use `#ifdef __SIZEOF_INT128__` to detect it.
GCC 4.1 and later define `__int128_t` and `__uint128_t` as built-in types. (You don't need `#include <stdint.h>` for these, either. Proof on Godbolt.)
I tested on the Godbolt compiler explorer for the first versions of compilers to support each of these 3 things (on x86-64). Godbolt only goes back to gcc4.1, ICC13, and clang3.0, so I've used <= 4.1 to indicate that the actual first support might have been even earlier.
| compiler | legacy `__uint128_t` | recommended(?) `[unsigned] __int128` | one way of detecting support: `#ifdef __SIZEOF_INT128__` |
|----------|----------------------|--------------------------------------|----------------------------------------------------------|
| gcc      | <= 4.1               | 4.6                                  | 4.6                                                      |
| clang    | <= 3.0               | 3.1                                  | 3.3                                                      |
| ICC      | <= 13                | <= 13                                | 16 (Godbolt doesn't have 14 or 15)                       |
If you compile for a 32-bit architecture like ARM, or x86 with `-m32`, no 128-bit integer type is supported with even the newest version of any of these compilers. So you need to detect support before using it, if it's possible for your code to work at all without it.
The only direct CPP macro I'm aware of for detecting it is `__SIZEOF_INT128__`, but unfortunately some old compiler versions support the type without defining the macro. (And there's no macro for `__uint128_t`, only the gcc4.6-style `unsigned __int128`.) See: How to know if `__uint128_t` is defined.
Some people still use ancient compiler versions like gcc4.4 on RHEL (RedHat Enterprise Linux), or similar crusty old systems. If you care about obsolete gcc versions like that, you probably want to stick to `__uint128_t`. And maybe detect 64-bitness in terms of `sizeof(void*) == 8` as a fallback for `__SIZEOF_INT128__` not being defined. (I think GNU systems always have `CHAR_BIT==8`, although I might be wrong about some DSPs.) That will give a false negative on ILP32 ABIs on 64-bit ISAs (like x86-64 Linux x32, or AArch64 ILP32), but this is already just a fallback / bonus for people using old compilers that don't define `__SIZEOF_INT128__`.
There might be some 64-bit ISAs where gcc doesn't define `__int128`, or maybe even some 32-bit ISAs where gcc does define it, but I'm not aware of any.
GCC-internally, these are TImode integers (GCC internals manual). (Tetra-integer = 4x the width of a 32-bit `int`, vs. DImode = double width, vs. SImode = plain `int`.) As the GCC manual points out, `__int128` is supported on targets that support a 128-bit integer mode (TImode).

```c
// __uint128_t is pre-defined equivalently to this
typedef unsigned uint128 __attribute__ ((mode (TI)));
```
There is an OImode in the manual, oct-int = 32 bytes, but current GCC for x86-64 complains "unable to emulate 'OI'" if you attempt to use it.
Random fact: ICC19 and g++/clang++ `-E -dM` define:

```c
#define __GLIBCXX_TYPE_INT_N_0 __int128
#define __GLIBCXX_BITSIZE_INT_N_0 128
```

@MarcGlisse commented that's the way you tell libstdc++ to handle extra integer types (overload `abs`, specialize type traits, etc.)

`icpc` defines those even with `-xc` (to compile as C, not C++), while `g++ -xc` and `clang++ -xc` don't. But compiling with actual `icc` (e.g. select C instead of C++ in the Godbolt language dropdown) doesn't define this macro.
The test function was:

```c
#include <stdint.h>   // for uint64_t
#define uint128_t __uint128_t
//#define uint128_t unsigned __int128

uint128_t mul64(uint64_t a, uint64_t b) {
    return (uint128_t)a * b;
}
```
Compilers that support it all compile it efficiently, to

```asm
mov     rax, rdi
mul     rsi
ret                     # return in RDX:RAX, which mul uses implicitly
```
There isn't a `printf` conversion for `__int128` in glibc, nor an `ostream::operator<<(__int128)` or `to_chars` in libstdc++ or libc++, as far as I know. And there isn't support for integer literal constants wider than `[unsigned] long long`.
Related:

- Constructing an `__int128` from two literal constants.
- `asctou128` and `u128toasc` functions, simple and readable but not optimized to try to work with only 64-bit when possible (especially valuable for itoa, to minimize 128-bit division).
- A `to_chars` answer on one of those old questions.

Some answers on those questions have implementations, some of them more efficient than others. Dividing by 1e19 to start with is a good way to split the decimal digits into two chunks so you can use 64-bit division.
Hex is of course very easy for any width, since each 4-bit group corresponds to one hex digit independently of other bits (16 being a power of 2, unlike 10). It's a good use-case for SIMD: How to convert a binary integer number to a hex string? has an answer with AVX2 and AVX-512 intrinsics for `uint32_t`; my asm answer has some versions that could be ported to intrinsics for wide integers.