I've heard that the x87 FPU works with 80-bit floats, so even if I want to calculate using 64-bit numbers, it would do the arithmetic at 80-bit precision and then convert the result. But which is fastest in Swift on x86-64 when doing arithmetic: `Double` or `Float80`?
While it's true that the x87 FPU operates internally at 80-bit "extended" precision (at least by default; this is configurable, and 32-bit builds following the macOS ABI actually set the internal precision to 64-bit), binaries targeting x86-64 no longer use x87 FPU instructions. Every x86 chip that implements the 64-bit long mode extension also supports SSE2 (in fact, the AMD64 specification requires it), so a 64-bit binary can always assume SSE2 is available. That is what the compiler uses to implement floating-point operations, because it is much more efficient and far easier for a compiler to optimize around.
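If you want to confirm this on your own machine, you can ask the Swift compiler for its assembly output and compare what it emits for each type. Below is a minimal sketch; the function names are just for illustration, and the `swiftc -O -emit-assembly` invocation in the comment is one way to get at the output (exact flags may differ by toolchain):

```swift
// double_vs_float80.swift
// Build with something like: swiftc -O -emit-assembly double_vs_float80.swift
// (flags assumed; check `swiftc --help` on your toolchain) and inspect the .s output.

func sumDouble(_ values: [Double]) -> Double {
    var total = 0.0
    for v in values { total += v }   // expect SSE2 scalar adds (addsd) on x86-64
    return total
}

func sumFloat80(_ values: [Float80]) -> Float80 {
    var total: Float80 = 0
    for v in values { total += v }   // expect x87 sequences (fldt/faddp) instead
    return total
}
```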
Even 32-bit builds in the modern era assume SSE2 as a baseline, and certainly on the Mac, since SSE2 was introduced with the Pentium 4, which predates Apple's switch to Intel x86 chips. Every x86 chip ever used in an Apple machine supports SSE2.
So no, you aren't going to see any performance improvement by using an 80-bit extended-precision type. You wouldn't have seen one from x87 instructions even if the compiler still generated them, and you certainly aren't going to see one on x86-64, because SSE2 tops out at 64-bit precision in hardware. Any 80-bit precision operations either have to be implemented in software or force the compiler to fall back to x87 instructions, which means you lose the nice features and tangible performance improvements of SSE2.
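If you'd rather measure it than take my word for it, a crude benchmark along these lines should show `Double` comfortably ahead. This is only a sketch: the timing approach is deliberately simple, `Float80` only exists on Intel targets, and you should build with `-O` so the loops aren't dominated by debug-mode overhead.

```swift
import Foundation

// A rough micro-benchmark sketch: build with `swiftc -O bench.swift` and run
// the resulting binary. Treat the numbers as indicative only.

func measure(_ label: String, _ body: () -> Void) {
    let start = Date()
    body()
    let ms = Date().timeIntervalSince(start) * 1000
    print("\(label): \(ms) ms")
}

let n = 10_000_000

measure("Double") {
    var acc = 0.0
    for i in 1...n { acc += 1.0 / Double(i) }
    print("  sum = \(acc)")   // printing the result keeps the loop from being optimized away
}

measure("Float80") {
    var acc: Float80 = 0
    for i in 1...n { acc += 1.0 / Float80(i) }
    print("  sum = \(acc)")
}
```

The exact ratio will depend on your chip and compiler version, but I'd expect the `Double` loop to win by a wide margin.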