I was looking at the minimum CPU specified for a recently released game, and it got me thinking about CPU compatibility. One of the main selling points of x86 and AMD64 is backwards compatibility. However, each new generation of x86-based CPUs usually includes new extensions that support new instructions.
I run Gentoo Linux and create native builds of my userspace software, attempting to maximize its performance on my system. I also like it because it gives me a lot of control and the ability to apply custom patches and easily rebuild within the package manager. However, building software from source puts me in the minority of computer users these days. This brings me to my main question: while Linux is an interesting place to explore, I am mainly curious about how this is handled on Windows and Mac OS X. Are these new instructions actually used by most people, and if so, how is compatibility maintained?
My understanding is that on binary x86 distros, software is built for some ancestral, minimum CPU required to run that distro. Every x86 CPU since then should be able to run that software, as they are backwards compatible. However, this means that these distros do not take advantage of the newer instructions for users who have newer CPUs. Some software might attempt to detect CPU features at runtime. It may then fill out a function pointer table according to the results, or in some other way adjust its execution pathways to use the newer instructions best suited to that CPU's features, something like the sketch below.
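To make concrete what I mean by that, here is a minimal sketch of the function-pointer approach, assuming GCC or Clang so that `__builtin_cpu_supports()` and the `target` attribute are available (the `sum_*` names are invented for illustration):

```cpp
// Minimal sketch of runtime dispatch via a function pointer, assuming
// GCC or Clang. The sum_* names are made up for illustration.
#include <cstddef>

// Baseline version: restricted to instructions every x86-64 CPU supports.
static long sum_baseline(const int* data, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) total += data[i];
    return total;
}

// Same code, but AVX2 is enabled for this one function, so the optimizer
// is free to use AVX2 instructions when compiling it.
__attribute__((target("avx2")))
static long sum_avx2(const int* data, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) total += data[i];
    return total;
}

long sum(const int* data, std::size_t n) {
    // Pick the implementation once, on first call, based on what the CPU
    // reports; afterwards every call goes through the chosen pathway.
    static long (*impl)(const int*, std::size_t) =
        __builtin_cpu_supports("avx2") ? sum_avx2 : sum_baseline;
    return impl(data, n);
}
```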
Beyond Linux is where I really don't know how any of this is handled. Are Windows programs built to some older CPU standard? If that is the case, all these shiny new extensions would seldom be taken advantage of by any users or software (unless run-time detection is employed). I have a suspicion that the Windows kernel traps the illegal-instruction exceptions raised when newer CPU instructions are used on older hardware and emulates the behavior. Is that how it is handled? Or is it that some software, perhaps such as the game I mentioned at the start, is built for newer CPU models, so those old CPUs slowly lose the ability to run newer software? I run a CPU that is 11 years old. Maybe I will soon be unable to run common Windows applications.
Many programs will be built for some minimum CPU type, likely quite an old one. For example, if we build for Nehalem or Westmere these days, those microarchitectures are roughly a decade old and will cover most users.
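As a rough illustration of what "built for a minimum CPU type" means in practice, the chosen baseline is baked in at compile time and surfaces as predefined macros. A minimal sketch, assuming GCC or Clang invoked with something like `-march=nehalem` (the flag and the feature levels here are just examples):

```cpp
// Minimal sketch: the compiler predefines feature macros (__SSE4_2__,
// __AVX2__, ...) that match the baseline it was told to target, so a
// program can report which instruction sets this particular binary assumes.
#include <cstdio>

int main() {
#if defined(__AVX2__)
    std::puts("built assuming AVX2");
#elif defined(__SSE4_2__)
    std::puts("built assuming SSE4.2 (roughly the Nehalem baseline)");
#else
    std::puts("built for a generic x86-64 baseline");
#endif
    return 0;
}
```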
Some compilers, like Clang and GCC, have auto-dispatching features, but MSVC basically does not. See "Does MSVC 2017 support automatic CPU dispatch?"
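For the auto-dispatching side, here is a minimal sketch of GCC's function multi-versioning, assuming GCC (recent Clang also accepts the attribute) on a platform with ifunc support such as Linux/glibc; the compiler emits one clone per listed target plus a resolver that picks the best clone when the binary is loaded:

```cpp
// Minimal sketch of compiler-driven dispatch with target_clones. One
// function body in the source becomes an AVX2 clone, an SSE4.2 clone,
// and a baseline ("default") clone, selected automatically at load time.
#include <cstddef>

__attribute__((target_clones("avx2", "sse4.2", "default")))
long sum(const int* data, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i) total += data[i];
    return total;
}
```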
That leaves applications and libraries to do their own manual dispatching based on CPU type. OpenSSL is one popular library that does this, because newer CPUs have cryptography-focused instructions (such as AES-NI) that can make it much faster. Most programs won't bother, though. Here's one article about it, from someone who has done it themselves: https://lemire.me/blog/2020/07/17/the-cost-of-runtime-dispatch/
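If you do have to dispatch by hand on Windows, the usual starting point is the CPUID instruction. A minimal sketch, assuming MSVC and its `__cpuid` intrinsic from `<intrin.h>` (the helper name is invented, and what you do with the answer, e.g. filling a function pointer table, is up to the application):

```cpp
// Minimal sketch of manual feature detection with MSVC's __cpuid intrinsic.
// CPUID leaf 1 returns feature bits in ECX/EDX; bit 25 of ECX is AES-NI.
#include <intrin.h>

static bool cpu_has_aesni() {
    int regs[4] = {0, 0, 0, 0};         // EAX, EBX, ECX, EDX after the call
    __cpuid(regs, 1);                   // leaf 1: processor info and feature bits
    return (regs[2] & (1 << 25)) != 0;  // ECX bit 25 = AES-NI
}
```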
JIT compilation is an alternative, since it compiles directly on the target machine and can use whatever instructions that machine supports. Many languages have this as an option, if not as their standard mode of operation.