How are extremely large numbers handled in video games? In Cookie Clicker, for example, numbers can get as large as 1 duodecillion (39 zeroes: 1,000,000,000,000,000,000,000,000,000,000,000,000,000) and even bigger. How do games manage to process numbers this large? Do they implement a system where every 1,000 thousands is 1 million, and so on, splitting the value between multiple variables?
Floating-point math trades off precision for range: part of the value's bits are used as an exponent, which lets it represent very large or very small magnitudes efficiently.
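For example, a JS Number is a 64-bit IEEE 754 double, so it can hold a duodecillion as a magnitude while only keeping about 15-16 significant decimal digits (a quick TypeScript sketch; the same code runs as plain JS):

```ts
// A Number can hold a duodecillion as a magnitude...
const duodecillion = 1e39;
// ...but precision runs out long before that: adding 1 changes nothing,
// because neighboring representable doubles are much more than 1 apart here.
console.log(duodecillion + 1 === duodecillion); // true
// Exact integer arithmetic is only guaranteed up to 2^53 - 1:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
```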
For larger integers, some languages offer wider fixed-size types, either with direct hardware support or emulated in software. Where a language has built-in support for these larger fixed-width integer sizes, the compiler will usually take care of the details.
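JS/TS has no native 128-bit integer type, but as a rough illustration, the standard BigInt.asUintN can emulate fixed-width wrap-around semantics, similar in spirit to how a compiler lowers a wide integer type onto narrower hardware words:

```ts
// Emulated fixed-width 128-bit unsigned arithmetic: BigInt.asUintN
// truncates a BigInt to its low 128 bits, like a u128 would wrap.
const MAX_U128 = BigInt.asUintN(128, -1n); // 2^128 - 1
console.log(BigInt.asUintN(128, MAX_U128 + 1n)); // 0n: wraps around
```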
Many languages (e.g. JS) also offer a BigInt type, which dynamically allocates enough bytes to represent the number. Python does this automatically for all its integers.
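For example, with the standard JS BigInt (the n suffix marks a BigInt literal):

```ts
// BigInt grows to whatever size the value needs.
const duodecillion = 10n ** 39n; // exactly 1 followed by 39 zeroes
console.log(duodecillion + 1n); // 1000000000000000000000000000000000000001n
// Mixing BigInt and Number throws, so conversions must be explicit:
console.log(Number(duodecillion)); // 1e+39 (approximate again)
```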
Of course, doing arithmetic on these number types comes at a runtime cost, but fortunately computers are really fast, and I would suspect Cookie Clicker might be using the standard JS BigInt.
> Do they implement a system where every 1,000 thousands is 1 million, and so on, splitting the value between multiple variables?
Internally these all work in binary, of course, and will typically just use all the bits they have available, "moving to the next" byte/word once a limb fills its 8/32/64 bits.
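To make that concrete, here is a toy sketch of the idea from the question, just splitting at a power of two instead of at 1,000: the value is stored as an array of base-2^16 "limbs", and addition carries overflow from one limb into the next. This is my own illustration, not how any particular library lays it out:

```ts
// Toy arbitrary-precision addition over base-2^16 limbs,
// least significant limb first. Real libraries use full machine
// words and far more sophisticated algorithms.
const BASE = 1 << 16; // 65536

function addBig(a: number[], b: number[]): number[] {
  const out: number[] = [];
  let carry = 0;
  for (let i = 0; i < Math.max(a.length, b.length) || carry !== 0; i++) {
    const sum = (a[i] ?? 0) + (b[i] ?? 0) + carry;
    out.push(sum % BASE);           // keep the low 16 bits in this limb
    carry = Math.floor(sum / BASE); // overflow "moves to the next" limb
  }
  return out;
}

// [0xFFFF] + [1] overflows the first limb into a second: [0, 1] = 65536
console.log(addBig([0xFFFF], [1])); // [ 0, 1 ]
```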
Most developers aren't implementing this themselves; they rely on libraries like GMP, or on whatever the programming language itself provides.