I don't have a CS background, so I apologize for what I assume is a basic question. But out of curiosity, if I were to sort [3,2,1] vs [3e100, 2e100, 1e100], is there a speed difference (even if minute)?
There may or may not be. There is a difference between "computer science", which has to do with mathematical theories and principles, and "software engineering" or "programming", which has to do with making actual software.
In computer science, a detail like that doesn't matter in the general case. If you define a scenario on the chalkboard where comparing bigger numbers costs more, then yes, there's a difference in speed; you can just as easily define the chalkboard scenario so there isn't one. It's up to you and whatever problem space you're working in, but either way, it's primarily a matter of chalkboard math, not of a real, literal computing machine.
In software engineering / programming / development / whatever you want to call it, it kind of depends on the situation. As a general rule of thumb, sorting [2, 1, 3] and sorting [200, 1, 30000] would likely take similar, if not equal, amounts of time on average. However, sorting [2, 1, 3] and sorting [2000000000, 1, 300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] would likely see a meaningful difference in speed.
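If you want to try that for yourself, here's a rough little sketch in Python (I'm picking Python because its integers grow to whatever size they need automatically, so huge values like that are easy to make; the exact timings depend on your machine and are only meant for comparison):

import random
import timeit

# A list of small numbers (each fits comfortably in a machine word)...
small = [random.randrange(1000) for _ in range(100_000)]
# ...and a list of huge numbers (each needs many machine words to store).
huge = [random.randrange(10**90) for _ in range(100_000)]

# sorted() copies the list, so every run sorts the same unsorted data.
t_small = timeit.timeit(lambda: sorted(small), number=20)
t_huge = timeit.timeit(lambda: sorted(huge), number=20)

print(f"small numbers: {t_small:.3f} s")
print(f"huge numbers:  {t_huge:.3f} s")

The second timing should come out slower, for the reasons explained below.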
The reason largely has to do with the number of bits used to store the numbers. It can also have to do with where different bytes and stuff end up in memory and that kind of thing, but the difference in bit size alone is enough to demonstrate a decent example.
Take, for instance, a 32-bit integer. It is very common to use 32 bits (or in some cases, 64, but 32's more common) to store a number. If we use 32 bits for a non-negative integer, we can store any number from 0 to 4,294,967,295. This is how a few numbers within that range would be stored in the computer:
0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
1: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01
2: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10
3: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 11
4: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00
5: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 01
6: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 10
7: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 01 11
8: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 10 00
...
15: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 11 11
16: 00 00 00 00 00 00 00 00 00 00 00 00 00 01 00 00
...
4,294,967,295: 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
As you can see, 0, 1, 15, and 4,294,967,295 each take the same amount of space. Basically speaking, the computer goes through the same amount of hassle to do arithmetic with any one of these numbers as it does with any of the rest of them. They may be bigger or smaller conceptually, but in the computer, they all take the same amount of information to store.
(There can be a little bit of difference, for reasons that sit very close to the hardware itself; however, I'm not personally sure how big a difference that makes, and it's outside the scope of this question. Software and hardware are two different fields.)
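If you'd like to print patterns like that table yourself, Python can do it in a couple of lines; the 032b format just pads each number out to the full 32 binary digits:

# Print a few numbers padded out to a full 32-bit pattern.
for n in [0, 1, 2, 3, 15, 16, 4_294_967_295]:
    print(f"{n:>13,}: {n:032b}")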
Now... let's say we want to store the big, huge number mentioned above: namely, 300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000.
Well, heck, 300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 is a whole lot bigger than 4,294,967,295, and 4,294,967,295 is already the biggest number that can be stored with 32 bits.
Well, then, what about our 64-bit option? The biggest non-negative integer 64 bits can hold is 18,446,744,073,709,551,615, which is still a lot smaller than the big, huge number listed above. So completely direct, run-of-the-mill 64-bit storage is out of the question as well.
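Those limits are easy to double-check, since they're just 2^32 - 1 and 2^64 - 1 (here I'm treating the huge number as 3 x 10^(89), the way it's broken down further below):

max32 = 2**32 - 1   # 4,294,967,295
max64 = 2**64 - 1   # 18,446,744,073,709,551,615
big = 3 * 10**89    # the big, huge number from above

print(f"{max32:,}")
print(f"{max64:,}")
print(big > max64)  # True - it doesn't come close to fitting in 64 bits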
So, having run out of the typical integer sizes, you start breaking the huge number up into smaller chunks. You don't store it all in one 32-bit or 64-bit location; instead, you store it across several.
And that's where you see the difference in speed. With smaller numbers that each fit into 32 or 64 bits (or even 8 or 16), the computer just has to look in one little spot for each number. With big, huge numbers, it has to look across potentially several spots. And when it has to look across several spots, it takes additional time - yes, absolutely.
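You can actually watch this chunking happen in Python, which quietly does the "spread it across several spots" trick for you behind the scenes (the exact byte counts are a CPython implementation detail, so treat them as illustrative):

import sys

# How many bits each value needs, and how much memory the Python
# object takes - big integers are stored as a row of fixed-size
# chunks, so the size grows in steps as the number gets bigger.
for n in [3, 3_000_000_000, 3 * 10**89]:
    print(f"{n.bit_length():>4} bits needed, {sys.getsizeof(n):>4} bytes used")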
Now, all of that said, you can still store the big, huge number (30000000...) in 32 bits or 64 bits if you really wanted to. However, you can't just store it the basic way. You have to use a special format, with a special meaning for all those 1s and 0s. You could arrange them in a way that's based on 3 x 10^(89), not on 30000000000.... For example, you could do something like this:
89| 3
-----------|-----------------------------------
01 01 10 01|00 00 00 00 00 00 00 00 00 00 00 11
That would be 32 bits, but it just uses the first 8 bits to store the 10^(89) part, then uses the remaining 24 bits to store the 3 part.
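Here's what packing and unpacking that made-up format could look like in code. To be clear, this exact layout is just the one sketched above, not any standard format - though real floating-point formats are built on the same basic idea, just in base 2 and with extra rules:

# Pack: exponent (89) in the top 8 bits, significant digits (3)
# in the bottom 24 bits, all inside a single 32-bit value.
def pack(significand, exponent):
    return (exponent << 24) | significand

# Unpack: pull the two pieces back out with masks and shifts.
def unpack(packed):
    return packed & 0xFFFFFF, packed >> 24

packed = pack(3, 89)
print(f"{packed:032b}")   # 01011001 then 24 bits ending in 11
sig, exp = unpack(packed)
print(sig * 10**exp)      # back to the big, huge number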
The problem this introduces is complication. It complicates the job of the programmers, QA people, and potentially other people involved.
However, it also complicates the way the computer has to handle the number. The computer itself will not understand that format above. Your code - or some tool your code is built on top of, potentially the programming language itself - will have to translate it back and forth into a format the computer can make sense of. And even then, the number will still wind up being so big that the computer can only handle it one piece at a time.
So wrapping up, a few things here: