Here is a simple example:
int function(int n) {
    // code
}

long long function(long long n) {
    // exactly the same code, but intended to work with bigger values
}
I was thinking about saving memory (when callers pass small types) AND making the function more universal (when callers pass big types). I hope you get my idea here: don't use long long if you're only putting something like 15 in there; instead, use an overload for smaller types (like int).
I think I'm missing something here. Should I even write this overload? Can I make the function both universal AND optimal without overloading it for every existing integer type (short, int, long, long long, ...)?
If you're writing code for reasonably recent personal computers or web servers, having a separate int version is probably a premature optimization: most of these machines have 64-bit processors anyway, so calculations with a 64-bit long long should be fast.
OTOH, if you're writing for an 8/16/32-bit embedded system on which 64-bit arithmetic is dramatically slow, it might be a worthwhile speed optimization.
But it's still unlikely to help with memory usage, because having duplicate implementations of a function increases the size of your compiled code, cancelling out the benefit of using smaller variables. The exception is when the function takes an array of possibly millions of integers rather than a single value, in which case the data size matters more than the code size.
If you do insist on having overloaded functions for different integer sizes, I recommend using Ranoiaetep's suggestion of making it a template function, so as to avoid duplication in the source code.
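For example, here is a minimal sketch of the template approach (the body is hypothetical, since the original snippet leaves the actual computation out):

#include <iostream>

// One implementation; the compiler instantiates a version per type used.
template <typename T>
T function(T n) {
    return n * 2; // hypothetical body - the real computation goes here
}

int main() {
    int a = function(15);              // instantiates function<int>
    long long b = function(1LL << 40); // instantiates function<long long>
    std::cout << a << ' ' << b << '\n';
}

Note that each instantiation is still compiled separately, so this avoids duplication in the source code, not in the binary; the code-size caveat above still applies.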