Consider this code:

    def isPrime(n):
        # Trial division: test every candidate divisor from 2 to n - 1.
        if n < 2:
            return False   # 0 and 1 are not prime
        for i in range(2, n):
            if n % i == 0:
                return False
        return True
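A couple of sample calls as a sanity check (the values here are chosen arbitrarily):

    print(isPrime(2), isPrime(7), isPrime(9))   # True True False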
The loop in isPrime runs O(n) times, and each iteration does some work to compute n mod i (as a really conservative upper bound, this can certainly be done in time O(n^3)). Therefore, the overall algorithm runs in time O(n^4), and possibly a lot faster.
Our algorithm runs in time O(n^4), but what is that as a function of the number of input bits? Writing out the number n takes O(log n) bits. So if we let x be the number of bits required to write out the input n, then n <= 2^x, and the O(n^4) runtime becomes O(2^(4x)), which is not a polynomial in x.
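As a quick numeric illustration of that substitution (the value of n below is arbitrary): x bits suffice to write n, so n <= 2^x, and therefore n^4 <= 2^(4x).

    n = 1_000_003          # arbitrary example input
    x = n.bit_length()     # bits needed to write n: floor(log2 n) + 1, here 20
    print(n ** 4)          # the O(n^4) bound as a function of the value n
    print(2 ** (4 * x))    # the same bound as a function of x; at least n^4, since n <= 2^x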
My question here is: to write a number n in bits, it must take log n bits (base 10). Therefore, if we let x be the number of bits, then the actual runtime must be O(10^(4x)). This is drastically different from O(2^(4x)). How can we afford to make such an approximation?
Converting between logarithm bases is equivalent to multiplying by a constant, since log_b n = (log_2 n) / (log_2 b). Multiplying by a constant does not affect the big-O complexity class, so the base of the logarithm has no effect on the analysis.
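A minimal sketch of that identity in Python (the value of n is arbitrary):

    import math

    n = 1_000_003                        # arbitrary example input
    bits = math.log2(n)                  # size measured in bits
    digits = math.log10(n)               # size measured in decimal digits
    print(bits / digits)                 # always log2(10), about 3.3219, independent of n
    print(math.isclose(bits, digits * math.log2(10)))   # True: changing base is a constant factor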
However, the example in your question isn't really about logarithms; it's kind of the opposite, since it's about exponential expressions. I also don't entirely follow the example, because the phrase "it must take log n bits (base 10)" doesn't make sense to me: a number n actually takes about log n (base 2) bits, not base 10 as you assert. (And if you did measure the input size in decimal digits, d = log n (base 10), then 10^(4d) = n^4 = 2^(4x) anyway; the two expressions describe the same runtime, just measured against different units of input size.)
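To see the base-2 claim concretely (the sample value is arbitrary), compare Python's exact bit count, which tracks log n (base 2), with the decimal digit count, which tracks log n (base 10):

    n = 10 ** 50 + 3          # arbitrary large example input
    print(n.bit_length())     # 167 bits: floor(log2 n) + 1
    print(len(str(n)))        # 51 decimal digits: floor(log10 n) + 1
    # The ratio 167/51, about 3.27, approaches log2(10) (about 3.32) as n grows.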