Let's say we have an array A whose first element is at address adr. When we want to access the nth element, at machine level we must first calculate adr + n*size (where size is the size of one element in bytes), then access that address.
As we can see, the calculation of the nth element's address depends on n, and a larger n seems to mean more calculation steps.
So why do we say that accessing an array element takes constant time?
I searched for why this is, but I didn't find an answer.
Yes, it is possible that multiplication of larger numbers takes a bit longer on some processors, but these days we're talking about a few extra cycles even for very, very large numbers. Put another way, multiplication scales very well indeed.
Even using a shift-and-add approach, we can put a maximum on the complexity of the multiplication: say, 32 or 64 iterations of the shift-add loop.
> As we see the calculation of nth element's address depend on n, and big n means much more step of calculation
Array indexing is done using the following components:

- A base pointer to the start of the array, treated by the program and processor as an unsigned integer whose size is the address width of the processor (e.g. 64 or 32 bits, depending on the processor).
- An index into the array, which needs to be scaled by the size of the array's elements; this index is ultimately also limited to the same bit width as the base pointer.
I'd like to point out that in most languages, for the built-in array mechanisms, there is no way to allocate an array for which the indexing calculation will overflow the fixed bit width: the allocation will fail, because it asks for more memory than the processor can address. This means that all indexing and pointer arithmetic operations that access valid memory (i.e. where allocation succeeded) cannot overflow and do not need overflow checks in the calculation. (Unless indexing operations are done using a type narrower than the processor's unsigned word width, which, sadly, can happen through programmers' choices and/or language defaults.)
Because you cannot allocate an array that exceeds the processor's addressing capabilities, indexing computations can use fixed-size arithmetic. And because of that, we know that the processor can do such indexing in O(1).