
Time complexity of this dynamic programming algorithm to get nth fibonacci number


I'm confused about the time complexity of this algorithm:

def fib(n):
    if n == 0:
        return 0
    previousFib, currentFib = 0, 1
    for _ in range(n - 1):  # loop is skipped if n = 1
        newFib = previousFib + currentFib
        previousFib = currentFib
        currentFib = newFib
    return currentFib

My intuition was that this algorithm takes O(n²) time with O(n) space. But according to this Wikipedia page, the time complexity is O(n) and the space is O(1). My reasoning was that previousFib + currentFib gets very expensive very quickly. Specifically, if the size of previousFib and currentFib grows linearly with n, then shouldn't they both constitute O(n) space, and shouldn't previousFib + currentFib constitute an O(n) operation?


Solution

  • There are different models for computational complexity. As Wikipedia - Computational complexity states:

    The number of arithmetic operations is another resource that is commonly used. In this case, one talks of arithmetic complexity. If one knows an upper bound on the size of the binary representation of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity by a constant factor.

    For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called bit complexity in this context, may be much larger than the arithmetic complexity.

    This can be a source of confusion, all the more because different Wikipedia articles use different models without being explicit about which one they are using.

    The Wikipedia article you referenced uses the first (arithmetic complexity) model, where arithmetic operations like addition are assumed to take constant time because the integers involved fit in fixed-size memory (like 64-bit integers).
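
    To see where that fixed-size assumption stops holding, here is a small sketch that simulates unsigned 64-bit arithmetic with a mask (Python's own integers are arbitrary-precision, so the wraparound has to be simulated):

    ```python
    MASK64 = (1 << 64) - 1  # simulate unsigned 64-bit arithmetic

    def fib_exact(n):
        """Iterative Fibonacci with Python's arbitrary-precision ints."""
        previous, current = 0, 1
        for _ in range(n):
            previous, current = current, previous + current
        return previous

    def fib_64bit(n):
        """Same loop, but every addition wraps around at 2**64."""
        previous, current = 0, 1
        for _ in range(n):
            previous, current = current, (previous + current) & MASK64
        return previous

    print(fib_64bit(93) == fib_exact(93))  # True: F(93) still fits in 64 bits
    print(fib_64bit(94) == fib_exact(94))  # False: F(94) overflows and wraps
    ```

    Past that point, a constant-time-addition analysis no longer describes what real fixed-width hardware computes.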

    You are correct in your analysis that Fibonacci numbers grow quickly: soon enough the fixed-size integer model cannot cope with them. For instance, an unsigned 64-bit representation can only hold values up to F₉₃. Since Fibonacci numbers grow that quickly (F(n) has O(n) bits), it seems only fair and practical to use the second (bit complexity) model, where an implementation would use a dynamic "big" integer type (as Python, for instance, offers out of the box). With those, arithmetic operations do not take constant time: addition has O(n) time complexity (where n is the index of the Fibonacci number), as you rightly put it. Summed over the n − 1 loop iterations, that gives the O(n²) total time your intuition suggested.
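
    The linear growth in size is easy to check empirically with Python's big integers: the number of bits in F(n) grows proportionally to n (the constant is log₂ φ ≈ 0.694, where φ is the golden ratio), so each addition late in the loop touches O(n) bits:

    ```python
    def fib(n):
        """Iterative Fibonacci using Python's built-in big integers."""
        previous, current = 0, 1
        for _ in range(n):
            previous, current = current, previous + current
        return previous

    # The bit length of F(n) grows linearly in n: roughly n * log2(phi) bits,
    # so the ratio bit_length / n should hover around 0.694 for every n.
    for n in (1_000, 2_000, 4_000):
        bits = fib(n).bit_length()
        print(n, bits, round(bits / n, 3))
    ```

    Doubling n doubles the number of bits in the operands, which is exactly why the cost of a single addition cannot be treated as constant in this model.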