performance · algorithm · recursion · sequences

Efficient Algorithm to Solve a Recursive Formula


I am given a function f(n), defined for all non-negative integers n as:

f(0) = 1

f(1) = 1

f(2) = 2

f(2n) = f(n) + f(n + 1) + n (for n > 1)

f(2n + 1) = f(n - 1) + f(n) + 1 (for n >= 1)

My goal is to find, for any given number s, the largest n where f(n) = s. If there is no such n, return None. s can be up to 10^25.

I have a brute force solution using both recursion and dynamic programming, but neither is efficient enough. What concepts might help me find an efficient solution to this problem?
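For reference, the brute-force recursion could be sketched like this in Python (my own transcription of the definition above, not the asker's actual code):

```python
def f(n):
    # Direct transcription of the recurrence from the question.
    if n <= 1:
        return 1                    # f(0) = f(1) = 1
    if n == 2:
        return 2                    # f(2) = 2
    m = n // 2
    if n % 2 == 0:                  # n = 2m, m > 1
        return f(m) + f(m + 1) + m
    return f(m - 1) + f(m) + 1      # n = 2m + 1, m >= 1
```

Even without memoization this terminates, but the call tree grows roughly linearly with n, which is what makes the brute-force search far too slow for s up to 10^25.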


Solution

  • I want to add a little complexity analysis and estimate the size of f(n).

If you look at one recursive call of f(n), you notice that the input n is basically halved before f is called two more times, where always one call gets an even and one an odd input.
So the call tree is basically a binary tree of depth log₂(n), where at each depth k half of the nodes contribute a summand of approximately n/2^(k+1).

    So the value of f(n) is in total about Θ(n/2 ⋅ log₂(n)).

Just to notice: this holds for even and odd inputs, but for even inputs the value is bigger by an additional summand of about n/2. (I use Θ-notation so I don't have to think too much about constants.)
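To illustrate the estimate, here is a quick empirical check (a sketch; it uses a memoized copy of f so that large inputs are cheap to evaluate):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    # Memoized version of the recurrence from the question.
    if n <= 1:
        return 1
    if n == 2:
        return 2
    m = n // 2
    if n % 2 == 0:
        return f(m) + f(m + 1) + m
    return f(m - 1) + f(m) + 1

# Compare f(n) against (n/2)*log2(n) for a few large inputs.
for k in (10, 15, 20):
    n = 2 ** k
    estimate = (n / 2) * math.log2(n)
    print(f"n=2^{k}: f(n)={f(n)}, ratio to estimate = {f(n) / estimate:.2f}")
```

The ratio stays bounded by a constant, consistent with f(n) = Θ(n ⋅ log(n)).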

    Now to the complexity:

    Naive brute force

To calculate f(n) naively you have to call f Θ(2^(log₂(n))) = Θ(n) times.
So if you want to calculate the values of f(n) until you reach s (or notice that there is no n with f(n) = s), you have to evaluate f about s⋅log₂(s) times, which is Θ(s²⋅log(s)) in total.

    Dynamic programming

If you store every result of f(n), the time to calculate one f(n) reduces to Θ(1) (but it requires much more memory). So the total time complexity reduces to Θ(s⋅log(s)).

Notice: since f(n) ≤ f(n+2) holds for all n, you don't have to sort the values of f(n) to be able to do a binary search on them.
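A sketch of this bottom-up approach (the function name largest_n_dp is my own); thanks to f(n) ≤ f(n+2) it can stop as soon as both the even and the odd subsequence have passed s:

```python
def largest_n_dp(s):
    # table[n] holds f(n); seeded with the three base cases.
    table = [1, 1, 2]
    best = None
    for n, val in enumerate(table):
        if val == s:
            best = n
    n = 3
    # Since f(n) <= f(n + 2), once the last even AND the last odd
    # value both exceed s, no larger n can satisfy f(n) == s.
    while min(table[-1], table[-2]) <= s:
        m = n // 2
        if n % 2 == 0:
            val = table[m] + table[m + 1] + m
        else:
            val = table[m - 1] + table[m] + 1
        table.append(val)
        if val == s:
            best = n        # later matches overwrite earlier ones
        n += 1
    return best
```

Because n is scanned in increasing order, the last match recorded is automatically the largest one, as the question asks.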

    Using binary search

    Algorithm (input is s):

1. Set l = 1 and r = s.
2. Set n = (l + r)/2 and round it to the next even number.
3. Calculate val = f(n).
4. If val == s, return n.
5. If val < s, set l = n; else set r = n.
6. If r − l ≤ 2, stop without a solution. Otherwise go to step 2.

If you found a solution, fine. If not, try it again, but round to odd numbers in step 2. If this also does not return a solution, no solution exists at all.

This takes Θ(log(s)) steps for the binary search and Θ(s) for each calculation of f(n), so in total you get Θ(s⋅log(s)).

As you can see, this has the same complexity as the dynamic programming solution, but you don't have to store anything.

Notice: r = s is not a valid initial upper limit for every s. However, if s is big enough, it holds. To be safe, you can change the algorithm: first check whether f(s) < s. If it is, set l = s and r = 2s (or 2s + 1 if n has to be odd).
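Putting the pieces together, the whole search could look like this (a sketch; instead of the fixed r = s I use exponential search to find a safe upper limit, which sidesteps the caveat above, and I memoize f so each evaluation stays cheap — the function names are my own):

```python
def largest_n_with_value(s):
    cache = {0: 1, 1: 1, 2: 2}

    def f(n):
        # Memoized recurrence from the question.
        if n not in cache:
            m = n // 2
            if n % 2 == 0:
                cache[n] = f(m) + f(m + 1) + m
            else:
                cache[n] = f(m - 1) + f(m) + 1
        return cache[n]

    def search(parity):
        # Search over n = 2k + parity; valid because f(n) <= f(n + 2).
        # Exponential search: find hi with f(2*hi + parity) > s.
        hi = 1
        while f(2 * hi + parity) <= s:
            hi *= 2
        # Binary search for the largest k with f(2k + parity) <= s.
        lo = 0
        while lo < hi:
            k = (lo + hi + 1) // 2
            if f(2 * k + parity) <= s:
                lo = k
            else:
                hi = k - 1
        n = 2 * lo + parity
        return n if f(n) == s else None

    # Try both parities and keep the larger hit, if any.
    hits = [n for n in (search(0), search(1)) if n is not None]
    return max(hits) if hits else None
```

Searching for the largest k with f(2k + parity) ≤ s (rather than any k with equality) also returns the rightmost match within each parity class, which is what the question asks for.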