
Is it possible to move from (a,b) to (c,d)?


The problem is to determine whether it is possible to move from a given point (a,b) to a target (c,d).

We are restricted to positive coordinates only.

The following two moves are possible:

(a,b) -> (a+b,b)
(a,b) -> (a,b+a)

For instance, (1,1) to (5,4) is True. You can do the following: using the 2nd move 3 times, (1,1) -> (1,2) -> (1,3) -> (1,4); then using the 1st move once, (1,4) -> (5,4).

I came up with the following recursive method:

def move(a, b, c, d):
    if a == c and b == d:
        return True
    elif a > c or b > d:
        return False
    else:
        if a < c and move(a + b, b, c, d):
            return True
        if b < d and move(a, b + a, c, d):
            return True
    return False

a) Does my solution cover all the possible cases? I am unable to verify for sure since I do not have test cases, but I think I took everything into account.

b) What is the time complexity of my solution? I think it is exponential, but I can't say for sure.

c) Is there a better solution (in terms of time complexity)? Can we use dynamic programming?

Thank you for any input.


Solution

  • If all the numbers have to be positive, then I believe there is a much quicker solution.

    Trying to find whether we can get from (a, b) to, say, (14, 31), note that the only way with positive numbers to reach (14, 31) is to apply the second rule to (14, 17). The only way to get to (14, 17) is to apply the second rule to (14, 3). The only way to get to (14, 3) is to apply the first rule to (11, 3). The only way to (11, 3) is to apply the first rule to (8, 3), and so on. So the only values that can reach (14, 31) are:

    (14, 31) # Final
    (14, 17) # Rule 2
    (14, 3)  # Rule 2
    (11, 3)  # Rule 1
    (8, 3)   # Rule 1
    (5, 3)   # Rule 1
    (2, 3)   # Rule 1
    (2, 1)   # Rule 2
    (1, 1)   # Rule 1
    

    So the algorithm is pretty simple. Loop on (c, d), replacing it with (c - d, d) if c > d and with (c, d - c) if c < d, stopping when you hit a match, or when c == d, c < a, or d < b.
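A minimal sketch of that backward loop (the function name `can_reach` is mine):

```python
def can_reach(a, b, c, d):
    # Work backward from (c, d). Whenever c != d, exactly one rule
    # could have produced the current pair, so its predecessor is unique.
    while c >= a and d >= b:
        if (c, d) == (a, b):
            return True
        if c > d:
            c -= d          # undo rule 1: (c, d) came from (c - d, d)
        elif d > c:
            d -= c          # undo rule 2: (c, d) came from (c, d - c)
        else:
            return False    # c == d but not the start: no predecessor
    return False            # fell below (a, b): unreachable
```

For example, `can_reach(1, 1, 14, 31)` walks exactly the chain listed above and returns True.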

    A variant of this, described in a comment by Paul Hankin, is O(log n), although I'm not going to try to prove it. This version is O(n), where n is the larger of c and d; consecutive Fibonacci numbers will probably take the most steps.
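One way to realize that variant (my own sketch, assuming it is the Euclid-style idea of collapsing a run of identical undo steps into a single modulo):

```python
def can_reach_fast(a, b, c, d):
    # Backward search with modulo: a run of rule-1 undos keeps d fixed
    # and subtracts d from c repeatedly, so replace the run with c % d
    # (symmetrically for rule 2). Roughly O(log max(c, d)) iterations.
    while c != d:
        if c > d:
            if d == b:
                # Remaining undos must all be rule 1: need c ≡ a (mod d).
                return c >= a and (c - a) % d == 0
            c %= d
            if c == 0:
                c = d       # the chain bottoms out at (d, d)
        else:
            if c == a:
                # Remaining undos must all be rule 2: need d ≡ b (mod c).
                return d >= b and (d - b) % c == 0
            d %= c
            if d == 0:
                d = c       # the chain bottoms out at (c, c)
    return (c, d) == (a, b)
```

The `d == b` / `c == a` checks are needed so the modulo step doesn't jump past the target inside a run.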

    Of course, all this is meaningless if you can have negative numbers, since the first rule applied to (-17, 31) would also yield (14, 31), and you're back to exponential behavior.