Tags: algorithm, recursion, data-structures, dynamic-programming, knapsack-problem

0-1 Knapsack with Large Weights and Values: How much faster is recursive DP than iterative DP?


Suppose I wish to solve the 0-1 knapsack problem, but both the weights and the values of the elements can get large (Weight[i] < 1e5 and Value[i] < 1e9).

Also, the number of items is ~2000 and the size of the knapsack is ~4e6.

Obviously, a full dp[index][weight] table exceeds both the memory and the time limits.

However, somewhat magically, memoized DP (with values stored in a std::map or std::unordered_map) works well. I can see why it may be a bit faster: after all, the recursive DP computes only the states it actually needs.
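
A minimal sketch of the memoized recursion I have in mind (the array names, the packed 64-bit key, and the exact signature are just illustrative):

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>

// Sketch of the memoized recursion: the state is (index, remaining capacity),
// packed into a single 64-bit key for the unordered_map.
std::vector<long long> weight, value;   // weight[i] < 1e5, value[i] < 1e9
std::unordered_map<unsigned long long, long long> memo;

long long solve(int index, long long cap) {
    if (index < 0 || cap <= 0) return 0;
    // capacity < 4e6 fits comfortably in the low 32 bits of the key
    unsigned long long key = (static_cast<unsigned long long>(index) << 32) |
                             static_cast<unsigned long long>(cap);
    auto it = memo.find(key);
    if (it != memo.end()) return it->second;

    long long best = solve(index - 1, cap);          // option 1: skip item `index`
    if (weight[index] <= cap)                        // option 2: take it, if it fits
        best = std::max(best, value[index] + solve(index - 1, cap - weight[index]));
    return memo[key] = best;
}
// Top-level call: solve((int)weight.size() - 1, capacity).
```

A convenient side effect: memo.size() after the top-level call is exactly the number of distinct states that were computed, which is the quantity I'm asking about below.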

But is it actually significantly faster? In other words, the usual computation requires about 8e9 operations; how much speedup can we expect, on average, by computing the DP this way?

Since computing a single state takes constant time, the question boils down to: how many states, on average, can there be to compute? (Normally it's 8e9.)

You can assume:

  1. We use std::unordered_map.

  2. Our hash function works well enough.

  3. Recursion overhead can be ignored.

Thanks!


Solution

  • In the worst case, the recursive version can compute nearly as many states as the iterative version. Since you've implemented the recursive version without exceeding your available memory, you are pretty far from the worst case.

    In special cases, the recursive version can be faster than the naive iterative DP. For example, if all or most of your weights have a large common factor, then that factor will divide the number of states that need to be computed. If you implement the iterative version so that it only considers reachable weights, it will see this speed-up as well (a sketch of that variant follows).
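
A minimal sketch of that reachable-weights iterative variant (the function name and the map-based representation are just one way to do it):

```cpp
#include <algorithm>
#include <unordered_map>
#include <vector>

// Sketch: iterative 0-1 knapsack over reachable weights only.
// best[w] = maximum value achievable with total weight exactly w.
// If every weight is a multiple of some g, only multiples of g ever appear
// as keys, so the number of states shrinks by a factor of g.
long long knapsackReachable(const std::vector<long long>& weight,
                            const std::vector<long long>& value,
                            long long capacity) {
    std::unordered_map<long long, long long> best{{0, 0}};   // empty knapsack
    for (int i = 0; i < (int)weight.size(); ++i) {
        auto snapshot = best;   // iterate a copy so each item is used at most once
        for (const auto& [w, v] : snapshot) {
            long long nw = w + weight[i];
            if (nw > capacity) continue;
            long long nv = v + value[i];
            auto it = best.find(nw);
            if (it == best.end() || it->second < nv) best[nw] = nv;
        }
    }
    long long answer = 0;
    for (const auto& kv : best) answer = std::max(answer, kv.second);
    return answer;
}
```

Total work is O(n · number of reachable weights), which is never worse than the dense n × capacity table, and when every weight is a multiple of g the map holds at most capacity / g + 1 entries.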