I am using Python 2.7, and have a program that solves a recursive optimization problem, that is, a dynamic programming problem. A simplified version of the code is:
from math import log
from scipy.optimize import minimize_scalar

# Terminal-period value function: consume all remaining wealth.
class vT(object):
    def __init__(self, c):
        self.c = c
    def x(self, w):
        return w
    def __call__(self, w):
        return self.c * log(self.x(w))

# Earlier-period value function, defined recursively through the
# next period's value function vN.
class vt(object):
    def __init__(self, c, vN):
        self.c = c
        self.vN = vN
    def objFunc(self, x, w):
        return -self.c * log(x) - self.vN(w - x)
    def x(self, w):
        x_star = minimize_scalar(self.objFunc, args=(w,), method='bounded',
                                 bounds=(1e-10, w - 1e-10)).x
        return x_star
    def __call__(self, w):
        return self.c * log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0, p3)
p1 = vt(2.0, p2)

w1 = 3.0
x1 = p1.x(w1)
w2 = w1 - x1
x2 = p2.x(w2)
w3 = w2 - x2
x3 = w3
x = [x1, x2, x3]
print('Optimal x when w1 = 3 is ' + str(x))
If enough periods are added, the program can take a long time to run. When x1 = p1.x(w1) is run, p2 and p3 are evaluated many times by minimize_scalar. Also, when x2 = p2.x(w2) is run, we know the solution will involve evaluating p2 and p3 at arguments that were already used in the first step.
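To make this repeated work visible, one rough way (purely illustrative, not part of the program above; the names CountingVT, p1c, p2c, p3c are made up here) is to count the calls to the terminal value function while solving only the first period:

# Illustrative only: count how often the terminal value function is
# evaluated while solving just period 1.
evaluations = {'p3': 0}

class CountingVT(vT):
    def __call__(self, w):
        evaluations['p3'] += 1
        return vT.__call__(self, w)

p3c = CountingVT(2.0)
p2c = vt(2.0, p3c)
p1c = vt(2.0, p2c)
p1c.x(3.0)
print('p3 was evaluated %d times for period 1 alone' % evaluations['p3'])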
I have two questions:

1. Is there a way to use memoization on the vT and vt classes to speed up this program?
2. When minimize_scalar is run, will it benefit from this memoization?

In my actual application the solutions can currently take hours to compute, so speeding this up would be of great value.
UPDATE: A response below points out that the example above could be written without classes, in which case an ordinary memoization decorator could be applied directly to the functions. In my actual application I do have to use classes, not functions. Moreover, my question about whether calls to the function or method (when it is a class) made inside minimize_scalar will benefit from the memoization still stands.
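For reference, the function-based variant that response describes would look roughly like this (just a sketch; the names memoize_func, v2, and v3 are made up here, and my real problem keeps the class structure):

from math import log
from scipy.optimize import minimize_scalar

def memoize_func(f):
    # Ordinary memoization decorator for plain functions of hashable arguments.
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize_func
def v3(w):
    # terminal period: consume all remaining wealth
    return 2.0 * log(w)

@memoize_func
def v2(w):
    obj = lambda x: -2.0 * log(x) - v3(w - x)
    x_star = minimize_scalar(obj, method='bounded', bounds=(1e-10, w - 1e-10)).x
    return 2.0 * log(x_star) + v3(w - x_star)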
I found out the answer. Below is an example of how to memoize the program. There may be an even more efficient approach, but this one memoizes the methods of the class. Furthermore, when minimize_scalar is run, the memoize wrapper records the result of every function evaluation it makes:
from math import log
from scipy.optimize import minimize_scalar
from functools import wraps

def memoize(obj):
    # Cache results on the decorated function itself, keyed on the
    # stringified positional and keyword arguments.
    cache = obj.cache = {}
    @wraps(obj)
    def memoizer(*args, **kwargs):
        key = str(args) + str(kwargs)
        if key not in cache:
            cache[key] = obj(*args, **kwargs)
        return cache[key]
    return memoizer

class vT(object):
    def __init__(self, c):
        self.c = c
    @memoize
    def x(self, w):
        return w
    @memoize
    def __call__(self, w):
        return self.c * log(self.x(w))

class vt(object):
    def __init__(self, c, vN):
        self.c = c
        self.vN = vN
    @memoize
    def objFunc(self, x, w):
        return -self.c * log(x) - self.vN(w - x)
    @memoize
    def x(self, w):
        x_star = minimize_scalar(self.objFunc, args=(w,), method='bounded',
                                 bounds=(1e-10, w - 1e-10)).x
        return x_star
    @memoize
    def __call__(self, w):
        return self.c * log(self.x(w)) + self.vN(w - self.x(w))

p3 = vT(2.0)
p2 = vt(2.0, p3)
p1 = vt(2.0, p2)

x1 = p1.x(3.0)
len(p3.x.cache)  # how many distinct evaluations of p3.x did solving period 1 require?
Out[3]: 60
x2 = p2.x(3.0 - x1)
len(p3.x.cache)  # still 60: solving period 2 reused the cached evaluations
Out[5]: 60
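A possible further refinement (just a sketch, and not something the counts above depend on): since these methods are only ever called with self plus floats, the cache key could be built from the argument tuple itself rather than from strings, provided every argument is hashable:

def memoize(obj):
    cache = obj.cache = {}
    @wraps(obj)
    def memoizer(*args, **kwargs):
        # Tuple-based key avoids the str() conversion; requires hashable arguments.
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = obj(*args, **kwargs)
        return cache[key]
    return memoizer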