I have a function
function [output1 output2] = func(v1,v2,v3,v4,v5,v6,v7,v8,v9,v10)
that I want to discretize. I am going to be running an optimization that involves this function, and I think the optimization would be more efficient if I discretized the function once and then did spline interpolation on the stored data instead of evaluating the continuous function repeatedly. Essentially, I want a 10-D array of doubles for each of output1 and output2, indexed by varying values of v1, v2, ..., v10.
With infinite time and memory I would do the following:
n_pts = 100;
v1 = linspace(v1_min, v1_max, n_pts);
...
v10 = linspace(v10_min, v10_max, n_pts);
[v1g, v2g, ..., v10g] = ndgrid(v1, v2, ..., v10);
[output1, output2] = arrayfun(@func, v1g, v2g, ..., v10g);
The time and memory needed to execute ndgrid and arrayfun obviously do not allow for this. Can anyone think of a work-around, or is the problem of discretizing a function of 10 variables simply intractable?
You are on a totally wrong path. Even assuming you had infinite memory, the last line would call your function 100^10 times, which would take an enormous amount of time. No reasonable optimisation strategy evaluates the objective anywhere near that often; that is exactly why all those sophisticated strategies were developed.
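To put rough numbers on it (assuming 8 bytes per double and, generously, a million function evaluations per second):

n_evals = 100^10                        % 1e20 calls to func
bytes_per_output = n_evals * 8          % ~8e20 bytes (~8e8 TB) for just one output array
years = n_evals / 1e6 / (3600*24*365)   % ~3 million years of evaluation time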
You may, however, use your strategy to pre-compute computation-intensive sub-terms of your function. Replacing a very costly term that depends on only three variables with a 100^3 lookup table can improve performance significantly without using too much memory.
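A minimal sketch of what that could look like, assuming (purely for illustration) that the costly part of func is a sub-term expensiveTerm(v1,v2,v3) that depends on only three variables and is smooth enough for spline interpolation:

% expensiveTerm stands in for the hypothetical costly three-variable sub-term of func
expensiveTerm = @(a,b,c) besselj(0, a.*b) .* exp(-c);
n_pts = 100;
v1 = linspace(0, 5, n_pts);
v2 = linspace(0, 5, n_pts);
v3 = linspace(0, 5, n_pts);
[v1g, v2g, v3g] = ndgrid(v1, v2, v3);
tbl = expensiveTerm(v1g, v2g, v3g);                    % 100^3 = 1e6 entries, ~8 MB
termLookup = griddedInterpolant({v1, v2, v3}, tbl, 'spline');
% inside func, replace the expensive call with a cheap interpolant query:
% t = termLookup(x1, x2, x3);

griddedInterpolant queries are vectorised, so the table can also be evaluated for many points at once during the optimisation.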