I have a function of the form f(x) = min(f_1(x_1), ..., f_n(x_n)), where n is about 10 and all f_i are positive, monotonic, and smooth, and their values almost always (for all x_i, for all f_i) differ by less than a factor of ten. So they seem rather well-suited to analysis.
What's the best (fastest?) way to maximize it under these constraints:
- all x_i are integers and less than ~100
- the product of all x_i should lie near a specified value (say, no further than 10% from it)
Algorithm description in any language is appreciated, but if it is in Python, then it's ten times better :)
P.S.: I've worked with genetic algorithms before, and applied them to this task first. However, they don't seem to be the best choice: GAs are quite slow, and I couldn't come up with an efficient crossover operation for this problem.
I don't immediately see a better solution than simply choosing a starting point randomly, evaluating each function f_i at its current input x_i, finding which one yields the smallest value, and then incrementing that function's input. It's not elegant and it's not sophisticated, but it's a decent brute-force baseline.
int (**f_is)(int n);  /* the n component functions */
//...
int xs[10];           /* current inputs; initialize sensibly, e.g. all 1 */
//...
while (true) {        /* loop until the product constraint is satisfied */
    int cmin = f_is[0](xs[0]);
    int cminIndex = 0;
    for (int i = 1; i < 10; ++i) {
        int cfunc = f_is[i](xs[i]);     /* evaluate f_i at its current input, not at i */
        if (cfunc < cmin) {             /* we want the minimum, so the comparison goes this way */
            cmin = cfunc;
            cminIndex = i;
        }
    }
    ++xs[cminIndex];  /* bump the bottleneck function's input */
    /* terminate once the product of xs is within 10% of the target,
       or the bumped x_i would exceed ~100 */
}
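Since the question asked for Python, here's a rough sketch of the same greedy loop in Python. The example functions, the starting point (all ones), and the exact stopping rule are assumptions for illustration, not part of the original problem:

```python
import math

def maximize_min(fs, target_product, tol=0.10, x_max=100):
    """Greedy hill-climb: repeatedly bump the input of the current
    bottleneck (smallest-valued) function until the product of the
    inputs is within tol of target_product."""
    xs = [1] * len(fs)
    while math.prod(xs) < target_product * (1 - tol):
        vals = [f(x) for f, x in zip(fs, xs)]
        # index of the bottleneck function, skipping inputs already at the cap
        candidates = [j for j in range(len(fs)) if xs[j] < x_max]
        if not candidates:
            break
        i = min(candidates, key=lambda j: vals[j])
        xs[i] += 1
    return xs, min(f(x) for f, x in zip(fs, xs))

# hypothetical monotonic example functions: f_i(x) = a_i * x
fs = [lambda x, a=a: a * x for a in (1.0, 1.5, 2.0)]
xs, best = maximize_min(fs, target_product=1000)
```

Because each step multiplies the product by (x_i + 1)/x_i, the overshoot past the target shrinks as the x_i grow, so the loop lands close to the 10% band in practice.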
EDIT: a couple more things: if you compute the f_i(x_i) in parallel and then join and take the min, it will be a lot faster, but you still need a way to communicate the index of the function that produced the smallest value back to the caller. I'd recommend Haskell as a great language for this: it's much faster than Python, and in some cases you get good concurrency support with very little effort.
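To keep everything in Python, here is one way to sketch that parallel variant using the standard-library `concurrent.futures` module: each f_i(x_i) is submitted as a task tagged with its index, so the argmin comes back with the join. The helper name and example functions are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_argmin(fs, xs):
    """Evaluate every f_i(x_i) concurrently and return
    (index_of_smallest, smallest_value)."""
    with ThreadPoolExecutor() as pool:
        # tag each future with its function's index
        futures = {pool.submit(f, x): i for i, (f, x) in enumerate(zip(fs, xs))}
        results = {i: fut.result() for fut, i in futures.items()}
    i = min(results, key=results.get)
    return i, results[i]
```

Note that threads only help here if the f_i release the GIL (e.g. they do I/O or call into C); for pure-Python functions, `ProcessPoolExecutor` is the drop-in alternative.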