I have an unknown function, say, F(x), which I approximate with a back-propagation neural network. Surely this can be done, as it is in the standard repertoire of neural networks.
F(x) does not explicitly exist. It is learned from a training set of data points.
Say, the NN learns a function G(x) which approximates F(x).
AFTER the learning of G is finished, I want to find the global maximum value of G(x), and the value of x at which it occurs.
Given that G is implicitly realized by the NN, I don't have the explicit form of G.
Is there any quick algorithm that allows me to find arg max_x G(x)?
Neural networks can give rise to discontinuous functions in general, since they consist of a network of neurons whose activation functions may have jump discontinuities (a neuron that fires at a certain threshold). But if, in your application, it makes sense to think of G(x) as (approximately) continuous or even differentiable, you can use hill-climbing techniques: start at a random point, estimate the derivative (or gradient, if x is a vector rather than a scalar), move a short step in the direction of steepest increase, and repeat until no more improvement is found. This gives you an approximate local maximum. You can then repeat the whole process with different random starting values. If you always get the same result, you can be reasonably confident (though not certain) that it is in fact a global maximum.
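For concreteness, here is a minimal sketch of that procedure in Python. It assumes only that the trained network can be called as a black-box function G(x) returning a scalar; the gradient is estimated by central finite differences since G has no explicit form, and the stand-in definition of G at the bottom (as well as all step sizes and bounds) is purely illustrative.

```python
import numpy as np

def numerical_gradient(G, x, eps=1e-4):
    """Estimate the gradient of G at x by central differences."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (G(x + step) - G(x - step)) / (2 * eps)
    return grad

def hill_climb(G, x0, lr=0.01, max_iters=1000, tol=1e-8):
    """Follow the estimated gradient uphill until improvement stalls."""
    x = np.asarray(x0, dtype=float)
    best = G(x)
    for _ in range(max_iters):
        x_new = x + lr * numerical_gradient(G, x)
        val = G(x_new)
        if val <= best + tol:          # no more improvement
            break
        x, best = x_new, val
    return x, best

def multi_start(G, dim, n_starts=20, low=-5.0, high=5.0, seed=0):
    """Run hill climbing from several random starts and keep the best result."""
    rng = np.random.default_rng(seed)
    results = [hill_climb(G, rng.uniform(low, high, size=dim))
               for _ in range(n_starts)]
    return max(results, key=lambda r: r[1])   # (arg-max estimate, max value)

# Illustrative stand-in for G(x): a smooth bump peaked at x = (1, -2).
# In practice, G would wrap your trained network's forward pass.
G = lambda x: np.exp(-0.1 * np.sum((x - np.array([1.0, -2.0]))**2))
x_best, g_best = multi_start(G, dim=2)
```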
Without any assumptions on G(x) it is hard to say anything definite. If x is chosen randomly then G(x) is a random variable, and you can use statistical methods to estimate, e.g., its 99th percentile. You could also try using an evolutionary algorithm in which G(x) plays the role of a fitness function.
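As an illustration, here is a minimal sketch of both ideas, again treating the trained network as a black-box callable G(x). The sampling bounds, population size, and mutation scale are arbitrary assumptions, and the stand-in G at the end is only there to make the example runnable.

```python
import numpy as np

def percentile_estimate(G, dim, n_samples=10_000, low=-5.0, high=5.0, q=99, seed=0):
    """Sample x uniformly and estimate the q-th percentile of G(x)."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(low, high, size=(n_samples, dim))
    values = np.array([G(x) for x in xs])
    return np.percentile(values, q)

def evolve(G, dim, pop_size=50, n_generations=200, sigma=0.5, low=-5.0, high=5.0, seed=0):
    """Simple evolutionary search: keep the fittest half, mutate with Gaussian noise."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(low, high, size=(pop_size, dim))
    for _ in range(n_generations):
        fitness = np.array([G(x) for x in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # fittest half
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([parents, children])
    fitness = np.array([G(x) for x in pop])
    return pop[np.argmax(fitness)], fitness.max()

# Illustrative stand-in for G(x), as before.
G = lambda x: np.exp(-0.1 * np.sum((x - np.array([1.0, -2.0]))**2))
print(percentile_estimate(G, dim=2))
x_best, g_best = evolve(G, dim=2)
```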