The sigmoid function is defined as S(t) = 1 / (1 + e^(-t)), where ^ denotes exponentiation (pow).
I found that using the C standard library function exp() to calculate the value of S(t) is slow. Is there any faster algorithm to calculate it?
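For reference, a direct implementation along these lines (a minimal sketch using the standard <math.h> exp(); the function name sigmoid is just illustrative) would be:

    #include <math.h>

    /* Direct sigmoid: 1 / (1 + e^(-t)), using the standard library exp().
     * This is the "slow" baseline the question is about. */
    double sigmoid(double t)
    {
        return 1.0 / (1.0 + exp(-t));
    }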
You don't have to use the actual, exact sigmoid function in a neural network algorithm; you can replace it with an approximated version that has similar properties but is faster to compute.
For example, you can use the "fast sigmoid" function
f(x) = x / (1 + abs(x))
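A minimal sketch in C (the name fast_sigmoid is just illustrative). Note that x / (1 + abs(x)) maps to (-1, 1) rather than (0, 1), so rescale the result if your network expects the usual sigmoid range:

    #include <math.h>

    /* "Fast sigmoid": S-shaped and monotonic like the real sigmoid, but
     * needs only fabs() and a division instead of a call to exp().
     * Its range is (-1, 1); use (fast_sigmoid(x) + 1.0) / 2.0 if you
     * need outputs in (0, 1). */
    double fast_sigmoid(double x)
    {
        return x / (1.0 + fabs(x));
    }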
Using the first few terms of the series expansion for exp(x) won't help much if the arguments to f(x) are not near zero, and you have the same problem with a series expansion of the sigmoid function itself when the arguments are "large".
An alternative is to use table lookup. That is, you precalculate the values of the sigmoid function for a given number of data points, and then do fast (linear) interpolation between them if you want.
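A sketch of that approach (the table size, range and names are illustrative choices, not a reference implementation; inputs outside the tabulated range are clamped to the endpoint values):

    #include <math.h>

    #define TABLE_SIZE 512     /* number of precomputed samples */
    #define TABLE_MIN  (-8.0)  /* sigmoid(-8) is already ~0.0003 */
    #define TABLE_MAX  ( 8.0)  /* sigmoid(+8) is already ~0.9997 */

    static double sigmoid_table[TABLE_SIZE];

    /* Precompute the table once, e.g. at program start-up. */
    void sigmoid_table_init(void)
    {
        for (int i = 0; i < TABLE_SIZE; i++) {
            double x = TABLE_MIN + (TABLE_MAX - TABLE_MIN) * i / (TABLE_SIZE - 1);
            sigmoid_table[i] = 1.0 / (1.0 + exp(-x));
        }
    }

    /* Approximate sigmoid(x) by linear interpolation between table entries. */
    double sigmoid_lut(double x)
    {
        if (x <= TABLE_MIN) return sigmoid_table[0];
        if (x >= TABLE_MAX) return sigmoid_table[TABLE_SIZE - 1];

        double pos  = (x - TABLE_MIN) / (TABLE_MAX - TABLE_MIN) * (TABLE_SIZE - 1);
        int    i    = (int)pos;        /* sample at or below x */
        double frac = pos - i;         /* position within [i, i + 1] */

        if (i >= TABLE_SIZE - 1)       /* guard against rounding at the edge */
            return sigmoid_table[TABLE_SIZE - 1];

        return sigmoid_table[i] + frac * (sigmoid_table[i + 1] - sigmoid_table[i]);
    }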