I'm trying to fit a function of the form y = a * (1 - exp(-x / b)) to some given data, and I'm a bit lost. I suspect the optimization package of Apache Commons Math might be of help, but I haven't yet managed to use it successfully. Below is some code showing what I'd like to achieve.
import kotlin.math.exp
import kotlin.random.Random

// Could be interpreted as a capacitor-charging curve with Vs = a and t = b
fun fGeneric(a: Double, b: Double, x: Double) = a * (1 - exp(-x / b))

fun fGiven(x: Double) = fGeneric(a = 10.0, b = 200.0, x = x)

fun fGivenWithNoise(x: Double) = fGiven(x) + Random.nextDouble(-0.1, 0.1)

fun main() {
    val xs = (0..1000).map(Int::toDouble).toDoubleArray()
    val ys = xs.map { x -> fGivenWithNoise(x) }.toDoubleArray()
    // todo: From data, find a and b, such that fGeneric fits optimally.
}
Do I need to provide an implementation of the MultivariateDifferentiableVectorFunction interface? And if so, what would it need to look like?
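(For what it's worth, the fitting package of Commons Math 3 appears to offer a simpler route than MultivariateDifferentiableVectorFunction: SimpleCurveFitter only asks for a ParametricUnivariateFunction, i.e. the model value and its partial derivatives with respect to the parameters. The following is an untested sketch assuming commons-math3 3.4+ on the classpath; the helper name fitWithCommonsMath and the start values are my own choices.)

import org.apache.commons.math3.analysis.ParametricUnivariateFunction
import org.apache.commons.math3.fitting.SimpleCurveFitter
import org.apache.commons.math3.fitting.WeightedObservedPoints
import kotlin.math.exp

// Model a * (1 - exp(-x / b)) with parameters[0] = a and parameters[1] = b.
val chargingCurve = object : ParametricUnivariateFunction {
    override fun value(x: Double, vararg parameters: Double): Double {
        val a = parameters[0]
        val b = parameters[1]
        return a * (1 - exp(-x / b))
    }

    // Partial derivatives of the model with respect to a and b.
    override fun gradient(x: Double, vararg parameters: Double): DoubleArray {
        val a = parameters[0]
        val b = parameters[1]
        val e = exp(-x / b)
        return doubleArrayOf(1 - e, -a * x / (b * b) * e)
    }
}

// Hypothetical helper: returns doubleArrayOf(a, b) fitted to the observations.
fun fitWithCommonsMath(xs: DoubleArray, ys: DoubleArray): DoubleArray {
    val points = WeightedObservedPoints()
    xs.indices.forEach { i -> points.add(xs[i], ys[i]) }
    val fitter = SimpleCurveFitter.create(chargingCurve, doubleArrayOf(1.0, 10.0))
    return fitter.fit(points.toList())
}

SimpleCurveFitter runs a least-squares optimizer internally, so there is no need to hand-code the summed objective or its gradient over the whole data set.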
I found a solution by using lbfgs4j instead:
package com.jaumo.ml.lifetimevalue

import com.github.lbfgs4j.LbfgsMinimizer
import com.github.lbfgs4j.liblbfgs.Function
import kotlin.math.exp
import kotlin.random.Random

// Could be interpreted as a capacitor-charging curve with Vs = a and t = b
fun fGeneric(a: Double, b: Double, x: Double) = a * (1 - exp(-x / b))

fun fGiven(x: Double) = fGeneric(a = 10.0, b = 200.0, x = x)

fun fGivenWithNoise(x: Double) = fGiven(x) + Random.nextDouble(-0.1, 0.1)

private fun subtractVectors(a: DoubleArray, b: DoubleArray): DoubleArray {
    require(a.size == b.size)
    return DoubleArray(a.size) { dim -> a[dim] - b[dim] }
}

fun main() {
    val xs = (0..1000).map(Int::toDouble).toDoubleArray()
    val ys = xs.map { x -> fGivenWithNoise(x) }.toDoubleArray()

    // Objective: sum of squared residuals between the model and the noisy data,
    // with x[0] = a (max value) and x[1] = b (time constant).
    val f = object : Function {
        override fun getDimension(): Int = 2

        override fun valueAt(x: DoubleArray): Double {
            val maxVal = x[0]
            val slowness = x[1]
            val capacitorFunc = { x0: Double ->
                maxVal * (1 - exp(-x0 / slowness))
            }
            return subtractVectors(xs.map(capacitorFunc).toDoubleArray(), ys)
                .sumOf { it * it }
        }

        override fun gradientAt(x: DoubleArray): DoubleArray {
            // Approximate the gradient with central finite differences.
            val eps = 0.001
            val a = valueAt(doubleArrayOf(x[0] - eps, x[1]))
            val b = valueAt(doubleArrayOf(x[0] + eps, x[1]))
            val c = valueAt(doubleArrayOf(x[0], x[1] - eps))
            val d = valueAt(doubleArrayOf(x[0], x[1] + eps))
            return doubleArrayOf((b - a) / (2 * eps), (d - c) / (2 * eps))
        }
    }

    val minimizer = LbfgsMinimizer()
    val x = minimizer.minimize(f, doubleArrayOf(1.0, 10.0))
    println(x[0])
    println(x[1])
}
The result looks good; the recovered parameters are close to the true values a = 10 and b = 200:
9.998170586347115
200.14238710377768
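If the finite-difference gradient ever becomes too slow or too inaccurate, the gradient of the squared-error objective can also be written out analytically. A minimal sketch (the function name analyticGradient is mine; it assumes the same xs and ys arrays and the parameter layout x[0] = a, x[1] = b used above):

import kotlin.math.exp

// Gradient of sum_i (a * (1 - exp(-x_i / b)) - y_i)^2 with respect to a and b.
fun analyticGradient(a: Double, b: Double, xs: DoubleArray, ys: DoubleArray): DoubleArray {
    var dA = 0.0
    var dB = 0.0
    for (i in xs.indices) {
        val e = exp(-xs[i] / b)
        val residual = a * (1 - e) - ys[i]
        dA += 2 * residual * (1 - e)                    // d/da of the squared residual
        dB += 2 * residual * (-a * xs[i] / (b * b) * e) // d/db of the squared residual
    }
    return doubleArrayOf(dA, dB)
}

Inside the Function object above, gradientAt(x) could then simply return analyticGradient(x[0], x[1], xs, ys).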