I am using lmfit in Python to fit some data, which includes fitting the variables a, b, and c. I need to ensure that a < b < c. I found http://cars9.uchicago.edu/software/python/lmfit_MinimizerResult/constraints.html, which explains that inequality constraints need to be set up with dummy variables. For example, if I wanted a + b <= 10, I could do:
pars.add('a', value=5, vary=True)
pars.add('delta', value=5, max=10, vary=True)
pars.add('b', expr='delta-a')
And this would ensure that a + b <= 10.
I suppose that I would need c - b > 0 and b - a > 0 (or alternatively a - b < 0 and b - c < 0), but I'm not sure how to code this.
Following the hint from the doc you link to, an inequality constraint of x > y should be translated to x = y + something, where something has a lower bound of 0.
So, applying that approach twice, I think this should do what you want:
from lmfit import Parameters
params = Parameters()
params.add('a', value=5, vary=True)
params.add('b_minus_a', value=1, vary=True, min=0)
params.add('c_minus_b', value=1, vary=True, min=0)
params.add('b', expr='a + b_minus_a')
params.add('c', expr='b + c_minus_b')
That still uses three variables (a, b_minus_a, and c_minus_b) and imposes the inequality constraints, with the caveat that the differences could actually be 0. With floating point numbers, that's usually sufficient, but depending on the scale of the variables, you could change the lower bound of 0 to something like 1.e-12.