I'm using Python lmfit to minimise the residuals of a fit function with three variables: A, P and S. I've provided a simplified version here:
from lmfit import minimize, Parameters
import numpy as np
# define model
def my_model(params, x):
    A = params['A']
    P = params['P']
    S = params['S']
    return A*x**2 + P*x + S
# define objective function
def residual(params, x, y, errs):
    return (y - my_model(params, x)) / errs
# import data
xs, ys, yerrs = np.loadtxt('data/test.csv', delimiter=',', unpack=True)
# initialise parameters
params = Parameters()
params.add('A', value=0.8, vary=True, min=0, max=1)
params.add('P', value=0.15, vary=True, min=0, max=1)
params.add('S', value=0.05, vary=True, min=0, max=1)
# minimise objective function to find optimal parameters
lsq = minimize(residual, params, args=(xs, ys, yerrs))
My question is: how can I constrain my parameters so that they satisfy A + P + S = 1, while still allowing them all to vary freely to best fit the data? Perhaps something along the lines of:
lsq = minimize(my_model, params, args=(xs, ys, yerrs), constraints=['A+P+S=1'])
I have tried adding an extra parameter with params.add('epsilon', expr='1 - A - P - S', min=0, max=0.0001), but this does not constrain the fit properly and the condition is not satisfied - I'm not sure why. Setting vary=False for this parameter also seems to be ignored: epsilon simply varies so that its expression holds, taking a negative value and allowing A + P + S to exceed 1.
The important thing to recognize is that if you want to constrain your Parameters such that A + P + S = 1, then they are not actually all freely varying. You have two free variables and one Parameter that is constrained by those two. I think what you want to do is something like this
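(a sketch reusing the initial values and bounds from your question; the key change is that S is defined through lmfit's expr constraint mechanism instead of varying freely):

params = Parameters()
params.add('A', value=0.8, vary=True, min=0, max=1)
params.add('P', value=0.15, vary=True, min=0, max=1)
params.add('S', expr='1 - A - P', min=0, max=1)
lsq = minimize(residual, params, args=(xs, ys, yerrs))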
That will almost guarantee that the sum is 1 -- I say almost because the added bounds on all Parameters may lead to a marginal violation of the constraint.
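If it helps, you can check the sum after the fit: minimize returns a result object whose params attribute holds the best-fit values (variable names here match the snippet above):

out = lsq.params
print(out['A'].value + out['P'].value + out['S'].value)  # should be ~1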