I am trying to use linear constraints with shgo. Here is a simple MWE:
from scipy.optimize import shgo, rosen
# Set up the constraints list
constraints = [{'type': 'ineq', 'fun': lambda x, i=i: x[i+1] - x[i] - 0.1} for i in range(2)]
# Define the variable bounds
bounds = [(0, 20)]*3
# Call the shgo function with constraints
result = shgo(rosen, bounds, constraints=constraints)
# Print the optimal solution
print("Optimal solution:", result.x)
print("Optimal value:", result.fun)
An example of a feasible point satisfying these constraints:
rosen((0.1, 0.21, 0.32))
13.046181
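As a quick check that this point really is feasible (scipy treats an 'ineq' dict as satisfied when fun(x) >= 0), you can evaluate the constraint callables from the MWE directly:

x = (0.1, 0.21, 0.32)
for c in constraints:
    print(c['fun'](x))  # both values are about 0.01, i.e. >= 0, so x is feasible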
But if you run the code you get:
Optimal solution: None
Optimal value: None
It doesn't find a feasible solution at all! Is this a bug?
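For what it's worth, the returned OptimizeResult also reports the failure; success and message are standard fields on scipy's result object:

result = shgo(rosen, bounds, constraints=constraints)
print(result.success)  # False
print(result.message)  # says it failed to find a feasible minimizer point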
Update
@Reinderien showed that the problem is the default sampling method 'simplicial'. Either of the other two options makes the optimization work. Concretely, replace the result = ... line with
result = shgo(rosen, bounds, sampling_method='halton', constraints=constraints)
and you get
Optimal solution: [1.08960341 1.18960341 1.41515624]
Optimal value: 0.04453888080946618
If you use simplicial you get
Failed to find a feasible minimizer point. Lowest sampling point = None
although I don't know why.
(Deleted incorrect part of question)
Read the documentation: shgo does not support direct linear constraints, only callables. You should usually prefer matrix-based constraints when they're linear, which in this case requires using minimizer_kwargs instead. But! You also should not use the default sampling method, simplicial, which performs poorly on your problem and produces a non-optimal minimum. Either halton or sobol does better, regardless of whether it's given matrix constraints or callables. Efficiency varies depending on whether you also provide the Jacobian. All methods together:
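The full comparison isn't reproduced here, but a minimal sketch of the matrix-based approach looks like the following. LinearConstraint, minimizer_kwargs, and sampling_method are real scipy.optimize API; routing the constraint through minimizer_kwargs (so it reaches the local SLSQP minimizer) is the approach described above, and the loop over sampling methods is just for side-by-side comparison:

import numpy as np
from scipy.optimize import LinearConstraint, rosen, shgo

# x[i+1] - x[i] >= 0.1 for i = 0, 1, expressed as 0.1 <= A @ x
A = np.array([[-1, 1, 0],
              [0, -1, 1]])
lin_con = LinearConstraint(A, lb=0.1, ub=np.inf)

bounds = [(0, 20)] * 3
for method in ('halton', 'sobol'):
    # The matrix constraint is handed to the local minimizer (SLSQP by default)
    result = shgo(rosen, bounds,
                  sampling_method=method,
                  minimizer_kwargs={'constraints': lin_con})
    print(method, result.x, result.fun, result.nfev)

Note that lb broadcasts across both rows of A, so a single scalar covers both constraints.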
But repeated runs produce iteration counts all over the place; these methods are non-deterministic. Given that the inter-method performance is a wash, I recommend the matrix-based approach with either sobol or halton. You ask (in the since-deleted part of the question):
Correct-ish, but if there's no difference in performance between your callable constraints and matrix constraints, matrix constraints should be preferred: they're inherently simpler for an optimizer and imply the Jacobian with no extra work.
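To make the Jacobian point concrete: with callable dicts you would have to supply the derivatives yourself via the standard 'jac' key, whereas the A matrix of a LinearConstraint already is the Jacobian. A sketch, assuming the three-variable problem from the question:

import numpy as np

def constraint_grad(i, n=3):
    # Gradient of x[i+1] - x[i] - 0.1: -1 at position i, +1 at position i+1
    g = np.zeros(n)
    g[i] = -1.0
    g[i + 1] = 1.0
    return g

# Callable style: 'fun' and 'jac' both spelled out by hand
constraints = [{'type': 'ineq',
                'fun': lambda x, i=i: x[i + 1] - x[i] - 0.1,
                'jac': lambda x, i=i: constraint_grad(i)}
               for i in range(2)]
# Matrix style: the rows of A above are exactly these gradients, for free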
No, it doesn't fail; it does exactly what you tell it to.
No, it doesn't. Re-examine your math, and invert the coefficient signs for your decision variables.
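On the sign convention: x[i+1] - x[i] >= 0.1 puts -1 at column i and +1 at column i+1 in each row of A, not the reverse. A quick sanity check, reusing A from the sketch above against the feasible point from the question:

import numpy as np
from scipy.optimize import LinearConstraint

A = np.array([[-1, 1, 0],
              [0, -1, 1]])
lin_con = LinearConstraint(A, lb=0.1, ub=np.inf)
x = np.array([0.1, 0.21, 0.32])
print(A @ x)  # [0.11 0.11], both >= 0.1, so x is feasible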