I'm running into some situations where it seems like Gekko is getting stuck in local maxima, and I was wondering what approaches could be used to get around this or to dig deeper into the cause (the settings I'm using are shown below).
For example, running the scenario below yields an objective of "-5127.34945104756"
m = GEKKO(remote=False)
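# x8 and simu_total_volume are defined earlier in the full model (not shown here)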
m.options.NODES = 3
m.options.IMODE = 3
m.options.MAX_ITER = 1000
m.options.SOLVER=1
#Limit max lnuc weeks
m.Equation(sum(x8)<=6)
m.Maximize(m.sum(simu_total_volume))
m.solve(disp = True)
#Objective : -5127.34945104756
Now if I simply change "m.Equation(sum(x8)<=6)" to "m.Equation(sum(x8)==6)", it returns a better solution (-5638.55528892101):
m = GEKKO(remote=False)
m.options.NODES = 3
m.options.IMODE = 3
m.options.MAX_ITER = 1000
m.options.SOLVER=1
#Limit max lnuc weeks
m.Equation(sum(x8)==6)
m.Maximize(m.sum(simu_total_volume))
m.solve(disp = True)
# Objective : -5638.55528892101
Given that "6" falls in the range of <=6, is there a reason why Gekko wouldn't try to go all the way up to 6 here? Posting the full code/values would also be difficult given size/scale of the problem, so appreciate any feedback based on this.
Gekko solvers are gradient-based Nonlinear Programming (NLP) solvers that find local minima. There are a few strategies to help Gekko find the global optimum.
Here is an example that can help with this important topic of local vs. global minima. The following script produces the local (not global) solution of (7,0,0) with objective 951.0.
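The script itself is not reproduced above, so here is a sketch of what is likely the referenced benchmark from the APMonitor Global Optimization page (the objective, constraints, and the (2,2,2) starting point are assumptions); from this initial guess the gradient-based solver stops at the local solution.

from gekko import GEKKO
# Nonconvex benchmark with one equality and one inequality constraint
m = GEKKO(remote=False)
x = m.Array(m.Var, 3, lb=0)
x1, x2, x3 = x
# Initial guess that leads the solver to the local (not global) optimum
x1.value = 2; x2.value = 2; x3.value = 2
m.Minimize(1000 - x1**2 - 2*x2**2 - x3**2 - x1*x2 - x1*x3)
m.Equation(8*x1 + 14*x2 + 7*x3 == 56)
m.Equation(x1**2 + x2**2 + x3**2 >= 25)
m.options.SOLVER = 1  # APOPT, as in the question
m.solve(disp=False)
print('x:', [xi.value[0] for xi in x])
print('Objective:', m.options.OBJFCNVAL)
# Reports x = (7, 0, 0) with objective 951.0; the global minimum
# for this problem is x = (0, 0, 8) with objective 936.0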
There are dedicated global optimization approaches, such as deterministic solvers like BARON and stochastic, derivative-free methods like genetic algorithms and simulated annealing. An easier approach is a multi-start method: solve from different initial conditions (guesses), either over a grid search or, when the number of initial guesses needs to be kept small, more intelligently with a Bayesian approach.
Multi-Start with Parallel Threading
A grid search is easy to parallelize so that the optimization starts simultaneously from multiple locations. Here is the same optimization problem where the global solution is found with parallelized Gekko optimizations.
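The parallelized script is not shown above; the following sketch (the thread function, the grid of guesses, and the error handling are my own choices) illustrates the idea on the same benchmark.

import numpy as np
from threading import Thread
from gekko import GEKKO

def solve_from(guess, results, i):
    # Build and solve an independent copy of the model from one initial guess
    m = GEKKO(remote=False)
    x = m.Array(m.Var, 3, lb=0)
    for xi, g in zip(x, guess):
        xi.value = g
    x1, x2, x3 = x
    m.Minimize(1000 - x1**2 - 2*x2**2 - x3**2 - x1*x2 - x1*x3)
    m.Equation(8*x1 + 14*x2 + 7*x3 == 56)
    m.Equation(x1**2 + x2**2 + x3**2 >= 25)
    m.options.SOLVER = 1
    try:
        m.solve(disp=False)
        results[i] = (m.options.OBJFCNVAL, [xi.value[0] for xi in x])
    except Exception:
        results[i] = (np.inf, None)  # solver failed from this start

# Coarse grid of initial guesses; refine the grid as needed
guesses = [(a, b, c) for a in (0, 8) for b in (0, 8) for c in (0, 8)]
results = [None] * len(guesses)
threads = [Thread(target=solve_from, args=(g, results, i))
           for i, g in enumerate(guesses)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The best of the multi-start solutions recovers the global optimum
best = min(results, key=lambda r: r[0])
print('Best objective:', best[0], 'at x =', best[1])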
Bayesian Optimization
Another approach is to search intelligently by building a surrogate model that maps the initial conditions to the performance of the optimized solution. The search then focuses on regions where the best performance is expected or where few points have been evaluated and the uncertainty is high.
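The Bayesian-optimization script is also not shown above; the sketch below uses the hyperopt package to propose initial guesses for the same benchmark (the choice of hyperopt, the search-space bounds, and max_evals are assumptions).

from gekko import GEKKO
from hyperopt import fmin, tpe, hp, STATUS_OK, STATUS_FAIL

def objective(params):
    # Solve the benchmark from the initial guess proposed by hyperopt
    m = GEKKO(remote=False)
    x = m.Array(m.Var, 3, lb=0)
    x1, x2, x3 = x
    x1.value = params['x1']; x2.value = params['x2']; x3.value = params['x3']
    m.Minimize(1000 - x1**2 - 2*x2**2 - x3**2 - x1*x2 - x1*x3)
    m.Equation(8*x1 + 14*x2 + 7*x3 == 56)
    m.Equation(x1**2 + x2**2 + x3**2 >= 25)
    m.options.SOLVER = 1
    try:
        m.solve(disp=False)
    except Exception:
        return {'status': STATUS_FAIL}  # solver failed from this start
    return {'loss': m.options.OBJFCNVAL, 'status': STATUS_OK}

# Search space for the initial guesses (bounds are assumptions)
space = {'x1': hp.uniform('x1', 0, 8),
         'x2': hp.uniform('x2', 0, 8),
         'x3': hp.uniform('x3', 0, 8)}

best = fmin(objective, space, algo=tpe.suggest, max_evals=25)
print('Best initial guess found:', best)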
Both multi-start methods find the global solution of (0, 0, 8) with objective 936.0 for this example.
If you determine that the inequality constraint is always active at the global optimum, you could also switch to an equality constraint to enforce the solution onto the constraint boundary.
Additional information on these multi-start methods is in the Engineering Optimization course on the page for Global Optimization and Solver Tuning.