Parallelization of the SciPy shgo optimizer


I am trying to optimize eight parameters of my model, but I am struggling with the slow speed of the SciPy shgo optimizer.

from scipy.optimize import shgo

opt = shgo( objective_function,                     # per-call callee, dominates runtime
            bounds           = bnds,                # static bounds
            iters            = 2,
            minimizer_kwargs = { 'method':  'SLSQP',            # ~O(n^3) in time
                                 'options': { 'ftol': 1e-3 }    # tolerance under 'options',
                                 }                              # where scipy.optimize.minimize expects it
            )

How can I parallelize the SciPy shgo optimizer?

2 Answers

Answer from user3666197:

How can I parallelize the SciPy shgo optimizer?

Generations of SciPy developers have done their best to design as many optimisation tricks as possible into the internals of this FORTRAN-rooted library, so one would indeed have to be a super-advanced architect to improve on an already excellent product. That does not say it cannot be done, yet it warns that one would have to be very good at trying to do so.

What to do with this?

a)
we can always check whether the most expensive part can be made to run significantly faster (here that is the per-call callee, the passed objective_function()).

If skills, RAM and some smart, CPU-register- and cache-line-friendly vectorisation tricks permit, this helps in every case, sometimes a lot, as sketched below.
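As an illustration only, assuming hypothetically that the model reduces to NumPy array operations over some measured data (the arrays and the two-parameter objective below are invented for the sketch, not the asker's actual eight-parameter model), a vectorised evaluation can replace a Python-level loop:

import numpy as np

x_data = np.linspace( 0., 1., 10000 )             # assumed measurement grid
y_data = np.sin( 3. * x_data )                    # assumed measured values

def objective_slow( p ):                          # Python-loop version: ~10k interpreter iterations per call
    s = 0.
    for x, y in zip( x_data, y_data ):
        s += ( p[0] * np.sin( p[1] * x ) - y )**2
    return s

def objective_fast( p ):                          # vectorised version: one compiled NumPy pass per call
    return np.sum( ( p[0] * np.sin( p[1] * x_data ) - y_data )**2 )

The same trick applies to any objective whose inner loop can be pushed down into compiled NumPy code.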

Tweaking the default value of eps and other method-specific hyper-parameters might also help in smooth-model cases, if you still insist on keeping Sequential Least Squares (SLSQP) as the solver's driving method.
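For instance (a sketch only; the tolerance values are illustrative, not recommendations), scipy.optimize.minimize routes SLSQP's ftol and eps (the finite-difference step used to approximate the Jacobian) through its options dict, so shgo can receive them via minimizer_kwargs:

from scipy.optimize import shgo

# objective_function and bnds as defined in the question above
opt = shgo( objective_function,
            bounds           = bnds,
            iters            = 2,
            minimizer_kwargs = { 'method':  'SLSQP',
                                 'options': { 'ftol': 1e-2,   # looser stop-tolerance
                                              'eps':  1e-4    # larger FD-step for the Jacobian
                                              }
                                 }
            )

A looser ftol and a larger eps both reduce the number of objective_function() calls per local minimisation, at the price of a coarser optimum.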

b)
we can opt for a less expensive minimiser method: the chosen SLSQP one is both expensive and (IIRC) unable to use sparse-matrix representations of the data (should those enter your use-case). With roughly O(n^2) scaling in space and O(n^3) in time for n dimensions, it becomes impractical for jobs at a scale of more than a few thousand dimensions. A sketch of a cheaper swap follows.
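A sketch of such a swap, assuming the problem carries only the box bounds and no nonlinear constraints (SLSQP handles those, L-BFGS-B does not):

from scipy.optimize import shgo

# objective_function and bnds as defined in the question above
opt = shgo( objective_function,
            bounds           = bnds,
            iters            = 2,
            minimizer_kwargs = { 'method':  'L-BFGS-B',       # limited-memory quasi-Newton, cheaper per step
                                 'options': { 'ftol': 1e-3 }
                                 }
            )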

c)
we may analyse, if the problem and other conditions permit, whether the global optimisation can be run as many split cases at a lower dimensionality of the problem-parameter vector space: find sub-space optima first, then re-run the most promising of them as starters for a full-scale, all-dimension global optimisation, hopefully faster than letting the same evolve without those many (faster) sub-space hints. Here only our time, resources and imagination are the limit; a rough sketch follows.
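A rough decomposition sketch, purely illustrative: it assumes the eight parameters split into two loosely coupled groups of four (an assumption only the model's owner can verify), freezes the out-of-group parameters at an invented mid-point start, and polishes the combined result at full scale afterwards:

from scipy.optimize import shgo, minimize
import numpy as np

# objective_function and bnds ( eight ( lo, hi ) pairs ) as in the question above
best = np.array( [ ( lo + hi ) / 2. for lo, hi in bnds ] )    # assumed mid-point start

def sub_objective( p_sub, idx ):                  # out-of-group parameters stay frozen at 'best'
    p = best.copy()
    p[list( idx )] = p_sub
    return objective_function( p )

for idx in ( ( 0, 1, 2, 3 ), ( 4, 5, 6, 7 ) ):    # two assumed sub-spaces
    res = shgo( sub_objective,
                bounds = [ bnds[i] for i in idx ],
                args   = ( idx, ),
                iters  = 2 )
    best[list( idx )] = res.x                      # keep the sub-space optimum as a hint

# full-scale, all-dimension polish, started from the sub-space hints
opt = minimize( objective_function, best, bounds = bnds, method = 'SLSQP' )

Whether this beats a single full-dimensional shgo run depends entirely on how separable the model really is.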

Answer from Andrew Nelson:

Use the workers keyword for parallelisation.
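A minimal sketch of that suggestion; it assumes a SciPy recent enough to have the keyword (workers was added to shgo in SciPy 1.11) and an objective_function that is picklable, since worker processes may be used:

from scipy.optimize import shgo

# objective_function and bnds as defined in the question above
opt = shgo( objective_function,
            bounds           = bnds,
            iters            = 2,
            minimizer_kwargs = { 'method':  'SLSQP',
                                 'options': { 'ftol': 1e-3 }
                                 },
            workers          = -1                 # -1 uses all available CPU cores
            )

workers also accepts a map-like callable (e.g. multiprocessing.Pool( 4 ).map) if finer control over the pool is wanted.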