I am computing the posterior probability of some parameters θ with a likelihood function that is slow to evaluate, and I wonder whether there is a way to speed things up.
The slow part of my code is the chi-squared computation, which involves an integral that must be computed for every sample in z_sn.
import numpy as np
import scipy.constants as cte
from scipy.integrate import quad

def luminosity_integrand(z, omgM):
    # inverse of the dimensionless Hubble rate E(z) for a flat universe
    Ez = np.sqrt((1 - omgM) + omgM * np.power(1 + z, 3))
    return 1. / Ez

def luminosity_distance(z, h, omgM):
    # args must be a tuple, hence the trailing comma in (omgM,)
    integral, _ = quad(luminosity_integrand, 0, z, epsrel=1e-8, args=(omgM,))
    # cte.c / 10**5 is the speed of light in units of 100 km/s, so c / (100 h) is in Mpc
    return (cte.c / 10. ** 5) / h * (1 + z) * integral

def distance_modulus(z, h, omgM):
    return 5. * np.log10(luminosity_distance(z, h, omgM)) + 25.

def chisq_sn(h, omgM):
    # z_sn, m_obs and inv_cov_plus are the data arrays, defined elsewhere
    m_model = np.array([distance_modulus(z, h, omgM) for z in z_sn])
    diffs = m_obs - m_model
    maha_distance = np.dot(np.dot(diffs, inv_cov_plus), diffs)  # Mahalanobis distance
    return maha_distance
I have used a list comprehension, but I don't know whether it is the fastest route. I have also seen some comments suggesting Cython, but I have never used it. I am using emcee to sample the posterior, in case it matters.
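For context, the chi-squared enters the sampler through a log-probability function along these lines (a sketch with hypothetical flat priors and walker initialisation, for illustration only; my actual setup differs):

import emcee
import numpy as np

def log_prob(theta):
    h, omgM = theta
    # hypothetical flat priors, for illustration only
    if not (0.5 < h < 1.0 and 0.0 < omgM < 1.0):
        return -np.inf
    return -0.5 * chisq_sn(h, omgM)

ndim, nwalkers = 2, 32
p0 = np.random.uniform([0.6, 0.2], [0.8, 0.4], size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000)

Every call to log_prob triggers len(z_sn) separate quad calls, which is where the time goes.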
If you don't mind using a private function (which will not necessarily be available in future versions of SciPy and is not officially supported), there is a vectorized integrator in SciPy 1.12.0 that gives a 10x speedup.
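Here is a minimal sketch, assuming the integrator in question is the private helper scipy.integrate._tanhsinh._tanhsinh (its import path and signature may change between releases; a public equivalent was added to scipy.integrate in later versions). Because it broadcasts over array-valued integration limits, all the integrals over z_sn are evaluated in a single call rather than one quad call per supernova:

import numpy as np
import scipy.constants as cte
from scipy.integrate._tanhsinh import _tanhsinh  # private API: may move or change

def distance_moduli(z, h, omgM):
    # the upper limit can be the whole z_sn array: _tanhsinh broadcasts over it,
    # evaluating every integral from 0 to z_i in one vectorized call
    res = _tanhsinh(luminosity_integrand, 0, z, args=(omgM,), rtol=1e-8)
    d_L = (cte.c / 10. ** 5) / h * (1 + z) * res.integral
    return 5. * np.log10(d_L) + 25.

def chisq_sn(h, omgM):
    diffs = m_obs - distance_moduli(z_sn, h, omgM)
    return diffs @ inv_cov_plus @ diffs

This works without touching the integrand, because luminosity_integrand only uses NumPy ufuncs and therefore already accepts array arguments.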