I am not proficient in statistics. Following this thread, How to compute standard deviation errors with scipy.optimize.least_squares, I computed the standard deviation of my fitted parameters as
s_sq = res_lsq.fun.T @ res_lsq.fun / (res_lsq.fun.size - res_lsq.x.size)
std = np.sqrt(np.diagonal(np.linalg.inv(J.T @ J) * s_sq))
where
J = res_lsq.jac
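For context, here is a minimal self-contained sketch of that computation; the exponential model, the synthetic data, and every name other than res_lsq and J (residuals, x, y, dof, s_sq, cov) are my own illustrative assumptions, not something prescribed by scipy:

import numpy as np
from scipy.optimize import least_squares

# Synthetic data from a noisy exponential (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

def residuals(p, x, y):
    # Residuals of the model p[0] * exp(p[1] * x) against the data
    return p[0] * np.exp(p[1] * x) - y

res_lsq = least_squares(residuals, x0=[1.0, -1.0], args=(x, y))

J = res_lsq.jac                           # Jacobian at the solution, shape (N, p)
dof = res_lsq.fun.size - res_lsq.x.size   # N data points minus p parameters
s_sq = res_lsq.fun.T @ res_lsq.fun / dof  # residual variance estimate
cov = np.linalg.inv(J.T @ J) * s_sq       # parameter covariance matrix
std = np.sqrt(np.diagonal(cov))           # per-parameter standard deviation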
My question now is: how do I go from this standard deviation to the standard error? I know SE = STD / sqrt(N), but what is N in this case? Is it simply the number of data points, i.e.
res_lsq.fun.size
?
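To make the question concrete, this is what that would look like; whether res_lsq.fun.size is the right N here is exactly what I am unsure about:

se = std / np.sqrt(res_lsq.fun.size)  # assumes N = number of data points, which may be wrong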
Thanks!