I'm trying to estimate the total number of floating-point operations (FLOPs) performed by a neural network.
My problem is the following: the network uses a sigmoid activation, and I don't know how to count the FLOPs of the exponential function inside it. I'm using TensorFlow, which I believe relies on NumPy for the exp function.
I tried to dig into the NumPy code but couldn't find the implementation. I saw some questions here about fast implementations of the exponential, but they don't really help.
My guess is that it uses a Taylor series or a Chebyshev approximation.
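To make the question concrete, here is an illustrative sketch of what counting FLOPs for a truncated Taylor series of exp would look like. This is purely my own toy model, not what NumPy or any libm actually does (real implementations typically use range reduction plus a short polynomial):

```python
import math

def exp_taylor(x, terms=12):
    """Approximate e**x with a truncated Taylor series and count FLOPs.

    Illustrative toy model only; production exp() implementations
    use range reduction and minimax polynomials instead.
    """
    result = 1.0
    term = 1.0
    flops = 0
    for n in range(1, terms):
        term *= x / n      # 1 divide + 1 multiply -> 2 FLOPs
        result += term     # 1 add                 -> 1 FLOP
        flops += 3
    return result, flops

approx, flops = exp_taylor(1.0)
# 11 loop iterations at 3 FLOPs each -> flops == 33
```

Under this toy model, a ~12-term expansion costs on the order of a few dozen FLOPs per call, which is why a standardized per-exp FLOP count is hard to pin down: it depends entirely on the approximation used.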
Do you have any clue about this? And if so, an estimate of the FLOP count? I also tried to find references on Google, but nothing really standardized.
Thank you a lot for your answers.
I looked into it for a bit, and what I found is that NumPy indeed uses the C implementation, as seen here.
TensorFlow, though, doesn't use the NumPy implementation. Instead, it uses the `scalar_logistic_op` functor from the C++ library Eigen. The source for that can be found here.
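Whatever the underlying exp implementation, for FLOP accounting you usually just pick a cost model for exp and add the surrounding arithmetic of the sigmoid. A minimal sketch, where `exp_cost` is an assumed per-call FLOP cost for exp (many profilers simply count it as 1 op):

```python
def sigmoid_flops(exp_cost):
    """Rough FLOP count for sigmoid(x) = 1 / (1 + exp(-x)).

    `exp_cost` is an assumption you choose for the exponential;
    the remaining ops are: negate (1) + add (1) + divide (1).
    """
    return exp_cost + 3

# Example: modeling exp as ~20 FLOPs gives 23 FLOPs per sigmoid.
print(sigmoid_flops(20))
```

The choice of `exp_cost` dominates the result, so it's worth stating it explicitly whenever you report FLOP totals for a network with sigmoid activations.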