C++: Is there an objective, universal way to compare the speed of iterative algorithms?


I have a couple of iterative algorithms written in C++ to solve the same problem. When running these algorithms on my machine for very large input sets, it is quite easy to classify them based on the time each algorithm takes for the given input set.

But these runtimes depend on my machine and on how many background processes were running at the time; they don't seem to be reproducible.

Is there any objective universal metric to quantify the run-time and speed of these algorithms? I want to find a method that gives the same results irrespective of the machine it is run on, something that is reproducible.

I was looking at ways to compute the number of CPU cycles, but this looks hard to do on modern microprocessors.

There are two main functions in each of the algorithms I mentioned: a search function and a computation function. Would it be acceptable if I manually counted the number of operations performed in both of these functions and used that count as a metric to compare my algorithms?
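To make that concrete, here is a minimal sketch of the kind of counting I have in mind; the `search`/`compute` bodies below are just placeholders for my actual functions, and the counter placement is only illustrative:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical instrumentation: every "basic operation" (comparison,
// arithmetic step, ...) bumps a counter, giving a machine-independent count.
struct OpCounter {
    std::size_t ops = 0;
};

// Placeholder search function: linear search with counted comparisons.
long long search(const std::vector<long long>& data, long long key, OpCounter& c) {
    for (std::size_t i = 0; i < data.size(); ++i) {
        ++c.ops;                    // one comparison counted per iteration
        if (data[i] == key) return static_cast<long long>(i);
    }
    return -1;
}

// Placeholder computation function: counted additions.
long long compute(const std::vector<long long>& data, OpCounter& c) {
    long long sum = 0;
    for (long long v : data) {
        ++c.ops;                    // one addition counted per element
        sum += v;
    }
    return sum;
}

int main() {
    std::vector<long long> data(1'000'000, 1);
    OpCounter c;
    search(data, -1, c);            // worst case: key not present
    compute(data, c);
    std::cout << "basic operations: " << c.ops << '\n';
}
```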


1 Answer


How precise and reliable of an answer do you need, and how much work are you willing to invest into finding it?

  • If you just want to run a benchmark and avoid it being affected by other processes on your system, you can use std::clock, which only counts the CPU time consumed by your own process (a minimal example follows this list). Such benchmarks should be part of every optimization effort anyway, since false assumptions or unfortunate approximations in your theoretical analysis might lead to code that performs worse in practice.
  • If you want something more reliable (less dependent on your machine and on concrete inputs), you could try to determine your algorithms’ asymptotic time complexities, or look them up in the literature. (In my experience, this topic is taught to CS students very early on; introductory books on computer science cover it as well.)
  • If you need maximum precision, yes, you could theoretically try to estimate the minimum/maximum/average number of CPU cycles each algorithm would need. Simply counting the number of instructions gives imperfect results, because different instructions take different numbers of cycles (division typically takes longer than addition; consult your CPU’s manual), and the cycle count can also vary with the inputs (1 / 1 might still be fast). You would then also have to account for the fact that your CPU can run multiple instructions in parallel, for branch prediction, and possibly for other things. So in all likelihood you would have to use approximations to keep it feasible, and even then it would be quite a hassle; only in extremely runtime-critical code is this worth the effort (a rough cycle-counting sketch follows the list as well). Here is another question on this subject.
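To illustrate the first point, here is a minimal std::clock sketch; the accumulation loop is just a stand-in for your own search/computation functions:

```cpp
#include <ctime>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> input(10'000'000, 1.0);  // stand-in for your large input set

    std::clock_t start = std::clock();           // processor time used by this process
    double result = std::accumulate(input.begin(), input.end(), 0.0);
    std::clock_t end = std::clock();

    double cpu_seconds = static_cast<double>(end - start) / CLOCKS_PER_SEC;
    std::cout << "result = " << result
              << ", CPU time = " << cpu_seconds << " s\n";
}
```

Run it several times and take the minimum or median to smooth out the remaining noise.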
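And to show what the cycle-counting route looks like in practice, here is a rough sketch that reads the x86 time-stamp counter via __rdtsc (this assumes an x86/x86-64 CPU and GCC or Clang; the raw numbers will still fluctuate between runs for the reasons above):

```cpp
#include <cstdint>
#include <iostream>
#include <numeric>
#include <vector>
#include <x86intrin.h>   // __rdtsc; assumes x86/x86-64 with GCC or Clang

int main() {
    std::vector<std::uint64_t> input(1'000'000, 1);

    std::uint64_t start = __rdtsc();             // read time-stamp counter
    std::uint64_t sum = std::accumulate(input.begin(), input.end(), std::uint64_t{0});
    std::uint64_t end = __rdtsc();

    // These are reference cycles, affected by frequency scaling,
    // out-of-order execution and interrupts, so treat them as an estimate.
    std::cout << "sum = " << sum
              << ", ~cycles = " << (end - start) << '\n';
}
```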

An approach that is probably sufficient for most applications would be to choose the algorithm with the best asymptotic complexity and let compiler optimizations handle the rest.