What is the best way to profile a group of processes on Linux for their time to completion?


I am investigating a case where a test consisting of multiple git checkouts and compilation passes, all run concurrently, takes much longer to complete on a seemingly more powerful host.

Before drilling down into what the individual processes might be doing slower, I want a high-level view. Namely, for each process started and finished by the test (git, gcc, ld, etc.), I want to see the time it took to complete. I do not want any details of what a process was doing internally (yet), but having the command-line arguments passed to it at startup would be useful.

I know how to use perf for profiling a single process, but I am not sure how to approach this more system-wide and coarse-grained kind of profiling. Is there a way to instruct perf, or any other tool, to trace the execution times of programs rather than the internals of what they do?
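To make the goal concrete, the kind of per-process report I have in mind could in principle be produced by wrapping each tool in a timing shim placed earlier in `$PATH`. This is only a minimal sketch of the idea, not a real perf feature; the `run_timed` helper, the log path, and the use of `/bin/true` as a stand-in for a real tool like gcc are all my own assumptions:

```shell
#!/bin/sh
# Hypothetical PATH-shim sketch: for each tool the test spawns (git, gcc,
# ld, ...), a wrapper logs the command line and wall-clock duration, then
# runs the real binary. /bin/true stands in for a real tool here.

LOG="${PROC_TIME_LOG:-/tmp/proc-times.log}"

run_timed() {
    real="$1"; shift
    start=$(date +%s.%N)
    "$real" "$@"
    rc=$?
    end=$(date +%s.%N)
    # one line per finished process: duration in seconds, binary, arguments
    awk -v s="$start" -v e="$end" -v cmd="$real $*" \
        'BEGIN { printf "%.3f %s\n", e - s, cmd }' >> "$LOG"
    return $rc
}

run_timed /bin/true --version
cat "$LOG"
```

Each line of the log would be directly diffable between two hosts, which is essentially the report I am after; I am hoping an existing tool already does this without having to shim every binary by hand.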

I have another "reference" host that runs the same test much faster. If there is a method that generates a report which can be compared directly across the two systems, that would be ideal.
