I am using gprof to measure the time spent in each function during the execution of my program.
Last week I noticed that when CPU usage reached 100% (I was loading the CPU with the "stress" tool: http://weather.ou.edu/~apw/projects/stress/), the program could not even start!
I have read the thread and Mike Dunlavey's response:
What about problems that are not so localized? Do those not matter? Don't place expectations on gprof that were never claimed for it. It is only a measurement tool, and only of CPU-bound operations.
and also Norman Ramsey's response that had the high score :
Valgrind has an instruction-count profiler with a very nice visualizer called KCacheGrind. As Mike Dunlavey recommends, Valgrind counts the fraction of instructions for which a procedure is live on the stack, although I'm sorry to say it appears to become confused in the presence of mutual recursion. But the visualizer is very nice and light years ahead of gprof.
But since that thread was closed as non-constructive, I was wondering whether this is the right direction to follow.
Thanks in advance
P.S. When searching Google with queries like "why gprof doesn't work when cpu reach 100 %", I didn't find anything relevant.
All that 100% means is it's hung, and it's not doing I/O.
You're saying the program hangs when you run it with gprof, but not if you don't?
That's weird, but I wouldn't bother trying to figure it out.
As I've said over and over, I would just grab several stack samples manually. Then the percent of time used by any routine is just the fraction of samples it appears on, more or less. If you think you need high-precision measurements, try a stack-sampler like Zoom or OProfile.
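The estimate described above can be sketched in a few lines. This is a minimal illustration, not a real profiler: the sample data and function names are made up, and each list stands for one manually grabbed stack snapshot.

```python
# Hypothetical stack samples: each inner list is one snapshot of the call
# stack, outermost frame first. The data here is invented for illustration.
samples = [
    ["main", "parse", "read_token"],
    ["main", "parse"],
    ["main", "compute", "inner_loop"],
    ["main", "compute", "inner_loop"],
    ["main", "compute"],
]

def fraction_on_stack(routine, samples):
    """Fraction of samples in which `routine` appears anywhere on the stack.

    This fraction estimates the share of wall time for which the routine
    was live (itself running, or waiting on a callee).
    """
    hits = sum(1 for stack in samples if routine in stack)
    return hits / len(samples)

print(fraction_on_stack("compute", samples))     # on 3 of 5 stacks -> 0.6
print(fraction_on_stack("inner_loop", samples))  # on 2 of 5 stacks -> 0.4
```

The point of the technique is exactly this: no timers, no instrumentation, just counting how often a routine shows up on the stack.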