I have some profile files from gperftools. The first run took about 2 minutes on an 18M file; the others took about 2 hours each on files of about 800M.

When I use pprof --text to get the reports, I found that the first run has 1300 samples, but the 2-hour runs have only about 5500 samples. I expected the larger runs to have roughly 2*3600*100 = 720,000 samples, because by default gperftools takes 100 samples per second.

It is the same program and the same operating environment, so why are there so few samples?
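For reference, this is roughly how I collect and read the profiles (the program and file names here are placeholders, and the libprofiler.so path may differ on your system):

    # run under the gperftools CPU profiler, then print the flat report
    CPUPROFILE=run1.prof LD_PRELOAD=/usr/lib/libprofiler.so ./myprog small_input   # ~2 min, 18M file
    CPUPROFILE=run2.prof LD_PRELOAD=/usr/lib/libprofiler.so ./myprog big_input     # ~2 h, 800M file
    pprof --text ./myprog run1.prof   # shows ~1300 samples
    pprof --text ./myprog run2.prof   # shows ~5500 samples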
It looks like the job is I/O bound. The gperftools CPU profiler only takes a sample while your process is actually running on the CPU; time spent blocked on I/O produces no samples. So in the 120-second job you got 1300 samples, i.e. about 13 seconds of CPU time, and in the 120-minute job you got 5500 samples, i.e. only about 55 seconds of CPU time. The fraction of time spent computing versus waiting on I/O can vary pretty widely between runs, especially if there is some constant startup overhead.
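A quick way to convince yourself of this, if you want: profile a toy program that mostly sleeps. This is only a sketch and assumes gperftools (libprofiler and pprof) is installed; because the profiler ticks on CPU time rather than wall-clock time, the sleep contributes no samples at all:

    # a program that sleeps 10 s, then spins for a few CPU-seconds
    cat > toy.c <<'EOF'
    #include <unistd.h>
    int main(void) {
        sleep(10);                                  /* blocked: no profiler ticks */
        for (volatile long long i = 0; i < 2000000000LL; i++)
            ;                                       /* CPU-bound spin */
        return 0;
    }
    EOF
    cc toy.c -o toy -lprofiler        # linking libprofiler enables CPUPROFILE
    CPUPROFILE=toy.prof ./toy
    pprof --text ./toy toy.prof       # total samples ~ 100 per CPU-second; the sleep is invisible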
If the runtime were roughly linear in file size, the big job should take about (800/18) × 2 ≈ 90 minutes, not 120, so I would do some manual sampling on the big job (sketched below) to see what it is actually spending its time on.
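By manual sampling I mean just grabbing a few stack snapshots of the live process at random moments: if most of them land in read/write or other I/O calls, that confirms the diagnosis. A rough sketch with gdb (the pid below is a placeholder):

    PID=12345   # replace with the pid of the running big job
    for i in 1 2 3 4 5; do
        # attach, print the stack, detach; each snapshot pauses the job only briefly
        gdb -p "$PID" -batch -ex 'bt' 2>/dev/null
        sleep 60
    done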