I am trying to learn how to use the PyTorch profiler API to measure the difference in performance when training a model with different methods. In the dedicated tutorial, there is one part showing how to do exactly that using the "schedule" parameter of the profiler.
My problem is that when I use it in my code, each call to step() during the first "wait" steps prints a warning:
[W kineto_shim.cpp:337] Profiler is not initialized: skipping step() invocation
Since I want my profiler to sleep most of the time, my "wait" value is quite high, so it pollutes my terminal with a bunch of those lines until the profiler actually runs for the first time.
How can I get rid of it?
Here's a minimal code sample that reproduces the problem:
import torch
from torch.profiler import profile, record_function, ProfilerActivity

with profile(
    activities=[torch.profiler.ProfilerActivity.CUDA],
    # 15 idle steps, then 1 warmup step and 4 recorded steps
    schedule=torch.profiler.schedule(wait=15, warmup=1, active=4),
    profile_memory=False,
    record_shapes=True,
    with_stack=True,
) as prof:
    for _ in range(20):
        y = torch.randn(1).cuda() + torch.randn(1).cuda()
        prof.step()  # emits the warning during the first 15 (wait) steps

print(prof.key_averages())
This was recently fixed/added in a pull request: you can now set the environment variable KINETO_LOG_LEVEL.
For example, in a bash script: export KINETO_LOG_LEVEL=3
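If you would rather set it from the Python script itself, here is a minimal sketch; the assumption is that the variable has to be in the environment before torch (and with it Kineto) is first imported for the level to be picked up:

import os

# Assumption: KINETO_LOG_LEVEL is read when Kineto initializes, so it is set
# here before torch is imported.
os.environ["KINETO_LOG_LEVEL"] = "3"  # 3 = ERROR in the enum listed below

import torch
# ... rest of the profiling code as in the question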
The levels according to the source code are:
enum LoggerOutputType {
  VERBOSE = 0,
  INFO = 1,
  WARNING = 2,
  ERROR = 3,
  STAGE = 4,
  ENUM_COUNT = 5
};
That's at least how it should work; according to this issue, the changes for the log level have not been merged yet.