I have a Chainer model that sometimes crashes with `cupy.cuda.memory.OutOfMemoryError`. Since the exact place where it happens depends on the size of the elements in the batch, I was wondering: is there a way to identify memory bottlenecks in a Chainer model?
You can use `CupyMemoryProfileHook`, a function hook that reports, per function, how much GPU memory was used and how much was newly acquired from the memory pool.
Code example::

    from chainer.function_hooks import CupyMemoryProfileHook

    hook = CupyMemoryProfileHook()
    with hook:
        trainer.run()
    hook.print_report()

Output example::

    FunctionName         UsedBytes  AcquiredBytes  Occurrence
    LinearFunction          5.16GB       179.98MB        3900
    ReLU                  991.82MB       458.97MB        2600
    SoftmaxCrossEntropy     7.71MB         5.08MB        1300
    Accuracy              617.97KB       351.00KB         700
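If the hook's per-function report is not fine-grained enough, you can also query CuPy's default memory pool directly around a suspect region of code. The sketch below is an illustration under the assumption that CuPy and a CUDA-capable GPU are available; `format_bytes` and `pool_snapshot` are hypothetical helper names, not part of the CuPy API (only `cupy.get_default_memory_pool`, `used_bytes`, and `total_bytes` are real calls):

```python
def format_bytes(n):
    """Render a byte count in human-readable units, similar to the hook's report."""
    for unit in ("B", "KB", "MB", "GB"):
        if abs(n) < 1024.0 or unit == "GB":
            return f"{n:.2f}{unit}"
        n /= 1024.0

def pool_snapshot(pool):
    """Return (used, total) bytes currently held by a CuPy memory pool."""
    return pool.used_bytes(), pool.total_bytes()

if __name__ == "__main__":
    import cupy  # requires a CUDA-capable GPU

    pool = cupy.get_default_memory_pool()
    used_before, _ = pool_snapshot(pool)
    x = cupy.zeros((1024, 1024), dtype=cupy.float32)  # suspect allocation
    used_after, _ = pool_snapshot(pool)
    print("delta:", format_bytes(used_after - used_before))
```

Measuring the delta across each stage of the forward pass narrows down which part of the model drives the pool toward the out-of-memory error.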