Currently I'm using Theano for machine learning, and now I want to try out Torch.
In Theano there is an option to set flags for GPU Memory usage:
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN, device=gpu,floatX=float32,lib.cnmem=0.9"
So Theano uses the configured share of GPU memory, here 90%. In Torch, however, a similar network only shows around 30% GPU load.
Is there any way to set a higher GPU load in Torch, similar to Theano?
Torch will use as much GPU memory as it needs, based on its standard allocator.
The amount of memory Torch uses does not need to be pre-specified, unlike in your Theano example.
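For reference, here is a minimal sketch (assuming Lua Torch7 with the cutorch backend; the tensor size is made up for illustration) that shows memory being claimed on demand rather than reserved upfront:

    -- query current GPU memory usage via cutorch
    require 'torch'
    require 'cutorch'

    local devID = cutorch.getDevice()
    local freeMem, totalMem = cutorch.getMemoryUsage(devID)
    print(string.format('GPU %d: %.1f%% of memory in use',
                        devID, 100 * (totalMem - freeMem) / totalMem))

    -- allocating a tensor grows usage only by what it needs
    local x = torch.CudaTensor(4096, 4096):uniform()
    freeMem = cutorch.getMemoryUsage(devID)
    print(string.format('after allocating x: %.1f%% in use',
                        100 * (totalMem - freeMem) / totalMem))

So the ~30% you observe is simply what your network actually needs; there is no cnmem-style fraction to raise.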