Tags: gpu, huggingface-transformers, huggingface-tokenizers, gpt-3, fine-tuning

GPU out of memory when fine-tuning flan-ul2


OutOfMemoryError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 15.78 GiB total capacity; 14.99 GiB already allocated; 3.50 MiB free; 14.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I have a Standard_NC24s_v3 single-node VM with 448 GB of memory and 4 GPUs. However, the error message says the total capacity is 15.78 GiB. Is the fine-tune not using all 4 GPUs? How do I get all 4 GPUs used when fine-tuning Flan-UL2 with huggingface transformers?
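
A quick way to see what PyTorch is actually working with (this check is not part of the original post) is to list the visible devices and their per-GPU memory. On an NC24s_v3 this typically reports four ~16 GiB V100s, which matches the 15.78 GiB in the error: the allocator only reports the capacity of the single GPU it ran out of, not the whole node.

    import torch

    # Print how many GPUs PyTorch can see and the memory of each one
    print("visible GPUs:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.2f} GiB")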


Solution

  • I solved the issue by using the following package versions; a sketch of how they fit together for multi-GPU loading follows the list.

    !pip install transformers==4.28.1
    !pip install sentencepiece==0.1.97
    !pip install accelerate==0.18.0
    !pip install bitsandbytes==0.37.2
    !pip install torch==1.13.1
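
  • The answer only pins package versions, so here is a minimal sketch (not part of the original answer) of how these packages are typically combined so the model is sharded across all visible GPUs: accelerate handles device placement via device_map="auto", and bitsandbytes provides 8-bit weights so a ~20B-parameter model can fit in 4 x 16 GiB. The checkpoint name google/flan-ul2 and the prompt are placeholders.

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    model_name = "google/flan-ul2"  # assumed checkpoint; its tokenizer needs sentencepiece

    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # device_map="auto" lets accelerate shard the layers across GPUs 0-3;
    # load_in_8bit=True uses bitsandbytes to quantize the weights.
    model = AutoModelForSeq2SeqLM.from_pretrained(
        model_name,
        device_map="auto",
        load_in_8bit=True,
    )

    # Quick sanity check that the sharded model can run a forward pass
    inputs = tokenizer("Translate to German: How old are you?",
                       return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))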