
yolov4.cfg: increasing subdivisions parameter consequences


I'm trying to train a custom dataset using the Darknet framework and YOLOv4. I built my own dataset, but I get an out-of-memory message in Google Colab that also said something like "try to change subdivisions to 64". I've read up on the meaning of the main .cfg parameters such as batch, subdivisions, etc., and I understand that increasing the subdivisions number means splitting into smaller "pictures" before processing, thus avoiding the fatal "CUDA out of memory". And indeed, switching to 64 worked well. Now I couldn't find the answer to the ultimate question anywhere: are the final weights file and its accuracy "crippled" by doing this? More specifically, what are the consequences for the final result? If we put aside the training time (which will surely increase, since there are more subdivisions to process), how will the accuracy be affected?

In other words: if we use exactly the same dataset and train once with 8 subdivisions and once with 64 subdivisions, will the best_weight file be the same? And will the object detection success rate be the same or worse? Thank you.


Solution

  • First, read the comments in the .cfg file. Suppose your .cfg sets:

    • batch = 64
    • subdivisions = 8
    • Darknet divides each batch of 64 images into 64/8 = 8 mini-batches of 8 images each.
    • It then loads and processes those mini-batches one at a time, so only batch/subdivisions images have to fit in GPU memory at once. Because of low memory capacity, you can raise subdivisions (or reduce batch) until a mini-batch fits. This does nothing to the dataset images themselves; it only splits a batch that is too large to load at once into smaller pieces (see the config snippet and the sketch after this list).
    • Gradients from the mini-batches are accumulated and the weights are updated once per full batch, so the effective batch size, and therefore the final weights and detection accuracy, should be essentially the same whether you use 8 or 64 subdivisions. (The best_weight file will not be byte-identical between runs in any case, since GPU training is not deterministic.) The one caveat is batch normalization: Darknet computes its statistics per mini-batch, so subdivisions = 64 means those statistics come from a single image at a time, which can shift results slightly. The common advice is to keep subdivisions as low as your GPU memory allows.
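For concreteness, these are the two parameters to edit in the [net] section at the top of yolov4.cfg. The other values shown here are only illustrative; keep whatever your file already contains and change only batch and subdivisions:

```ini
[net]
# number of images used for one weights update (one training iteration)
batch=64
# each batch is split into batch/subdivisions mini-batches;
# raise this value until one mini-batch fits in GPU memory
subdivisions=64
width=608
height=608
```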
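To address the ultimate question directly, here is a minimal NumPy sketch, an illustration of gradient accumulation rather than actual Darknet code (the linear model, loss, and data are made up for the demo). It shows that averaging gradients over 8 mini-batches and applying a single update produces the same weight update as processing the full batch of 64 at once:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)           # toy weight vector
x = rng.normal(size=(64, 3))     # one full batch of 64 samples
y = rng.normal(size=64)
lr = 0.01

def grad(w, xb, yb):
    # gradient of 0.5 * mean((xb @ w - yb) ** 2) with respect to w
    return xb.T @ (xb @ w - yb) / len(xb)

# subdivisions = 1: one update computed on the whole batch at once
w_full = w - lr * grad(w, x, y)

# subdivisions = 8: accumulate gradients over 8 mini-batches of 8 images,
# then apply a single update, analogous to one Darknet iteration
acc = np.zeros_like(w)
for xb, yb in zip(np.split(x, 8), np.split(y, 8)):
    acc += grad(w, xb, yb) / 8   # average of the per-mini-batch gradients
w_sub = w - lr * acc

print(np.allclose(w_full, w_sub))  # True: same update, less memory at a time
```

The print shows True: the accumulated update is mathematically identical, which is why raising subdivisions trades speed and per-mini-batch statistics (such as batch norm) for memory rather than crippling the final weights.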