Previously I used an RTX 2070 SUPER to run PyTorch YOLOv4; my PC has now been upgraded to an RTX 3060 (ASUS KO GeForce RTX™ 3060 OC).
I removed the existing CUDA 11.2 and reinstalled with CUDA 11.4 and NVIDIA driver 470.57.02:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02    Driver Version: 470.57.02    CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:07:00.0 Off |                  N/A |
|  0%   42C    P8    16W / 170W |    403MiB / 12053MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1236      G   /usr/lib/xorg/Xorg                  9MiB |
|    0   N/A  N/A      1264      G   /usr/bin/gnome-shell                6MiB |
|    0   N/A  N/A      2124      C   python                            153MiB |
+-----------------------------------------------------------------------------+
However, with CUDA 11.4 and the RTX 3060 I cannot run PyTorch YOLOv4 detection. When I run detection, it hangs after loading the weights (Loading weights from ./data/people.weights... Done!). Meanwhile, nvidia-smi shows a "python" process (PID 2124 above) holding GPU memory, and its memory usage keeps increasing.
Does CUDA 11.4 not support the RTX 3060 yet, or does PyTorch 1.4 not support CUDA 11.4?
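As a quick check, you can ask PyTorch itself which CUDA version it was built against and whether it recognizes the card's architecture. A minimal diagnostic sketch (get_arch_list only exists in newer PyTorch releases, hence the hasattr guard):

import torch

# Which CUDA toolkit was this PyTorch binary compiled with?
# PyTorch 1.4 wheels were built for CUDA 10.x, which predates Ampere.
print("PyTorch:", torch.__version__)
print("Built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # An RTX 3060 reports compute capability (8, 6); the build needs sm_86.
    # Note: is_available() can print True even when kernels later hang or
    # fail because sm_86 is missing from the build.
    print("Compute capability:", torch.cuda.get_device_capability(0))
    if hasattr(torch.cuda, "get_arch_list"):  # newer PyTorch only
        print("Compiled arch list:", torch.cuda.get_arch_list())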
Environment:
ASUS KO GeForce RTX™ 3060 OC
Ubuntu 18.04.5 LTS
CUDA 11.4
NVIDIA driver 470.57.02
Conda 4.8.3
Python 3.8.5
PyTorch 1.4
Solved by reinstalling PyTorch in my Conda env. The likely cause: the RTX 3060 is an Ampere GPU (compute capability sm_86), and the CUDA 10 binaries that PyTorch 1.4 ships with were never compiled for that architecture, so kernels hang or fail to launch.
You may try reinstalling PyTorch with a CUDA 11 build, or create a new Conda environment and set it up from scratch.
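For reference, a minimal sketch of the reinstall (the environment name yolov4-cu11 is just an example; take the exact cudatoolkit version from the current instructions on pytorch.org):

conda create -n yolov4-cu11 python=3.8
conda activate yolov4-cu11
# install a PyTorch build compiled for CUDA 11.x, which includes sm_86 (Ampere)
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia

Note that the cudatoolkit conda installs is independent of the system-wide CUDA 11.4; driver 470.57.02 is new enough for either, so there is no need to touch the system CUDA installation.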