Tags: python, python-3.x, pytorch, rabbitmq, yolov7

undefined symbol: _ZN15TracebackLoggerC1EPKc, version libcudnn_ops_infer.so.8


Here is my memory, CPU, GPU, and torch version info:

MemTotal:       30794980 kB
MemFree:        26650464 kB
MemAvailable:   28247716 kB
Buffers:           73680 kB
Cached:          1840696 kB
SwapCached:            0 kB
Active:           981320 kB
Inactive:        2659044 kB
Active(anon):       1356 kB
Inactive(anon):  1736720 kB
Active(file):     979964 kB
Inactive(file):   922324 kB
Unevictable:       18600 kB
Mlocked:           18600 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                40 kB
Writeback:             0 kB
AnonPages:       1744600 kB
Mapped:           640532 kB
Shmem:              3876 kB
KReclaimable:      92008 kB
Slab:             182620 kB
SReclaimable:      92008 kB
SUnreclaim:        90612 kB
KernelStack:        8256 kB
PageTables:        19732 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    15397488 kB
Committed_AS:    9151572 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       80276 kB
VmallocChunk:          0 kB
Percpu:             6496 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      406328 kB
DirectMap2M:     7979008 kB
DirectMap1G:    25165824 kB
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Byte Order:                         Little Endian
Address sizes:                      46 bits physical, 48 bits virtual
CPU(s):                             8
On-line CPU(s) list:                0-7
Thread(s) per core:                 2
Core(s) per socket:                 4
Socket(s):                          1
NUMA node(s):                       1
Vendor ID:                          GenuineIntel
CPU family:                         6
Model:                              63
Model name:                         Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping:                           0
CPU MHz:                            2299.998
BogoMIPS:                           4599.99
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          128 KiB
L1i cache:                          128 KiB
L2 cache:                           1 MiB
L3 cache:                           45 MiB
NUMA node0 CPU(s):                  0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Mitigation; PTE Inversion
Vulnerability Mds:                  Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown:             Mitigation; PTI
Vulnerability Mmio stale data:      Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed:             Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI
                                     Syscall hardening, KVM SW loop
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2
                                     ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_kn
                                    own_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c r
                                    drand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx
                                    2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.171.04             Driver Version: 535.171.04   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |
| N/A   38C    P8               9W /  70W |    105MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
Name: torch
Version: 2.2.1
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: https://pytorch.org/
Author: PyTorch Team
Author-email: [email protected]
License: BSD-3
Location: /usr/local/lib/python3.8/dist-packages
Requires: nvidia-cufft-cu12, nvidia-cusparse-cu12, nvidia-cuda-cupti-cu12, nvidia-nccl-cu12, nvidia-nvtx-cu12, triton, filelock, fsspec, nvidia-cusolver-cu12, nvidia-cuda-nvrtc-cu12, nvidia-curand-cu12, nvidia-cublas-cu12, sympy, nvidia-cudnn-cu12, jinja2, typing-extensions, nvidia-cuda-runtime-cu12, networkx
Required-by: torchvision

Previously, it was breaking at:

from copy import deepcopy

import torch

try:  # FLOPS
    from thop import profile
    connect.loginfo("imported thop profile")
    stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32
    connect.loginfo(f"stride: {stride}")
    img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device)  # input
    connect.loginfo(f"zero torch tensor: {img}")
    flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2  # stride GFLOPS
    connect.loginfo(f"stride_flops: {flops}")
    img_size = img_size if isinstance(img_size, list) else [img_size, img_size]  # expand if int/float
    connect.loginfo(f"img_size: {img_size}")
    fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride)  # 640x640 GFLOPS
    connect.loginfo(f"640_flops: {fs}")
except Exception:
    fs = ''  # thop missing or profiling failed

Since it was breaking at `profile`, I uninstalled the thop module. The same script was previously working fine on a different server. I have since changed servers; the old server had torch version 2.2.0 with the same CUDA version.

The catch here is that I am triggering the script with RabbitMQ, and it breaks only then; if I call it directly with the same arguments, it works fine. Can anyone tell me what's wrong here? The `_model` I am using is returned by the yolov7 `attempt_load` function (models/experimental.py).
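Since the failure only appears under RabbitMQ, one thing worth checking is whether the consumer process sees the same loader environment (`LD_LIBRARY_PATH`, `PATH`, interpreter) as an interactive shell — a worker launched by a supervisor or systemd often inherits a stripped environment, which can make the dynamic linker pick up a different cuDNN. A minimal stdlib-only sketch to log this from inside the worker (the function name is just for illustration):

```python
import os
import sys


def runtime_env_snapshot():
    """Collect the loader-relevant environment of the current process.

    Run this once from the RabbitMQ consumer and once from a normal shell
    invocation, then diff the output: a mismatch in LD_LIBRARY_PATH or in
    the interpreter path can explain why the same script resolves a
    different libcudnn at runtime.
    """
    return {
        "executable": sys.executable,
        "LD_LIBRARY_PATH": os.environ.get("LD_LIBRARY_PATH", "<unset>"),
        "PATH": os.environ.get("PATH", "<unset>"),
        "VIRTUAL_ENV": os.environ.get("VIRTUAL_ENV", "<unset>"),
    }


if __name__ == "__main__":
    for key, value in runtime_env_snapshot().items():
        print(f"{key}={value}")
```

If the two snapshots differ, exporting the shell's `LD_LIBRARY_PATH` in the consumer's startup script is usually enough to make both paths load the same libraries.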


Solution

  • I hadn't actually observed the output properly; I was getting:

    /usr/bin/python3: symbol lookup error: /usr/local/lib/python3.8/dist-packages/torch/lib/../../nvidia/cudnn/lib/libcudnn_cnn_infer.so.8: undefined symbol: _ZN15TracebackLoggerC1EPKc, version libcudnn_ops_infer.so.8

    It seems the error occurs when there is a mismatch between the version of cuDNN used at compile time and the version available at runtime (I got this answer from ChatGPT).

    Downgrading torch and torchvision to torch==2.0.1 and torchvision==0.15.2 worked for me. If someone has more information, I would like to know whether this answer from ChatGPT is correct, and if not, why this error occurs.
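A common trigger for this kind of `undefined symbol ... version libcudnn_ops_infer.so.8` error is having more than one copy of the cuDNN shared libraries on the machine (for example a system-wide install shadowing the copy shipped in pip's `nvidia-cudnn-cu12` wheel), so the loader resolves `libcudnn_cnn_infer.so.8` from one install and `libcudnn_ops_infer.so.8` from another. A stdlib-only sketch to look for duplicates — the search roots are assumptions, adjust them for your system:

```python
import glob
import os


def find_cudnn_libs(search_roots=("/usr/local", "/usr/lib", os.path.expanduser("~/.local"))):
    """Glob common install roots for copies of the cuDNN inference libraries.

    Returns a sorted list of matching paths; seeing the same .so name under
    two different roots suggests a version mismatch is possible at runtime.
    """
    hits = []
    for root in search_roots:
        for pattern in ("**/libcudnn_ops_infer.so*", "**/libcudnn_cnn_infer.so*"):
            hits.extend(glob.glob(os.path.join(root, pattern), recursive=True))
    return sorted(set(hits))


if __name__ == "__main__":
    for path in find_cudnn_libs():
        print(path)
```

If duplicates show up, removing the stale copy (or cleaning `LD_LIBRARY_PATH` so only one install is visible) may fix the symbol lookup without downgrading torch.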