python, pytorch, gpu

NameError: name '_C' is not defined


I'm writing code in a Jupyter notebook. I've tried a whole bunch of stuff, but nothing succeeded. Here's my full setup and the errors:

Setup: Windows 10, python 3.9, venv

pip list:
Package                   Version
------------------------- ------------
addict                    2.4.0
annotated-types           0.6.0
anyio                     4.3.0
argon2-cffi               23.1.0
argon2-cffi-bindings      21.2.0
asttokens                 2.4.1
async-lru                 2.0.4
attrs                     23.2.0
Babel                     2.14.0
beautifulsoup4            4.12.3
bleach                    6.1.0
blis                      0.7.11
catalogue                 2.0.10
certifi                   2024.2.2
cffi                      1.16.0
charset-normalizer        3.3.2
click                     8.1.7
cloudpathlib              0.16.0
colorama                  0.4.6
comm                      0.2.1
confection                0.1.4
contourpy                 1.1.1
cycler                    0.12.1
cymem                     2.0.8
Cython                    3.0.8
dataclasses-json          0.6.4
debugpy                   1.8.1
decorator                 5.1.1
defusedxml                0.7.1
exceptiongroup            1.2.0
executing                 2.0.1
fastjsonschema            2.19.1
filelock                  3.13.1
fonttools                 4.49.0
fsspec                    2024.2.0
h11                       0.14.0
httpcore                  1.0.4
httpx                     0.27.0
huggingface-hub           0.20.3
idna                      3.6
importlib-metadata        7.0.1
importlib-resources       6.1.1
ipykernel                 6.29.2
ipython                   8.18.1
ipywidgets                8.1.2
jedi                      0.19.1
Jinja2                    3.1.3
json5                     0.9.17
jsonschema                4.21.1
jsonschema-specifications 2023.12.1
jupyter                   1.0.0
jupyter_client            8.6.0
jupyter-console           6.6.3
jupyter_core              5.7.1
jupyter-events            0.9.0
jupyter-lsp               2.2.2
jupyter_server            2.12.5
jupyter_server_terminals  0.5.2
jupyterlab                4.1.2
jupyterlab_pygments       0.3.0
jupyterlab_server         2.25.3
jupyterlab_widgets        3.0.10
kiwisolver                1.4.5
langcodes                 3.3.0
MarkupSafe                2.1.5
marshmallow               3.20.2
matplotlib                3.8.3
matplotlib-inline         0.1.6
mistune                   3.0.2
mpmath                    1.3.0
murmurhash                1.0.10
mypy-extensions           1.0.0
nbclient                  0.9.0
nbconvert                 7.16.1
nbformat                  5.9.2
nest-asyncio              1.6.0
notebook                  7.1.0
notebook_shim             0.2.4
numpy                     1.23.5
opencv-python             4.9.0.80
opencv-python-headless    4.9.0.80
overrides                 7.7.0
packaging                 23.2
pandocfilters             1.5.1
parso                     0.8.3
pickleshare               0.7.5
pillow                    10.2.0
pip                       24.0
platformdirs              4.2.0
preshed                   3.0.9
prometheus_client         0.20.0
prompt-toolkit            3.0.43
psutil                    5.9.8
pure-eval                 0.2.2
pycocotools               2.0.7
pycparser                 2.21
pydantic                  2.6.2
pydantic_core             2.16.3
Pygments                  2.17.2
pyparsing                 3.1.1
python-dateutil           2.8.2
python-json-logger        2.0.7
pywin32                   306
pywinpty                  2.0.12
PyYAML                    6.0.1
pyzmq                     25.1.2
qtconsole                 5.5.1
QtPy                      2.4.1
referencing               0.33.0
regex                     2023.12.25
requests                  2.31.0
rfc3339-validator         0.1.4
rfc3986-validator         0.1.1
rpds-py                   0.18.0
safetensors               0.3.0
scipy                     1.12.0
Send2Trash                1.8.2
setuptools                69.1.1
six                       1.16.0
smart-open                6.4.0
sniffio                   1.3.0
soupsieve                 2.5
spacy                     3.7.4
spacy-legacy              3.0.12
spacy-loggers             1.0.5
srsly                     2.4.8
stack-data                0.6.3
supervision               0.4.0
sympy                     1.12
terminado                 0.18.0
thinc                     8.2.3
timm                      0.9.16
tinycss2                  1.2.1
tokenizers                0.13.3
tomli                     2.0.1
torch                     1.9.1+cu111
torchaudio                0.9.1
torchvision               0.10.1+cu111
tornado                   6.4
tqdm                      4.66.2
traitlets                 5.14.1
transformers              4.29.2
typer                     0.9.0
typing_extensions         4.9.0
typing-inspect            0.9.0
urllib3                   2.2.1
wasabi                    1.1.2
wcwidth                   0.2.13
weasel                    0.3.4
webencodings              0.5.1
websocket-client          1.7.0
wheel                     0.42.0
widgetsnbextension        4.0.10
yapf                      0.40.2
zipp                      3.17.0

All the tests:

import torch
!nvcc --version
TORCH_VERSION = ".".join(torch.__version__.split(".")[:2])
CUDA_VERSION = torch.__version__.split("+")[-1]
print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION)

print(torch.cuda.is_available())
print(torch.cuda.device_count())
print(torch.cuda.current_device())
print(torch.cuda.device(0))
print(torch.cuda.get_device_name(0))
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Nov_30_19:15:10_Pacific_Standard_Time_2020
Cuda compilation tools, release 11.2, V11.2.67
Build cuda_11.2.r11.2/compiler.29373293_0
torch:  1.9 ; cuda:  cu111
True
1
0
<torch.cuda.device object at 0x000002454179D7C0>
NVIDIA GeForce GTX 1650 Ti

[Screenshot: my environment]

The problem that led to the NameError:

C:\Users\nikit\Всякое\MyProjects\gpu gd\GroundingDINO
C:\Users\nikit\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\ms_deform_attn.py:31: UserWarning: Failed to load custom C++ ops. Running on CPU mode Only!
  warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")
final text_encoder_type: bert-base-uncased

From:

%cd {HOME}
%cd {HOME}/GroundingDINO
from groundingdino.util.inference import Model
model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)

And the error from the title:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Cell In[6], line 7
      4 image = cv2.imread(SOURCE_IMAGE_PATH)
      5 height, width, depth = image.shape
----> 7 detections = model.predict_with_classes(
      8     image=image,
      9     classes=enhance_class_name(class_names=CLASSES_NAME),
     10     box_threshold=BOX_TRESHOLD,
     11     text_threshold=TEXT_TRESHOLD
     12 )
     14 detections = detections[detections.class_id != None]
     15 #detections = detections[detections.class_id != 'both hands']
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\util\inference.py:219, in Model.predict_with_classes(self, image, classes, box_threshold, text_threshold)
    217 caption = ". ".join(classes)
    218 processed_image = Model.preprocess_image(image_bgr=image).to(self.device)
--> 219 boxes, logits, phrases = predict(
    220     model=self.model,
    221     image=processed_image,
    222     caption=caption,
    223     box_threshold=box_threshold,
    224     text_threshold=text_threshold,
    225     device=self.device)
    226 source_h, source_w, _ = image.shape
    227 detections = Model.post_process_result(
    228     source_h=source_h,
    229     source_w=source_w,
    230     boxes=boxes,
    231     logits=logits)
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\util\inference.py:68, in predict(model, image, caption, box_threshold, text_threshold, device, remove_combined)
     65 image = image.to(device)
     67 with torch.no_grad():
---> 68     outputs = model(image[None], captions=[caption])
     70 prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0]  # prediction_logits.shape = (nq, 256)
     71 prediction_boxes = outputs["pred_boxes"].cpu()[0]  # prediction_boxes.shape = (nq, 4)
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
   1047 # If we don't have any hooks, we want to skip the rest of the logic in
   1048 # this function, and just call forward.
   1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051     return forward_call(*input, **kwargs)
   1052 # Do not call functions when jit is used
   1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\groundingdino.py:327, in GroundingDINO.forward(self, samples, targets, **kw)
    324         self.poss.append(pos_l)
    326 input_query_bbox = input_query_label = attn_mask = dn_meta = None
--> 327 hs, reference, hs_enc, ref_enc, init_box_proposal = self.transformer(
    328     srcs, masks, input_query_bbox, self.poss, input_query_label, attn_mask, text_dict
    329 )
    331 # deformable-detr-like anchor update
    332 outputs_coord_list = []
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
   1047 # If we don't have any hooks, we want to skip the rest of the logic in
   1048 # this function, and just call forward.
   1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051     return forward_call(*input, **kwargs)
   1052 # Do not call functions when jit is used
   1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\transformer.py:258, in Transformer.forward(self, srcs, masks, refpoint_embed, pos_embeds, tgt, attn_mask, text_dict)
    253 enc_topk_proposals = enc_refpoint_embed = None
    255 #########################################################
    256 # Begin Encoder
    257 #########################################################
--> 258 memory, memory_text = self.encoder(
    259     src_flatten,
    260     pos=lvl_pos_embed_flatten,
    261     level_start_index=level_start_index,
    262     spatial_shapes=spatial_shapes,
    263     valid_ratios=valid_ratios,
    264     key_padding_mask=mask_flatten,
    265     memory_text=text_dict["encoded_text"],
    266     text_attention_mask=~text_dict["text_token_mask"],
    267     # we ~ the mask . False means use the token; True means pad the token
    268     position_ids=text_dict["position_ids"],
    269     text_self_attention_masks=text_dict["text_self_attention_masks"],
    270 )
    271 #########################################################
    272 # End Encoder
    273 # - memory: bs, \sum{hw}, c
   (...)
    277 # - enc_intermediate_refpoints: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
    278 #########################################################
    279 text_dict["encoded_text"] = memory_text
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
   1047 # If we don't have any hooks, we want to skip the rest of the logic in
   1048 # this function, and just call forward.
   1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051     return forward_call(*input, **kwargs)
   1052 # Do not call functions when jit is used
   1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\transformer.py:576, in TransformerEncoder.forward(self, src, pos, spatial_shapes, level_start_index, valid_ratios, key_padding_mask, memory_text, text_attention_mask, pos_text, text_self_attention_masks, position_ids)
    574 # main process
    575 if self.use_transformer_ckpt:
--> 576     output = checkpoint.checkpoint(
    577         layer,
    578         output,
    579         pos,
    580         reference_points,
    581         spatial_shapes,
    582         level_start_index,
    583         key_padding_mask,
    584     )
    585 else:
    586     output = layer(
    587         src=output,
    588         pos=pos,
   (...)
    592         key_padding_mask=key_padding_mask,
    593     )
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\utils\checkpoint.py:211, in checkpoint(function, *args, **kwargs)
    208 if kwargs:
    209     raise ValueError("Unexpected keyword arguments: " + ",".join(arg for arg in kwargs))
--> 211 return CheckpointFunction.apply(function, preserve, *args)
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\utils\checkpoint.py:90, in CheckpointFunction.forward(ctx, run_function, preserve_rng_state, *args)
     87 ctx.save_for_backward(*tensor_inputs)
     89 with torch.no_grad():
---> 90     outputs = run_function(*args)
     91 return outputs
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
   1047 # If we don't have any hooks, we want to skip the rest of the logic in
   1048 # this function, and just call forward.
   1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051     return forward_call(*input, **kwargs)
   1052 # Do not call functions when jit is used
   1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\transformer.py:785, in DeformableTransformerEncoderLayer.forward(self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask)
    780 def forward(
    781     self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None
    782 ):
    783     # self attention
    784     # import ipdb; ipdb.set_trace()
--> 785     src2 = self.self_attn(
    786         query=self.with_pos_embed(src, pos),
    787         reference_points=reference_points,
    788         value=src,
    789         spatial_shapes=spatial_shapes,
    790         level_start_index=level_start_index,
    791         key_padding_mask=key_padding_mask,
    792     )
    793     src = src + self.dropout1(src2)
    794     src = self.norm1(src)
File c:\users\nikit\всякое\myprojects\gpu gd\gpu\lib\site-packages\torch\nn\modules\module.py:1051, in Module._call_impl(self, *input, **kwargs)
   1047 # If we don't have any hooks, we want to skip the rest of the logic in
   1048 # this function, and just call forward.
   1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051     return forward_call(*input, **kwargs)
   1052 # Do not call functions when jit is used
   1053 full_backward_hooks, non_full_backward_hooks = [], []
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\ms_deform_attn.py:338, in MultiScaleDeformableAttention.forward(self, query, key, value, query_pos, key_padding_mask, reference_points, spatial_shapes, level_start_index, **kwargs)
    335     sampling_locations = sampling_locations.float()
    336     attention_weights = attention_weights.float()
--> 338 output = MultiScaleDeformableAttnFunction.apply(
    339     value,
    340     spatial_shapes,
    341     level_start_index,
    342     sampling_locations,
    343     attention_weights,
    344     self.im2col_step,
    345 )
    347 if halffloat:
    348     output = output.half()
File ~\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\ms_deform_attn.py:53, in MultiScaleDeformableAttnFunction.forward(ctx, value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, im2col_step)
     42 @staticmethod
     43 def forward(
     44     ctx,
   (...)
     50     im2col_step,
     51 ):
     52     ctx.im2col_step = im2col_step
---> 53     output = _C.ms_deform_attn_forward(
     54         value,
     55         value_spatial_shapes,
     56         value_level_start_index,
     57         sampling_locations,
     58         attention_weights,
     59         ctx.im2col_step,
     60     )
     61     ctx.save_for_backward(
     62         value,
     63         value_spatial_shapes,
   (...)
     66         attention_weights,
     67     )
     68     return output

NameError: name '_C' is not defined

From:

import cv2
import supervision as sv

image = cv2.imread(SOURCE_IMAGE_PATH)
height, width, depth = image.shape

detections = model.predict_with_classes(
    image=image,
    classes=enhance_class_name(class_names=CLASSES_NAME),
    box_threshold=BOX_TRESHOLD,
    text_threshold=TEXT_TRESHOLD
)

detections = detections[detections.class_id != None]
#detections = detections[detections.class_id != 'both hands']
detections = detections[(detections.area / (height * width)) < 0.5]
#detections = detections[(detections.area / (height * width)) >= 0.2]

box_annotator = sv.BoxAnnotator()
labels = [
    f"{CLASSES_NAME[class_id]} {confidence:0.2f}" 
    for _, confidence, class_id, _ 
    in detections]
annotated_frame = box_annotator.annotate(scene=image.copy(), detections=detections, labels=labels)


%matplotlib inline
sv.plot_image(annotated_frame, (16, 16))

SOS!

I tried the following:

Adding an environment variable:

os.environ['CUDA_HOME'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2'
!echo %CUDA_HOME%
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
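
(Note: the echo prints a path ending in \bin, while build tools generally expect CUDA_HOME to point at the toolkit root, not its bin subfolder. A minimal sketch of the setting I was aiming for; the path is specific to my machine:)

import os

# CUDA_HOME should be the toolkit root; extension builds look for
# nvcc under %CUDA_HOME%\bin on their own.
os.environ['CUDA_HOME'] = r'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2'
print(os.environ['CUDA_HOME'])  # check from Python, not just the shell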

The installed CUDA toolkit and the torch+cuXXX build don't have to match exactly; it's enough that cuXXX <= the CUDA toolkit version. I tried exactly matching versions too; nothing changed.
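
Here's the quick check I mean, using only standard torch attributes:

import torch

# CUDA version torch was built against ('11.1' for a +cu111 wheel)
print("torch built with CUDA:", torch.version.cuda)
# nvcc above reports the installed toolkit (11.2 here); the rule of
# thumb is torch's build version <= installed toolkit version.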

Installing all the packages I could find suggested in other topics:

pip install sympy spacy Cython
pip install numpy==1.23.5

Importing torch from an empty folder (to rule out the repo directory shadowing the package):

%cd {HOME}/empty_dir
import torch

After every action I restarted the kernel, and even the venv.

All the code ran perfectly fine on CPU.

Main issue: I can't find a direct solution either for this:

C:\Users\nikit\Всякое\MyProjects\gpu gd\GroundingDINO\groundingdino\models\GroundingDINO\ms_deform_attn.py:31: UserWarning: Failed to load custom C++ ops. Running on CPU mode Only!
  warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")

Nor for this: NameError: name '_C' is not defined

But in the .py file where _C is imported, it only throws that UserWarning: Failed to load custom C++ ops. Running on CPU mode Only! That's the thing: _C never actually gets imported. If I can resolve either problem, the GPU will be set.
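
For context, the guard in ms_deform_attn.py looks roughly like this (a sketch reconstructed from the warning and the traceback, not the exact source):

import warnings

try:
    from groundingdino import _C  # the compiled C++/CUDA extension
except Exception:
    # If the extension was never built, the import fails and only warns;
    # _C stays undefined, so the first call into _C.ms_deform_attn_forward
    # later raises NameError: name '_C' is not defined.
    warnings.warn("Failed to load custom C++ ops. Running on CPU mode Only!")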

P.S. Google Colab works perfectly fine. How did they achieve this? Who knows.


Solution

  • I SOLVED IT!

    INVESTIGATION

    So basically I went to Colab and ran print(_C) to find out what it was. Then I went to my venv, grabbed the _C folder from the torch module, and pasted it into every folder of the GroundingDINO GitHub folder. That changed the error, and I started clearing folders until GroundingDINO/groundingdino contained only the _C folder. Then I got another error: AttributeError: module 'torch._C' has no attribute 'ms_deform_attn_forward'. After this I googled it and found that you need to build setup.py in the GroundingDINO root folder. I went to cmd, changed into that folder, and ran py setup.py build. After the build I got a build folder inside GroundingDINO. In there I found the groundingdino folder, cut it into GroundingDINO, and everything started working super fine.

    INSTRUCTION

    open cmd
    activate the venv
    cd path\to\your\GroundingDINO local GitHub repo
    py setup.py build
    cut path\to\your\GroundingDINO\build\lib.win-amd64-cpython-39 (in my case)\groundingdino
    paste it into the path\to\your\GroundingDINO folder
    verify with the snippet below
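
    A quick sanity check that the build worked (the attribute name comes from the traceback above):

    from groundingdino import _C

    # should print True now that the compiled ops are in place
    print(hasattr(_C, "ms_deform_attn_forward"))

    (Running pip install -e . from the GroundingDINO root should do the build and the copy in one step, per the repo's README.)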
    

    MINIMAL JUPYTER NOTEBOOK NEEDS

    import os
    HOME = os.getcwd()
    
    CONFIG_PATH = os.path.join(HOME, "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py")
    WEIGHTS_PATH = os.path.join(HOME, "weights", "groundingdino_swint_ogc.pth")
    
    %cd {HOME}/GroundingDINO
    from groundingdino.util.inference import Model
    model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)
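
    And a final check that inference actually runs on the GPU (self.device shows up in the traceback's inference.py, so the Model object should expose it):

    import torch

    print(torch.cuda.is_available())  # True
    print(model.device)               # 'cuda'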