Tags: c#, .net, opencv, emgucv

SEHException in CUDA with EmguCV


I am trying to build a project with CUDA in EmguCV (a wrapper for OpenCV). I installed CUDA version 10.0 and the NuGet packages Emgu.CV.runtime.windows version 4.3.0.3890 and Emgu.CV.runtime.windows.cuda version 4.3.0.3890. When I check CudaInvoke.HasCuda, it returns true. The following works fine:

Net net = DnnInvoke.ReadNetFromTensorflow(_modelFile, _configFile);
net.SetInput(DnnInvoke.BlobFromImage(image.Mat, 1, new System.Drawing.Size(300, 300), default(MCvScalar), false, false));
Mat mat = net.Forward();

But when I set the CUDA backend and target:

Net net = DnnInvoke.ReadNetFromTensorflow(_modelFile, _configFile);
net.SetPreferableBackend(Emgu.CV.Dnn.Backend.Cuda);
net.SetPreferableTarget(Emgu.CV.Dnn.Target.Cuda);
net.SetInput(DnnInvoke.BlobFromImage(image.Mat, 1, new System.Drawing.Size(300, 300), default(MCvScalar), false, false));
Mat mat = net.Forward();

it fails with "System.Runtime.InteropServices.SEHException".

The exception is thrown on the line Mat mat = net.Forward();

My GPU is an NVIDIA GeForce 940MX (Maxwell, compute capability 5.0).

Can anyone help with this problem, please?


Solution

  • I have now found a solution. First, you cannot use a quantized (UINT8) model with the CUDA backend; you need to use the Caffe model for this detector instead. You must also install the CUDA and cuDNN versions that match your GPU. Even after that, you may still hit this exception, but in that case it comes from your hardware: for a long time I couldn't understand what was wrong, yet the same code runs successfully on other machines, so I believe the problem is with my hardware. If you have encountered the same problem, do not hesitate to contact me.
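To illustrate the fix, here is a minimal sketch of loading the Caffe variant of the detector instead of the quantized TensorFlow one, keeping the same CUDA backend and target. The file names _protoFile and _caffeModelFile are placeholders for your own .prototxt and .caffemodel paths, and the 300x300 input size and mean values assume the common SSD face/object detector configuration; adjust them to your model:

```csharp
using Emgu.CV;
using Emgu.CV.Dnn;
using Emgu.CV.Structure;

// Hypothetical paths -- replace with your own Caffe deploy files.
string _protoFile = "deploy.prototxt";
string _caffeModelFile = "model.caffemodel";

// Load the (non-quantized) Caffe model; quantized UINT8 models
// are not supported by the CUDA DNN backend.
Net net = DnnInvoke.ReadNetFromCaffe(_protoFile, _caffeModelFile);
net.SetPreferableBackend(Emgu.CV.Dnn.Backend.Cuda);
net.SetPreferableTarget(Emgu.CV.Dnn.Target.Cuda);

// Same preprocessing as before; mean subtraction values depend on
// how the Caffe model was trained (these are an assumption).
net.SetInput(DnnInvoke.BlobFromImage(
    image.Mat, 1.0, new System.Drawing.Size(300, 300),
    new MCvScalar(104, 117, 123), false, false));

Mat mat = net.Forward();
```

If Forward() still throws an SEHException with a supported model, verify that the installed cuDNN build matches both your CUDA toolkit version and your GPU's compute capability before suspecting the hardware itself.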