Tags: c#, nvidia, cntk

CNTK NVidia RTX 3060 Cublas Failure 13 with layers larger than 512


I have an LSTM network with 2000 neurons in CNTK 2.7, using EasyCNTK C#. It works fine on the CPU and on a Gigabyte NVidia RTX 2060 6GB, but on a Gigabyte NVidia RTX 3060 12GB I get the error below whenever I increase the number of neurons above 512 (using the same NVidia driver version 461.72 on both cards).

This is my neural network configuration:

    int minibatchSize = 8;
    int epochCount = 10;
    int inputDimension = 10200;
    var device = DeviceDescriptor.GPUDevice(0);

    // check the current device for running neural networks
    Console.WriteLine($"Using device: {device.AsString()}");

    // EasyCNTK sequential model: two stacked LSTM layers followed by two residual layers
    var model = new Sequential<double>(device, new[] { inputDimension }, inputName: "Input");
    model.Add(new LSTM(2000, isLastLstm: false));    // the error appears once this size exceeds 512
    model.Add(new LSTM(500, selfStabilizerLayer: new SelfStabilization<double>()));
    model.Add(new Residual2(128, new Tanh()));
    model.Add(new Residual2(1, new Tanh()));

And this is the error; I also get it with Dense or any other layer type (a minimal plain-CNTK repro is sketched after the call stack):

Unhandled Exception: System.ApplicationException: CUBLAS failure 13: CUBLAS_STATUS_EXECUTION_FAILED ; GPU=0 ; hostname=EVO ; expr=cublasgemmHelper(cuHandle, transA, transB, m, n, k, &alpha, a.Data(), (int) a.m_numRows, b.Data(), (int) b.m_numRows, &beta, c.Data(), (int) c.m_numRows)

[CALL STACK]
    > Microsoft::MSR::CNTK::TensorView<half>::  Reshaped
    - Microsoft::MSR::CNTK::CudaTimer::  Stop
    - Microsoft::MSR::CNTK::GPUMatrix<double>::  MultiplyAndWeightedAdd
    - Microsoft::MSR::CNTK::Matrix<double>::  MultiplyAndWeightedAdd
    - Microsoft::MSR::CNTK::TensorView<double>::  DoMatrixProductOf
    - Microsoft::MSR::CNTK::TensorView<double>::  AssignMatrixProductOf
    - std::enable_shared_from_this<Microsoft::MSR::CNTK::MatrixBase>::  shared_from_this (x3)
    - CNTK::Internal::  UseSparseGradientAggregationInDataParallelSGD
    - CNTK::  CreateTrainer
    - CNTK::Trainer::  TotalNumberOfUnitsSeen
    - CNTK::Trainer::  TrainMinibatch (x2)
    - CSharp_CNTK_Trainer__TrainMinibatch__SWIG_2
    - 00007FFF157B7E55 (SymFromAddr() error: The specified module could not be found.)
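
For reference, the failing call in the stack trace is a plain cuBLAS GEMM (cublasgemmHelper inside MultiplyAndWeightedAdd), so it should be reproducible without EasyCNTK at all. The following is a minimal sketch, assuming only the stock CNTK.GPU 2.7 C# API; the 2048 dimension, the 0.01 initial value, and all names here are illustrative, chosen just to put the matrix product above the 512 threshold:

    using System.Collections.Generic;
    using System.Linq;
    using CNTK;

    // Minimal repro sketch (assumes the CNTK.GPU 2.7 NuGet package, no EasyCNTK).
    // A single Times node is one cuBLAS GEMM, the operation shown failing in the stack trace.
    int dim = 2048;                                   // any value above 512
    var device = DeviceDescriptor.GPUDevice(0);

    Variable x = Variable.InputVariable(new int[] { dim }, DataType.Double, "x");
    var W = new Parameter(new int[] { dim, dim }, DataType.Double, 0.01, device, "W");
    Function product = CNTKLib.Times(W, x, "product");

    // Evaluate once on the GPU with dummy input data
    var inputValue = Value.CreateBatch<double>(x.Shape, Enumerable.Repeat(1.0, dim).ToArray(), device);
    var inputs = new Dictionary<Variable, Value> { { x, inputValue } };
    var outputs = new Dictionary<Variable, Value> { { product.Output, null } };
    product.Evaluate(inputs, outputs, device);        // on the RTX 3060 this should hit the same CUBLAS failure 13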

Solution

  • It looks like CNTK (last released as 2.7, built against CUDA 10.0) does not support CUDA 11, while the RTX 3060 is an Ampere GPU that requires CUDA 11 or newer, so it cannot run CNTK's CUDA 10 kernels. The RTX 2060 (Turing) still works because Turing is supported by CUDA 10.