Tags: c#, odbc, clr, deadlock

Debugging a CLR hang


I've uploaded a log of a WinDBG session that I'll refer to: https://pastebin.com/TvYD9500

So, I'm debugging a hang that has been reported by a customer. The reproducer is a small C# program:

using System;
using System.Data.Odbc;
using System.Threading;

namespace ConnectionPoolingTest
{
    class Program
    {
        static void Main(string[] args)
        {
            String connString = "DSN=DotNetUltraLightDSII";
            using (OdbcConnection connection = new OdbcConnection(connString))
            {
                connection.Open();
                connection.Close();
            }
        }
    }
}

We sell a framework for building ODBC drivers, and the customer is testing an ODBC driver built with it. One detail that may be relevant: they're using a component that allows their business logic to be written in C#, and that component is written in C++/CLI to bridge between our native code and the customer's code (so the ODBC driver DLL is a mixed-mode DLL that exposes a C interface to the ODBC Driver Manager).

(If needed, I might be able to upload the driver binary as well.)

What happens in this reproducer (which must be run with connection pooling enabled on the DSN used) is that the process ends up hanging with a single remaining thread, whose stack looks like this:

RetAddr           : Args to Child                                                           : Call Site
000007fe`fcea10dc : 00000000`00470000 00000000`770d0290 00000000`00000000 00000000`009ae8e0 : ntdll!ZwWaitForSingleObject+0xa
000007fe`f0298407 : 00000000`00999a98 00000000`770d5972 00000000`00000000 00000000`00000250 : KERNELBASE!WaitForSingleObjectEx+0x79
000007fe`f0294d04 : 00000000`00999a98 00000000`00a870e0 00000000`00999a68 00000000`00991a10 : comsvcs!UTSemReadWrite::LockWrite+0x90
000007fe`f0294ca8 : 00000000`00999a68 00000000`00999a98 00000000`00999a20 00000000`7717ba58 : comsvcs!CDispenserManager::~CDispenserManager+0x2c
000007fe`f02932a8 : 00000000`00999a20 00000000`00a871c0 00000000`77182e70 00000000`7717ba58 : comsvcs!ATL::CComObjectCached<ATL::CComClassFactorySingleton<CDispenserManager> >::`scalar deleting destructor'+0x68
000007fe`f0293a00 : 000007fe`f0290000 00000000`00000001 00000000`00000001 00000000`00a87198 : comsvcs!ATL::CComObjectCached<ATL::CComClassFactorySingleton<CDispenserManager> >::Release+0x20
000007fe`f02949aa : 00000000`00000000 00000000`00a87188 00000000`00992c20 00000000`00992c30 : comsvcs!ATL::CComModule::Term+0x35
000007fe`f0293543 : 00000000`00000000 00000000`00a87190 00000000`00000001 00000000`00a87278 : comsvcs!`dynamic atexit destructor for 'g_ModuleWrapper''+0x22
000007fe`f029355b : 00000000`00000001 00000000`00000000 000007fe`f0290000 00000000`76f515aa : comsvcs!CRT_INIT+0x96
00000000`7708778b : 000007fe`f0290000 00000000`00000000 00000000`00000001 00000000`7717ba58 : comsvcs!__DllMainCRTStartup+0x187
00000000`7708c1e0 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!LdrShutdownProcess+0x1db
000007fe`efb4ee58 : 00000000`00486b10 00000000`00000001 00000000`00482460 00000000`00000000 : ntdll!RtlExitUserProcess+0x90
000007fe`efb4efd4 : 00000000`00000000 000007fe`efb4efc0 ffffffff`00000000 00000000`004868a0 : mscoreei!RuntimeDesc::ShutdownAllActiveRuntimes+0x287
000007fe`eefa9535 : 00000000`0042f4b8 000007fe`ef53d6c0 00000000`0042f488 00000000`004868a0 : mscoreei!CLRRuntimeHostInternalImpl::ShutdownAllRuntimesThenExit+0x14
000007fe`eefa9495 : 00000000`00000000 00000000`0042f488 00000000`00000000 00000000`00000000 : clr!EEPolicy::ExitProcessViaShim+0x95
000007fe`eee83336 : 00000000`00000006 00000000`0042f870 00000000`00000000 00000000`00000000 : clr!SafeExitProcess+0x9d
000007fe`eee61c51 : 00000000`01000000 00000000`0042f870 00000000`00000000 00000000`00000000 : clr!HandleExitProcessHelper+0x3e
000007fe`eee62034 : ffffffff`ffffffff 000007fe`eee62020 00000000`00000000 00000000`00000000 : clr!_CorExeMainInternal+0x101
000007fe`efb47b2d : 00000000`00000000 00000000`00000091 00000000`00000000 00000000`0042f7c8 : clr!CorExeMain+0x14
000007fe`efbe5b21 : 00000000`00000000 000007fe`eee62020 00000000`00000000 00000000`00000000 : mscoreei!CorExeMain+0x112
00000000`76f4556d : 000007fe`efb40000 00000000`00000000 00000000`00000000 00000000`00000000 : MSCOREE!CorExeMain_Exported+0x57
00000000`770a385d : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : KERNEL32!BaseThreadInitThunk+0xd
00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!RtlUserThreadStart+0x1d

I was able to find some source code for the UTSemReadWrite class (though it seems to be somewhat different from what I'm actually running): https://github.com/dotnet/coreclr/blob/616fea550548af750b575f3c304d1a9b4b6ef9a6/src/utilcode/utsem.cpp

Putting a breakpoint on UTSemReadWrite::LockWrite, I was able to debug the last call, the one that hangs, and found the cause: m_dwFlag (which is used for the atomicity) was non-zero, so LockWrite goes to wait on an event, expecting the owning thread to signal it when it releases the lock. It does that by calling UTSemReadWrite::GetWriteWaiterEvent, but that call creates the event (and there are no other threads left at this point...) and then waits on it. Boom, deadlock.
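
To make the failure mode concrete, here's a rough C# sketch of the pattern as I understand it (my own simplification, not the real comsvcs/UTSemReadWrite code): readers bump a flag word, and a writer that finds the flag non-zero lazily creates an event and waits for the last reader to signal it from UnlockRead(). If the reader thread has already been killed, UnlockRead() never runs and LockWrite() waits forever.

using System.Threading;

// Simplified model of the behaviour I'm describing, not the actual comsvcs implementation.
class SketchReadWriteLock
{
    private int readerCount;                  // stands in for m_dwFlag
    private AutoResetEvent writeWaiterEvent;  // created on demand, like GetWriteWaiterEvent()

    public void LockRead()
    {
        Interlocked.Increment(ref readerCount);
    }

    public void UnlockRead()
    {
        if (Interlocked.Decrement(ref readerCount) == 0 && writeWaiterEvent != null)
            writeWaiterEvent.Set();           // the signal the killed reader thread never sends
    }

    public void LockWrite()
    {
        // Simplified: ignores writer/writer exclusion and the race between
        // checking the flag and creating the event.
        while (Interlocked.CompareExchange(ref readerCount, 0, 0) != 0)
        {
            if (writeWaiterEvent == null)
                writeWaiterEvent = new AutoResetEvent(false);
            writeWaiterEvent.WaitOne();       // hangs forever if the reader died holding the lock
        }
    }
}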

From debugging through the assembly, I was able to deduce that m_dwFlag is at offset 4 bytes into the object, so by putting a breakpoint on UTSemReadWrite::UTSemReadWrite I was able to get the address of the UTSemReadWrite instance involved in the hang and put a data breakpoint on m_dwFlag.
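
For reference, the mechanics of that boil down to something like the following WinDbg commands (a sketch rather than my exact session; the +0x4 is the member offset deduced above, and @rcx relies on 'this' arriving in rcx under the x64 calling convention):

bp comsvcs!UTSemReadWrite::UTSemReadWrite
g
$$ when the constructor breakpoint hits, 'this' is in rcx:
r rcx
ba w4 @rcx+0x4    $$ 4-byte write breakpoint on m_dwFlag
g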

Doing that, I could see that indeed a thread with the thread function comsvcs!PingThread had called comsvcs!UTSemReadWrite::LockRead (and presumably taken the lock) and was then killed before its call to comsvcs!UTSemReadWrite::UnlockRead. I've seen something like this before, where an unhandled SEH exception killed the PingThread while the application suppressed the crash with SetUnhandledExceptionFilter(), so I thought that maybe some exception was killing the thread here too, but it turned out that it was the CLR itself!

RetAddr           : Args to Child                                                           : Call Site
00000000`7708c198 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!ZwTerminateProcess+0xa
000007fe`efb4ee58 : 00000000`00486b10 00000000`00000001 00000000`00482460 00000000`00000000 : ntdll!RtlExitUserProcess+0x48
000007fe`efb4efd4 : 00000000`00000000 000007fe`efb4efc0 ffffffff`00000000 00000000`004868a0 : mscoreei!RuntimeDesc::ShutdownAllActiveRuntimes+0x287
000007fe`eefa9535 : 00000000`0042f4b8 000007fe`ef53d6c0 00000000`0042f488 00000000`004868a0 : mscoreei!CLRRuntimeHostInternalImpl::ShutdownAllRuntimesThenExit+0x14
000007fe`eefa9495 : 00000000`00000000 00000000`0042f488 00000000`00000000 00000000`00000000 : clr!EEPolicy::ExitProcessViaShim+0x95
000007fe`eee83336 : 00000000`00000006 00000000`0042f870 00000000`00000000 00000000`00000000 : clr!SafeExitProcess+0x9d
000007fe`eee61c51 : 00000000`01000000 00000000`0042f870 00000000`00000000 00000000`00000000 : clr!HandleExitProcessHelper+0x3e
000007fe`eee62034 : ffffffff`ffffffff 000007fe`eee62020 00000000`00000000 00000000`00000000 : clr!_CorExeMainInternal+0x101
000007fe`efb47b2d : 00000000`00000000 00000000`00000091 00000000`00000000 00000000`0042f7c8 : clr!CorExeMain+0x14
000007fe`efbe5b21 : 00000000`00000000 000007fe`eee62020 00000000`00000000 00000000`00000000 : mscoreei!CorExeMain+0x112
00000000`76f4556d : 000007fe`efb40000 00000000`00000000 00000000`00000000 00000000`00000000 : MSCOREE!CorExeMain_Exported+0x57
00000000`770a385d : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : KERNEL32!BaseThreadInitThunk+0xd
00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!RtlUserThreadStart+0x1d

(This brings up a question: so ntdll!ZwTerminateProcess doesn't actually terminate the process? It has obviously returned, since atexit handlers are being called... I guess this is a different function from the one with the same name documented here? https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntddk/nf-ntddk-zwterminateprocess)

So, my question is, am I interpreting what the debugger is showing me correctly? Is this actually a bug in the CLR? Shouldn't the CLR gracefully end threads first?

Something that the customer noticed was that the hang didn't occur if he created one of the threads in the driver as a background thread, which is curious, because even a foreground thread should be stopped quite quickly when the driver is unloaded (via finalizers calling SQLFreeHandle() on the driver's handles), unless the finalizer thread is being slowed down by something, I guess?

The thread in question in the reproducer driver that was sent to us is created basically like this (note IsBackground = false, i.e. a foreground thread):

public Driver()
{
    this.tokenSource = new CancellationTokenSource();
    this.token = this.tokenSource.Token;
    this.worker = new Thread(this.DoWork) { IsBackground = false };
    this.worker.Start();
}

public override void Dispose()
{
    this.tokenSource.Cancel();
    this.worker.Join();
    this.tokenSource.Dispose();

    base.Dispose();
}

private void DoWork()
{
    while (!this.token.WaitHandle.WaitOne(200))
    {
        log(this.Log, "Doing some work....");
    }
    log(this.Log, "Done with work.");
}

and Dispose() is getting called correctly and returns.
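
For comparison, my understanding of the customer's "no hang" variant is that it's the same driver with the thread simply created as a background thread instead:

this.worker = new Thread(this.DoWork) { IsBackground = true };   // per the customer, the hang no longer reproduces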

I'm not sure how to approach this next.

Edit: After reading this, I have the feeling this is a bug/'quirk' of the CLR. In my scenario, the last foreground .NET thread lives in the ODBC driver. When the ODBC Driver Manager calls SQLFreeHandle to unload the driver (from some thread either in the Windows thread pool or owned by the Driver Manager itself, I'm not sure), the driver terminates that last foreground thread. From my understanding of the CLR shutdown process gained from that article, the CLR will then kill the thread that is calling SQLFreeHandle before it gets the chance to actually return from it, and this is expected behaviour.

But that thread seems to hold that UTSemReadWrite lock, so later on during atexit handling it'll deadlock.

My only idea for how to work around this, if it really is the CLR's fault, is to start another (foreground) .NET thread on the final call to SQLFreeHandle that kills itself after some timeout (hopefully long enough for the SQLFreeHandle thread to release any locks it holds), in order to delay CLR shutdown. Not really ideal if that ends up holding up application shutdown...
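
Very roughly, the idea would be something like this (a hypothetical helper, not something the framework currently has; it would be invoked from the driver's handling of the final SQLFreeHandle):

// Hypothetical sketch of the "delay CLR shutdown" idea above.
// Starting another foreground thread keeps the CLR's foreground-thread count
// non-zero, so shutdown is postponed until the grace period elapses.
private static void DelayClrShutdown(TimeSpan grace)
{
    var keepAlive = new Thread(() => Thread.Sleep(grace))
    {
        IsBackground = false,   // foreground: the CLR waits for it before shutting down
        Name = "ShutdownGracePeriod"
    };
    keepAlive.Start();
}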

Edit: Actually, even that idea doesn't really work, since it would mean that the ODBC Driver Manager might actually unload the driver while a thread was executing code from it, leading to a crash.


Solution

  • I've spoken with Microsoft support, and they've confirmed this is an issue with the comsvcs component, which they may fix in a future version of Windows. I'll update this if they tell me it has been fixed.