I know that the general guideline for implementing the dispose pattern warns against making Dispose() virtual. However, most often we're only working with managed resources inside a class, so the full dispose pattern seems like overkill - i.e. we don't need a finalizer. In such cases, is it OK to have a virtual Dispose() in the base class?
Consider this simple example:
abstract class Base : IDisposable
{
    private bool disposed = false;

    public SecureString Password { get; set; } // SecureString implements IDisposable.

    public virtual void Dispose()
    {
        if (disposed)
            return;

        if (Password != null)
            Password.Dispose();

        disposed = true;
    }
}
class Sub : Base
{
    private bool disposed = false;

    public NetworkStream NStream { get; set; } // NetworkStream implements IDisposable.

    public override void Dispose()
    {
        if (disposed)
            return;

        if (NStream != null)
            NStream.Dispose();

        disposed = true;
        base.Dispose();
    }
}
I find this more readable than the complete dispose pattern. I understand that this example would become problematic if we introduced an unmanaged resource into the Base class. But suppose that won't happen. Is the above example then valid/safe/robust, or does it pose any problem? Should I stick to the standard full-blown dispose pattern even if no unmanaged resources are used? Or is the above approach indeed perfectly OK in this situation - which, in my experience, is far more common than dealing with unmanaged resources?
I have given up on the full-blown IDisposable pattern in cases where I am not dealing with unmanaged resources.
I don't see any reason why you shouldn't be able to forgo the pattern and make Dispose virtual if you can ensure that derived classes won't introduce unmanaged resources!
(And here lies a problem, because you cannot enforce this. You have no control over what fields a derived class will add, so there is no absolute safety with a virtual Dispose. But then, even if you used the full-blown pattern, a derived class could get things wrong, so there is always some trust involved that subtypes adhere to certain rules.)
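For reference, the full-blown pattern being compared against looks roughly like this - a minimal sketch showing only the structural parts (Dispose(bool), the finalizer, and GC.SuppressFinalize), with comments marking where managed vs. unmanaged cleanup would go:
abstract class Base : IDisposable
{
    private bool disposed = false;

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this); // No finalization needed once Dispose has run.
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed)
            return;

        if (disposing)
        {
            // Called explicitly: managed IDisposable fields may be disposed here.
        }

        // Unmanaged resources (if any) would be released here in both cases.

        disposed = true;
    }

    ~Base()
    {
        Dispose(disposing: false); // Finalizer path: do not touch managed objects.
    }
}
The points below are about which parts of this scaffolding actually remain useful when no unmanaged resources are involved.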
First, in cases where we only deal with managed objects, having a finalizer would be pointless: if Dispose is called from the finalizer (Dispose(disposing: false) according to the full-blown pattern), we may not safely access/dereference other reference-typed managed objects, because they might already have been finalized. Only value-typed objects reachable from the object being finalized may be safely accessed.
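To make that concrete, here is a small sketch with a hypothetical class and field, showing why the finalizer path cannot be used for managed cleanup:
class ManagedOnlyExample // hypothetical, for illustration only
{
    private NetworkStream stream; // a managed object that is itself finalizable

    public ManagedOnlyExample(NetworkStream stream)
    {
        this.stream = stream;
    }

    ~ManagedOnlyExample()
    {
        // Finalization order is not guaranteed: by the time this finalizer runs,
        // 'stream' may already have been finalized. Calling stream.Dispose() here
        // would therefore be unsafe - which is why a finalizer adds nothing when
        // only managed resources are involved.
    }
}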
If the finalizer is pointless, it is also pointless to have the disposing flag, which is used to distinguish whether Dispose was called explicitly or by the finalizer.
It is also unnecessary to call GC.SuppressFinalize if there is no finalizer.
I am not even sure whether it is then still imperative that Dispose not throw any exceptions in a managed-only scenario. (See @usr's comment below.) AFAIK this rule exists mostly to protect the finalizer thread.
As you can see, once the finalizer goes, much of the classic Disposable pattern is no longer necessary.