Wasserstein GAN (https://arxiv.org/abs/1701.07875) is a significant improvement over DCGAN, offering better training stability and less mode collapse. Yet when you look at available implementations, WGAN is used remarkably less often than the original DCGAN. What explains this?
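The changes WGAN makes relative to a standard DCGAN training loop also seem small: replace the log/sigmoid loss with an unbounded critic score, train the critic several steps per generator step, and clip the critic's weights. Here's a minimal PyTorch sketch of the training step as I understand it from the paper; the names (`critic`, `generator`, `opt_c`, `opt_g`) are placeholders, not from any particular implementation:

```python
import torch

def wgan_step(critic, generator, real, opt_c, opt_g,
              z_dim=100, n_critic=5, clip=0.01):
    batch = real.size(0)

    # Train the critic several times per generator update.
    for _ in range(n_critic):
        z = torch.randn(batch, z_dim, device=real.device)
        fake = generator(z).detach()
        # Wasserstein-style loss: no log, no sigmoid. The critic outputs
        # an unbounded score, and we maximise E[f(real)] - E[f(fake)],
        # i.e. minimise the negation below.
        loss_c = critic(fake).mean() - critic(real).mean()
        opt_c.zero_grad()
        loss_c.backward()
        opt_c.step()
        # Weight clipping crudely enforces the Lipschitz constraint.
        for p in critic.parameters():
            p.data.clamp_(-clip, clip)

    # Generator step: maximise the critic's score on generated samples.
    z = torch.randn(batch, z_dim, device=real.device)
    loss_g = -critic(generator(z)).mean()
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_c.item(), loss_g.item()
```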
I don’t have a definitive answer, but one possibility is simply ease of use and the availability of open-source implementations. A quick search turns up a PyTorch implementation of WGAN and a TensorFlow tutorial on DCGAN. TensorFlow was previously the more popular framework (according to this link), so people probably opted for the simpler option when implementing a comparison baseline.
Also, bear in mind that a stable baseline, one you're confident you've implemented correctly and that your competing technique surpasses, is more attractive than learning a new framework to implement a GAN that will be harder to beat.