
Why are GANs (Generative Adversarial Networks) called "implicit" generative networks?


Adversarial networks such as GANs are called "implicit" networks. What does that mean? How do they differ from "explicit" generative networks, and what are "explicit" generative networks?


Solution

  • Let me give you a simple answer by way of a conversation between a human and a computer algorithm:

    The human says: I have some data, and I want to be able to generate more data that looks like it. To be precise, I need to be able to generate more samples from the underlying distribution that my training data comes from.

    A generative model says: give me your data and I'll find a way to help you. Either I give you back the distribution your data comes from, or I provide some other clever way for you to generate more samples similar to your original data without ever bothering with the distribution itself.

    • Explicit models are like: give us training data, and we give you back the distribution of your data, so you can do whatever you want with it.

      • If you at least know the type of the distribution, that is a huge help: my job becomes easier, because I only need to find the parameters of the distribution. For example, if you tell me the distribution is Gaussian, my "learning process" is just estimating the mean and variance for you. It doesn't have to be Gaussian; I can work with more complex, high-dimensional distributions too.
      • If you don't know the type of the distribution but still want me to give you a distribution explicitly, I can go with a non-parametric model such as Kernel Density Estimation.
    • Implicit methods say: we give you the capability of generating new samples similar to your training data in crazy ways! For example, look at GANs: by playing a game on your data, in the end we hand you a black box we call a Generator. You pass a random number to it and it magically gives you back a new sample. In training this black box, we never work with any distribution directly.
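The parametric explicit case can be sketched in a few lines. This is a minimal illustration, assuming toy 1-D data known to be Gaussian; "learning" the explicit model is nothing more than estimating its parameters, after which both density evaluation and sampling come for free:

```python
import numpy as np

# Hypothetical training data, drawn from an (in practice unknown) Gaussian.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# The whole "learning process": estimate the distribution's parameters.
mean_hat = data.mean()
std_hat = data.std()

# With the explicit distribution in hand, generating new samples is trivial.
new_samples = rng.normal(loc=mean_hat, scale=std_hat, size=100)
```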
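The non-parametric explicit case looks similar. A sketch using SciPy's `gaussian_kde` on hypothetical bimodal data (no distribution type assumed in advance): the fitted object is still an explicit model, since we can evaluate its density anywhere, and we can also resample from it.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Toy bimodal data whose distribution type we do not know in advance.
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 1.0, 500)])

# Non-parametric explicit model: a density we can evaluate at any point.
kde = gaussian_kde(data)
density_at_zero = kde.evaluate([0.0])[0]

# It is still generative: resample() draws new points from the estimate.
new_samples = kde.resample(size=200)  # shape (1, 200) for 1-D data
```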
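The implicit interface, by contrast, supports only one operation: noise in, sample out. A real GAN generator would be a neural network whose weights were learned through the adversarial game; the toy stand-in below uses fixed made-up weights purely to illustrate that the black box exposes no density function at all:

```python
import numpy as np

rng = np.random.default_rng(0)
# Pretend these weights came out of adversarial training (they did not --
# they are random placeholders for illustration only).
W1 = rng.normal(size=(16, 2))
W2 = rng.normal(size=(1, 16))

def generator(z):
    """Map a noise vector to a 'sample'. No density anywhere in sight."""
    h = np.tanh(W1 @ z)
    return W2 @ h

# Sampling is the only thing the implicit model can do for you:
z = rng.normal(size=2)   # random code
x = generator(z)         # a "new" data point
```

Note what is missing: there is no `generator.pdf(x)` to call. If you need likelihoods, an implicit model gives you no direct way to get them, which is exactly the trade-off against explicit models.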