theano · deep-learning · dbn

Fine-tuning weights in a DBN


In a Deep Belief Network, I have pretrained the net using CD-1 and stored the resulting weights and biases. Can I now run a supervised MLP with dropout, initialising its weights to those obtained from pre-training? Would that be equivalent to a DBN fine-tuned with dropout?


Solution

  • "Dropout fine-tuning on a DBN" means exactly

    "running a supervised MLP with dropout, with the weights initialised to those obtained from pre-training."

    So yes, the two procedures are equivalent (a sketch of the initialisation step is given below).
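
To make the equivalence concrete, here is a minimal sketch of that initialisation step. It uses PyTorch rather than Theano, and the pretrained parameters (`rbm1_W`, `rbm1_b`, ...) and layer sizes are hypothetical stand-ins for whatever your CD-1 pretraining actually produced:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the weights/biases stored after greedy
# CD-1 pretraining of two stacked RBMs (visible -> hidden convention).
rbm1_W = torch.randn(784, 500) * 0.01   # replace with your stored weights
rbm1_b = torch.zeros(500)               # hidden biases of RBM 1
rbm2_W = torch.randn(500, 200) * 0.01
rbm2_b = torch.zeros(200)

# A supervised MLP with dropout. The output layer is new: it has no
# pretrained counterpart, so it keeps its random initialisation.
model = nn.Sequential(
    nn.Linear(784, 500), nn.Sigmoid(), nn.Dropout(p=0.5),
    nn.Linear(500, 200), nn.Sigmoid(), nn.Dropout(p=0.5),
    nn.Linear(200, 10),  # classifier layer, randomly initialised
)

# Copy the pretrained parameters into the matching layers.
with torch.no_grad():
    model[0].weight.copy_(rbm1_W.t())   # nn.Linear stores (out, in)
    model[0].bias.copy_(rbm1_b)
    model[3].weight.copy_(rbm2_W.t())
    model[3].bias.copy_(rbm2_b)

# Ordinary supervised training from here (cross-entropy + SGD/Adam)
# is exactly "dropout fine-tuning" of the pretrained DBN.
```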