I have saved my text vectors using the gensim library, and they contain some negative numbers. Will that affect the training? If not, why am I getting a NaN loss, first for the discriminator and then for both the discriminator and the generator, after a certain number of training steps?
There are several reasons for a NaN loss and for models diverging. The most common ones I've seen are:

1. Taking the log of zero, e.g. in a cross-entropy loss — add a small epsilon such as `1e-8` to your output probability.
2. NaN values in the input data — run `assert not np.any(np.isnan(x))` on the input data.

If none of the above helps, check the activation function, the optimizer, the loss function, and the size and shape of the network.
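A minimal sketch of both checks in NumPy (the `eps` value, the `safe_log_prob` helper, and the example arrays are illustrative, not part of any particular framework):

```python
import numpy as np

def safe_log_prob(p, eps=1e-8):
    """Clamp probabilities away from zero before taking the log,
    so log(0) never produces -inf/NaN in the loss."""
    return np.log(np.clip(p, eps, 1.0))

# Check the training inputs for NaNs before feeding them to the model.
# Negative values (as in gensim word vectors) are fine; NaNs are not.
x = np.array([[0.1, -0.3], [0.5, 0.2]])  # example batch of text vectors
assert not np.any(np.isnan(x)), "NaN found in input data"

probs = np.array([0.0, 0.5, 1.0])  # a raw 0.0 here would make log() return -inf
losses = -safe_log_prob(probs)
print(losses)  # all finite, no NaN/inf
```

In a GAN, the same idea applies to the discriminator's output: clamp it into `[eps, 1 - eps]` before it enters the loss, or use a numerically stable loss that works on logits directly.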
Finally, though less likely, there might be a bug in the framework you are using. Check the framework's issue tracker to see whether others are hitting the same problem.