To train the denoising autoencoder, I used x + n as the input data and x as the target data (x: original data, n: noise). After training was complete, I obtained noise-removed data through the denoising autoencoder (x_test + n_test -> x_test).
Then, as a test, I trained an autoencoder with the input and target set to the same data, as in a conventional autoencoder (x -> x).
As a result, in the test phase I obtained noise-removed data similar to that of the denoising autoencoder.
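In code, the two setups look roughly like this (a minimal Keras sketch; the architecture, noise level, and data here are illustrative assumptions, not my exact values):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_autoencoder(dim=784, hidden=32):
    """Simple dense autoencoder: dim -> hidden -> dim."""
    inp = keras.Input(shape=(dim,))
    code = layers.Dense(hidden, activation="relu")(inp)
    out = layers.Dense(dim, activation="sigmoid")(code)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

x_train = np.random.rand(1000, 784).astype("float32")  # placeholder data
noise = 0.1 * np.random.randn(*x_train.shape).astype("float32")

# Denoising setup: noisy input, clean target (x + n -> x)
dae = build_autoencoder()
dae.fit(x_train + noise, x_train, epochs=5, batch_size=64)

# Conventional setup: identical input and target (x -> x)
ae = build_autoencoder()
ae.fit(x_train, x_train, epochs=5, batch_size=64)

# At test time, both models receive noisy inputs:
# x_denoised = ae.predict(x_test + n_test)
```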
Why does the conventional autoencoder also remove noise?
Please tell me the difference between these two autoencoders.
An autoencoder's purpose is to map high-dimensional data (e.g. images) to a compressed form (the hidden representation) and to reconstruct the original input from that hidden representation.
A denoising autoencoder, in addition to learning to compress data like a standard autoencoder, learns to remove noise from its inputs, which allows it to perform well even when the inputs are noisy. Denoising autoencoders are therefore more robust than standard autoencoders, and they tend to learn more useful features from the data. Note that a standard autoencoder also denoises to some extent, which is what you observed: the bottleneck can only represent the dominant structure of the data, so noise components that do not fit in the compressed code are discarded at reconstruction time.
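This bottleneck effect is easiest to see in the linear case: a linear autoencoder trained with MSE loss learns the same subspace as PCA, so a rank-limited projection already denoises. A small numpy sketch (the dimensions and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean data that truly lives in a 5-dimensional subspace of R^100
latent = rng.standard_normal((2000, 5))
basis = rng.standard_normal((5, 100))
x_clean = latent @ basis

# Add isotropic Gaussian noise
x_noisy = x_clean + 0.5 * rng.standard_normal(x_clean.shape)

# "Encode/decode" with a rank-5 projection fitted to the noisy data (PCA)
mean = x_noisy.mean(axis=0)
_, _, vt = np.linalg.svd(x_noisy - mean, full_matrices=False)
proj = vt[:5]  # top 5 principal directions
x_recon = (x_noisy - mean) @ proj.T @ proj + mean

print("noisy MSE vs clean:", np.mean((x_noisy - x_clean) ** 2))
print("recon MSE vs clean:", np.mean((x_recon - x_clean) ** 2))
# The reconstruction error is much smaller: noise components outside the
# 5-dimensional subspace cannot pass through the bottleneck and are
# dropped at reconstruction time.
```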
One of the early uses of autoencoders was to find a good initialization for deep neural networks (layer-wise pretraining in the late 2000s). However, with good initialization schemes (e.g. Xavier) and activation functions (e.g. ReLU), that advantage has largely disappeared. Nowadays they are used more for generative tasks (e.g. the variational autoencoder).