How to train auto encoder with noise

I have an autoencoder with two encoder blocks, a concatenation step, and one decoder block. Reconstruction works fine on my simulated data when no noise is added, but as soon as Gaussian noise with even a very small standard deviation is added, the model performs poorly. How do I solve this?

Input: simulated data that visually resembles blocks of data, like biclusters. Each encoder block has 3 layers.
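For reference, here is a minimal sketch of how such bicluster-like data with added Gaussian noise could be simulated. All sizes (`n_samples`, `d1`, `d2`, the block extents, and `sigma`) are made-up stand-ins, not the question's actual values:

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for the simulated data: two input views,
# each with one planted constant block (bicluster-like structure).
n_samples, d1, d2 = 200, 20, 15
x1 = torch.zeros(n_samples, d1)
x2 = torch.zeros(n_samples, d2)
x1[:100, :10] = 1.0  # first 100 samples share elevated values in the first 10 features
x2[:100, :8] = 1.0

# Add Gaussian noise with a very small standard deviation
sigma = 0.05
x1_noisy = x1 + sigma * torch.randn_like(x1)
x2_noisy = x2 + sigma * torch.randn_like(x2)
```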

import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, input_size_encoder1, input_size_encoder2):
        super(Autoencoder, self).__init__()
        torch.manual_seed(1)

        # Encoder 1
        self.encoder1 = nn.Sequential(
            nn.Linear(input_size_encoder1, input_size_encoder1),
            nn.Linear(input_size_encoder1, 10),
            nn.Linear(10, 1)
        )

        # Encoder 2
        self.encoder2 = nn.Sequential(
            nn.Linear(input_size_encoder2, input_size_encoder2),
            nn.Linear(input_size_encoder2, 10),
            nn.Linear(10, 1)
        )

        # Decoder
        self.decoder = nn.Sequential(
            nn.Linear(2, 2),
            nn.Linear(2, 10),
            nn.Linear(10, 1),
        )

    def forward(self, x1, x2):
        # Encode each view separately
        encoded1 = self.encoder1(x1)
        encoded2 = self.encoder2(x2)

        # Concatenate the two 1-dim codes into a 2-dim joint latent
        concatenated_output = torch.cat((encoded1, encoded2), dim=1)

        # Decode the joint latent
        decoded = self.decoder(concatenated_output)

        return decoded


# Training hyperparameters (moved out of the class body)
learning_rate = 0.001
epochs = 100
reg_hyperparameter = 1e-10
batch_size = 200
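One common way to make a model robust to input noise is denoising-autoencoder training: corrupt the inputs with fresh Gaussian noise at every step, but compute the loss against the clean target. Below is a minimal sketch of that training loop. The data, target, shapes, noise level, and the added `nn.ReLU()` layers are all assumptions for illustration, not the question's actual setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(1)

# Toy stand-ins for the two input views (shapes and target are assumptions)
d1, d2, n = 20, 15, 200
x1_clean = torch.randn(n, d1)
x2_clean = torch.randn(n, d2)
y_clean = (x1_clean.mean(dim=1, keepdim=True)
           + x2_clean.mean(dim=1, keepdim=True))  # hypothetical clean target


# Same layout as the question's model, with ReLU nonlinearities added
class DenoisingAutoencoder(nn.Module):
    def __init__(self, d1, d2):
        super().__init__()
        self.encoder1 = nn.Sequential(nn.Linear(d1, 10), nn.ReLU(), nn.Linear(10, 1))
        self.encoder2 = nn.Sequential(nn.Linear(d2, 10), nn.ReLU(), nn.Linear(10, 1))
        self.decoder = nn.Sequential(nn.Linear(2, 10), nn.ReLU(), nn.Linear(10, 1))

    def forward(self, x1, x2):
        z = torch.cat((self.encoder1(x1), self.encoder2(x2)), dim=1)
        return self.decoder(z)


model = DenoisingAutoencoder(d1, d2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
sigma = 0.05

for epoch in range(50):
    # Fresh noise each step, so the model sees many corrupted versions of each input
    x1_noisy = x1_clean + sigma * torch.randn_like(x1_clean)
    x2_noisy = x2_clean + sigma * torch.randn_like(x2_clean)

    opt.zero_grad()
    loss = loss_fn(model(x1_noisy, x2_noisy), y_clean)  # target stays clean
    loss.backward()
    opt.step()
```

The key design choice is that the corruption is resampled every step, so the network cannot memorize any particular noisy input and is pushed toward a mapping that ignores the noise.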

I have experimented with adding more layers and with changing batch_size, epochs, and the learning rate, but none of it gives promising results. Can someone at least point me to where I should start looking?
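One thing worth checking before tuning hyperparameters: every `nn.Sequential` in the model stacks `nn.Linear` layers with no activation function in between, and a composition of linear maps is itself a single linear map, so adding more such layers cannot increase expressive power. A quick sketch verifying the collapse (layer sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Three stacked linear layers with no activations, as in the question's encoders
stack = nn.Sequential(nn.Linear(5, 5), nn.Linear(5, 10), nn.Linear(10, 1))

# Compose them analytically into a single affine map y = x @ W.T + b
W = stack[2].weight @ stack[1].weight @ stack[0].weight
b = (stack[2].weight @ (stack[1].weight @ stack[0].bias + stack[1].bias)
     + stack[2].bias)

x = torch.randn(4, 5)
assert torch.allclose(stack(x), x @ W.T + b, atol=1e-5)
```

Inserting a nonlinearity such as `nn.ReLU()` between the linear layers breaks this equivalence and is usually the first thing to try when a deeper model performs no better than a shallow one.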
