How to calculate the validation loss in this model?


I'm training an LSTM model. I'm confused about the validation loss of the model. Which value better represents the validation loss: the last value I obtain in the following loop, or the mean of all the values in the history?

This is my training loop:

for epoch in range(n_epochs):
    lstm.train()
    optimiser.zero_grad()                  # reset gradients from the previous step
    outputs = lstm(X_train)                # forward pass
    loss = loss_fn(outputs, y_train)       # training loss
    loss.backward()                        # compute gradients
    optimiser.step()                       # update parameters
    train_loss_history.append(loss.item())

    # validation loss (no gradients needed in evaluation)
    lstm.eval()
    with torch.no_grad():
        test_preds = lstm(X_test)
        MSEtest_loss = loss_fn(test_preds, y_test)
    val_loss_history.append(MSEtest_loss.item())

    if epoch % 100 == 0:
        print("Epoch: %d, train loss: %1.5f, val MSE loss: %1.5f"
              % (epoch, loss.item(), MSEtest_loss.item()))

Now, does the last value of MSEtest_loss.item() represent the validation loss of the model, or should I compute a statistic over val_loss_history (e.g. its mean) to represent it?
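For reference, val_loss_history holds one number per epoch, so the candidates I'm weighing can each be computed in one line. A minimal sketch (the history values below are made up for illustration):

```python
# Summarising a per-epoch validation-loss history.
# Example values only; in my code this list is filled inside the training loop.
val_loss_history = [0.90, 0.45, 0.30, 0.28, 0.31]

last_val_loss = val_loss_history[-1]   # loss after the final epoch
best_val_loss = min(val_loss_history)  # lowest loss seen over training
mean_val_loss = sum(val_loss_history) / len(val_loss_history)  # average over all epochs

print(last_val_loss, best_val_loss, mean_val_loss)
```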
