I need help as I am new to Keras. I was reading about dropout and how using it can affect the loss calculation during the training and validation phases: dropout is only active at training time and not at validation time, so comparing the two losses can be misleading.
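For context, here is a minimal sketch of what I mean (toy data and layer sizes made up, assuming `tf.keras`): the loss that `fit()` reports is computed with dropout active (and averaged over the epoch while the weights change), while `evaluate()` on the same data runs with dropout off, so the two numbers differ.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy data, just for illustration.
x = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                      # applied only in training mode
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

history = model.fit(x, y, epochs=3, batch_size=32, verbose=0)

# fit() reports a loss averaged over the epoch, computed with dropout active;
# evaluate() runs the same data with dropout disabled, so the numbers differ.
print("loss reported by fit (dropout on):", history.history["loss"][-1])
print("loss from evaluate (dropout off):", model.evaluate(x, y, verbose=0))
```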
My questions are:
- What does `learning_phase_scope(1)` do?
- How does it impact validation?
- What steps should I take to correct the test loss when dropout is used?
It's not only `Dropout` but `BatchNormalization` as well that has to change behavior, or it will affect validation performance. If you use Keras and just want to get the validation loss (and/or accuracy or other metrics), then you'd better use `model.evaluate()` or add `validation_data` to `model.fit`, and not do anything with `learning_phase_scope`.
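A minimal sketch of both options (toy data, assuming `tf.keras`); in either case the validation numbers are computed with `Dropout` and `BatchNormalization` in inference mode, with no learning-phase handling on your side:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1))
x_val = np.random.rand(200, 20).astype("float32")
y_val = np.random.randint(0, 2, size=(200, 1))

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.BatchNormalization(),   # moving averages are used at validation time
    layers.Dropout(0.5),           # disabled at validation time
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Option 1: report val_loss / val_accuracy at the end of every epoch.
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=3, batch_size=32)

# Option 2: evaluate explicitly whenever you want the numbers.
val_loss, val_acc = model.evaluate(x_val, y_val, batch_size=32)
```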
`learning_phase_scope(1)` means it's for training; 0 is for predict/validate.

Personally, I use `learning_phase_scope` only when I want to train something that doesn't end with a simple `model.fit` (e.g. visualizing CNN filters), and I've only needed it once in the past 3 years.
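For reference, a minimal sketch of that convention, using the old standalone-Keras backend API (`keras.backend.learning_phase_scope`, which newer Keras versions have dropped in favor of passing `training=True/False` to layer calls). The scope affects ops built while it is active, which is why it mainly matters for hand-built graph code such as filter visualization:

```python
from keras import backend as K
from keras.layers import Dropout, Input

inputs = Input(shape=(10,))

# 1 = training phase: layers built inside this scope see the learning phase
# as "training", so this Dropout op actually drops units when it runs.
with K.learning_phase_scope(1):
    train_out = Dropout(0.5)(inputs)

# 0 = test/predict phase: the same Dropout built here is an identity op.
with K.learning_phase_scope(0):
    test_out = Dropout(0.5)(inputs)
```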