I'm using this TensorFlow tutorial to create a DeepDream image (https://www.tensorflow.org/tutorials/generative/deepdream), but instead of InceptionV3 I'm loading my own convolutional neural net (model summary at the bottom). Instead of the tutorial's "mixed3" and "mixed5" layers, I use my "conv2d_40" and "conv2d_41" layers. When I run the "run_deep_dream_simple" step with my model's layers, I get an InvalidArgumentError (Graph execution error):
Inputs to operation while/body/_1/gradient_tape/while/model_1/conv2d_41/ReluGrad of type ReluGrad must have the same size and shape. Input 0: [1,213,141,64] != input 1: [1,87,157,64] [[{{node gradient_tape/while/model_1/conv2d_41/ReluGrad}}]] [Op:__inference___call___8564]
It seems like I need to change a tensor shape somewhere, or deprocess the image I'm feeding in, which has shape (180, 320, 3). Any ideas how to do this?
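For reference, my setup roughly follows the tutorial code, with only the model and layer names swapped (the model path and image-loading details below are placeholders; the `DeepDream` class and `run_deep_dream_simple` function are copied unchanged from the tutorial):

```python
import tensorflow as tf

# Load my trained model instead of InceptionV3 ('my_model.h5' is a placeholder path)
base_model = tf.keras.models.load_model('my_model.h5')

# Maximize the activations of these layers (instead of the tutorial's 'mixed3' and 'mixed5')
names = ['conv2d_40', 'conv2d_41']
layers = [base_model.get_layer(name).output for name in names]

# Create the feature-extraction model, exactly as in the tutorial
dream_model = tf.keras.Model(inputs=base_model.input, outputs=layers)

# DeepDream class and run_deep_dream_simple are copied verbatim from the tutorial
deepdream = DeepDream(dream_model)

# original_img has shape (180, 320, 3); this is the call that raises the error
dream_img = run_deep_dream_simple(img=original_img, steps=100, step_size=0.01)
```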
Model Summary:
Layer (type)                                 Output Shape              Param #
conv2d_40 (Conv2D)                           (None, 430, 286, 32)      896
batch_normalization_53 (BatchNormalization)  (None, 430, 286, 32)      128
max_pooling2d_40 (MaxPooling2D)              (None, 215, 143, 32)      0
dropout_53 (Dropout)                         (None, 215, 143, 32)      0
conv2d_41 (Conv2D)                           (None, 213, 141, 64)      18496
batch_normalization_54 (BatchNormalization)  (None, 213, 141, 64)      256
max_pooling2d_41 (MaxPooling2D)              (None, 106, 70, 64)       0
dropout_54 (Dropout)                         (None, 106, 70, 64)       0
conv2d_42 (Conv2D)                           (None, 104, 68, 128)      73856
batch_normalization_55 (BatchNormalization)  (None, 104, 68, 128)      512
max_pooling2d_42 (MaxPooling2D)              (None, 52, 34, 128)       0
dropout_55 (Dropout)                         (None, 52, 34, 128)       0
flatten_14 (Flatten)                         (None, 226304)            0
dense_27 (Dense)                             (None, 512)               115868160
batch_normalization_56 (BatchNormalization)  (None, 512)               2048
dropout_56 (Dropout)                         (None, 512)               0
dense_28 (Dense)                             (None, 2)                 1026