I've tried to create my own Deep Dream algorithm with this code:
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import inception
img = np.random.rand(1,500,500,3)
net = inception.get_inception_model()
tf.import_graph_def(net['graph_def'], name='inception')
graph = tf.get_default_graph()
sess = tf.Session()
layer = graph.get_tensor_by_name('inception/mixed5b_pool_reduce_pre_relu:0')
gradient = tf.gradients(tf.reduce_mean(layer), graph.get_tensor_by_name('inception/input:0'))
softmax = sess.graph.get_tensor_by_name('inception/softmax2:0')
iters = 100
init = tf.global_variables_initializer()
sess.run(init)
for i in range(iters):
    prediction = sess.run(softmax, {'inception/input:0': img})
    grad = sess.run(gradient[0], {'inception/input:0': img})
    grad = (grad - np.mean(grad)) / np.std(grad)
    img = grad
    plt.imshow(img[0])
    plt.savefig('output/' + str(i + 1) + '.png')
    plt.close('all')
But even after running this loop for 100 iterations, the resulting picture still looks random (I have attached it to this question).
Can someone please help me optimize my code?
Using the Inception network for Deep Dream is a bit fiddly. In the CADL course that you have borrowed the helper library from, the instructor uses VGG16 as the example network instead. If you use that and make a few small modifications to your code, you should get something that works (if you swap the Inception network back in, it will kind of work, but the results will look even more disappointing):
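The exact modifications aren't reproduced here, but a minimal sketch of their shape is below. It assumes the CADL library exposes a vgg16.get_vgg_model() helper analogous to inception.get_inception_model(), and the tensor names ('vgg/images:0' and the conv-layer name) are placeholders, so check graph.get_operations() for the real names in your graph. The crucial change is the update step: add the normalised gradient to the image (gradient ascent) rather than replacing the image with it.

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import vgg16  # CADL helper; assumed analogous to the inception module

# VGG16 conventionally takes 224x224 inputs
img = np.random.rand(1, 224, 224, 3).astype(np.float32)

net = vgg16.get_vgg_model()
tf.import_graph_def(net['graph_def'], name='vgg')
graph = tf.get_default_graph()
sess = tf.Session()

# Placeholder tensor names -- inspect graph.get_operations() for the real ones
input_name = 'vgg/images:0'
layer = graph.get_tensor_by_name('vgg/conv4_2/conv4_2:0')
gradient = tf.gradients(tf.reduce_mean(layer), graph.get_tensor_by_name(input_name))

step = 1.0  # gradient-ascent step size
for i in range(100):
    grad = sess.run(gradient[0], {input_name: img})
    grad = (grad - np.mean(grad)) / (np.std(grad) + 1e-8)
    img += step * grad            # ascend the gradient; do not replace img with it
    img = np.clip(img, 0.0, 1.0)  # keep pixel values in a displayable range
    plt.imshow(img[0])
    plt.savefig('output/' + str(i + 1) + '.png')
    plt.close('all')

Note there is also no need to run the unused softmax prediction each iteration, and a frozen, imported graph has no variables to initialise.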
Doing all this produces images that are clearly working, but which still need some refinement.
Getting better, full-colour images of the kind you may have seen online requires further changes. For instance, you could re-normalise or blur the image slightly between iterations.
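As an illustrative sketch (these exact calls are not from the original course), a slight Gaussian blur and a re-normalisation between iterations could look like this, using scipy.ndimage:

from scipy.ndimage import gaussian_filter

# inside the iteration loop, after the gradient-ascent update:
# blur the spatial dimensions slightly, leaving the colour channels alone
img[0] = gaussian_filter(img[0], sigma=(0.5, 0.5, 0))

# and/or stretch the image back to the full [0, 1] display range
img = (img - img.min()) / (img.max() - img.min() + 1e-8)

Blurring suppresses the high-frequency noise that gradient ascent tends to amplify, which pushes the output towards smoother, more colourful structures.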
If you want to get more sophisticated, you could try the TensorFlow Jupyter notebook walk-through, although it is somewhat harder to follow from first principles because it combines several ideas at once.