I'm working on a university project and I'm completely stuck on a problem. I want to train a CNN model to do image style transfer from one picture to another, and I followed this tutorial: https://www.tensorflow.org/tutorials/generative/style_transfer#learn_more. However, I don't want to stylize just a single image: I have to train on one style and then feed in a video, so that every frame is converted to that style. So I have to put my frames into the trained model, and I can't get the output to show me an image.
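For context, this is roughly how I plan to handle the video part: decode it into frames and run each frame through the model independently. The `preprocess_frame` and `stylize_video` helpers below are my own hypothetical sketch, not from the tutorial:

```python
import numpy as np

def preprocess_frame(frame):
    # Convert a uint8 HxWx3 frame to a float32 batch in [0, 1],
    # which is the input range the tutorial's model expects.
    return frame.astype(np.float32)[np.newaxis, ...] / 255.0

def stylize_video(frames, stylize_fn):
    # Apply the already-trained style model to each frame independently.
    # stylize_fn stands in for whatever call produces a stylized image.
    return [stylize_fn(preprocess_frame(f)) for f in frames]
```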
I worked through said tutorial, and after training the model I tried to put a new image into it, like this:
image = load_img('Tests/hut.jpeg')  # forward slash avoids backslash-escape issues on Windows

def show_processed_image(image):
    outputs = extractor(image)
    generated_image = outputs['content']['block5_conv2']
    print(generated_image)
    plt.imshow(tensor_to_image(generated_image))
    plt.show()

show_processed_image(image)
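`tensor_to_image` is the helper from the tutorial; I'm reproducing it here so the snippet above is self-contained:

```python
import numpy as np
import PIL.Image

def tensor_to_image(tensor):
    # Scale a float tensor in [0, 1] back up to [0, 255] uint8.
    tensor = tensor * 255
    tensor = np.array(tensor, dtype=np.uint8)
    if np.ndim(tensor) > 3:
        # Drop the batch dimension (assumes batch size 1).
        assert tensor.shape[0] == 1
        tensor = tensor[0]
    return PIL.Image.fromarray(tensor)
```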
The extractor is the trained model; it looks like this:
class StyleContentModel(tf.keras.models.Model):
    def __init__(self, style_layers, content_layers):
        super(StyleContentModel, self).__init__()
        self.vgg = vgg_layers(style_layers + content_layers)
        self.style_layers = style_layers
        self.content_layers = content_layers
        self.num_style_layers = len(style_layers)
        self.vgg.trainable = False

    def call(self, inputs):
        "Expects float input in [0,1]"
        inputs = inputs * 255.0
        preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
        outputs = self.vgg(preprocessed_input)
        style_outputs, content_outputs = (outputs[:self.num_style_layers],
                                          outputs[self.num_style_layers:])

        style_outputs = [gram_matrix(style_output)
                         for style_output in style_outputs]

        content_dict = {content_name: value
                        for content_name, value
                        in zip(self.content_layers, content_outputs)}

        style_dict = {style_name: value
                      for style_name, value
                      in zip(self.style_layers, style_outputs)}

        return {'content': content_dict, 'style': style_dict}
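`gram_matrix` also comes from the tutorial; to illustrate what the `'style'` entries of the returned dict contain, here is a NumPy sketch of the same computation (`gram_matrix_np` is my own name for it, not the tutorial's):

```python
import numpy as np

def gram_matrix_np(feature_map):
    # feature_map: (batch, height, width, channels) activations.
    # The Gram matrix holds channel-by-channel correlations,
    # averaged over all spatial locations.
    result = np.einsum('bijc,bijd->bcd', feature_map, feature_map)
    num_locations = feature_map.shape[1] * feature_map.shape[2]
    return result / num_locations
```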