I want to implement a loss function similar to the one in this paper: https://arxiv.org/pdf/1511.08861.pdf
They combine the L1 (Mean Absolute Error) and the MS-SSIM loss as in the following equation:
L_Mix = α · L_MSSSIM + (1 − α) · G · L_1   (where G is a Gaussian filter applied to the L1 term)
There is a caffe code available on GitHub: https://github.com/NVlabs/PL4NN/blob/master/src/loss.py
But I don't know how to use this in TF. Is there already similar existing code for TF?
I started trying this:
def ms_ssim_loss(y_true, y_pred):
    # tf.image.ssim_multiscale returns one value per image, so average them
    ms_ssim = tf.reduce_mean(tf.image.ssim_multiscale(y_true, y_pred, 1.0))
    loss = 1.0 - ms_ssim
    return loss

def mix_loss(y_true, y_pred):
    alpha = 0.84
    ms_ssim = ms_ssim_loss(y_true, y_pred)
    l1 = tf.reduce_mean(tf.abs(y_true - y_pred))  # MAE as a tensor, not the keras class
    gauss = gaussian(...)  # this is the part I don't know
    loss = alpha * ms_ssim + (1 - alpha) * gauss * l1
    return loss
But I don't know how to implement and use the Gaussian filter here.
Thanks in advance and best regards!
Hi, I was having the same issue as you: I wanted to use the MS-SSIM + L1 loss in TensorFlow, and I think I managed to implement it myself.
Note: the link to the paper you mention isn't the original paper for the MS-SSIM + L1 loss; it is rather this one: https://arxiv.org/pdf/1511.08861.pdf
If we look at the Caffe implementation, there is a comment above the MS-SSIM + L1 function at line 216.
So I implemented it like this:
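A minimal sketch of that approach: alpha = 0.84 follows the paper, while the threshold value and the default max_picture_value are assumptions you need to tune for your own data (e.g. 65535.0 for 16-bit pictures normalized to their raw range):

```python
import tensorflow as tf

def mix_loss(y_true, y_pred, max_picture_value=1.0, alpha=0.84, threshold=0.1):
    """Sketch of an MS-SSIM + L1 mixed loss (threshold value is an assumption)."""
    mae_loss = tf.reduce_mean(tf.abs(y_true - y_pred))
    # tf.image.ssim_multiscale can return NaN while the prediction is still
    # far off, so fall back to single-scale SSIM above the threshold.
    ms_ssim = tf.cond(
        mae_loss <= threshold,
        lambda: tf.reduce_mean(
            tf.image.ssim_multiscale(y_true, y_pred, max_picture_value)),
        lambda: tf.reduce_mean(
            tf.image.ssim(y_true, y_pred, max_picture_value)),
    )
    return alpha * (1.0 - ms_ssim) + (1.0 - alpha) * mae_loss
```

You can pass this directly to model.compile(loss=mix_loss) if you fix the extra parameters, e.g. with functools.partial. Note that tf.image.ssim_multiscale needs images large enough for its 5 default scales (roughly 176 px per side with the default filter size of 11).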
max_picture_value represents your picture range (I use 16-bit pictures). This code uses a mix of the SSIM and MS-SSIM functions to avoid a bug causing the loss to go NaN, so you need to set a threshold using if mae_loss <= X, where X is a value low enough for MS-SSIM to be over 50% accurate (from what I have read, it is related to MS-SSIM not liking negative values; I have also run some experiments, and it bugs out below that value).
I hope this answer can help you or others like me who were looking for a way to use it in TensorFlow :)
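As for the Gaussian-weighted L1 term (the G · L1 part of the paper's equation) the question asked about, one possible sketch uses a depthwise convolution; the kernel size and sigma here are values I assumed (matching SSIM's usual defaults), not taken from the paper:

```python
import tensorflow as tf

def gaussian_kernel(size=11, sigma=1.5):
    """Build a normalized 2-D Gaussian kernel (size and sigma are assumptions)."""
    ax = tf.range(size, dtype=tf.float32) - (size - 1) / 2.0
    g = tf.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    kernel = tf.tensordot(g, g, axes=0)  # outer product -> [size, size]
    return kernel / tf.reduce_sum(kernel)

def gaussian_l1(y_true, y_pred, size=11, sigma=1.5):
    """L1 error smoothed by a Gaussian filter, applied per channel."""
    channels = y_true.shape[-1]
    kernel = gaussian_kernel(size, sigma)
    # Shape the kernel for depthwise_conv2d: [h, w, in_channels, multiplier]
    kernel = tf.tile(kernel[:, :, tf.newaxis, tf.newaxis], [1, 1, channels, 1])
    abs_err = tf.abs(y_true - y_pred)
    blurred = tf.nn.depthwise_conv2d(
        abs_err, kernel, strides=[1, 1, 1, 1], padding="SAME")
    return tf.reduce_mean(blurred)
```

This blurs the per-pixel absolute error with the Gaussian before averaging, which is the spirit of the paper's G_σ · L_1 term; whether it matches the Caffe code exactly I haven't checked.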