PyTorch: torch.autograd.grad returns NoneType


Here is my code:

import torch
#Captum Attribution
from captum.attr import Saliency

model = torch.hub.load('pytorch/vision:v0.10.0', 'squeezenet1_1', pretrained=True)
model.eval()

sal = Saliency(model)

#X, y is an image and label
original_label = y
test_image = X.reshape([1,3,227,227]).float()

#I need gradient w.r.t. this test_image
test_image.requires_grad = True
test_image.retain_grad()

#Calculate saliency
attribution = sal.attribute(test_image, target=original_label)
attribution = torch.sum(torch.abs(attribution[0]), dim=0)
attribution = 227 * 227 * attribution / torch.sum(attribution)
attribution = attribution.view(-1)
elem1 = torch.argsort(attribution)[-1000:]
elements1 = torch.zeros(227 * 227)
elements1[elem1] = 1

#I need gradient of topK_loss w.r.t. test_image
topK_loss = torch.sum(attribution * elements1)
topK_loss.requires_grad = True
topK_loss.retain_grad()
gradients = -torch.autograd.grad(outputs=topK_loss, inputs=test_image, allow_unused=True)[0]

I get this error: bad operand type for unary -: 'NoneType'

From what I was told and what I found searching, this means that autograd cannot find a path from topK_loss back to test_image.
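
If it helps, here is a stripped-down snippet (no Captum, made-up tensors) that I believe shows the same behaviour: the loss is built from a detached tensor, so there is no path back to the input.

import torch

x = torch.randn(1, 3, 4, 4, requires_grad=True)   # stand-in for test_image
attr = x.detach().abs()                            # detached, like the saliency map
loss = attr.sum()
loss.requires_grad = True                          # set after the fact, records no graph
grad = torch.autograd.grad(outputs=loss, inputs=x, allow_unused=True)[0]
print(grad)   # None -> applying unary '-' to it gives "bad operand type for unary -"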

Can anyone please help me resolve this issue and point me in the right direction?

Thanks in advance!

1 Answer

Answered by V12:

None is returned when the grad isn't populated during the backward call because requires_grad was False for the tensors involved when the graph was built. Going by your code, the requires_grad = True assignment needs to come before any of the loss calculations: setting topK_loss.requires_grad = True after the loss has already been computed does not retroactively record a graph back to test_image, so autograd finds no path and, with allow_unused=True, returns None.
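
As a rough sketch of that ordering point, with plain tensors rather than your Captum pipeline (the detach() below just stands in for any value produced outside the recorded graph):

import torch

# Flag set on the result after it was computed: no graph is recorded,
# so the gradient w.r.t. x comes back as None.
x = torch.randn(3, requires_grad=True)
loss = (x.detach() ** 2).sum()
loss.requires_grad = True
print(torch.autograd.grad(loss, x, allow_unused=True)[0])  # None

# requires_grad on the input before the forward pass (and no detach):
# the graph exists, a real gradient tensor comes back, and -grad works.
x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()
print(torch.autograd.grad(loss, x)[0])  # tensor([...])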

Do you have a more minimal snippet, without the extra package dependency, that reproduces this? That would make it easier to reproduce on our end and track down the root cause.