I am using PyTorch 1.6.0 to learn a tensor (let's say x) with autograd. After x is learnt, how can I reset .requires_grad of every tensor that was a node in the autograd computation graph to False? I know about torch.detach() and about setting .requires_grad to False manually. I am searching for a one-shot instruction.
PS: I want to do that because I still want to use these tensors after the part of my code that learns x is executed. Plus, some are to be converted to numpy.
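For context, the per-tensor approach I already know about looks like this (a minimal sketch; y and z are placeholder tensors standing in for nodes of the graph):

```python
import torch

# Placeholder setup: x is the tensor being learnt, y and z stand in for
# other tensors that ended up as nodes in the autograd graph.
x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
z = torch.randn(3, requires_grad=True)

# ... the part of the code that learns x runs here ...

# What I currently do, tensor by tensor:
y = y.detach()            # new tensor sharing storage, cut from the graph
z.requires_grad_(False)   # or flip the flag in place (works on leaf tensors)
z_np = z.numpy()          # numpy conversion only works once requires_grad is False
```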
There is no "one-shot instruction" to switch .requires_grad for all tensors in the graph. Usually parameters are kept in torch.nn.Module instances, but in case they are elsewhere, you can always add them to some list and iterate over it. I'd do something like this:
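(The snippet below is a minimal sketch of that idea; the tensor names are placeholders, and requires_grad can only be flipped on leaf tensors, so non-leaf results would need .detach() instead.)

```python
import torch

# Placeholder leaf tensors standing in for whatever was learnt in the graph.
x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
z = torch.randn(3, requires_grad=True)

# Collect the tensors you still need afterwards and switch the flag off in one loop.
learnt_tensors = [x, y, z]
for tensor in learnt_tensors:
    tensor.requires_grad_(False)  # in-place; only valid for leaf tensors

# They can now be used freely and converted to numpy.
arrays = [tensor.numpy() for tensor in learnt_tensors]
```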
Usually there is no need for that, though. Also, if you don't want gradients for some part of the computation, you can always use the torch.no_grad() context manager:
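For example (a short sketch with a placeholder computation):

```python
import torch

x = torch.randn(3, requires_grad=True)

# No graph is recorded inside this block, so the result does not require grad
# and can be converted to numpy directly.
with torch.no_grad():
    result = x * 2

print(result.requires_grad)  # False
result_np = result.numpy()   # works without detaching
```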