Writing memory for parameters in PyTorch `named_parameters`

import torch

class MyAlgo(torch.optim.Optimizer):

    def __init__(self, params, model):
        super().__init__(params, {})  # register params with the base Optimizer
        self.model = model

    def step(self, closure=None):
        for name, param in self.model.named_parameters():
            # intended: overwrite param with a new tensor of the same shape
            param = torch.randn_like(param)

In PyTorch, can the parameter tensors returned by model.named_parameters() be overwritten by plain assignment, as in the code above? Update (answer found): no; assignment only rebinds the local loop variable and leaves the stored parameter untouched. One should use an in-place operation, param.copy_(tensor_of_same_shape), to write into param.
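A minimal sketch of the in-place approach: the update rule here (plain gradient descent) and the lr value are illustrative assumptions, not part of the original question.

```python
import torch

class MyAlgo(torch.optim.Optimizer):
    def __init__(self, params, model, lr=0.1):  # lr value is an assumption
        super().__init__(params, {"lr": lr})
        self.model = model

    @torch.no_grad()  # keep the update out of the autograd graph
    def step(self, closure=None):
        for name, param in self.model.named_parameters():
            if param.grad is None:
                continue
            # copy_ writes into the existing tensor in place; a plain
            # assignment (param = ...) would only rebind the local name
            param.copy_(param - self.defaults["lr"] * param.grad)

model = torch.nn.Linear(2, 1)
opt = MyAlgo(model.parameters(), model)
model(torch.ones(1, 2)).sum().backward()
opt.step()  # parameters are modified in place
```

Because copy_ mutates the same tensor object the model holds, the update is visible to subsequent forward passes without reassigning anything on the module.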

A follow-up question: is iterating over named_parameters() the best way to manipulate parameters, or would an approach based on self.param_groups bring any efficiency benefits?
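For comparison, a sketch of the param_groups-based style that built-in optimizers use; the class name and SGD-like update are illustrative assumptions. It needs no model reference at all, since the base Optimizer already stores the parameters and per-group hyperparameters.

```python
import torch

class MyAlgoGroups(torch.optim.Optimizer):  # hypothetical name
    def __init__(self, params, lr=0.1):
        super().__init__(params, {"lr": lr})

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for param in group["params"]:
                if param.grad is None:
                    continue
                # in-place update; add_ with alpha avoids a temporary tensor
                param.add_(param.grad, alpha=-group["lr"])

model = torch.nn.Linear(2, 1)
opt = MyAlgoGroups(model.parameters(), lr=0.05)
model(torch.ones(1, 2)).sum().backward()
opt.step()
```

The efficiency difference is usually minor, but param_groups skips building name strings, supports different hyperparameters per group, and matches the convention of torch.optim; named_parameters() is mainly useful when the update must depend on a parameter's name.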
