I'm working on a PINN model whose network has multiple inputs and multiple outputs. Since it's a PINN, I need the Jacobian of the outputs with respect to the inputs, separately for each input sample.
I expected autograd.functional.jacobian to give me that matrix directly, but it returns something else: a tensor with extra batch dimensions, containing the derivative of every output sample with respect to every input sample. To recover the per-sample Jacobians I have to extract the block diagonal.
I'm attaching example code to illustrate what I mean. I need to obtain the tensor jacobian_p instead of jacobian (the indexing fix I applied works, but it won't scale, because the full cross-batch tensor is materialized first and runs into memory issues).
import torch
from torch import autograd

N = 2
M = 2

class CustomWeightNN(torch.nn.Module):
    def __init__(self):
        super(CustomWeightNN, self).__init__()
        # Define custom weights for a linear layer
        custom_weights = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
        self.fc1 = torch.nn.Linear(N, M, bias=False)
        self.fc1.weight = torch.nn.Parameter(custom_weights)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return x

net = CustomWeightNN()
x = torch.tensor([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [3.0, 4.0]], requires_grad=True)
y = net(x)

# jacobian has shape (B, M, B, N); the per-sample Jacobians sit on its block diagonal
jacobian = autograd.functional.jacobian(net, x)
jacobian_p = jacobian[torch.arange(x.shape[0]), :, torch.arange(x.shape[0]), :]

print("Input x:", x)
print("Output y:", y)
print("Jacobian matrix:", jacobian)
print("Jacobian_p matrix:", jacobian_p)
I've tried autograd.functional.jacobian and I'm expecting to get the Jacobian matrix for each input sample.
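For reference, one approach I'm considering is computing the per-sample Jacobians directly with torch.func (assuming PyTorch >= 2.0, where jacrev and vmap are available), so the (B, M, B, N) cross-batch tensor is never built. This is only a sketch of what I'd expect to be equivalent to jacobian_p above:

```python
import torch
from torch.func import jacrev, vmap

N = 2
M = 2

class CustomWeightNN(torch.nn.Module):
    def __init__(self):
        super(CustomWeightNN, self).__init__()
        custom_weights = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
        self.fc1 = torch.nn.Linear(N, M, bias=False)
        self.fc1.weight = torch.nn.Parameter(custom_weights)

    def forward(self, x):
        return torch.relu(self.fc1(x))

net = CustomWeightNN()
x = torch.tensor([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [3.0, 4.0]])

# jacrev(net) maps one sample of shape (N,) to its (M, N) Jacobian;
# vmap batches that over the leading dimension, giving shape (B, M, N)
# without ever forming the (B, M, B, N) cross-batch tensor.
per_sample_jac = vmap(jacrev(net))(x)
print(per_sample_jac.shape)  # torch.Size([4, 2, 2])
```

For this toy net, every relu input is positive, so each per-sample Jacobian should just equal the weight matrix [[1, 2], [3, 4]].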