I am using torch.norm to calculate Lp norms with relatively large values for p (in the range of 10-50). The vectors I do this for have relatively small values, and I notice that the result incorrectly becomes 0. In the example below, this already happens at p=9! Presumably each element raised to the p-th power underflows float32: (1e-5)^9 = 1e-45 is already at the very bottom of float32's subnormal range, so the sum inside the norm collapses to 0.
import torch
lb = 1e-6 # Lower bound
ub = 1e-5 # Upper bound
# Construct vector
v = torch.rand(100) * (ub - lb) + lb
# Calculate Lp norms
for p in range(1, 20):
    print(p, torch.norm(v, p=p))
# It should approach the maximum value for p -> inf
print(torch.max(v))
Is there a way to circumvent this issue? Or is it inherent to machine precision and the associated rounding errors? Ideally, I would like a solution that keeps the computation graph intact, but I am also interested in numerical approximations.
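One idea that might sidestep the underflow entirely is to compute the norm in log space, so the p-th powers are never materialized (a minimal sketch, assuming all elements are non-zero as in the example above; lp_norm_stable is just an illustrative name):

import torch

def lp_norm_stable(v, p):
    # ||v||_p = exp( logsumexp(p * log|v|) / p )
    # logsumexp subtracts the max before exponentiating, so all
    # intermediate values stay in a safe range and never underflow.
    return torch.logsumexp(p * torch.log(v.abs()), dim=0).div(p).exp()

Every op here is differentiable, so the computation graph should be preserved (though the gradient is undefined at exact zeros).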
Another way to mitigate this is to normalize the vector v with its mean, as sketched below. But I am not sure how well this works when there is a large difference in magnitudes between the vector elements.
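A minimal sketch of that idea, reusing v from the snippet above: since the norm is absolutely homogeneous (||v||_p = m * ||v/m||_p for any m > 0), dividing by the mean brings the elements to order 1 before the power is taken:

m = torch.mean(v)
for p in range(1, 20):
    # (v / m)**p no longer underflows because v / m is O(1);
    # multiplying by m afterwards restores the correct scale.
    print(p, m * torch.norm(v / m, p=p))

This also keeps the graph intact, since only differentiable operations are involved.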