[Python]$ python3 playingWithGrads.py
a=tensor([0., 0., 0., 0.])
b=tensor(4)
d=tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000])
e=tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000], dtype=torch.float64)
g=tensor([1., 2., 4.])
h=tensor([1., 3., 5.], device='mps:0')
i=tensor([[3., 0.],
        [4., 5.]])
j=tensor([[ 9.,  0.],
        [32., 25.]])
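The script itself is not shown; the names and values above are consistent with a tensor-creation warm-up along the following lines (a reconstruction, with j = i @ i inferred from the fact that [[9., 0.], [32., 25.]] is the matrix product of i with itself, not its elementwise square):

import torch

a = torch.zeros(4)                                # a=tensor([0., 0., 0., 0.])
b = torch.tensor(4)                               # an integer scalar
d = torch.arange(0, 1, 0.2)                       # five steps of 0.2, float32
e = torch.arange(0, 1, 0.2, dtype=torch.float64)  # same values, printed with dtype=torch.float64
g = torch.tensor([1., 2., 4.])
h = torch.tensor([1., 3., 5.], device='mps')      # Apple-silicon GPU; prints as device='mps:0'
i = torch.tensor([[3., 0.], [4., 5.]])
j = i @ i                                         # matrix product, giving [[9., 0.], [32., 25.]]
print(f"a={a}\nb={b}\nd={d}\ne={e}\ng={g}\nh={h}\ni={i}\nj={j}")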
[Python]$ python3 playingWithGrads.py
tensor(450., grad_fn=<SumBackward0>)
tensor([  6.,  24.,  54.,  96., 150.])
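Working backwards from the printed numbers: 450 = 2*(1 + 8 + 27 + 64 + 125) and the gradient [6, 24, 54, 96, 150] is 6*x**2 at x = [1, 2, 3, 4, 5]. A sketch that reproduces this run (the original source is not shown):

import torch

x = torch.arange(1., 6., requires_grad=True)  # tensor([1., 2., 3., 4., 5.])
y = (2 * x**3).sum()                          # 2*(1 + 8 + 27 + 64 + 125) = 450
y.backward()                                  # fills x.grad with dy/dx = 6*x**2
print(y)
print(x.grad)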
[Python]$ python3 playingWithGrads.py
tensor(55296000., grad_fn=<ProdBackward0>)
tensor([1.6589e+08, 8.2944e+07, 5.5296e+07, 4.1472e+07, 3.3178e+07])
[Python]$ python3 playingWithGrads.py
tensor(1.6589e+08)
tensor(82944000.)
tensor(55296000.)
tensor(41472000.)
tensor(33177600.)
tensor([1.6589e+08, 8.2944e+07, 5.5296e+07, 4.1472e+07, 3.3178e+07])
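The next two runs swap sum() for prod() and then verify the gradient by hand. With z = prod(2*x_i**3) = 55,296,000 at x = [1, 2, 3, 4, 5], the chain rule gives dz/dx_i = (z / (2*x_i**3)) * 6*x_i**2 = 3*z/x_i, which is exactly the sequence 165,888,000, 82,944,000, 55,296,000, 41,472,000, 33,177,600 printed above. A hedged reconstruction of both runs:

import torch

x = torch.arange(1., 6., requires_grad=True)
z = (2 * x**3).prod()        # 2 * 16 * 54 * 128 * 250 = 55,296,000
z.backward()

# Manual check: d/dx_i of prod_j(2*x_j**3) is 3*z/x_i
with torch.no_grad():
    for xi in x:
        print(3 * z / xi)    # one scalar per element, no grad_fn attached
print(x.grad)                # autograd agrees with the manual values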
x values: tensor([0.8823, 0.9150, 0.3829, 0.9593], requires_grad=True)
w values: tensor([0.3904, 0.6009, 0.2566, 0.7936], requires_grad=True)
z: tensor(3.0761, grad_fn=<PowBackward0>)
w grad values: tensor([3.0948, 3.2096, 1.3430, 3.3650])
manual grad calculation: tensor([3.0948, 3.2096, 1.3430, 3.3650])
torch grad calculation: tensor([3.0948, 3.2096, 1.3430, 3.3650])
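The printed x and w match the first eight values drawn after torch.manual_seed(42), which suggests a run like the one below (a sketch; the seed and the exact expression are inferred, but z = ((x * w).sum())**2 is consistent with the PowBackward0 node and with the gradient 2 * (x * w).sum() * x printed above):

import torch

torch.manual_seed(42)                  # inferred: rand() after seed 42 yields exactly these values
x = torch.rand(4, requires_grad=True)  # tensor([0.8823, 0.9150, 0.3829, 0.9593])
w = torch.rand(4, requires_grad=True)  # tensor([0.3904, 0.6009, 0.2566, 0.7936])

y = (x * w).sum()                      # dot product of x and w
z = y**2                               # hence grad_fn=<PowBackward0>
z.backward()

print("x values:", x)
print("w values:", w)
print("z:", z)
print("w grad values:", w.grad)

# Chain rule by hand: dz/dw_i = 2 * (x . w) * x_i
manual = 2 * (x * w).sum().detach() * x.detach()
print("manual grad calculation:", manual)
print("torch grad calculation:", w.grad)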
x is leaf: True
w is leaf: True
y is leaf: False
z is leaf: False
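These four prints follow directly from the graph in the sketch above: x and w were created by the user, so they are leaves; y and z were produced by tracked operations, so they are not. Continuing with the sketch's names:

print("x is leaf:", x.is_leaf)  # True  - created directly with requires_grad=True
print("w is leaf:", w.is_leaf)  # True
print("y is leaf:", y.is_leaf)  # False - result of (x * w).sum()
print("z is leaf:", z.is_leaf)  # False - result of y**2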
a is leaf True
a is leaf False
a is leaf False
a is leaf True
a is leaf True
a is leaf True
a is leaf False
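The operations behind these seven checks are not shown, but the True/False pattern (True, False, False, True, True, True, False) matches the standard leaf rules: a tensor created directly by the user is a leaf; any result of an autograd-tracked operation, including moving a requires_grad tensor to another device, is not; a tensor without grad tracking, a tensor created directly on the device, and a detached tensor are leaves again. An illustrative sequence that prints the same pattern (not necessarily the original one):

import torch

a = torch.rand(3, requires_grad=True)
print("a is leaf", a.is_leaf)                        # True  - created directly by the user

a = a * 2
print("a is leaf", a.is_leaf)                        # False - result of a tracked operation

a = torch.rand(3, requires_grad=True).to('mps')      # the device move is itself an operation
print("a is leaf", a.is_leaf)                        # False

a = torch.rand(3).to('mps')                          # no grad tracking, so still a leaf
print("a is leaf", a.is_leaf)                        # True

a = torch.rand(3, requires_grad=True, device='mps')  # created directly on the device
print("a is leaf", a.is_leaf)                        # True

a = (torch.rand(3, requires_grad=True) * 2).detach() # detach() always returns a leaf
print("a is leaf", a.is_leaf)                        # True

a = torch.rand(3, requires_grad=True) + 1
print("a is leaf", a.is_leaf)                        # False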