Variable (class torch.autograd.Variable)
Declaring a tensor
- torch.zeros()
- torch.ones()
- torch.rand()
- torch.full()
- torch.empty()
- torch.randn()
- torch.ones_like()
- torch.zeros_like()
- torch.randn_like()
- torch.Tensor
Code example
```python
import torch
```
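With torch imported as above, a short sketch exercising each constructor from the list; the shapes and fill values are illustrative assumptions:

```python
a = torch.zeros(2, 3)               # 2x3 tensor of zeros
b = torch.ones(2, 3)                # 2x3 tensor of ones
c = torch.full((2, 3), 7.0)         # 2x3 tensor filled with 7.0
d = torch.empty(2, 3)               # allocated but uninitialized memory
e = torch.rand(2, 3)                # uniform samples from [0, 1)
f = torch.randn(2, 3)               # samples from the standard normal
g = torch.ones_like(a)              # ones, with the same shape/dtype as a
h = torch.zeros_like(b)             # zeros, with the same shape/dtype as b
i = torch.randn_like(e)             # normal samples, shaped like e
j = torch.Tensor([[1, 2], [3, 4]])  # build directly from nested data
```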
Tensor operations
```python
import torch
```
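The snippets below reference two operands a and b; their names and the 3x4 shape are assumptions, sketched here with torch already imported above:

```python
a = torch.rand(3, 4)  # example operands for the snippets below
b = torch.rand(3, 4)
```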
Addition
```python
print(a + b)  # method 1
```
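The operator form above is "method 1"; PyTorch also offers a functional form, an out= variant, and an in-place variant. A sketch using a and b from above:

```python
print(torch.add(a, b))       # method 2: functional form
result = torch.empty(3, 4)
torch.add(a, b, out=result)  # method 3: write into a preallocated tensor
a.add_(b)                    # method 4: in-place; the trailing underscore mutates a
```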
Transpose
```python
print(a.t())  # print the transpose of tensor a
```
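For a 2-D tensor, .t() is equivalent to swapping the first two dimensions with torch.transpose; a quick check using the a defined above:

```python
print(a.t().shape)                     # torch.Size([4, 3])
print(torch.transpose(a, 0, 1).shape)  # same result as .t()
```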
Maximum over rows and columns
```python
torch.max(tensor, dim)  # returns the max values and their indices along dim
```
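torch.max(tensor, dim) returns a pair (values, indices): dim=0 reduces down the rows (per-column max), dim=1 reduces across the columns (per-row max). A sketch with assumed values:

```python
import torch

t = torch.Tensor([[1, 5, 2],
                  [4, 3, 6]])
values, indices = torch.max(t, dim=0)  # per-column max
print(values, indices)                 # tensor([4., 5., 6.]) tensor([1, 0, 1])
values, indices = torch.max(t, dim=1)  # per-row max
print(values, indices)                 # tensor([5., 6.]) tensor([1, 2])
```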
torch.clamp is functionally quite similar to ReLU.
```python
torch.clamp(tensor, min, max, out=None)
```
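Every element is clipped into [min, max]; leaving max unset and clamping at min=0 reproduces ReLU, which is the similarity noted above. A sketch with assumed values:

```python
import torch

t = torch.Tensor([-2.0, -0.5, 0.5, 3.0])
print(torch.clamp(t, min=-1, max=1))  # tensor([-1.0000, -0.5000,  0.5000,  1.0000])
print(torch.clamp(t, min=0))          # tensor([0.0000, 0.0000, 0.5000, 3.0000]), same as ReLU
```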
Converting between tensor and numpy
convert tensor to numpy
```python
a = torch.ones(3, 4)
b = a.numpy()  # b is a numpy.ndarray sharing memory with a
```
convert numpy to tensor
```python
import numpy

a = numpy.ones((4, 3))   # note: numpy.ones takes the shape as a single tuple
b = torch.from_numpy(a)  # b is a torch tensor sharing memory with a
```
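Both directions share the underlying memory on CPU, so an in-place change on one side is visible on the other; a quick check:

```python
import torch

a = torch.ones(3, 4)
b = a.numpy()
a.add_(1)       # in-place update of the tensor
print(b[0, 0])  # 2.0: the numpy array saw the change
```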
Variable and Tensor
Variable
Figure 1. Variable
Attributes
As shown in Figure 1, a Variable wraps a Tensor and has six attributes: data, grad, requires_grad, volatile, is_leaf, and grad_fn. We can access the raw tensor through .data, and gradients w.r.t. this Variable are accumulated into .grad. Finally, the creator attribute tells us how the Variable was created; we access it through .grad_fn. If the Variable was created by the user, grad_fn is None; otherwise it records which Function created the Variable.
Variables whose grad_fn is None are called graph leaves.
```python
Variable.shape  # check the Variable's size
```
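A quick sketch of these attributes (in recent PyTorch, Variable is a thin wrapper over Tensor, but the attributes behave the same; the values here are illustrative):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 3), requires_grad=True)
y = x.sum()
print(x.data)                # the wrapped raw tensor
print(x.shape)               # torch.Size([2, 3])
print(x.grad_fn, x.is_leaf)  # None True: created by the user, a graph leaf
print(y.grad_fn, y.is_leaf)  # <SumBackward0 ...> False: created by a Function
```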
parameters
```python
torch.autograd.Variable(data, requires_grad=False, volatile=False)
```
requires_grad: indicates whether backward() will ever need to be called on (or through) this Variable
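requires_grad propagates through the graph: an output requires gradients if any of its inputs does. A sketch:

```python
import torch
from torch.autograd import Variable

a = Variable(torch.randn(3), requires_grad=False)
b = Variable(torch.randn(3), requires_grad=True)
print((a + a).requires_grad)  # False: no input needs gradients
print((a + b).requires_grad)  # True: b needs gradients, so the result does too
```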
backward
backward(gradient=None, retain_graph=None, create_graph=None, retain_variables=None)
If the Variable is a scalar output, we do not need to specify gradient; but if the Variable is not a scalar and has multiple elements, we must pass a gradient matching the output. Its type can be a tensor or a Variable, and its values are the scaling factors applied to the gradients, for example:
```python
import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([3, 6, 4]), requires_grad=True)
```
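A minimal completion of the example; the doubling operation y = x * 2 is an assumption, while the gradient argument matches the explanation below:

```python
y = x * 2                               # some non-scalar output of x
y.backward(torch.Tensor([0.1, 1, 10]))  # scale the element-wise gradients
print(x.grad)                           # each gradient of 2 scaled by 0.1, 1, 10: [0.2, 2, 20]
```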
Here $[0.1, 1, 10]$ means the normal gradients are multiplied by $0.1$, $1$, and $10$ respectively, and the scaled gradients are then accumulated on the leaf Variables.
```python
detach()  # returns a new Variable detached from the current graph; it shares data but never requires grad
```
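Because a detached Variable is treated as a constant, gradients stop flowing through it; a sketch with assumed values:

```python
import torch
from torch.autograd import Variable

x = Variable(torch.Tensor([2.0]), requires_grad=True)
y = x * x
z = y.detach() * x  # y.detach() is treated as a constant
z.backward()
print(x.grad)       # tensor([4.]): dz/dx = y = 4, no gradient flows through y
```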