Variables in torch are used to build a computational graph, but unlike the static graphs of TensorFlow and Theano, Torch's graph is dynamic. Therefore, torch needs no placeholders: you can pass variables into the computation graph directly.
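To sketch what "dynamic" means here, ordinary Python control flow decides how many operations end up in the graph; the graph is rebuilt on every forward pass. (The loop count below is just an illustrative choice.)

```python
import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([2.0]), requires_grad=True)
y = x
for _ in range(3):      # plain Python loop builds the graph as it runs
    y = y * x           # after the loop, y = x^4
y.backward()
print(x.grad)           # dy/dx = 4*x^3 = 32 at x = 2
```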
Required packages
import torch
from torch.autograd import Variable
tensor = torch.FloatTensor([[1,2],[3,4]])        # build a tensor
variable = Variable(tensor, requires_grad=True)  # build a variable, usually for computing gradients
print(tensor)    # [torch.FloatTensor of size 2x2]
print(variable)  # [torch.FloatTensor of size 2x2]
tensor([[1., 2.],
        [3., 4.]])
tensor([[1., 2.],
        [3., 4.]], requires_grad=True)
So far, tensor and variable look the same.
However, variable is part of the graph and takes part in auto-gradient computation, while a plain tensor does not.
t_out = torch.mean(tensor*tensor)      # x^2
v_out = torch.mean(variable*variable)  # x^2
print(t_out)
print(v_out)
7.5
Variable containing:
 7.5000
[torch.FloatTensor of size 1]
v_out.backward() # backpropagation from v_out
$$ v_{out} = \frac{1}{4} \sum (variable^2) $$
The gradient w.r.t. the variable:
$$ \frac{d(v_{out})}{d(variable)} = \frac{1}{4} \cdot 2 \, variable = \frac{variable}{2} $$
Let's check the result PyTorch calculated for us below:
variable.grad
Variable containing:
 0.5000  1.0000
 1.5000  2.0000
[torch.FloatTensor of size 2x2]
variable # this is data in variable format
Variable containing:
 1  2
 3  4
[torch.FloatTensor of size 2x2]
variable.data # this is data in tensor format
 1  2
 3  4
[torch.FloatTensor of size 2x2]
variable.data.numpy() # numpy format
array([[ 1.,  2.],
       [ 3.,  4.]], dtype=float32)
Note that we did .backward() on v_out, but variable has been assigned new values on its grad. That is because the line

v_out = torch.mean(variable*variable)

makes a new variable v_out and connects it with variable in the computation graph.
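We can see this connection directly: the result remembers the operation that created it. (The attribute name grad_fn is from PyTorch 0.4 and later; very old versions exposed this link under a different name.)

```python
import torch
from torch.autograd import Variable

variable = Variable(torch.FloatTensor([[1,2],[3,4]]), requires_grad=True)
v_out = torch.mean(variable * variable)

# v_out carries a reference to the mean operation, linking it back to variable
print(v_out.grad_fn)

v_out.backward()
print(variable.grad)   # gradients flow back along that link into variable.grad
```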
type(v_out)
torch.autograd.variable.Variable
type(v_out.data)
torch.FloatTensor
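As a closing note: since PyTorch 0.4 the Variable wrapper is deprecated, because Variable and Tensor were merged. A plain tensor created with requires_grad=True plays the same role, so the whole example above can be written as:

```python
import torch

# Modern equivalent of the Variable example: no wrapper needed
t = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
out = torch.mean(t * t)
out.backward()
print(t.grad)   # same gradients as before: t / 2
```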