- Focus on basic tensor usage and how it differs from numpy
- Master tensor dimension operations (concatenation, dimension expansion, squeezing, transposing, repeating, ...)
Basic numpy operations:
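A minimal sketch of the basic numpy operations the rest of these notes lean on (array creation, reshape, matrix multiply, transpose); the shapes here are just illustrative:

```python
import numpy as np

a = np.arange(6).reshape((2, 3))         # [[0 1 2], [3 4 5]]
b = np.ones((3, 2))                      # 3x2 array of ones

print(a.shape, a.dtype)                  # (2, 3) int64 (platform dependent)
print(np.matmul(a, b))                   # matrix product, shape (2, 2)
print(a.T)                               # transpose, shape (3, 2)
print(np.expand_dims(a, axis=0).shape)   # add a leading axis -> (1, 2, 3)
```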
1. TensorFlow
Differences between TensorFlow and numpy
Similarities: both provide n-dimensional arrays.
Differences: numpy works with ndarray while TensorFlow works with tensor; numpy does not provide tensor-creation functions with automatic differentiation, and it has no GPU support.
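A minimal sketch of the differentiation point, using the TF 1.x session style the rest of these notes assume:

```python
import numpy as np
import tensorflow as tf

# numpy: plain n-dimensional arrays, no automatic differentiation
x_np = np.ones((2, 2))

# TensorFlow: tensors can live on GPU and be differentiated automatically
x = tf.Variable(tf.ones((2, 2)))
y = tf.reduce_sum(x * x)        # y = sum(x^2)
grad = tf.gradients(y, x)[0]    # dy/dx = 2x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))       # [[2. 2.], [2. 2.]]
```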
- Difference between tf.random_normal and tf.truncated_normal in TensorFlow: both sample from a normal distribution, but tf.truncated_normal drops and re-draws any value that falls more than two standard deviations from the mean.
Code:

```python
import tensorflow as tf

a = tf.Variable(tf.random_normal([2, 2], seed=1))
b = tf.Variable(tf.truncated_normal([2, 2], seed=2))
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(a))
    print(sess.run(b))
```

Output:

```
[[-0.81131822  1.48459876]
 [ 0.06532937 -2.44270396]]
[[-0.85811085 -0.19662298]
 [ 0.13895047 -1.22127688]]
```
Concatenation: tf.concat() is the TensorFlow equivalent of torch.cat().
Transposition: in PyTorch, a 2-D Tensor is transposed with transpose(dim0, dim1) or simply t();
a multi-dimensional Tensor has its dimensions permuted with permute(dim1, dim2, ..., dimn).
Adding a dimension: in TensorFlow, use tf.expand_dims(input, dim, name=None) to insert an extra dimension;
in (Lua) Torch this is nn.Unsqueeze(pos [, numInputDims]), and the PyTorch equivalent torch.unsqueeze(input, dim) inserts a size-1 dimension at position dim. A sketch of these equivalences follows below.
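A minimal sketch of these equivalences, assuming TF 1.x; the shapes are arbitrary and chosen only for illustration:

```python
import tensorflow as tf
import torch

# Concatenation
t1, t2 = torch.ones(2, 3), torch.zeros(2, 3)
print(torch.cat([t1, t2], dim=0).shape)     # torch.Size([4, 3])
x1, x2 = tf.ones([2, 3]), tf.zeros([2, 3])
print(tf.concat([x1, x2], axis=0).shape)    # (4, 3)

# Transposition
m = torch.rand(2, 3)
print(m.t().shape)                          # torch.Size([3, 2])
n = torch.rand(2, 3, 4)
print(n.permute(2, 0, 1).shape)             # torch.Size([4, 2, 3])

# Adding a size-1 dimension
print(torch.unsqueeze(m, 0).shape)          # torch.Size([1, 2, 3])
print(tf.expand_dims(x1, 0).shape)          # (1, 2, 3)
```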
2. PyTorch
```python
import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))
# numpy -> pytorch
torch_data = torch.from_numpy(np_data)
print(
    '\n numpy', np_data,
    '\n torch', torch_data,
)
'''
 numpy [[0 1 2]
 [3 4 5]]
 torch
 0  1  2
 3  4  5
[torch.LongTensor of size 2x3]
'''

# torch -> numpy
tensor2array = torch_data.numpy()
print(tensor2array)
"""
[[0 1 2]
 [3 4 5]]
"""

# Operators
# abs, add, ... behave much like their numpy counterparts
data = [[1, 2], [3, 4]]
tensor = torch.FloatTensor(data)  # convert to 32-bit float; torch ops expect Tensors, so convert before computing
print(
    '\n numpy', np.matmul(data, data),
    '\n torch', torch.mm(tensor, tensor)  # torch.dot() is the dot product of 1-D tensors
)
'''
 numpy [[ 7 10]
 [15 22]]
 torch
  7  10
 15  22
[torch.FloatTensor of size 2x2]
'''
```
- PyTorch 0.3: x = Variable(torch.rand(5, 3, 224, 224), requires_grad=True).cuda()
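Since PyTorch 0.4 the Variable wrapper has been merged into Tensor, so the same thing can be written directly on a Tensor; a minimal sketch (the CUDA guard is only there so it also runs on CPU-only machines):

```python
import torch

# PyTorch >= 0.4: Tensors track gradients themselves, no Variable needed
x = torch.rand(5, 3, 224, 224, requires_grad=True)
if torch.cuda.is_available():
    x = x.cuda()
print(x.shape, x.requires_grad)
```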
```python
# -*- coding: utf-8 -*-
"""
Neural Networks
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py
===============

Neural networks can be constructed using the ``torch.nn`` package.

Now that you had a glimpse of ``autograd``, ``nn`` depends on
``autograd`` to define models and differentiate them.
An ``nn.Module`` contains layers, and a method ``forward(input)`` that
returns the ``output``.

For example, look at this network that classifies digit images:

.. figure:: /_static/img/mnist.png
   :alt: convnet

   convnet

It is a simple feed-forward network. It takes the input, feeds it
through several layers one after the other, and then finally gives the
output.

A typical training procedure for a neural network is as follows:

- Define the neural network that has some learnable parameters (or weights)
- Iterate over a dataset of inputs
- Process input through the network
- Compute the loss (how far is the output from being correct)
- Propagate gradients back into the network's parameters
- Update the weights of the network, typically using a simple update rule:
  ``weight = weight - learning_rate * gradient``

Define the network
------------------

Let's define this network:
"""
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)

########################################################################
# You just have to define the ``forward`` function, and the ``backward``
# function (where gradients are computed) is automatically defined for you
# using ``autograd``.
# You can use any of the Tensor operations in the ``forward`` function.
#
# The learnable parameters of a model are returned by ``net.parameters()``

params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight

########################################################################
# Let's try a random 32x32 input.
# Note: expected input size of this net (LeNet) is 32x32. To use this net on
# the MNIST dataset, please resize the images from the dataset to 32x32.

input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)

########################################################################
# Zero the gradient buffers of all parameters and backprop with random
# gradients:

net.zero_grad()
out.backward(torch.randn(1, 10))

########################################################################
# .. note::
#
#     ``torch.nn`` only supports mini-batches. The entire ``torch.nn``
#     package only supports inputs that are a mini-batch of samples, and not
#     a single sample.
#
#     For example, ``nn.Conv2d`` will take in a 4D Tensor of
#     ``nSamples x nChannels x Height x Width``.
#
#     If you have a single sample, just use ``input.unsqueeze(0)`` to add
#     a fake batch dimension.
#
# Before proceeding further, let's recap all the classes you've seen so far.
#
# **Recap:**
#   -  ``torch.Tensor`` - A *multi-dimensional array* with support for autograd
#      operations like ``backward()``. Also *holds the gradient* w.r.t. the
#      tensor.
#   -  ``nn.Module`` - Neural network module. *Convenient way of
#      encapsulating parameters*, with helpers for moving them to GPU,
#      exporting, loading, etc.
#   -  ``nn.Parameter`` - A kind of Tensor, that is *automatically
#      registered as a parameter when assigned as an attribute to a*
#      ``Module``.
#   -  ``autograd.Function`` - Implements *forward and backward definitions
#      of an autograd operation*. Every ``Tensor`` operation creates at
#      least a single ``Function`` node that connects to functions that
#      created a ``Tensor`` and *encodes its history*.
#
# **At this point, we covered:**
#   -  Defining a neural network
#   -  Processing inputs and calling backward
#
# **Still Left:**
#   -  Computing the loss
#   -  Updating the weights of the network
#
# Loss Function
# -------------
# A loss function takes the (output, target) pair of inputs, and computes a
# value that estimates how far away the output is from the target.
#
# There are several different
# `loss functions <https://pytorch.org/docs/nn.html#loss-functions>`_ under the
# nn package.
# A simple loss is: ``nn.MSELoss`` which computes the mean-squared error
# between the input and the target.
#
# For example:

output = net(input)
target = torch.randn(10)      # a dummy target, for example
target = target.view(1, -1)   # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)

########################################################################
# Now, if you follow ``loss`` in the backward direction, using its
# ``.grad_fn`` attribute, you will see a graph of computations that looks
# like this:
#
# ::
#
#     input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
#           -> view -> linear -> relu -> linear -> relu -> linear
#           -> MSELoss
#           -> loss
#
# So, when we call ``loss.backward()``, the whole graph is differentiated
# w.r.t. the loss, and all Tensors in the graph that have ``requires_grad=True``
# will have their ``.grad`` Tensor accumulated with the gradient.
#
# For illustration, let us follow a few steps backward:

print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0])  # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU

########################################################################
# Backprop
# --------
# To backpropagate the error all we have to do is to ``loss.backward()``.
# You need to clear the existing gradients though, else gradients will be
# accumulated to existing gradients.
#
# Now we shall call ``loss.backward()``, and have a look at conv1's bias
# gradients before and after the backward.

net.zero_grad()     # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)

########################################################################
# Now, we have seen how to use loss functions.
#
# **Read Later:**
#
#   The neural network package contains various modules and loss functions
#   that form the building blocks of deep neural networks. A full list with
#   documentation is `here <https://pytorch.org/docs/nn>`_.
#
# **The only thing left to learn is:**
#
#   - Updating the weights of the network
#
# Update the weights
# ------------------
# The simplest update rule used in practice is the Stochastic Gradient
# Descent (SGD):
#
#     ``weight = weight - learning_rate * gradient``
#
# We can implement this using simple python code:
#
# .. code:: python
#
#     learning_rate = 0.01
#     for f in net.parameters():
#         f.data.sub_(f.grad.data * learning_rate)
#
# However, as you use neural networks, you want to use various different
# update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc.
# To enable this, we built a small package: ``torch.optim`` that
# implements all these methods. Using it is very simple:

import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()        # Does the update

###############################################################
# .. Note::
#
#       Observe how gradient buffers had to be manually set to zero using
#       ``optimizer.zero_grad()``. This is because gradients are accumulated
#       as explained in the `Backprop`_ section.
```
We are constantly fiddling with the dimensions of our data: sometimes expanding them (cat, expand), sometimes squeezing or cropping them (squeeze, view), so the methods that manipulate tensor dimensions are especially important. I have collected them here for my own reference and for friends; a small sketch using each one follows the list below.
The methods covered are:
torch.cat() torch.Tensor.expand()
torch.squeeze() torch.Tensor.repeat()
torch.Tensor.narrow() torch.Tensor.view()
torch.Tensor.resize_() torch.Tensor.permute()
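A minimal sketch exercising each method above (the shapes are arbitrary; resize_ is shown on a separate tensor because it resizes the underlying storage in place and may expose uninitialized memory):

```python
import torch

x = torch.rand(2, 3)

print(torch.cat([x, x], dim=0).shape)        # concatenate along rows      -> [4, 3]
print(x.unsqueeze(0).expand(4, 2, 3).shape)  # broadcast a size-1 dim      -> [4, 2, 3]
print(x.unsqueeze(0).squeeze().shape)        # drop all size-1 dims        -> [2, 3]
print(x.repeat(2, 3).shape)                  # tile the data               -> [4, 9]
print(x.narrow(1, 0, 2).shape)               # slice dim 1, start 0, len 2 -> [2, 2]
print(x.view(3, 2).shape)                    # reshape over the same data  -> [3, 2]
print(x.permute(1, 0).shape)                 # swap dimensions             -> [3, 2]

y = torch.rand(4)
y.resize_(2, 3)                              # in-place resize of the storage
print(y.shape)                               # [2, 3]
```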
Links to the previous three posts in this series:
A summary of 4 ways to generate random Tensors
Common Tensor math operations
Comparing Tensors
3. MXNet
- MXNet for PyTorch users in 10 minutes : https://beta.mxnet.io/guide/to-mxnet/pytorch.html
- MXNet exposes two APIs: the original (native) API and the Gluon API; a short sketch contrasting them follows this list.
- MNIST implementation with the native API: https://mxnet.incubator.apache.org/versions/master/tutorials/python/mnist.html
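A minimal sketch of the same fully connected layer in both styles, assuming MXNet 1.x (the layer sizes and names are arbitrary):

```python
import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

x = nd.random.uniform(shape=(2, 20))

# Gluon API: imperative and PyTorch-like
net = nn.Sequential()
net.add(nn.Dense(10))
net.initialize()
print(net(x).shape)                    # (2, 10)

# Native (symbolic) API: declare a graph, then bind and run it through a Module
data = mx.sym.Variable('data')
fc = mx.sym.FullyConnected(data=data, num_hidden=10)
mod = mx.mod.Module(symbol=fc, data_names=['data'], label_names=None)
mod.bind(data_shapes=[('data', (2, 20))])
mod.init_params()
mod.forward(mx.io.DataBatch(data=[x]), is_train=False)
print(mod.get_outputs()[0].shape)      # (2, 10)
```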
TensorFlow — numpy to tensor:

```python
import numpy as np
import tensorflow as tf

a = np.zeros((3, 3))
ta = tf.convert_to_tensor(a)
with tf.Session() as sess:
    print(sess.run(ta))
```

TensorFlow — tensor to numpy:

```python
import tensorflow as tf

img1 = tf.constant(value=[[[[1],[2],[3],[4]],[[1],[2],[3],[4]],[[1],[2],[3],[4]],[[1],[2],[3],[4]]]], dtype=tf.float32)
img2 = tf.constant(value=[[[[1],[1],[1],[1]],[[1],[1],[1],[1]],[[1],[1],[1],[1]],[[1],[1],[1],[1]]]], dtype=tf.float32)
img = tf.concat(values=[img1, img2], axis=3)

sess = tf.Session()
# sess.run(tf.initialize_all_variables())  # deprecated form
sess.run(tf.global_variables_initializer())
print("out1=", type(img))

# convert to a numpy array: .eval() evaluates the tensor into numpy data
img_numpy = img.eval(session=sess)
print("out2=", type(img_numpy))

# convert back to a tensor
img_tensor = tf.convert_to_tensor(img_numpy)
print("out3=", type(img_tensor))
```

MXNet:

```python
from mxnet import nd

x = nd.ones((2, 3))
a = x.asnumpy()        # NDArray -> numpy
print(type(a), a)
print(nd.array(a))     # numpy -> NDArray
```

PyTorch:

```python
import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)   # numpy -> torch
tensor2array = torch_data.numpy()        # torch -> numpy
print(
    '\nnumpy array:', np_data,           # [[0 1 2], [3 4 5]]
    '\ntorch tensor:', torch_data,       # 0 1 2 \n 3 4 5    [torch.LongTensor of size 2x3]
    '\ntensor to array:', tensor2array,  # [[0 1 2], [3 4 5]]
)
```
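One detail worth noting for the PyTorch conversion: torch.from_numpy shares memory with the source array, so in-place changes on either side are visible on the other. A quick check:

```python
import numpy as np
import torch

a = np.zeros((2, 2))
t = torch.from_numpy(a)    # shares memory with `a`
a[0, 0] = 5
print(t[0, 0].item())      # 5.0 -- the change shows up in the tensor too
```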