Creating MXNet operators (network layers) in Python

I don't know much about CUDA, so creating a new operator layer in Python is a good option, although of course the performance is not as good as code written directly in CUDA.

Under example/numpy-ops/ in the MXNet source tree there are official examples of writing new operators in Python. Running ndarray_softmax.py, numpy_softmax.py and custom_softmax.py, I found that ndarray_softmax.py trains nearly 3x faster than the other two. The reason is that ndarray_softmax.py launches a CUDA kernel, while the other two run numpy on the CPU.

Here I summarize my understanding of ndarray_softmax.py.

Looking at the source of ndarray_softmax.py, it overrides some basic methods of the parent class, the most important being the forward and backward operations:

def forward(self, in_data, out_data):
    x = in_data[0]
    y = out_data[0]
    if self.fwd_kernel is None:
        self.fwd_kernel = mx.rtc('softmax', [('x', x)], [('y', y)], """
int i = threadIdx.x + blockIdx.x*blockDim.x;
float max_x = x[i*x_dims[1]];
for (int j = 1; j < x_dims[1]; ++j) {
    if (max_x < x[i*x_dims[1]+j]) {
        max_x = x[i*x_dims[1]+j];
    }
}
float sum = 0.0f;
for (int j = 0; j < x_dims[1]; ++j) {
    sum += expf(x[i*x_dims[1]+j]-max_x);
}
for (int j = 0; j < x_dims[1]; ++j) {
    y[i*x_dims[1]+j] = expf(x[i*x_dims[1]+j]-max_x)/sum;
}
""")
    self.fwd_kernel.push([x], [y], (1, 1, 1), (x.shape[0], 1, 1))

def backward(self, out_grad, in_data, out_data, in_grad):
    l = in_data[1]
    y = out_data[0]
    dx = in_grad[0]
    if self.bwd_kernel is None:
        self.bwd_kernel = mx.rtc('softmax_grad', [('y', y), ('l', l)], [('dx', dx)], """
int i = blockIdx.x;
int j = threadIdx.x;
int k = static_cast<int>(l[i]);
if (j == k) {
    dx[i*dx_dims[1]+j] = y[i*dx_dims[1]+j] - 1.0f;
} else {
    dx[i*dx_dims[1]+j] = y[i*dx_dims[1]+j];
}
""")
    self.bwd_kernel.push([y, l], [dx], (y.shape[0], 1, 1), (y.shape[1], 1, 1))

 

What mx.rtc(...) defines is the CUDA code that gets compiled at runtime. Looking at the definition of the Rtc class in python/mxnet/rtc.py, its parameters are documented as follows:

 1 """MXRtc object in mxnet.
 2       This class allow you to write CUDA kernels in Python
 3       and call them with NDArray.
 4  5       Parameters
 6       ----------
 7       name : str
 8           Name of the kernel.
 9       inputs : tuple of (str, mxnet.ndarray)
10           List of input names and ndarray.
11       outputs : tuple of (str, mxnet.ndarray)
12           List of output names and ndarray.
13       kernel : str
14           The actual kernel code.
15           Note that this is only the body of the kernel, i.e.
16           after { and before }. Rtc will decorate the kernel.
17           For example, if ``name = "mykernel"`` and
18           inputs = [('x', mx.nd.zeros((10,)))]
19           outputs = [('y', mx.nd.zeros((10,)))]
20           kernel = "y[threadIdx.x] = x[threadIdx.x];",
21           then the compiled kernel will be:
22           extern "C" __global__ mykernel(float *x, float *y) {
23               const int x_ndim = 1;
24               const int x_dims = { 10 };
25               const int y_ndim = 1;
26               const int y_dims = { 10 };
27 28               y[threadIdx.x] = x[threadIdx.x];
29           }
30       """
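To make the "decorated kernel" idea concrete, here is a minimal sketch of calling mx.rtc directly with the copy kernel from the docstring above. This assumes an older MXNet build with CUDA and the runtime compiler enabled, where mx.rtc is available; only the kernel body is supplied and Rtc wraps it into a full __global__ function:

import mxnet as mx

x = mx.nd.ones((10,), ctx=mx.gpu())
y = mx.nd.zeros((10,), ctx=mx.gpu())

# only the kernel body is given; Rtc decorates it into an extern "C" __global__ function
copy_kernel = mx.rtc('mycopy', [('x', x)], [('y', y)],
                     "y[threadIdx.x] = x[threadIdx.x];")

# push(inputs, outputs, gridDim, blockDim): launch one block of 10 threads
copy_kernel.push([x], [y], (1, 1, 1), (10, 1, 1))
print(y.asnumpy())  # expected: ten 1.0 values copied from x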

 

Take ndarray_softmax.py as an example. The softmax layer's input has shape (100, 10) and its output has shape (100, 10), so inside forward x_dims = (100, 10), and the last argument to mx.rtc is the kernel body that gets compiled and executed. The final line of forward is the push call, with gridDim = (1, 1, 1) and blockDim = (100, 1, 1) (for the CUDA memory and thread layout, see cudaMemcpy and kernel launches), so each thread handles the 10 elements of one sample. threadIdx.x is the index of a thread within its block, blockDim is the number of threads per block, and therefore threadIdx.x + blockIdx.x*blockDim.x is the global index of the thread. With that, the softmax forward computation is easy to follow.
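For comparison, each thread is simply computing a numerically stable softmax over its own row. A numpy sketch of the same per-row computation (my own illustration, not the author's code):

import numpy as np

def softmax_rows(x):
    # x: (batch, num_classes); one row corresponds to one CUDA thread above
    max_x = x.max(axis=1, keepdims=True)       # the running max in the kernel
    e = np.exp(x - max_x)                      # expf(x[...] - max_x)
    return e / e.sum(axis=1, keepdims=True)    # normalize each row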

Now look at the backward method: this kernel uses a gridDim of (100, 1, 1) and a blockDim of (10, 1, 1), i.e. one thread per element, and each thread computes the gradient for its element.
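What it writes is the usual softmax-plus-cross-entropy gradient, dx = y - onehot(l). A numpy sketch of the same rule, again only for comparison:

import numpy as np

def softmax_grad_rows(y, l):
    # y: (batch, num_classes) softmax output, l: (batch,) integer labels
    dx = y.copy()
    dx[np.arange(y.shape[0]), l.astype(int)] -= 1.0  # subtract 1 at the label index
    return dx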

Example:

Here I implement a reorganize layer, i.e. the feature-reorganization (reorg) layer from YOLO; see YOLO v2 reorg for what it does. Of course, the clearest way to understand it is to read how it is implemented in the darknet source code.

This example only demonstrates subclassing mx.operator.NDArrayOp to implement a new operator layer. The layer has no weight parameters; for a layer with weights you also have to handle the corresponding values in forward and backward.
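Before the code, this is what the forward pass does to each input channel, written as a numpy sketch that mirrors the CUDA kernel's indexing (my own illustration, not part of the layer): with stride 2, every h x w channel is split into four h/2 x w/2 channels taken from the even/odd rows and columns.

import numpy as np

def reorg_forward_ref(x):
    # x: (n, c, h, w) -> (n, 4*c, h//2, w//2); stride fixed at 2, like the kernel below
    n, c, h, w = x.shape
    y = np.empty((n, 4 * c, h // 2, w // 2), dtype=x.dtype)
    for i in range(c):
        y[:, 4 * i + 0] = x[:, i, 0::2, 0::2]  # even rows, even cols
        y[:, 4 * i + 1] = x[:, i, 0::2, 1::2]  # even rows, odd cols
        y[:, 4 * i + 2] = x[:, i, 1::2, 0::2]  # odd rows, even cols
        y[:, 4 * i + 3] = x[:, i, 1::2, 1::2]  # odd rows, odd cols
    return y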

# -*- coding: utf-8 -*-
import mxnet as mx
import numpy as np
import logging

class NDArrayReorg(mx.operator.NDArrayOp):
    def __init__(self, stride=2):
        super(NDArrayReorg, self).__init__(True)
        self.stride = stride
        self.fwd_kernel = None
        self.bwd_kernel = None

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        data_shape = in_shape[0]
        # integer division so the shapes stay ints under Python 3 as well
        output_shape = [in_shape[0][0], in_shape[0][1]*4,
                        in_shape[0][2]//self.stride, in_shape[0][3]//self.stride]
        return [data_shape], [output_shape]

    def forward(self, in_data, out_data):
        x = in_data[0]
        y = out_data[0]
        if self.fwd_kernel is None:
            self.fwd_kernel = mx.rtc('reorg', [('x', x)], [('y', y)], """
            int i = threadIdx.x + blockIdx.x*blockDim.x;
            int yw = y_dims[3];
            int yh = y_dims[2];
            int N = yw*yh;
            int xw = x_dims[3];
            int xh = x_dims[2];
            int len_block = x_dims[2]*x_dims[3];
            for (int j = 0; j < xh; j += 2)
                for (int k = 0; k < xw; k += 2) {
                    int t = j/2;
                    y[i*len_block+t*yw+k/2] = x[i*len_block+j*xw+k];
                    y[i*len_block+t*yw+k/2+N] = x[i*len_block+j*xw+k+1];
                    y[i*len_block+t*yw+k/2+2*N] = x[i*len_block+(j+1)*xw+k];
                    y[i*len_block+t*yw+k/2+3*N] = x[i*len_block+(j+1)*xw+k+1];
                }
            """)
        self.fwd_kernel.push([x], [y], (x.shape[0]*x.shape[1], 1, 1), (1, 1, 1))

    def backward(self, out_grad, in_data, out_data, in_grad):
        y = out_grad[0]
        dx = in_grad[0]
        if self.bwd_kernel is None:
            self.bwd_kernel = mx.rtc('reorg_grad', [('y', y)], [('dx', dx)], """
            int i = threadIdx.x + blockIdx.x * blockDim.x;
            int yh = y_dims[2];
            int yw = y_dims[3];
            int N = yw*yh;
            int old_block = dx_dims[2]*dx_dims[3];
            for (int k = 0; k < 4; ++k)
                for (int j = 0; j < yw; ++j)
                    for (int t = 0; t < yh; ++t) {
                        dx[i*old_block+2*j*yw+t*2+k] = y[i*old_block+k*N+j*yw+t];
                    }
            """)
        self.bwd_kernel.push([y], [dx], (y.shape[0]*y.shape[1]//4, 1, 1), (1, 1, 1))

mnist = mx.test_utils.get_mnist()
batch_size = 100
train_iter = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'], batch_size, shuffle=True)
val_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)

data = mx.sym.var('data')
conv1 = mx.sym.Convolution(data=data, kernel=(5,5), num_filter=20)
tanh1 = mx.sym.Activation(data=conv1, act_type="tanh")
# pool1 = mx.sym.Pooling(data=tanh1, pool_type="max", kernel=(2,2), stride=(2,2))

reorg = NDArrayReorg(stride=2)
reg = reorg(data=tanh1, name='reorg')
conv2 = mx.sym.Convolution(data=reg, kernel=(5,5), num_filter=20)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")  # 80x8x8

conv2 = mx.sym.Convolution(data=tanh2, kernel=(5,5), num_filter=50)
tanh2 = mx.sym.Activation(data=conv2, act_type="tanh")
# pool2 = mx.sym.Pooling(data=tanh2, pool_type="max", kernel=(2,2), stride=(2,2))

flatten = mx.sym.flatten(data=tanh2)
fc1 = mx.sym.FullyConnected(data=flatten, num_hidden=500)
tanh3 = mx.sym.Activation(data=fc1, act_type="tanh")

fc2 = mx.sym.FullyConnected(data=tanh3, num_hidden=10)

mynet = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

print(mynet.infer_shape(data=(100,1,28,28)))
mynet_model = mx.mod.Module(symbol=mynet, context=mx.gpu())

mynet_model.fit(train_iter,
                eval_data=val_iter,
                optimizer='sgd',
                optimizer_params={'learning_rate': 0.1},
                eval_metric='acc',
                batch_end_callback=mx.callback.Speedometer(100, 100),
                num_epoch=10)

test_iter = mx.io.NDArrayIter(mnist['test_data'], None, batch_size)
prob = mynet_model.predict(test_iter)
test_iter = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)
# predict accuracy for lenet
acc = mx.metric.Accuracy()
mynet_model.score(test_iter, acc)
print(acc)  # the network and hyper-parameters are arbitrary, so the accuracy itself means
            # little; the point is only to verify creating a CUDA kernel via mx.rtc
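As a quick sanity check of the operator on its own (a hypothetical snippet, not part of the original example), the inferred shapes can be printed without training anything:

reorg = NDArrayReorg(stride=2)
check = reorg(data=mx.sym.var('x'), name='reorg_check')
# with stride 2, infer_shape on the custom op maps (2, 3, 8, 8) -> (2, 12, 4, 4)
print(check.infer_shape(x=(2, 3, 8, 8)))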

 

So custom layers can be defined in a similar way, which is clearly much faster than using numpy.
