Convolution Operations in Detail

Today I studied how convolution is computed.
Convolution takes two inputs: an image and a filter.

  • Image: dimensions C*H*W, where C is the number of channels (also called depth), and H and W are the image height and width.

  • Filter: spatial dimensions K*K; each filter also spans all C input channels (so a full filter is K*K*C), and assume there are M filters.
    The pseudocode for direct convolution is:

 

for w in 1..W (img_width)
  for h in 1..H (img_height)
    for x in 1..K (filter_width)
      for y in 1..K (filter_height)
        for m in 1..M (num_filters)
          for c in 1..C (img_channel)
            output(w, h, m) += input(w+x, h+y, c) * filter(m, x, y, c)
          end
        end
      end
    end
  end
end
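
As a runnable counterpart of the pseudocode, here is a minimal C sketch of the same six-loop direct convolution. The layout choices are my assumptions, not from the original: input is C*H*W, filters are M*C*K*K, no padding, stride 1, so out_h = H - K + 1 and out_w = W - K + 1 (which also avoids the out-of-bounds reads the pseudocode glosses over).

// Direct convolution: input C*H*W, filters M*C*K*K, output M*out_h*out_w.
// Assumes no padding and stride 1, so out_h = H-K+1 and out_w = W-K+1.
void conv_direct(const float* input, const float* filter, float* output,
                 int C, int H, int W, int M, int K)
{
    int out_h = H - K + 1;
    int out_w = W - K + 1;
    for (int m = 0; m < M; ++m)                       // each filter
        for (int h = 0; h < out_h; ++h)               // output rows
            for (int w = 0; w < out_w; ++w)           // output columns
            {
                float sum = 0.0f;
                for (int c = 0; c < C; ++c)           // input channels
                    for (int y = 0; y < K; ++y)       // filter rows
                        for (int x = 0; x < K; ++x)   // filter columns
                            sum += input[(c * H + h + y) * W + (w + x)]
                                 * filter[((m * C + c) * K + y) * K + x];
                output[(m * out_h + h) * out_w + w] = sum;
            }
}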

 

  • Doing convolution as a matrix operation; the computational cost:

For a convolution layer with M input channels, N output channels, and kernel size k, on input data of size H*W:
Number of parameters: weights + biases = M*N*k*k + N
Compute cost: each output element takes M*k*k multiply-accumulates, and there are about H*W*N output elements (stride 1, same padding), so roughly H*W*N*M*k*k MACs in total.
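
To make both formulas concrete, here is a small check in C (the layer sizes are example values I picked, not from the original):

#include <stdio.h>

int main(void)
{
    // Example layer (values chosen only for illustration): M = 64 input
    // channels, N = 128 output channels, k = 3, output size H*W = 56*56.
    long M = 64, N = 128, k = 3, H = 56, W = 56;

    long params = M * N * k * k + N;        // weights + biases
    long macs   = H * W * N * M * k * k;    // one MAC per weight per output pixel

    printf("parameters: %ld\n", params);    // 73856
    printf("MACs:       %ld\n", macs);      // 231211008
    return 0;
}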


  • Convolution then becomes a matrix multiplication (GEMM in BLAS). BLAS implementations include MKL, ATLAS, and cuBLAS. This approach was recently beaten by the optimized code [3] in Alex Krizhevsky's cuda-convnet [2]:

https://github.com/soumith/convnet-benchmarks
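
The standard trick for turning convolution into GEMM is im2col: unroll every K*K*C receptive field into one column of a matrix. Below is a minimal sketch (my own simplified version, not from the original: stride 1, no padding, C*H*W layout, and the name im2col is just the conventional one):

// im2col: input C*H*W -> matrix of shape (C*K*K) x (out_h*out_w),
// stride 1, no padding. After this, convolution with M filters is one
// GEMM: [M x C*K*K] * [C*K*K x out_h*out_w] = [M x out_h*out_w].
void im2col(const float* input, float* col,
            int C, int H, int W, int K)
{
    int out_h = H - K + 1;
    int out_w = W - K + 1;
    for (int c = 0; c < C; ++c)
        for (int y = 0; y < K; ++y)
            for (int x = 0; x < K; ++x)
            {
                int row = (c * K + y) * K + x;    // row in the col matrix
                for (int h = 0; h < out_h; ++h)
                    for (int w = 0; w < out_w; ++w)
                        col[(row * out_h + h) * out_w + w] =
                            input[(c * H + h + y) * W + (w + x)];
            }
}

Each filter, flattened to a C*K*K row vector, then dot-products against every column; the GEMM itself would go to one of the BLAS libraries above (e.g. cblas_sgemm).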

  • The output of the convolution has dimensions num_filter * out_h * out_w; note that out_h and out_w are img_h and img_w after applying padding and stride:

h_out = (h_img + 2*padding - K_filter_h) / S + 1

w_out = (w_img + 2*padding - K_filter_w) / S + 1   (S is the stride)
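
The same formula as a small C helper (the function name conv_out_size is mine; integer division gives the usual floor behavior):

// Output spatial size after padding and stride (floor division).
int conv_out_size(int in_size, int k, int pad, int stride)
{
    return (in_size + 2 * pad - k) / stride + 1;
}
// Example: a 224-pixel-wide input with k = 7, pad = 3, stride = 2
// gives (224 + 2*3 - 7)/2 + 1 = 112.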


I'll come back to this part later.

#include <math.h>       // for fabs()
#include <stdbool.h>    // for bool

///////////////////////////////////////////////////////////////////////////////
// Simplest 2D convolution routine. It is easy to understand how convolution
// works, but is very slow, because of no optimization.
///////////////////////////////////////////////////////////////////////////////
bool convolve2DSlow(unsigned char* in, unsigned char* out, int dataSizeX, int dataSizeY,
                    float* kernel, int kernelSizeX, int kernelSizeY)
{
    int i, j, m, n, mm, nn;
    int kCenterX, kCenterY;                         // center index of kernel
    float sum;                                      // temp accumulation buffer
    int rowIndex, colIndex;

    // check validity of params
    if(!in || !out || !kernel) return false;
    if(dataSizeX <= 0 || dataSizeY <= 0 || kernelSizeX <= 0 || kernelSizeY <= 0) return false;

    // find center position of kernel (half of kernel size)
    kCenterX = kernelSizeX / 2;
    kCenterY = kernelSizeY / 2;

    for(i=0; i < dataSizeY; ++i)                // rows
    {
        for(j=0; j < dataSizeX; ++j)            // columns
        {
            sum = 0;                            // init to 0 before sum
            for(m=0; m < kernelSizeY; ++m)      // kernel rows
            {
                mm = kernelSizeY - 1 - m;       // row index of flipped kernel

                for(n=0; n < kernelSizeX; ++n)  // kernel columns
                {
                    nn = kernelSizeX - 1 - n;   // column index of flipped kernel

                    // index of input signal, used for checking boundary
                    rowIndex = i + m - kCenterY;
                    colIndex = j + n - kCenterX;

                    // ignore input samples which are out of bound
                    if(rowIndex >= 0 && rowIndex < dataSizeY && colIndex >= 0 && colIndex < dataSizeX)
                        sum += in[dataSizeX * rowIndex + colIndex] * kernel[kernelSizeX * mm + nn];
                }
            }
            // round the magnitude and clamp to the unsigned char range
            sum = (float)fabs(sum) + 0.5f;
            out[dataSizeX * i + j] = (unsigned char)(sum > 255.0f ? 255.0f : sum);
        }
    }

    return true;
}
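
A small usage sketch for the routine above (the image and kernel values are example data I made up): a 3x3 box blur over a 4x4 grayscale image, compiled together with convolve2DSlow.

#include <stdio.h>

int main(void)
{
    unsigned char in[16] = {
         10,  20,  30,  40,
         50,  60,  70,  80,
         90, 100, 110, 120,
        130, 140, 150, 160
    };
    unsigned char out[16];
    float kernel[9] = {                 // 3x3 box blur; weights sum to 1
        1/9.0f, 1/9.0f, 1/9.0f,
        1/9.0f, 1/9.0f, 1/9.0f,
        1/9.0f, 1/9.0f, 1/9.0f
    };

    if (convolve2DSlow(in, out, 4, 4, kernel, 3, 3))
        for (int i = 0; i < 4; ++i)
            printf("%4d%4d%4d%4d\n",
                   out[i*4], out[i*4+1], out[i*4+2], out[i*4+3]);
    return 0;
}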