CUDA Learning, Part 2: Using shared memory for matrix multiplication

Using shared memory in CUDA can speed up computation, and matrix multiplication is a good illustration of this.

For the product C = A * B, each entry is C[i,j] = A[i,:] · B[:,j]. Computing this on the CPU takes a long time: with A of size m×n and B of size n×k, C is m×k, so in total we perform m*n*k multiplications and m*(n-1)*k additions, all serially, for an overall complexity of O(m*n*k).

The matrix class:

class Matrix
{
public:
    int cols;   // x
    int rows;   // y
    float *data;  // the data, stored as a flat 1-D array
};
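As a sketch of how this class is meant to be used (the `makeMatrix` helper is hypothetical, not from the original post; it only assumes row-major storage, so element (y, x) lives at `data[y * cols + x]`):

```cpp
#include <cstdlib>

// Mirrors the Matrix class above: row-major, element (y, x) at data[y * cols + x].
class Matrix
{
public:
    int cols;   // x
    int rows;   // y
    float *data;
};

// Hypothetical helper: allocate a zero-filled rows x cols matrix on the host.
Matrix makeMatrix(int rows, int cols)
{
    Matrix m;
    m.rows = rows;
    m.cols = cols;
    m.data = (float *)calloc((size_t)rows * cols, sizeof(float));
    return m;
}
```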

The CPU version is a triple loop:

for (int i = 0; i < C.rows; i++)
{
    for (int j = 0; j < C.cols; j++)
    {
        float *a = A.data;
        float *b = B.data;
        C.data[i * C.cols + j] = 0;   // the accumulator must start at zero
        for (int k = 0; k < A.cols; k++)
            C.data[i * C.cols + j] += a[i * A.cols + k] * b[k * B.cols + j];
    }
}
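To sanity-check the indexing, here is a self-contained version of the same triple loop as a function over plain row-major arrays instead of the Matrix class (a minimal sketch for verification only):

```cpp
// C = A * B with row-major flat arrays:
// A is m x n, B is n x k, C is m x k. Same index arithmetic as the loop above.
void matmul(const float *A, const float *B, float *C, int m, int n, int k)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < k; j++)
        {
            float acc = 0;   // accumulate into a local instead of writing C repeatedly
            for (int e = 0; e < n; e++)
                acc += A[i * n + e] * B[e * k + j];
            C[i * k + j] = acc;
        }
}
```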

To speed this up on the GPU, a first CUDA kernel can be written like this:

__global__ void matrixMulKernel(const Matrix A, const Matrix B, Matrix C)
{
    // Each thread computes one element of C
    // by accumulating results into Cvalue
    float Cvalue = 0;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    for (int e = 0; e < A.cols; ++e)
        Cvalue += A.data[row * A.cols + e] * B.data[e * B.cols + col];
    C.data[row * C.cols + col] = Cvalue;
}
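This kernel assumes every (row, col) of C is covered by exactly one thread, so the host must size the grid accordingly. A minimal launch sketch (the 16×16 block size and the launch lines below are assumptions for illustration, not from the post):

```cpp
// Grid sizing for the kernel above: one thread per element of C.
// Ceiling division so a partially filled block is still launched when the
// matrix size is not a multiple of the block size (those extra threads
// would then need a bounds check inside the kernel).
int ceilDiv(int a, int b)
{
    return (a + b - 1) / b;
}

// Hypothetical launch on the host:
//   dim3 block(16, 16);
//   dim3 grid(ceilDiv(C.cols, 16), ceilDiv(C.rows, 16));
//   matrixMulKernel<<<grid, block>>>(A, B, C);
```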

Now the computation is parallel, but most of the time goes to memory reads rather than arithmetic. Each thread reads one row of A and one column of B from global memory to compute its element of C, so the whole of A is read B.cols times (each row of A is re-read by every thread in that output row) and the whole of B is read A.rows times; per element of C that is 2·A.cols global reads for only A.cols multiply-adds.
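As a rough model of that traffic (an assumption of one global load per operand read, ignoring any caching), the total number of float loads the naive kernel issues can be counted on the host:

```cpp
// Global-memory float loads for the naive kernel computing C = A(m x n) * B(n x k):
// each of the m*k threads reads n elements of A and n elements of B.
long long naiveLoads(long long m, long long n, long long k)
{
    return m * k * 2 * n;
}
// Equivalently: the whole of A (m*n floats) is swept k times and the
// whole of B (n*k floats) is swept m times, so
//   m*k*2*n == k*(m*n) + m*(n*k)
```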

 

There is a better approach using shared memory. Partition A and B into block_size×block_size sub-matrices subA[block_size][block_size] and subB[block_size][block_size], and declare them __shared__: all threads within a thread block share (read and write) the same shared memory. Each tile is then loaded from global memory once per block instead of once per thread, so the whole of A is read only B.cols/block_size times and the whole of B only A.rows/block_size times, cutting global-memory traffic by a factor of block_size while the multiply count per thread stays the same. The code:

__global__ void matrixMulKernel(const float *A, const float *B, float *C, int Aw, int Bw)
{
    const int bs = CUDA_LG::block_size;   // must be a compile-time constant: it sizes the shared arrays
    int tx = threadIdx.x;
    int ty = threadIdx.y;
    int bx = blockIdx.x;
    int by = blockIdx.y;

    // first and last tile of A for this block's row, and the stride between tiles
    int aBlockFirst = by * bs * Aw;
    int aBlockStep  = bs;
    int aBlockLast  = by * bs * Aw + Aw - 1;
    int bBlockFirst = bx * bs;
    int bBlockStep  = bs * Bw;

    float subC = 0;

    for (int a = aBlockFirst, b = bBlockFirst; a <= aBlockLast; a += aBlockStep, b += bBlockStep)
    {
        // the two sub-matrices (tiles) in shared memory
        __shared__ float subA[bs][bs];
        __shared__ float subB[bs][bs];

        // each thread loads one element of each tile
        subA[ty][tx] = A[a + ty * Aw + tx];
        subB[ty][tx] = B[b + ty * Bw + tx];

        __syncthreads();   // wait until the whole tile is loaded

        for (int i = 0; i < bs; i++)
        {
            subC += subA[ty][i] * subB[i][tx];
        }

        __syncthreads();   // all threads must finish before the tiles are overwritten
    }
    C[by * bs * Bw + bx * bs + ty * Bw + tx] = subC;
}
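The tiling logic is easy to check on the host: the loop below replays what the kernel does, block by block and thread by thread, in plain C++ (a verification sketch assuming the block size divides all dimensions; it is not the CUDA kernel itself):

```cpp
// CPU re-enactment of the tiled kernel: for each (by, bx) block and each
// (ty, tx) thread, walk the tiles of A and B and accumulate subC, using
// the same index arithmetic as the kernel above (Ah = A.rows, Aw = A.cols,
// Bw = B.cols, bs = block size; bs must divide Ah, Aw, and Bw).
void tiledMatmul(const float *A, const float *B, float *C,
                 int Ah, int Aw, int Bw, int bs)
{
    for (int by = 0; by < Ah / bs; by++)
      for (int bx = 0; bx < Bw / bs; bx++)
        for (int ty = 0; ty < bs; ty++)
          for (int tx = 0; tx < bs; tx++)
          {
              float subC = 0;
              for (int t = 0; t < Aw / bs; t++)    // tile index along the shared dimension
                  for (int i = 0; i < bs; i++)     // inner product within the tile
                      subC += A[(by * bs + ty) * Aw + t * bs + i]
                            * B[(t * bs + i) * Bw + bx * bs + tx];
              C[(by * bs + ty) * Bw + bx * bs + tx] = subC;
          }
}
```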

 

See the matrixMul program under 0_Simple in the CUDA 6.5 samples; it is commented in detail.

See also part 2 of Rachel Zhang's CUDA learning series: http://blog.csdn.net/abcjennifer/article/details/42528569
