Walking through the Caffe code: common (Part 8)

I set out to walk through the data_layer code, but halfway through I realized that a few very important header files come first, namely the ones listed in the title.

So let's trace things back to their roots and start from the foundation. What exactly is in these files?

The common class

Namespaces in use: google, cv, and caffe (which in turn brings in boost and std).

With these in place, the rest of the project can freely use google (gflags/glog), OpenCV, the C++ standard library, and the higher-level C++ library Boost.

Caffe uses the singleton pattern to wrap Boost's smart pointers (the soul of Caffe), a handful of commonly used std facilities, and the important initialization work (the random number generator, plus the initialization of Google's gflags and glog).

This provides a single, unified interface and makes porting and development easier. Why does Caffe need random numbers at all? I am not entirely sure myself; here is an explanation from Zhihu:

Random numbers matter a great deal in Caffe. The most important use is weight initialization, e.g. Gaussian or Xavier; the quality of the initialization directly affects the final training result. Other uses include the random crop and mirror of training images and the selection of neurons in the dropout layer. The RNG class wraps the random number facilities of Boost and the STL for convenient use. If you want the same random numbers on every run, simply set a fixed seed; see the definition of random_seed in caffe.proto:
    // If non-negative, the seed with which the Solver will initialize the Caffe
    // random number generator -- useful for reproducible results. Otherwise,
    // (and by default) initialize using a seed derived from the system clock.
    optional int64 random_seed = 20 [default = -1];
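
For illustration, here is a minimal sketch of my own (not from the original post): the same effect can be obtained programmatically by seeding Caffe before the net is built, which makes weight initialization, dropout masks and random crops/mirrors repeatable:

#include "caffe/common.hpp"

int main() {
  caffe::Caffe::set_random_seed(1701);  // fix the seed so every run draws the same random numbers
  // ... build and train the net as usual ...
  return 0;
}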

Header file:

#ifndef CAFFE_COMMON_HPP_
#define CAFFE_COMMON_HPP_

#include <boost/shared_ptr.hpp>
#include <gflags/gflags.h>
#include <glog/logging.h>

#include <climits>
#include <cmath>
#include <fstream>  // NOLINT(readability/streams)
#include <iostream>  // NOLINT(readability/streams)
#include <map>
#include <set>
#include <sstream>
#include <string>
#include <utility>  // pair
#include <vector>

#include "caffe/util/device_alternate.hpp"

// Convert macro to string
#define STRINGIFY(m) #m
#define AS_STRING(m) STRINGIFY(m)

// gflags 2.1 issue: namespace google was changed to gflags without warning.
// Luckily we will be able to use GFLAGS_GFLAGS_H_ to detect if it is version
// 2.1. If yes, we will add a temporary solution to redirect the namespace.
// TODO(Yangqing): Once gflags solves the problem in a more elegant way, let's
// remove the following hack.
// Detect gflags 2.1
#ifndef GFLAGS_GFLAGS_H_
namespace gflags = google;
#endif  // GFLAGS_GFLAGS_H_

// Disable the copy and assignment operator for a class.
// Forbid constructing an instance of the class from another instance (copy constructor)
// Forbid initializing an instance of the class from another instance via assignment
#define DISABLE_COPY_AND_ASSIGN(classname) \
private:\
  classname(const classname&);\
  classname& operator=(const classname&)
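// For comparison (my note, not part of the original header): in C++11 the same
// intent is usually written with deleted members, e.g.
//   classname(const classname&) = delete;
//   classname& operator=(const classname&) = delete;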

// Instantiate a class with float and double specifications.
#define INSTANTIATE_CLASS(classname) \
  char gInstantiationGuard##classname; \
  template class classname<float>; \
  template class classname<double>
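// For example (my illustration), INSTANTIATE_CLASS(ConvolutionLayer) at the
// bottom of a layer's .cpp file expands roughly to:
//   char gInstantiationGuardConvolutionLayer;
//   template class ConvolutionLayer<float>;
//   template class ConvolutionLayer<double>;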

// Explicitly instantiate the GPU forward function for float and double
#define INSTANTIATE_LAYER_GPU_FORWARD(classname) \
  template void classname<float>::Forward_gpu( \
      const std::vector<Blob<float>*>& bottom, \
      const std::vector<Blob<float>*>& top); \
  template void classname<double>::Forward_gpu( \
      const std::vector<Blob<double>*>& bottom, \
      const std::vector<Blob<double>*>& top);

// Explicitly instantiate the GPU backward function for float and double
#define INSTANTIATE_LAYER_GPU_BACKWARD(classname) \
  template void classname<float>::Backward_gpu( \
      const std::vector<Blob<float>*>& top, \
      const std::vector<bool>& propagate_down, \
      const std::vector<Blob<float>*>& bottom); \
  template void classname<double>::Backward_gpu( \
      const std::vector<Blob<double>*>& top, \
      const std::vector<bool>& propagate_down, \
      const std::vector<Blob<double>*>& bottom)

// Explicitly instantiate both the GPU forward and backward functions
#define INSTANTIATE_LAYER_GPU_FUNCS(classname) \
  INSTANTIATE_LAYER_GPU_FORWARD(classname); \
  INSTANTIATE_LAYER_GPU_BACKWARD(classname)

// A simple macro to mark codes that are not implemented, so that when the code
// is executed we will see a fatal log.
// NOT_IMPLEMENTED simply expands to LOG(FATAL) << "Not Implemented Yet"
#define NOT_IMPLEMENTED LOG(FATAL) << "Not Implemented Yet"
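// For illustration (my note): a layer without a GPU implementation can write
//   void Forward_gpu(...) { NOT_IMPLEMENTED; }
// and any call to it then aborts with a fatal glog message.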

// See PR #1236
namespace cv { class Mat; }
/*
The Caffe class contains an RNG, and the RNG class in turn contains a Generator
class. Inside RNG, Caffe's Get() is used to obtain the per-thread Caffe
instance, and RNG then delegates to its Generator.

Generator is what actually produces the random numbers. */

namespace caffe {

// We will use the boost shared_ptr instead of the new C++11 one mainly
// because cuda does not work (at least now) well with C++11 features.
using boost::shared_ptr;

// Common functions and classes from std that caffe often uses.
using std::fstream;
using std::ios;
//using std::isnan;  // the vc++ compiler does not support these two functions
//using std::isinf;
using std::iterator;
using std::make_pair;
using std::map;
using std::ostringstream;
using std::pair;
using std::set;
using std::string;
using std::stringstream;
using std::vector;

// A global initialization function that you should call in your main function.
// Currently it initializes google flags and google logging.
void GlobalInit(int* pargc, char*** pargv);

// A singleton class to hold common caffe stuff, such as the handler that
// caffe is going to use for cublas, curand, etc.
class Caffe {
 public:
  ~Caffe();

  // Thread local context for Caffe. Moved to common.cpp instead of
  // including boost/thread.hpp to avoid a boost/NVCC issues (#1009, #1010)
  // on OSX. Also fails on Linux with CUDA 7.0.18.
  // Get() is implemented on top of Boost's thread-local storage.
  static Caffe& Get();

  // Brew is the CPU/GPU enum. Does the name come from Homebrew, the Mac
  // package manager? Just my guess.
  enum Brew { CPU, GPU };

  // This random number generator facade hides boost and CUDA rng
  // implementation from one another (for cross-platform compatibility).
  class RNG {
   public:
    RNG();  // initializes the internal generator_ from the system entropy pool or the time
    explicit RNG(unsigned int seed);
    explicit RNG(const RNG&);
    RNG& operator=(const RNG&);
    void* generator();
   private:
    class Generator;
    shared_ptr<Generator> generator_;
  };

  // Getters for boost rng, curand, and cublas handles
  inline static RNG& rng_stream() {
    if (!Get().random_generator_) {
      Get().random_generator_.reset(new RNG());
    }
    return *(Get().random_generator_);
  }
#ifndef CPU_ONLY  // GPU
  inline static cublasHandle_t cublas_handle() { return Get().cublas_handle_; }  // cublas handle
  inline static curandGenerator_t curand_generator() {  // curandGenerator handle
    return Get().curand_generator_;
  }
#endif

  // The block below sets CPU/GPU mode and the number of parallel solvers used in training.
  // Returns the mode: running on CPU or GPU.
  inline static Brew mode() { return Get().mode_; }
  // The setters for the variables
  // Sets the mode. It is recommended that you don't change the mode halfway
  // into the program since that may cause allocation of pinned memory being
  // freed in a non-pinned way, which may cause problems - I haven't verified
  // it personally but better to note it here in the header file.
  inline static void set_mode(Brew mode) { Get().mode_ = mode; }
  // Sets the random seed of both boost and curand
  static void set_random_seed(const unsigned int seed);
  // Sets the device. Since we have cublas and curand stuff, set device also
  // requires us to reset those values.
  static void SetDevice(const int device_id);
  // Prints the current GPU status.
  static void DeviceQuery();
  // Parallel training info
  inline static int solver_count() { return Get().solver_count_; }
  inline static void set_solver_count(int val) { Get().solver_count_ = val; }
  inline static bool root_solver() { return Get().root_solver_; }
  inline static void set_root_solver(bool val) { Get().root_solver_ = val; }

 protected:
#ifndef CPU_ONLY
  cublasHandle_t cublas_handle_;  // cublas handle
  curandGenerator_t curand_generator_;  // curandGenerator handle
#endif
  shared_ptr<RNG> random_generator_;

  Brew mode_;
  int solver_count_;
  bool root_solver_;

 private:
  // The private constructor to avoid duplicate instantiation.
  Caffe();

  // Forbid copy construction and copy assignment of the Caffe class.
  DISABLE_COPY_AND_ASSIGN(Caffe);
};

}  // namespace caffe

#endif  // CAFFE_COMMON_HPP_
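
Before moving on to the implementation, here is a minimal usage sketch of my own (it assumes a GPU build of Caffe) showing how a caller talks to the singleton:

#include "caffe/common.hpp"

void configure_caffe() {
  caffe::Caffe::set_mode(caffe::Caffe::GPU);  // switch the current thread to GPU mode
  caffe::Caffe::SetDevice(0);                 // select GPU 0 and rebuild the cublas/curand handles
  if (caffe::Caffe::mode() == caffe::Caffe::GPU) {
    // subsequent layer Forward/Backward calls in this thread take the GPU path
  }
}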

The cpp file:

#include <boost/thread.hpp>
#include <glog/logging.h>
#include <cmath>
#include <cstdio>
#include <ctime>

#include "caffe/common.hpp"
#include "caffe/util/rng.hpp"

namespace caffe {

// Make sure each thread can have different values.
// boost::thread_specific_ptr provides thread-local storage,
// so the pointer starts out as NULL in every thread.
static boost::thread_specific_ptr<Caffe> thread_instance_;

Caffe& Caffe::Get() {
  if (!thread_instance_.get()) {// if the current thread has no Caffe instance yet
    thread_instance_.reset(new Caffe());// create one and return it
  }
  return *(thread_instance_.get());
}
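// Illustration (my note, not part of common.cpp): because the instance lives in
// thread-local storage, a mode set from a worker thread, e.g.
//   boost::thread t([] { caffe::Caffe::set_mode(caffe::Caffe::CPU); });
//   t.join();
// does not change the mode seen by the thread that spawned it.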

// random seeding
// on Linux, read the seed from the entropy pool (/dev/urandom)
int64_t cluster_seedgen(void) {
  int64_t s, seed, pid;
  FILE* f = fopen("/dev/urandom", "rb");
  if (f && fread(&seed, 1, sizeof(seed), f) == sizeof(seed)) {
    fclose(f);
    return seed;
  }

  LOG(INFO) << "System entropy source not available, "
              "using fallback algorithm to generate seed instead.";
  if (f)
    fclose(f);
  // otherwise fall back to the traditional time-based seed
  pid = getpid();
  s = time(NULL);
  seed = std::abs(((s * 181) * ((pid - 83) * 359)) % 104729);
  return seed;
}
// Initialize gflags and glog
void GlobalInit(int* pargc, char*** pargv) {
  // Google flags.
  ::gflags::ParseCommandLineFlags(pargc, pargv, true);
  // Google logging.
  ::google::InitGoogleLogging(*(pargv)[0]);
  // Provide a backtrace on segfault.
  ::google::InstallFailureSignalHandler();
}
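// Typical usage (a sketch; compare tools/caffe.cpp): call GlobalInit first thing in main():
//   int main(int argc, char** argv) {
//     caffe::GlobalInit(&argc, &argv);
//     // ... the tool's own logic ...
//   }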
#ifdef CPU_ONLY  // CPU-only Caffe.
Caffe::Caffe()
    : random_generator_(), mode_(Caffe::CPU),// shared_ptr<RNG> random_generator_;   Brew mode_;
      solver_count_(1), root_solver_(true) { }// int solver_count_;   bool root_solver_;
Caffe::~Caffe() { }
//  manually set the seed of the random number generator
void Caffe::set_random_seed(const unsigned int seed) {
  // RNG seed
  Get().random_generator_.reset(new RNG(seed));
}
void Caffe::SetDevice(const int device_id) {
  NO_GPU;
}
void Caffe::DeviceQuery() {
  NO_GPU;
}
// Define RNG's internal Generator class
class Caffe::RNG::Generator {
 public:
  Generator() : rng_(new caffe::rng_t(cluster_seedgen())) {}// seed from the Linux entropy pool; note typedef boost::mt19937 rng_t; in caffe/util/rng.hpp
  explicit Generator(unsigned int seed) : rng_(new caffe::rng_t(seed)) {}// initialize with the given seed
  caffe::rng_t* rng() { return rng_.get(); }// accessor
 private:
  shared_ptr<caffe::rng_t> rng_;// internal state
};
// Implement RNG's constructors
Caffe::RNG::RNG() : generator_(new Generator()) { }
Caffe::RNG::RNG(unsigned int seed) : generator_(new Generator(seed)) { }
// Implement RNG's assignment operator
Caffe::RNG& Caffe::RNG::operator=(const RNG& other) {
  generator_ = other.generator_;
  return *this;
}
void* Caffe::RNG::generator() {
  return static_cast<void*>(generator_->rng());
}
#else  // Normal GPU + CPU Caffe.
// Constructor: initialize the cublas and curand handles
Caffe::Caffe()
    : cublas_handle_(NULL), curand_generator_(NULL), random_generator_(),
    mode_(Caffe::CPU), solver_count_(1), root_solver_(true) {
  // Try to create a cublas handler, and report an error if failed (but we will
  // keep the program running as one might just want to run CPU code).
  // initialize cublas and obtain a handle
  if (cublasCreate(&cublas_handle_) != CUBLAS_STATUS_SUCCESS) {
    LOG(ERROR) << "Cannot create Cublas handle. Cublas won't be available.";
  }
  // Try to create a curand handler.
  if (curandCreateGenerator(&curand_generator_, CURAND_RNG_PSEUDO_DEFAULT)
      != CURAND_STATUS_SUCCESS ||
      curandSetPseudoRandomGeneratorSeed(curand_generator_, cluster_seedgen())
      != CURAND_STATUS_SUCCESS) {
    LOG(ERROR) << "Cannot create Curand generator. Curand won't be available.";
  }
}

Caffe::~Caffe() {
  // destroy the handles
  if (cublas_handle_) CUBLAS_CHECK(cublasDestroy(cublas_handle_));
  if (curand_generator_) {
    CURAND_CHECK(curandDestroyGenerator(curand_generator_));
  }
}
// Set both the curand (GPU) random seed and the boost (CPU) random seed
void Caffe::set_random_seed(const unsigned int seed) {
  // Curand seed
  static bool g_curand_availability_logged = false;// static flag recording whether curand availability has been logged; if not, log once and never again
  if (Get().curand_generator_) {
    // CURAND_CHECK is a macro defined in caffe/util/device_alternate.hpp
    CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(curand_generator(),
        seed));
    CURAND_CHECK(curandSetGeneratorOffset(curand_generator(), 0));
  } else {
    if (!g_curand_availability_logged) {
        LOG(ERROR) <<
            "Curand not available. Skipping setting the curand seed.";
        g_curand_availability_logged = true;
    }
  }
  // RNG seed
  // CPU code
  Get().random_generator_.reset(new RNG(seed));
}

// Select the GPU device and re-initialize the handles and the random seed
void Caffe::SetDevice(const int device_id) {
  int current_device;
  CUDA_CHECK(cudaGetDevice(&current_device));// get the current device id
  if (current_device == device_id) {
    return;
  }
  // The call to cudaSetDevice must come before any calls to Get, which
  // may perform initialization using the GPU.
  // cudaSetDevice must run before any call to Get()
  CUDA_CHECK(cudaSetDevice(device_id));
  // destroy the previous handles
  if (Get().cublas_handle_) CUBLAS_CHECK(cublasDestroy(Get().cublas_handle_));
  if (Get().curand_generator_) {
    CURAND_CHECK(curandDestroyGenerator(Get().curand_generator_));
  }
  // create new handles
  CUBLAS_CHECK(cublasCreate(&Get().cublas_handle_));
  CURAND_CHECK(curandCreateGenerator(&Get().curand_generator_,
      CURAND_RNG_PSEUDO_DEFAULT));
  // set the random seed
  CURAND_CHECK(curandSetPseudoRandomGeneratorSeed(Get().curand_generator_,
      cluster_seedgen()));
}

// Print information about the current device
void Caffe::DeviceQuery() {
  cudaDeviceProp prop;
  int device;
  if (cudaSuccess != cudaGetDevice(&device)) {
    printf("No cuda device present.\n");
    return;
  }
  // For reference, CUDA_CHECK is defined in caffe/util/device_alternate.hpp as:
  //   #define CUDA_CHECK(condition) \
  //     /* Code block avoids redefinition of cudaError_t error */ \
  //     do { \
  //       cudaError_t error = condition; \
  //       CHECK_EQ(error, cudaSuccess) << " " << cudaGetErrorString(error); \
  //     } while (0)
  CUDA_CHECK(cudaGetDeviceProperties(&prop, device));
  LOG(INFO) << "Device id:                     " << device;
  LOG(INFO) << "Major revision number:         " << prop.major;
  LOG(INFO) << "Minor revision number:         " << prop.minor;
  LOG(INFO) << "Name:                          " << prop.name;
  LOG(INFO) << "Total global memory:           " << prop.totalGlobalMem;
  LOG(INFO) << "Total shared memory per block: " << prop.sharedMemPerBlock;
  LOG(INFO) << "Total registers per block:     " << prop.regsPerBlock;
  LOG(INFO) << "Warp size:                     " << prop.warpSize;
  LOG(INFO) << "Maximum memory pitch:          " << prop.memPitch;
  LOG(INFO) << "Maximum threads per block:     " << prop.maxThreadsPerBlock;
  LOG(INFO) << "Maximum dimension of block:    "
      << prop.maxThreadsDim[0] << ", " << prop.maxThreadsDim[1] << ", "
      << prop.maxThreadsDim[2];
  LOG(INFO) << "Maximum dimension of grid:     "
      << prop.maxGridSize[0] << ", " << prop.maxGridSize[1] << ", "
      << prop.maxGridSize[2];
  LOG(INFO) << "Clock rate:                    " << prop.clockRate;
  LOG(INFO) << "Total constant memory:         " << prop.totalConstMem;
  LOG(INFO) << "Texture alignment:             " << prop.textureAlignment;
  LOG(INFO) << "Concurrent copy and execution: "
      << (prop.deviceOverlap ? "Yes" : "No");
  LOG(INFO) << "Number of multiprocessors:     " << prop.multiProcessorCount;
  LOG(INFO) << "Kernel execution timeout:      "
      << (prop.kernelExecTimeoutEnabled ? "Yes" : "No");
  return;
}


class Caffe::RNG::Generator {
 public:
  Generator() : rng_(new caffe::rng_t(cluster_seedgen())) {}
  explicit Generator(unsigned int seed) : rng_(new caffe::rng_t(seed)) {}
  caffe::rng_t* rng() { return rng_.get(); }
 private:
  shared_ptr<caffe::rng_t> rng_;
};

Caffe::RNG::RNG() : generator_(new Generator()) { }

Caffe::RNG::RNG(unsigned int seed) : generator_(new Generator(seed)) { }

Caffe::RNG& Caffe::RNG::operator=(const RNG& other) {
  generator_.reset(other.generator_.get());
  return *this;
}

void* Caffe::RNG::generator() {
  return static_cast<void*>(generator_->rng());
}
// cublasGetErrorString: turn a cublas status code into a readable string
const char* cublasGetErrorString(cublasStatus_t error) {
  switch (error) {
  case CUBLAS_STATUS_SUCCESS:
    return "CUBLAS_STATUS_SUCCESS";
  case CUBLAS_STATUS_NOT_INITIALIZED:
    return "CUBLAS_STATUS_NOT_INITIALIZED";
  case CUBLAS_STATUS_ALLOC_FAILED:
    return "CUBLAS_STATUS_ALLOC_FAILED";
  case CUBLAS_STATUS_INVALID_VALUE:
    return "CUBLAS_STATUS_INVALID_VALUE";
  case CUBLAS_STATUS_ARCH_MISMATCH:
    return "CUBLAS_STATUS_ARCH_MISMATCH";
  case CUBLAS_STATUS_MAPPING_ERROR:
    return "CUBLAS_STATUS_MAPPING_ERROR";
  case CUBLAS_STATUS_EXECUTION_FAILED:
    return "CUBLAS_STATUS_EXECUTION_FAILED";
  case CUBLAS_STATUS_INTERNAL_ERROR:
    return "CUBLAS_STATUS_INTERNAL_ERROR";
#if CUDA_VERSION >= 6000
  case CUBLAS_STATUS_NOT_SUPPORTED:
    return "CUBLAS_STATUS_NOT_SUPPORTED";
#endif
#if CUDA_VERSION >= 6050
  case CUBLAS_STATUS_LICENSE_ERROR:
    return "CUBLAS_STATUS_LICENSE_ERROR";
#endif
  }
  return "Unknown cublas status";
}
// curandGetErrorString: turn a curand status code into a readable string
const char* curandGetErrorString(curandStatus_t error) {
  switch (error) {
  case CURAND_STATUS_SUCCESS:
    return "CURAND_STATUS_SUCCESS";
  case CURAND_STATUS_VERSION_MISMATCH:
    return "CURAND_STATUS_VERSION_MISMATCH";
  case CURAND_STATUS_NOT_INITIALIZED:
    return "CURAND_STATUS_NOT_INITIALIZED";
  case CURAND_STATUS_ALLOCATION_FAILED:
    return "CURAND_STATUS_ALLOCATION_FAILED";
  case CURAND_STATUS_TYPE_ERROR:
    return "CURAND_STATUS_TYPE_ERROR";
  case CURAND_STATUS_OUT_OF_RANGE:
    return "CURAND_STATUS_OUT_OF_RANGE";
  case CURAND_STATUS_LENGTH_NOT_MULTIPLE:
    return "CURAND_STATUS_LENGTH_NOT_MULTIPLE";
  case CURAND_STATUS_DOUBLE_PRECISION_REQUIRED:
    return "CURAND_STATUS_DOUBLE_PRECISION_REQUIRED";
  case CURAND_STATUS_LAUNCH_FAILURE:
    return "CURAND_STATUS_LAUNCH_FAILURE";
  case CURAND_STATUS_PREEXISTING_FAILURE:
    return "CURAND_STATUS_PREEXISTING_FAILURE";
  case CURAND_STATUS_INITIALIZATION_FAILED:
    return "CURAND_STATUS_INITIALIZATION_FAILED";
  case CURAND_STATUS_ARCH_MISMATCH:
    return "CURAND_STATUS_ARCH_MISMATCH";
  case CURAND_STATUS_INTERNAL_ERROR:
    return "CURAND_STATUS_INTERNAL_ERROR";
  }
  return "Unknown curand status";
}
#endif  // CPU_ONLY
}  // namespace caffe
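
To close, a short sketch of my own (not from the Caffe sources) of how the void* returned by RNG::generator() is consumed: callers cast it back to caffe::rng_t (boost::mt19937, defined in caffe/util/rng.hpp) and feed it to a Boost distribution, which is essentially what the caffe_rng_* helpers in math_functions.cpp do:

#include <boost/random/uniform_real_distribution.hpp>

#include "caffe/common.hpp"
#include "caffe/util/rng.hpp"  // typedef boost::mt19937 rng_t;

// Draw one uniform sample in [a, b) from the per-thread Caffe generator.
float draw_uniform(float a, float b) {
  caffe::rng_t* rng =
      static_cast<caffe::rng_t*>(caffe::Caffe::rng_stream().generator());
  boost::random::uniform_real_distribution<float> dist(a, b);
  return dist(*rng);
}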