GitHub repo: Mask_RCNN
『Computer Vision』 Mask-RCNN: Paper Study
『Computer Vision』 Mask-RCNN: Project Documentation Translation
『Computer Vision』 Mask-RCNN: Inference Network I: Overview
『Computer Vision』 Mask-RCNN: Inference Network II: The Shared ResNet101-Based FPN Network
『Computer Vision』 Mask-RCNN: Inference Network III: RPN Anchor Processing and Proposal Generation
『Computer Vision』 Mask-RCNN: Inference Network IV: Coupling FPN with ROIAlign
『Computer Vision』 Mask-RCNN: Inference Network V: Refining Detection Results
『Computer Vision』 Mask-RCNN: Inference Network VI: Mask Generation
『Computer Vision』 Mask-RCNN: Inference Network, Final Part: Inference with the detect Method
『Computer Vision』 Mask-RCNN: Anchor Generation
『Computer Vision』 Mask-RCNN: Training Network I: Datasets and the Dataset Class
『Computer Vision』 Mask-RCNN: Training Network II: Training Network Structure & Loss Functions
『Computer Vision』 Mask-RCNN: Training Network III: Training the Model
Mask_RCNN's anchors are essentially the same as SSD's (see 『TensorFlow』 SSD Source Code Study III: Anchor Generation):
The number of center points equals the number of feature-map pixels
Boxes are generated around the center points
The final box coordinates are normalized to between 0 and 1, i.e. they are relative sizes with respect to the input image
The RCNN family normally shares a single feature map, but once Mask_RCNN adopted the FPN structure it uses multiple feature levels just like SSD, so the two anchor-generation algorithms can be called identical, with only minor tweaks in strategy:
In SSD, different feature layers use different anchor ratio parameters; in Mask_RCNN the ratios (anchor_ratios) are exactly the same on every layer
On each SSD layer, every center point generates len(ratios) + 2 boxes (a quick numeric check follows the SSD config below); Mask_RCNN always generates a fixed 3 boxes
SSD offsets the center points by 0.5 stride from the feature-map pixel positions; Mask_RCNN places the centers directly at the feature-map pixel positions
The basic generation procedure, however, is exactly the same in both:
h and w are initialized to the given reference size, i.e. the receptive field. The parameters that actually control generation are each layer's anchor_ratios and reference sizes. For SSD:
anchor_sizes=[(21., 45.), (45., 99.), (99., 153.), (153., 207.), (207., 261.), (261., 315.)]
anchor_ratios=[[2, .5], [2, .5, 3, 1./3], [2, .5, 3, 1./3], [2, .5, 3, 1./3], [2, .5], [2, .5]]
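As the quick check promised above: in the TF-SSD implementation referenced earlier, the two extra boxes per center are the 1:1 boxes derived from the layer's (min, max) pair in anchor_sizes, so each layer yields len(ratios) + 2 boxes. A minimal sketch:

# Boxes per center on each SSD layer: one 1:1 box per anchor_sizes entry
# plus one box per ratio, i.e. len(ratios) + 2.
anchor_ratios = [[2, .5], [2, .5, 3, 1./3], [2, .5, 3, 1./3],
                 [2, .5, 3, 1./3], [2, .5], [2, .5]]
print([len(r) + 2 for r in anchor_ratios])  # [4, 6, 6, 6, 4, 4]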
For Mask_RCNN (the h and w reference sizes are equal):
self.config.BACKBONE_STRIDES = [4, 8, 16, 32, 64]        # downsampling factor of each feature level, used for center-point computation
self.config.RPN_ANCHOR_RATIOS = [0.5, 1, 2]               # anchor generation ratios, shared by all feature levels
self.config.RPN_ANCHOR_SCALES = [32, 64, 128, 256, 512]   # anchor reference size (receptive field) of each feature level
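To make the relation between these parameters and the boxes concrete, here is a minimal sketch (values taken from the config above; the formulas appear in generate_anchors further below): since ratio is defined as w/h and the box area is pinned to scale², h = scale / √ratio and w = scale * √ratio.

import numpy as np

scale = 32                      # RPN_ANCHOR_SCALES[0], used on the stride-4 level
ratios = np.array([0.5, 1, 2])  # RPN_ANCHOR_RATIOS, identical on every level

# ratio = w / h, with the area held fixed at scale**2
heights = scale / np.sqrt(ratios)
widths = scale * np.sqrt(ratios)
for r, h, w in zip(ratios, heights, widths):
    print(f"ratio={r}: h={h:.1f}, w={w:.1f}, area={h * w:.0f}")
# ratio=0.5: h=45.3, w=22.6, area=1024
# ratio=1.0: h=32.0, w=32.0, area=1024
# ratio=2.0: h=22.6, w=45.3, area=1024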
The anchor-generation entry point is the get_anchors method in model.py. It takes an image_shape argument that must contain [h, w]; [h, w, c] also works:
def get_anchors(self, image_shape):
    """Returns anchor pyramid for the given image size."""
    # [N, (height, width)]
    backbone_shapes = compute_backbone_shapes(self.config, image_shape)
    # Cache anchors and reuse if image shape is the same
    if not hasattr(self, "_anchor_cache"):
        self._anchor_cache = {}
    if not tuple(image_shape) in self._anchor_cache:
        # Generate Anchors: [anchor_count, (y1, x1, y2, x2)]
        a = utils.generate_pyramid_anchors(
            self.config.RPN_ANCHOR_SCALES,   # (32, 64, 128, 256, 512)
            self.config.RPN_ANCHOR_RATIOS,   # [0.5, 1, 2]
            backbone_shapes,                 # with shape [N, (height, width)]
            self.config.BACKBONE_STRIDES,    # [4, 8, 16, 32, 64]
            self.config.RPN_ANCHOR_STRIDE)   # 1
        # Keep a copy of the latest anchors in pixel coordinates because
        # it's used in inspect_model notebooks.
        # TODO: Remove this after the notebook are refactored to not use it
        self.anchors = a
        # Normalize coordinates
        self._anchor_cache[tuple(image_shape)] = utils.norm_boxes(a, image_shape[:2])
    return self._anchor_cache[tuple(image_shape)]
It calls compute_backbone_shapes to compute each feature level's shape:
def compute_backbone_shapes(config, image_shape):
    """Computes the width and height of each stage of the backbone network.

    Returns:
        [N, (height, width)]. Where N is the number of stages
    """
    if callable(config.BACKBONE):
        return config.COMPUTE_BACKBONE_SHAPE(image_shape)

    # Currently supports ResNet only
    assert config.BACKBONE in ["resnet50", "resnet101"]
    return np.array(
        [[int(math.ceil(image_shape[0] / stride)),
          int(math.ceil(image_shape[1] / stride))]
         for stride in config.BACKBONE_STRIDES])  # [4, 8, 16, 32, 64]
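As a quick sanity check (not code from the repo), the default 1024×1024 input yields the following level shapes:

import math
import numpy as np

image_shape = (1024, 1024, 3)
strides = [4, 8, 16, 32, 64]  # BACKBONE_STRIDES
shapes = np.array([[int(math.ceil(image_shape[0] / stride)),
                    int(math.ceil(image_shape[1] / stride))]
                   for stride in strides])
print(shapes.tolist())
# [[256, 256], [128, 128], [64, 64], [32, 32], [16, 16]]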
It then calls utils.generate_pyramid_anchors to generate all of the anchors:
def generate_pyramid_anchors(scales, ratios, feature_shapes, feature_strides,
                             anchor_stride):
    """Generate anchors at different levels of a feature pyramid. Each scale
    is associated with a level of the pyramid, but each ratio is used in
    all levels of the pyramid.

    Returns:
    anchors: [N, (y1, x1, y2, x2)]. All generated anchors in one array. Sorted
        with the same order of the given scales. So, anchors of scale[0] come
        first, then anchors of scale[1], and so on.
    """
    # Anchors
    # [anchor_count, (y1, x1, y2, x2)]
    anchors = []
    for i in range(len(scales)):
        anchors.append(generate_anchors(scales[i], ratios, feature_shapes[i],
                                        feature_strides[i], anchor_stride))
    return np.concatenate(anchors, axis=0)  # [anchor_count, (y1, x1, y2, x2)]
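With RPN_ANCHOR_STRIDE = 1 every feature-map pixel contributes three boxes (one per ratio), so for a 1024×1024 input the per-level and total anchor counts can be checked like this:

shapes = [(256, 256), (128, 128), (64, 64), (32, 32), (16, 16)]
per_level = [h * w * 3 for h, w in shapes]   # 3 ratios per center
print(per_level)       # [196608, 49152, 12288, 3072, 768]
print(sum(per_level))  # 261888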
utils.generate_pyramid_anchors in turn calls utils.generate_anchors to generate each level's anchors (this step makes heavy use of np.meshgrid; see 『Numpy』np.meshgrid for an introduction):
def generate_anchors(scales, ratios, shape, feature_stride, anchor_stride):
    """
    scales: 1D array of anchor sizes in pixels. Example: [32, 64, 128]
    ratios: 1D array of anchor ratios of width/height. Example: [0.5, 1, 2]
    shape: [height, width] spatial shape of the feature map over which
           to generate anchors.
    feature_stride: Stride of the feature map relative to the image in pixels.
    anchor_stride: Stride of anchors on the feature map. For example, if the
        value is 2 then generate anchors for every other feature map pixel.
    """
    # Get all combinations of scales and ratios
    scales, ratios = np.meshgrid(np.array(scales), np.array(ratios))
    scales = scales.flatten()
    ratios = ratios.flatten()

    # Enumerate heights and widths from scales and ratios
    heights = scales / np.sqrt(ratios)
    widths = scales * np.sqrt(ratios)

    # Enumerate shifts in feature space
    shifts_y = np.arange(0, shape[0], anchor_stride) * feature_stride
    shifts_x = np.arange(0, shape[1], anchor_stride) * feature_stride
    shifts_x, shifts_y = np.meshgrid(shifts_x, shifts_y)

    # Enumerate combinations of shifts, widths, and heights
    box_widths, box_centers_x = np.meshgrid(widths, shifts_x)    # (n, 3) (n, 3)
    box_heights, box_centers_y = np.meshgrid(heights, shifts_y)  # (n, 3) (n, 3)

    # Reshape to get a list of (y, x) and a list of (h, w)
    # (n, 3, 2) -> (3n, 2)
    box_centers = np.stack([box_centers_y, box_centers_x], axis=2).reshape([-1, 2])
    box_sizes = np.stack([box_heights, box_widths], axis=2).reshape([-1, 2])

    # Convert to corner coordinates (y1, x1, y2, x2)
    # Box coordinates are relative to the original image, [N, (y1, x1, y2, x2)]
    boxes = np.concatenate([box_centers - 0.5 * box_sizes,
                            box_centers + 0.5 * box_sizes], axis=1)
    return boxes
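A quick run on the smallest level (the 16×16, stride-64 feature map paired with scale 512) shows the output layout; the mrcnn import path is assumed from the repo's package layout:

import numpy as np
from mrcnn import utils  # assumes the Matterport package layout

boxes = utils.generate_anchors(scales=512, ratios=[0.5, 1, 2],
                               shape=[16, 16], feature_stride=64,
                               anchor_stride=1)
print(boxes.shape)  # (768, 4) -> 16 * 16 centers * 3 ratios
print(boxes[1])     # the 1:1 box at the first center, in pixel coordinates
# roughly [-256., -256., 256., 256.], i.e. a 512x512 box centered at (0, 0)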
Simulating the center-point distribution of one level:
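A minimal sketch of such a simulation (matplotlib assumed; the stride-64 level is used so the grid stays readable):

import numpy as np
import matplotlib.pyplot as plt

feature_shape = (16, 16)  # P6 feature map for a 1024x1024 input
feature_stride = 64
anchor_stride = 1

# Same shift computation as in generate_anchors: centers sit directly on
# the feature-map pixel positions projected back onto the image.
shifts_y = np.arange(0, feature_shape[0], anchor_stride) * feature_stride
shifts_x = np.arange(0, feature_shape[1], anchor_stride) * feature_stride
xx, yy = np.meshgrid(shifts_x, shifts_y)

plt.scatter(xx.ravel(), yy.ravel(), s=4)
plt.gca().invert_yaxis()  # image coordinates: y grows downward
plt.title("Anchor centers on the stride-64 level")
plt.show()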
Finally, back in get_anchors, utils.norm_boxes is called to map the anchor coordinates into the [0, 1] range:
def norm_boxes(boxes, shape):
    """Converts boxes from pixel coordinates to normalized coordinates.
    boxes: [N, (y1, x1, y2, x2)] in pixel coordinates
    shape: [..., (height, width)] in pixels

    Note: In pixel coordinates (y2, x2) is outside the box. But in normalized
    coordinates it's inside the box.

    Returns:
        [N, (y1, x1, y2, x2)] in normalized coordinates
    """
    h, w = shape
    scale = np.array([h - 1, w - 1, h - 1, w - 1])
    shift = np.array([0, 0, 1, 1])
    return np.divide((boxes - shift), scale).astype(np.float32)
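For example, for a 1024×1024 image (note that anchors overhanging the image border map slightly outside [0, 1]):

import numpy as np

boxes = np.array([[0., 0., 1024., 1024.],       # full-image box in pixels
                  [-256., -256., 256., 256.]])  # a box overhanging the border
normed = norm_boxes(boxes, shape=(1024, 1024))
print(normed)
# [[ 0.      0.      1.      1.    ]
#  [-0.2502 -0.2502  0.2493  0.2493]]  (rounded)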
What is ultimately returned are the anchors in normalized coordinates, with shape [anchor_count, (y1, x1, y2, x2)].
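Putting the pieces together, a minimal end-to-end sketch (the mrcnn package layout and the config values quoted above are assumed):

import numpy as np
from mrcnn import utils

image_shape = (1024, 1024, 3)
backbone_strides = [4, 8, 16, 32, 64]
backbone_shapes = np.array([[int(np.ceil(image_shape[0] / s)),
                             int(np.ceil(image_shape[1] / s))]
                            for s in backbone_strides])

# Pixel-coordinate anchors over all five levels, then normalized to [0, 1]
anchors = utils.generate_pyramid_anchors(
    scales=(32, 64, 128, 256, 512),
    ratios=[0.5, 1, 2],
    feature_shapes=backbone_shapes,
    feature_strides=backbone_strides,
    anchor_stride=1)
normed = utils.norm_boxes(anchors, image_shape[:2])
print(anchors.shape, normed.shape)  # (261888, 4) (261888, 4)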