Accelerating the MTCNN Face Detection Network with TensorRT

Preface

I have recently been working on face verification and needed a facial-landmark detection algorithm. A mature and widely used choice is MTCNN, which performs face detection and landmark detection at the same time and outputs 5 key points for every face, which can then be used for face alignment.

The Problem

When I first started preparing aligned face images for training the face-verification model, I used the official version of MTCNN, which is built on Caffe's MATLAB interface. It runs very slowly, roughly one second per image, so a whole day went by processing a few tens of thousands of images; at least the results were good.
After training the face-feature extraction network, I wanted to deploy the complete face-verification pipeline, which again needs face detection and alignment. For production use, that version of MTCNN is clearly unsuitable. Searching GitHub for an alternative, I found an MTCNN that had been pulled out of the Facenet repository and packaged as a Python package, installable directly with pip. Unfortunately it is also slow: it uses TensorFlow but not the GPU, and detecting a single 1080p image takes about 700 ms. Far too slow.

The Idea

As it happens, I have been learning TensorRT these past few days. I had already converted the face-feature extraction network to ONNX format and deployed it with TensorRT's Python API; per-image time dropped from 15 ms to 3 ms, an ideal result! Naturally, I wanted to deploy MTCNN on TensorRT as well.
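
For reference, the ONNX export step looked roughly like the following. This is only a minimal sketch, assuming the feature-extraction network is a PyTorch model; the tiny FaceFeatureNet below is a placeholder standing in for the real network, and the file names are made up.

import torch
import torch.nn as nn

class FaceFeatureNet(nn.Module):
    """Placeholder standing in for the real feature-extraction network."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, stride=2, padding=1)
        self.fc = nn.Linear(8 * 56 * 48, 128)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

model = FaceFeatureNet().eval()
dummy = torch.randn(1, 3, 112, 96)  # one aligned face as dummy input
torch.onnx.export(model, dummy, 'feature_net.onnx',
                  input_names=['input'], output_names=['embedding'])

# The .onnx file can then be turned into a TensorRT engine, for example with
# the trtexec tool that ships with TensorRT:
#   trtexec --onnx=feature_net.onnx --saveEngine=feature_net.engine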

Converting MTCNN's Caffe models to TensorRT directly runs into problems, mainly because PReLU is not supported. The fix is to rewrite that operation, but there was no time for it: so far I only know how to deploy models that convert cleanly, and I still need to dig deeper into the details of model conversion.
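
I did not get around to that rewrite myself, but the usual workaround is to express PReLU with operations TensorRT does support, using the identity PReLU(x) = ReLU(x) - a * ReLU(-x); judging from the det1_relu.prototxt file referenced in the code below, this appears to be the route taken in the project I ended up using. A quick numpy check of the identity (just an illustration, not part of the deployment code):

import numpy as np

def prelu(x, a):
    # PReLU: x for x > 0, a * x otherwise
    return np.where(x > 0, x, a * x)

def prelu_from_relu(x, a):
    # the same function built only from ReLU, negation and scaling
    relu = lambda v: np.maximum(v, 0.0)
    return relu(x) - a * relu(-x)

x = np.linspace(-3.0, 3.0, 13)
assert np.allclose(prelu(x, 0.25), prelu_from_relu(x, 0.25))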

The Solution

Many thanks to @jkjung-avt for his work. His blog describes in detail how to optimize MTCNN with Cython and TensorRT, and his GitHub provides a TensorRT version of MTCNN written with the Python API, which fits my needs perfectly!

Below is a recap of how I used that code to get the job done.

1. Download the whole project and run make in the project root to compile the Cython module, producing pytrt.cpython-36m-x86_64-linux-gnu.so.
2. Run make in the mtcnn folder to build create_engines, then run ./create_engines to convert the PNet, RNet and ONet model files into engine files; these three files can then be used directly for inference.
3. What remains is using the model. To be honest, I have not yet had time to read the author's code; there is quite a lot of it and it deserves careful study. Through the author's blog I also discovered the Jetson Nano, a nice, cheap deep-learning option to play with when time allows. The file below is the one that calls the generated engine files to provide the inference service.

'''
mtcnn.py
'''
import cv2
import numpy as np

import pytrt

PIXEL_MEAN = 127.5
PIXEL_SCALE = 0.0078125


def convert_to_1x1(boxes):
    """Convert detection boxes to 1:1 sizes

    # Arguments
        boxes: numpy array, shape (n,5), dtype=float32

    # Returns
        boxes_1x1
    """
    boxes_1x1 = boxes.copy()
    hh = boxes[:, 3] - boxes[:, 1] + 1.
    ww = boxes[:, 2] - boxes[:, 0] + 1.
    mm = np.maximum(hh, ww)
    boxes_1x1[:, 0] = boxes[:, 0] + ww * 0.5 - mm * 0.5
    boxes_1x1[:, 1] = boxes[:, 1] + hh * 0.5 - mm * 0.5
    boxes_1x1[:, 2] = boxes_1x1[:, 0] + mm - 1.
    boxes_1x1[:, 3] = boxes_1x1[:, 1] + mm - 1.
    boxes_1x1[:, 0:4] = np.fix(boxes_1x1[:, 0:4])
    return boxes_1x1


def crop_img_with_padding(img, box, padding=0):
    """Crop a box from image, with out-of-boundary pixels padded

    # Arguments
        img: img as a numpy array, shape (H, W, 3)
        box: numpy array, shape (5,) or (4,)
        padding: integer value for padded pixels

    # Returns
        cropped_im: cropped image as a numpy array, shape (H, W, 3)
    """
    img_h, img_w, _ = img.shape
    if box.shape[0] == 5:
        cx1, cy1, cx2, cy2, _ = box.astype(int)
    elif box.shape[0] == 4:
        cx1, cy1, cx2, cy2 = box.astype(int)
    else:
        raise ValueError
    cw = cx2 - cx1 + 1
    ch = cy2 - cy1 + 1
    cropped_im = np.zeros((ch, cw, 3), dtype=np.uint8) + padding
    ex1 = max(0, -cx1)  # ex/ey's are the destination coordinates
    ey1 = max(0, -cy1)
    ex2 = min(cw, img_w - cx1)
    ey2 = min(ch, img_h - cy1)
    fx1 = max(cx1, 0)  # fx/fy's are the source coordinates
    fy1 = max(cy1, 0)
    fx2 = min(cx2+1, img_w)
    fy2 = min(cy2+1, img_h)
    cropped_im[ey1:ey2, ex1:ex2, :] = img[fy1:fy2, fx1:fx2, :]
    return cropped_im


def nms(boxes, threshold, type='Union'):
    """Non-Maximum Supression

    # Arguments
        boxes: numpy array [:, 0:5] of [x1, y1, x2, y2, score]'s
        threshold: confidence/score threshold, e.g. 0.5
        type: 'Union' or 'Min'

    # Returns
        A list of indices indicating the result of NMS
    """
    if boxes.shape[0] == 0:
        return []
    xx1, yy1, xx2, yy2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = np.multiply(xx2-xx1+1, yy2-yy1+1)
    sorted_idx = boxes[:, 4].argsort()

    pick = []
    while len(sorted_idx) > 0:
        # In each loop, pick the last box (highest score) and remove
        # all other boxes with IoU over threshold
        tx1 = np.maximum(xx1[sorted_idx[-1]], xx1[sorted_idx[0:-1]])
        ty1 = np.maximum(yy1[sorted_idx[-1]], yy1[sorted_idx[0:-1]])
        tx2 = np.minimum(xx2[sorted_idx[-1]], xx2[sorted_idx[0:-1]])
        ty2 = np.minimum(yy2[sorted_idx[-1]], yy2[sorted_idx[0:-1]])
        tw = np.maximum(0.0, tx2 - tx1 + 1)
        th = np.maximum(0.0, ty2 - ty1 + 1)
        inter = tw * th
        if type == 'Min':
            iou = inter / \
                np.minimum(areas[sorted_idx[-1]], areas[sorted_idx[0:-1]])
        else:
            iou = inter / \
                (areas[sorted_idx[-1]] + areas[sorted_idx[0:-1]] - inter)
        pick.append(sorted_idx[-1])
        sorted_idx = sorted_idx[np.where(iou <= threshold)[0]]
    return pick


def generate_pnet_bboxes(conf, reg, scale, t):
    """
    # Arguments
        conf: softmax score (face or not) of each grid
        reg: regression values of x1, y1, x2, y2 coordinates.
             The values are normalized to grid width (12) and
             height (12).
        scale: scale-down factor with respect to original image
        t: confidence threshold

    # Returns
        A numpy array of bounding box coordinates and the
        corresponding scores: [[x1, y1, x2, y2, score], ...]

    # Notes
        Top left corner coordinates of each grid is (x*2, y*2),
        or (x*2/scale, y*2/scale) in the original image.
        Bottom right corner coordinates is (x*2+12-1, y*2+12-1),
        or ((x*2+12-1)/scale, (y*2+12-1)/scale) in the original
        image.
    """
    conf = conf.T  # swap H and W dimensions
    dx1 = reg[0, :, :].T
    dy1 = reg[1, :, :].T
    dx2 = reg[2, :, :].T
    dy2 = reg[3, :, :].T
    (x, y) = np.where(conf >= t)
    if len(x) == 0:
        return np.zeros((0, 5), np.float32)

    score = np.array(conf[x, y]).reshape(-1, 1)          # Nx1
    reg = np.array([dx1[x, y], dy1[x, y],
                    dx2[x, y], dy2[x, y]]).T * 12.       # Nx4
    topleft = np.array([x, y], dtype=np.float32).T * 2.  # Nx2
    bottomright = topleft + np.array([11., 11.], dtype=np.float32)  # Nx2
    boxes = (np.concatenate((topleft, bottomright), axis=1) + reg) / scale
    boxes = np.concatenate((boxes, score), axis=1)       # Nx5
    # filter bboxes which are too small
    #boxes = boxes[boxes[:, 2]-boxes[:, 0] >= 12., :]
    #boxes = boxes[boxes[:, 3]-boxes[:, 1] >= 12., :]
    return boxes


def generate_rnet_bboxes(conf, reg, pboxes, t):
    """
    # Arguments
        conf: softmax score (face or not) of each box
        reg: regression values of x1, y1, x2, y2 coordinates.
             The values are normalized to box width and height.
        pboxes: input boxes to RNet
        t: confidence threshold

    # Returns
        boxes: a numpy array of box coordinates and corresponding
               scores: [[x1, y1, x2, y2, score], ...]
    """
    boxes = pboxes.copy()  # make a copy
    assert boxes.shape[0] == conf.shape[0]
    boxes[:, 4] = conf  # update 'score' of all boxes
    boxes = boxes[conf >= t, :]
    reg = reg[conf >= t, :]
    ww = (boxes[:, 2]-boxes[:, 0]+1).reshape(-1, 1)  # x2 - x1 + 1
    hh = (boxes[:, 3]-boxes[:, 1]+1).reshape(-1, 1)  # y2 - y1 + 1
    boxes[:, 0:4] += np.concatenate((ww, hh, ww, hh), axis=1) * reg
    return boxes


def generate_onet_outputs(conf, reg_boxes, reg_marks, rboxes, t):
    """
    # Arguments
        conf: softmax score (face or not) of each box
        reg_boxes: regression values of x1, y1, x2, y2
                   The values are normalized to box width and height.
        reg_marks: regression values of the 5 facial landmark points
        rboxes: input boxes to ONet (already converted to 1:1 squares)
        t: confidence threshold

    # Returns
        boxes: a numpy array of box coordinates and corresponding
               scores: [[x1, y1, x2, y2,... , score], ...]
        landmarks: a numpy array of facial landmark coordinates:
                   [[x1, x2, ..., x5, y1, y2, ..., y5], ...]
    """
    boxes = rboxes.copy()  # make a copy
    assert boxes.shape[0] == conf.shape[0]
    boxes[:, 4] = conf
    boxes = boxes[conf >= t, :]
    reg_boxes = reg_boxes[conf >= t, :]
    reg_marks = reg_marks[conf >= t, :]
    xx = boxes[:, 0].reshape(-1, 1)
    yy = boxes[:, 1].reshape(-1, 1)
    ww = (boxes[:, 2]-boxes[:, 0]).reshape(-1, 1)
    hh = (boxes[:, 3]-boxes[:, 1]).reshape(-1, 1)
    marks = np.concatenate((xx, xx, xx, xx, xx, yy, yy, yy, yy, yy), axis=1)
    marks += np.concatenate((ww, ww, ww, ww, ww, hh, hh,
                             hh, hh, hh), axis=1) * reg_marks
    ww = ww + 1
    hh = hh + 1
    boxes[:, 0:4] += np.concatenate((ww, hh, ww, hh), axis=1) * reg_boxes
    return boxes, marks


def clip_dets(dets, img_w, img_h):
    """Round and clip detection (x1, y1, ...) values.

    Note we exclude the last value of 'dets' in computation since
    it is 'conf'.
    """
    dets[:, 0:-1] = np.fix(dets[:, 0:-1])
    evens = np.arange(0, dets.shape[1]-1, 2)
    odds = np.arange(1, dets.shape[1]-1, 2)
    dets[:, evens] = np.clip(dets[:, evens], 0., float(img_w-1))
    dets[:, odds] = np.clip(dets[:, odds], 0., float(img_h-1))
    return dets


class TrtPNet(object):
    """TrtPNet

    Refer to mtcnn/det1_relu.prototxt for calculation of input/output
    dimensions of TrtPNet, as well as input H offsets (for all scales).
    The output H offsets are merely input offsets divided by stride (2).
    """
    input_h_offsets = (0, 216, 370, 478, 556, 610, 648, 676, 696)
    output_h_offsets = (0, 108, 185, 239, 278, 305, 324, 338, 348)
    max_n_scales = 9

    def __init__(self, engine):
        """__init__

        # Arguments
            engine: path to the TensorRT engine file
        """
        self.trtnet = pytrt.PyTrtMtcnn(engine,
                                       (3, 710, 384),
                                       (2, 350, 187),
                                       (4, 350, 187))
        self.trtnet.set_batchsize(1)

    def detect(self, img, minsize=40, factor=0.709, threshold=0.7):
        """Detect faces using PNet

        # Arguments
            img: input image as a RGB numpy array
            threshold: confidence threshold

        # Returns
            A numpy array of bounding box coordinates and the
            corresponding scores: [[x1, y1, x2, y2, score], ...]
        """
        if minsize < 40:
            raise ValueError("TrtPNet is currently designed with "
                             "'minsize' >= 40")
        if factor > 0.709:
            raise ValueError("TrtPNet is currently designed with "
                             "'factor' <= 0.709")
        m = 12.0 / minsize
        img_h, img_w, _ = img.shape
        minl = min(img_h, img_w) * m

        # create scale pyramid
        scales = []
        while minl >= 12:
            scales.append(m)
            m *= factor
            minl *= factor
        if len(scales) > self.max_n_scales:  # probably won't happen...
            raise ValueError('Too many scales, try increasing minsize '
                             'or decreasing factor.')

        total_boxes = np.zeros((0, 5), dtype=np.float32)
        img = (img.astype(np.float32) - PIXEL_MEAN) * PIXEL_SCALE

        # stack all scales of the input image vertically into 1 big
        # image, and only do inferencing once
        im_data = np.zeros((1, 3, 710, 384), dtype=np.float32)
        for i, scale in enumerate(scales):
            h_offset = self.input_h_offsets[i]
            h = int(img_h * scale)
            w = int(img_w * scale)
            im_data[0, :, h_offset:(h_offset+h), :w] = \
                cv2.resize(img, (w, h)).transpose((2, 0, 1))

        out = self.trtnet.forward(im_data)

        # extract outputs of each scale from the big output blob
        for i, scale in enumerate(scales):
            h_offset = self.output_h_offsets[i]
            h = (int(img_h * scale) - 12) // 2 + 1
            w = (int(img_w * scale) - 12) // 2 + 1
            pp = out['prob1'][0, 1, h_offset:(h_offset+h), :w]
            cc = out['boxes'][0, :, h_offset:(h_offset+h), :w]
            boxes = generate_pnet_bboxes(pp, cc, scale, threshold)
            if boxes.shape[0] > 0:
                pick = nms(boxes, 0.5, 'Union')
                if len(pick) > 0:
                    boxes = boxes[pick, :]
            if boxes.shape[0] > 0:
                total_boxes = np.concatenate((total_boxes, boxes), axis=0)

        if total_boxes.shape[0] == 0:
            return total_boxes
        pick = nms(total_boxes, 0.7, 'Union')
        dets = clip_dets(total_boxes[pick, :], img_w, img_h)
        return dets

    def destroy(self):
        self.trtnet.destroy()
        self.trtnet = None


class TrtRNet(object):
    """TrtRNet

    # Arguments
        engine: path to the TensorRT engine (det2) file
    """

    def __init__(self, engine):
        self.trtnet = pytrt.PyTrtMtcnn(engine,
                                       (3, 24, 24),
                                       (2, 1, 1),
                                       (4, 1, 1))

    def detect(self, img, boxes, max_batch=256, threshold=0.7):
        """Detect faces using RNet

        # Arguments
            img: input image as a RGB numpy array
            boxes: detection results by PNet, a numpy array [:, 0:5]
                   of [x1, y1, x2, y2, score]'s
            max_batch: only process these many top boxes from PNet
            threshold: confidence threshold

        # Returns
            A numpy array of bounding box coordinates and the
            corresponding scores: [[x1, y1, x2, y2, score], ...]
        """
        if max_batch > 256:
            raise ValueError('Bad max_batch: %d' % max_batch)
        boxes = boxes[:max_batch]  # assuming boxes are sorted by score
        if boxes.shape[0] == 0:
            return boxes
        img_h, img_w, _ = img.shape
        boxes = convert_to_1x1(boxes)
        crops = np.zeros((boxes.shape[0], 24, 24, 3), dtype=np.uint8)
        for i, det in enumerate(boxes):
            cropped_im = crop_img_with_padding(img, det)
            # NOTE: H and W dimensions need to be transposed for RNet!
            crops[i, ...] = cv2.transpose(cv2.resize(cropped_im, (24, 24)))
        crops = crops.transpose((0, 3, 1, 2))  # NHWC -> NCHW
        crops = (crops.astype(np.float32) - PIXEL_MEAN) * PIXEL_SCALE

        self.trtnet.set_batchsize(crops.shape[0])
        out = self.trtnet.forward(crops)

        pp = out['prob1'][:, 1, 0, 0]
        cc = out['boxes'][:, :, 0, 0]
        boxes = generate_rnet_bboxes(pp, cc, boxes, threshold)
        if boxes.shape[0] == 0:
            return boxes
        pick = nms(boxes, 0.7, 'Union')
        dets = clip_dets(boxes[pick, :], img_w, img_h)
        return dets

    def destroy(self):
        self.trtnet.destroy()
        self.trtnet = None


class TrtONet(object):
    """TrtONet

    # Arguments
        engine: path to the TensorRT engine (det3) file
    """

    def __init__(self, engine):
        self.trtnet = pytrt.PyTrtMtcnn(engine,
                                       (3, 48, 48),
                                       (2, 1, 1),
                                       (4, 1, 1),
                                       (10, 1, 1))

    def detect(self, img, boxes, max_batch=64, threshold=0.7):
        """Detect faces using ONet

        # Arguments
            img: input image as a RGB numpy array
            boxes: detection results by RNet, a numpy array [:, 0:5]
                   of [x1, y1, x2, y2, score]'s
            max_batch: only process these many top boxes from RNet
            threshold: confidence threshold

        # Returns
            dets: boxes and conf scores
            landmarks
        """
        if max_batch > 64:
            raise ValueError('Bad max_batch: %d' % max_batch)
        if boxes.shape[0] == 0:
            return (np.zeros((0, 5), dtype=np.float32),
                    np.zeros((0, 10), dtype=np.float32))
        boxes = boxes[:max_batch]  # assuming boxes are sorted by score
        img_h, img_w, _ = img.shape
        boxes = convert_to_1x1(boxes)
        crops = np.zeros((boxes.shape[0], 48, 48, 3), dtype=np.uint8)
        for i, det in enumerate(boxes):
            cropped_im = crop_img_with_padding(img, det)
            # NOTE: H and W dimensions need to be transposed for ONet!
            crops[i, ...] = cv2.transpose(cv2.resize(cropped_im, (48, 48)))
        crops = crops.transpose((0, 3, 1, 2))  # NHWC -> NCHW
        crops = (crops.astype(np.float32) - PIXEL_MEAN) * PIXEL_SCALE

        self.trtnet.set_batchsize(crops.shape[0])
        out = self.trtnet.forward(crops)

        pp = out['prob1'][:, 1, 0, 0]
        cc = out['boxes'][:, :, 0, 0]
        mm = out['landmarks'][:, :, 0, 0]
        boxes, landmarks = generate_onet_outputs(pp, cc, mm, boxes, threshold)
        pick = nms(boxes, 0.7, 'Min')
        return (clip_dets(boxes[pick, :], img_w, img_h),
                np.fix(landmarks[pick, :]))

    def destroy(self):
        self.trtnet.destroy()
        self.trtnet = None


class TrtMtcnn(object):
    """TrtMtcnn"""

    def __init__(self, engine_files):
        self.pnet = TrtPNet(engine_files[0])
        self.rnet = TrtRNet(engine_files[1])
        self.onet = TrtONet(engine_files[2])

    def __del__(self):
        self.onet.destroy()
        self.rnet.destroy()
        self.pnet.destroy()

    def _detect_1280x720(self, img, minsize):
        """_detec_1280x720()

        Assuming 'img' has been resized to less than 1280x720.
        """
        # MTCNN model was trained with 'MATLAB' image so its channel
        # order is RGB instead of BGR.
        img = img[:, :, ::-1]  # BGR -> RGB
        dets = self.pnet.detect(img, minsize=minsize)
        dets = self.rnet.detect(img, dets)
        dets, landmarks = self.onet.detect(img, dets)
        return dets, landmarks

    def detect(self, img, minsize=40):
        """detect()

        This function handles rescaling of the input image if it's
        larger than 1280x720.
        """
        if img is None:
            raise ValueError
        img_h, img_w, _ = img.shape
        scale = min(720. / img_h, 1280. / img_w)
        if scale < 1.0:
            new_h = int(np.ceil(img_h * scale))
            new_w = int(np.ceil(img_w * scale))
            img = cv2.resize(img, (new_w, new_h))
            minsize = max(int(np.ceil(minsize * scale)), 40)
        dets, landmarks = self._detect_1280x720(img, minsize)
        if scale < 1.0:
            dets[:, :-1] = np.fix(dets[:, :-1] / scale)
            landmarks = np.fix(landmarks / scale)
        return dets, landmarks

4. Then, wherever face detection is needed:

from mtcnn import TrtMtcnn

mtcnn = TrtMtcnn(engine_files)  # initialize only once; engine_files lists the three engine paths (PNet, RNet, ONet)
dets, landmarks = mtcnn.detect(img, minsize=40)

With that, face detection and landmark detection are done.
dets holds the face boxes: [[x1, y1, x2, y2, ..., score], ...]
landmarks holds the 5 key-point coordinates: [[x1, x2, ..., x5, y1, y2, ..., y5], ...]
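
For a quick sanity check, the two arrays can be drawn on the original image. A minimal sketch (plain OpenCV drawing calls, nothing project-specific):

import cv2

def draw_results(img, dets, landmarks):
    """Draw MTCNN boxes and 5-point landmarks on a BGR image (in place)."""
    for det, marks in zip(dets, landmarks):
        x1, y1, x2, y2 = [int(v) for v in det[0:4]]
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
        for j in range(5):  # landmarks are laid out as [x1..x5, y1..y5]
            cv2.circle(img, (int(marks[j]), int(marks[j + 5])), 2, (0, 0, 255), -1)
    return img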

5. If an image contains several faces and we want the one closest to the image center, the following function returns the index of that face. The idea is to compute, for each box, the distance of its top-left and bottom-right corners from the image center and pick the smallest.

def find_central_face(img, dets):
    """Return the index of the detection box closest to the image center."""
    h, w, _ = img.shape
    min_distance = 1e10
    min_distance_index = 0
    for i, det in enumerate(dets):
        # squared distance of the box's top-left and bottom-right corners
        # from the image center
        distance = (
            (det[0] - w / 2) ** 2
            + (det[1] - h / 2) ** 2
            + (det[2] - w / 2) ** 2
            + (det[3] - h / 2) ** 2
        )
        if distance < min_distance:
            min_distance = distance
            min_distance_index = i
    return min_distance_index
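
Typical usage, continuing from the detection snippet above (assuming at least one face was found):

idx = find_central_face(img, dets)
central_det, central_marks = dets[idx], landmarks[idx]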

With the 5 key points, face alignment can be done:

import cv2
import numpy


class FaceAligner:
    def __init__(self):
        self.imgSize = [112, 96]
        # standard positions of the 5 key points in a 96 (W) x 112 (H) face crop
        self.coord5point = [
            [30.2946, 51.6963],
            [65.5318, 51.6963],
            [48.0252, 71.7366],
            [33.5493, 92.3655],
            [62.7299, 92.3655],
        ]  # left_eye, right_eye, nose, mouth_left, mouth_right

    def transformation_from_points(self, points1, points2):
        # find the similarity transform (scale, rotation, translation) mapping points1 onto points2
        points1 = points1.astype(numpy.float64)
        points2 = points2.astype(numpy.float64)
        c1 = numpy.mean(points1, axis=0)
        c2 = numpy.mean(points2, axis=0)
        points1 -= c1
        points2 -= c2
        s1 = numpy.std(points1)
        s2 = numpy.std(points2)
        points1 /= s1
        points2 /= s2
        U, S, Vt = numpy.linalg.svd(points1.T * points2)
        R = (U * Vt).T
        return numpy.vstack(
            [
                numpy.hstack(((s2 / s1) * R, c2.T - (s2 / s1) * R * c1.T)),
                numpy.matrix([0.0, 0.0, 1.0]),
            ]
        )
    
    def warp_im(self, img_im, src_landmarks, dst_landmarks):
        # warp img_im so that src_landmarks map onto dst_landmarks
        pts1 = numpy.float64(
            numpy.matrix([[point[0], point[1]] for point in src_landmarks])
        )
        pts2 = numpy.float64(
            numpy.matrix([[point[0], point[1]] for point in dst_landmarks])
        )
        M = self.transformation_from_points(pts1, pts2)
        dst = cv2.warpAffine(img_im, M[:2], (img_im.shape[1], img_im.shape[0]))
        return dst
    
    def align(self, img, face_landmarks):
        dst = self.warp_im(img, face_landmarks, self.coord5point)  # warp the original image using its landmarks
        crop_im = dst[0: self.imgSize[0], 0: self.imgSize[1]]  # crop the target size from the warped image
        return crop_im
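
One detail worth noting: TrtMtcnn returns each face's landmarks flattened as [x1, ..., x5, y1, ..., y5], while align expects a list of five (x, y) points in the same order as coord5point. A small glue sketch, assuming the usual MTCNN landmark order (left eye, right eye, nose, left and right mouth corners):

import numpy as np

def to_point_list(marks):
    """Reshape a flattened [x1..x5, y1..y5] landmark row into [[x, y], ...]."""
    return np.stack([marks[0:5], marks[5:10]], axis=1)

aligner = FaceAligner()
idx = find_central_face(img, dets)
aligned = aligner.align(img, to_point_list(landmarks[idx]))  # 112x96 aligned crop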

What remains is running the face feature extractor on the two aligned faces, computing the Euclidean distance between the features, and thresholding it to decide the result. Final speedup: a 1080p image now takes only 20 ms, which meets the requirement perfectly!
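
The comparison step itself is simple. A minimal sketch, where the embeddings are whatever the feature network outputs and the threshold value below is only a placeholder that has to be tuned on a validation set:

import numpy as np

def is_same_person(emb1, emb2, threshold=1.1):
    """Compare two face embeddings by Euclidean distance after L2 normalization."""
    emb1 = emb1 / np.linalg.norm(emb1)
    emb2 = emb2 / np.linalg.norm(emb2)
    return np.linalg.norm(emb1 - emb2) < threshold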

Summary

This is my third day with TensorRT, and I have already used it to accelerate existing models. I am now fairly familiar with the TensorRT workflow, but model conversion, operator conversion and custom ops are still a blur and will need serious study, especially the C++ API, which looks hard but is actually much like the Python one, just with somewhat more verbose syntax.

Once TensorRT is fully mastered, every future model can be accelerated on it. What could be better?

References

1 https://github.com/kpzhang93/MTCNN_face_detection_alignment
2 https://github.com/ipazc/mtcnn
3 https://github.com/davidsandberg/facenet
4 https://jkjung-avt.github.io/tensorrt-mtcnn/
5 https://github.com/jkjung-avt/tensorrt_demos#mtcnn
