The most important step in training and testing an effective machine learning model is collecting a large amount of data and training on it effectively. Mini-batch training, in which a small subset of the data is used in each iteration, is the most effective training strategy.
However, as more and more machine learning tasks are performed on video datasets, the problem of efficiently batching videos of unequal length arises. Most solutions rely on cropping the videos to equal lengths so that the same number of frames is extracted in each iteration. This is not particularly helpful in scenarios where we need information from every frame to make good predictions, notably self-driving cars and action recognition.
In this article, we address the problem by building a processing pipeline that can handle videos of different lengths.
Using LoadStreams from Glenn Jocher's Yolov3 (https://github.com/ultralytics/yolov3) as a base, I created the LoadStreamsBatch class.
Class initialization
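The code snippets in this article omit module-level imports. The following is a sketch of what the class relies on; magic here is the python-magic package, used to detect video files by their MIME type:

import os
import time
from threading import Thread

import cv2
import magic
import numpy as np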
def __init__(self, sources='streams.txt', img_size=416, batch_size=2, subdir_search=False):
    self.mode = 'images'
    self.img_size = img_size
    self.def_img_size = None

    videos = []
    if os.path.isdir(sources):
        if subdir_search:
            for subdir, dirs, files in os.walk(sources):
                for file in files:
                    if 'video' in magic.from_file(subdir + os.sep + file, mime=True):
                        videos.append(subdir + os.sep + file)
        else:
            for elements in os.listdir(sources):
                if not os.path.isdir(elements) and 'video' in magic.from_file(sources + os.sep + elements, mime=True):
                    videos.append(sources + os.sep + elements)
    else:
        with open(sources, 'r') as f:
            videos = [x.strip() for x in f.read().splitlines() if len(x.strip())]

    n = len(videos)
    curr_batch = 0
    self.data = [None] * batch_size
    self.cap = [None] * batch_size
    self.sources = videos
    self.n = n
    self.cur_pos = 0

    # Start threads that read frames from the video streams
    for i, s in enumerate(videos):
        if curr_batch == batch_size:
            break
        print('%g/%g: %s... ' % (self.cur_pos + 1, n, s), end='')
        self.cap[curr_batch] = cv2.VideoCapture(s)
        try:
            assert self.cap[curr_batch].isOpened()
        except AssertionError:
            print('Failed to open %s' % s)
            self.cur_pos += 1
            continue
        w = int(self.cap[curr_batch].get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(self.cap[curr_batch].get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = self.cap[curr_batch].get(cv2.CAP_PROP_FPS) % 100
        frames = int(self.cap[curr_batch].get(cv2.CAP_PROP_FRAME_COUNT))
        _, self.data[i] = self.cap[curr_batch].read()  # guarantee first frame
        thread = Thread(target=self.update, args=([i, self.cap[curr_batch], self.cur_pos + 1]), daemon=True)
        print(' success (%gx%g at %.2f FPS having %g frames).' % (w, h, fps, frames))
        curr_batch += 1
        self.cur_pos += 1
        thread.start()
    print('')  # newline

    if all(v is None for v in self.data):
        return
    # Check for common shapes
    s = np.stack([letterbox(x, new_shape=self.img_size)[0].shape for x in self.data], 0)  # inference shapes
    self.rect = np.unique(s, axis=0).shape[0] == 1  # rect inference if all shapes are equal
    if not self.rect:
        print('WARNING: Different stream shapes detected. For optimal performance supply similarly-shaped streams.')
The __init__ function accepts four arguments. img_size is the same as in the original version; the other three parameters are defined as follows:

- sources: either a text file listing one video path per line, or a directory containing the videos.
- batch_size: the number of videos processed together in each batch.
- subdir_search: when True, subdirectories of sources are searched recursively for video files.
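As a quick illustration, here is a hedged sketch of the three ways the constructor can be pointed at videos (the paths are hypothetical):

dataset = LoadStreamsBatch('streams.txt', img_size=416, batch_size=2)    # text file, one video path per line
dataset = LoadStreamsBatch('videos/', batch_size=4)                      # flat directory, MIME-sniffed with python-magic
dataset = LoadStreamsBatch('videos/', batch_size=4, subdir_search=True)  # recurse into subdirectories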
Each frame is resized with the letterbox function, which scales the image and pads it into a rectangle whose sides are multiples of 32 pixels:

def letterbox(img, new_shape=(416, 416), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
    # Resize image to a 32-pixel-multiple rectangle  https://github.com/ultralytics/yolov3/issues/232
    shape = img.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = new_shape
        ratio = new_shape[0] / shape[1], new_shape[1] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into two sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return img, ratio, (dw, dh)
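To make the padding arithmetic concrete, here is a small worked example; the 1280x720 frame is hypothetical, and it assumes the letterbox function above together with the imports listed earlier:

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # shape = (720, 1280), so r = min(416/720, 416/1280) = 0.325
out, ratio, (dw, dh) = letterbox(frame, new_shape=416)
# new_unpad = (416, 234); dh = 416 - 234 = 182, and 182 mod 64 = 54, i.e. 27 px of padding top and bottom
print(out.shape)  # (288, 416, 3): the height is padded up to a multiple of 32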
Frame retrieval at fixed intervals
The update function has one small change: we additionally store the default image size, for the case where all videos have been picked up for processing but, because the lengths are unequal, one video finishes before another. This will become clearer when I explain the next part of the code, the __next__ function.
def update(self, index, cap, cur_pos):
    # Read the next frame in a daemon thread
    n = 0
    while cap.isOpened():
        n += 1
        # _, self.imgs[index] = cap.read()
        cap.grab()
        if n == 4:  # read every 4th frame
            _, self.data[index] = cap.retrieve()
            if self.def_img_size is None:
                self.def_img_size = self.data[index].shape
            n = 0
        time.sleep(0.01)  # wait
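The grab()/retrieve() split is what makes skipping frames cheap: grab() only advances the stream, while retrieve() performs the actual decode, so the skipped frames are never decoded. A minimal standalone sketch of the same idiom (the file name is hypothetical):

import cv2

cap = cv2.VideoCapture('example.mp4')
n = 0
while cap.isOpened():
    if not cap.grab():  # advance one frame without decoding it
        break
    n += 1
    if n == 4:  # decode only every 4th frame
        ok, frame = cap.retrieve()
        n = 0
cap.release()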
Iterator
If a frame exists, it is passed to the letterbox function as before. If the frame is None, the video has been fully processed, and we check whether all the videos in the list have been handled. If there are more videos to process, the cur_pos pointer is used to fetch the position of the next available video.
If no more videos can be pulled from the list but some are still being processed, a blank frame is sent in place of the finished video, i.e. the slot is padded dynamically based on the frames remaining in the other batch slots.
def __next__(self):
    self.count += 1
    img0 = self.data.copy()
    img = []
    for i, x in enumerate(img0):
        if x is not None:
            img.append(letterbox(x, new_shape=self.img_size, auto=self.rect)[0])
        else:
            if self.cur_pos == self.n:  # no videos left in the list
                if all(v is None for v in img0):
                    cv2.destroyAllWindows()
                    raise StopIteration
                else:
                    # Pad the finished slot with a blank frame of the default size
                    img0[i] = np.zeros(self.def_img_size)
                    img.append(letterbox(img0[i], new_shape=self.img_size, auto=self.rect)[0])
            else:
                # Pull the next available video into this batch slot
                print('%g/%g: %s... ' % (self.cur_pos + 1, self.n, self.sources[self.cur_pos]), end='')
                self.cap[i] = cv2.VideoCapture(self.sources[self.cur_pos])
                fldr_end_flg = 0
                while not self.cap[i].isOpened():
                    print('Failed to open %s' % self.sources[self.cur_pos])
                    self.cur_pos += 1
                    if self.cur_pos == self.n:
                        img0[i] = np.zeros(self.def_img_size)
                        img.append(letterbox(img0[i], new_shape=self.img_size, auto=self.rect)[0])
                        fldr_end_flg = 1
                        break
                    self.cap[i] = cv2.VideoCapture(self.sources[self.cur_pos])
                if fldr_end_flg:
                    continue
                w = int(self.cap[i].get(cv2.CAP_PROP_FRAME_WIDTH))
                h = int(self.cap[i].get(cv2.CAP_PROP_FRAME_HEIGHT))
                fps = self.cap[i].get(cv2.CAP_PROP_FPS) % 100
                frames = int(self.cap[i].get(cv2.CAP_PROP_FRAME_COUNT))
                _, self.data[i] = self.cap[i].read()  # guarantee first frame
                img0[i] = self.data[i]
                img.append(letterbox(self.data[i], new_shape=self.img_size, auto=self.rect)[0])
                thread = Thread(target=self.update, args=([i, self.cap[i], self.cur_pos + 1]), daemon=True)
                print(' success (%gx%g at %.2f FPS having %g frames).' % (w, h, fps, frames))
                self.cur_pos += 1
                thread.start()
                print('')  # newline

    # Stack
    img = np.stack(img, 0)

    # Convert BGR to RGB, to bs x 3 x 416 x 416
    img = img[:, :, :, ::-1].transpose(0, 3, 1, 2)
    img = np.ascontiguousarray(img)

    return self.sources, img, img0, None
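For completeness, here is a hedged sketch of how the class might be consumed in a detection loop. The directory and model are hypothetical, and it assumes the class also defines the usual __iter__ method from the upstream LoadStreams (which resets self.count):

import torch

dataset = LoadStreamsBatch('videos/', img_size=416, batch_size=2, subdir_search=True)
for sources, img, img0, _ in dataset:
    img = torch.from_numpy(img).float() / 255.0  # bs x 3 x 416 x 416, RGB, scaled to [0, 1]
    # pred = model(img)  # run the whole variable-length batch through the detector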
Conclusion
With so much time going into data collection and preprocessing, I believe this helps cut down the time spent fitting the videos to the model, so that we can focus on fitting the model to the data.
Reference: https://towardsdatascience.com/variable-sized-video-mini-batching-c4b1a47c043b