Last time we successfully trained a palm detector (http://www.cnblogs.com/take-fetter/p/8438747.html), which produces detection results like the one shown below.
Next we need OpenCV to extract the palm and remove the background. This involves masks and ROIs (regions of interest); I won't go over the concepts themselves, since there is plenty of material online, but a short sketch follows.
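For completeness, here is a minimal sketch of both ideas (the file name and coordinates are made up for illustration, not taken from the project): a mask is a binary image that selects which pixels to keep, and an ROI is nothing more than a NumPy slice of the image array.

import cv2

# Hypothetical input; any palm photo works here
img = cv2.imread('palm.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Build a binary mask by thresholding, then keep only the masked pixels
_, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)
masked = cv2.bitwise_and(img, img, mask=mask)

# An ROI is just a slice: rows (y) first, then columns (x)
roi = masked[100:300, 150:350]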
First, crop the palm out of the image using the bounding box drawn by the previous program (of course, taking and saving a screenshot yourself works too -.-), as shown below.
Algorithm idea: starting from the black-and-white image, use a distance transform to locate the palm center, then draw the palm's inscribed circle with the maximum radius, as shown in the figure.
The code is as follows:
# Calculates the distance to the closest zero pixel for each pixel of the source image
distance = cv2.distanceTransform(black_and_white, cv2.DIST_L2, 5)

# Scan the distance map for its maximum: that pixel is the palm center,
# and the distance value there is the radius of the inscribed circle
maxdist = 0
for i in range(distance.shape[0]):
    for j in range(distance.shape[1]):
        dist = distance[i][j]
        if maxdist < dist:
            x = j
            y = i
            maxdist = dist

cv2.circle(original, (x, y), int(maxdist), (255, 100, 255), 1, 8, 0)
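As an aside, the double loop is just a maximum search over the distance map, so cv2.minMaxLoc should give the same center and radius in a single call (an equivalent alternative, not the original author's code):

# minMaxLoc returns (minVal, maxVal, minLoc, maxLoc); maxLoc is already (x, y)
_, maxdist, _, (x, y) = cv2.minMaxLoc(distance)
cv2.circle(original, (x, y), int(maxdist), (255, 100, 255), 1, 8, 0)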
Now that we know the circle's radius and center coordinates, we can use an ROI to extract the inscribed square, whose half side is r·cos(45°) = r/√2. The inscribed square does lose quite a bit of information, but so far I haven't thought of a better approach. The square is drawn like this:
The code for drawing the square and extracting the region:
import math

final_img = original.copy()

# (the cv2.circle() call from the previous step goes here)
# Half the side length of the square inscribed in the circle: r * cos(45°)
half_slide = maxdist * math.cos(math.pi / 4)
(left, right, top, bottom) = ((x - half_slide), (x + half_slide),
                              (y - half_slide), (y + half_slide))
p1 = (int(left), int(top))
p2 = (int(right), int(bottom))
cv2.rectangle(original, p1, p2, (77, 255, 9), 1, 1)
final_img = final_img[int(top):int(bottom), int(left):int(right)]
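One caveat (my own addition, not in the original script): if the inscribed circle touches the image border, left or top can go negative, and a negative index in a NumPy slice silently wraps around to the other side of the image. A safer version of the final slicing line clamps the square to the image bounds first:

# Hypothetical guard: clamp the square to the image before slicing
h, w = final_img.shape[:2]
top, left = max(int(top), 0), max(int(left), 0)
bottom, right = min(int(bottom), h), min(int(right), w)
final_img = final_img[top:bottom, left:right]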
Screenshot of the run:
You can see a gray area that, by rights, should not be there. Saving the result with cv2.imwrite instead shows no problem at all, as in this figure:
感受是cv2.imshow對於輸出圖片的像素大小有必定限制,進行了自動填充或者是默認有灰色做爲背景色且比在這裏咱們提取出的圖片要大code
Full code: https://github.com/takefetter/Get_PalmPrint/blob/master/process_palm.py
1.https://github.com/dev-td7/Automatic-Hand-Detection-using-Wrist-localisation 這位老哥的repo,基於膚色的提取和造成近似橢圓給個人啓發很大(雖而後半部分徹底沒有用.....)
2. http://answers.opencv.org/question/180668/how-to-find-the-center-of-one-palm-in-the-picture/ — the distance-transform approach comes from the answer here; in a way, this post also fulfills that asker's request.
Please credit the source when reposting: http://www.cnblogs.com/take-fetter/p/8453589.html