Phase one: recognize the faces in a picture and label them (for example, with the person's name).
Phase two: recognize people through a camera, a typical application being a face-based attendance system.
Face-api.js is a JavaScript API for face detection and face recognition in the browser, built on the tensorflow.js core API. It implements a series of convolutional neural networks (CNNs) optimized for the web and for mobile devices. Powerful and easy to use.
A JavaScript file upload library. Files can be dragged in for upload, and images are optimized to speed up uploading, giving users an excellent, progress-visible, silky-smooth experience. A genuinely cool open-source image upload component.
A JavaScript library that displays images, videos, and some HTML content in an elegant way. It includes every feature you would expect: touch support, responsive layout, and extensive customization.
Demo: http://221.224.21.30:2020/FaceLibs/Index  Password: 123456
Note: Rocket Raccoon, Iron Man, and War Machine in the red boxes were not recognized correctly. They can be picked up by adjusting some parameters, but other problems remain; the trained models are probably missing face data for masked faces and animated characters.
Let's look at the code first. This kind of development is not as hard as it might seem, because the hard core parts have already been implemented by others, so it is no different from ordinary application development: once you are familiar with these API methods and what they do, you can build very practical and very cool products.
Download pictures of each person and sort them by label.
Here is a brief explanation of the face-api.js code used in this project.
function dodetectpic() {
    $.messager.progress();
    // Load the pretrained models (weights, biases)
    Promise.all([
        faceapi.nets.faceRecognitionNet.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
        faceapi.nets.faceLandmark68Net.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
        faceapi.nets.faceLandmark68TinyNet.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
        faceapi.nets.ssdMobilenetv1.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
        faceapi.nets.tinyFaceDetector.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
        faceapi.nets.mtcnn.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
        //faceapi.nets.tinyYolov.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights')
    ]).then(async () => {
        // Add a layer on top of the image container, used to draw the blue detection boxes
        const container = document.createElement('div')
        container.style.position = 'relative'
        $('#picmodal').prepend(container)
        // Load the prepared face data (face descriptors plus labels, used for matching later)
        const labeledFaceDescriptors = await loadLabeledImages()
        // Matcher that compares face descriptors against the labeled data
        const faceMatcher = new faceapi.FaceMatcher(labeledFaceDescriptors, 0.6)
        // Get the input image
        let image = document.getElementById('testpic')
        // Create a canvas matching the image size, used to draw the boxes
        let canvas = faceapi.createCanvasFromMedia(image)
        //console.log(canvas);
        container.prepend(canvas)
        const displaySize = { width: image.width, height: image.height }
        faceapi.matchDimensions(canvas, displaySize)
        // Choose the algorithm and parameters used to detect faces in the image
        const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.38 })
        //const options = new faceapi.TinyFaceDetectorOptions()
        //const options = new faceapi.MtcnnOptions()
        // Compute the descriptor of every face found in the image
        const detections = await faceapi.detectAllFaces(image, options).withFaceLandmarks().withFaceDescriptors()
        // Resize the boxes to match the displayed image size
        const resizedDetections = faceapi.resizeResults(detections, displaySize)
        // Compare against the prepared labels and find the best match for each face
        const results = resizedDetections.map(d => faceMatcher.findBestMatch(d.descriptor))
        console.log(results)
        results.forEach((result, i) => {
            // Draw the match result
            const box = resizedDetections[i].detection.box
            const drawBox = new faceapi.draw.DrawBox(box, { label: result.toString() })
            drawBox.draw(canvas)
            console.log(box, drawBox)
        })
        $.messager.progress('close');
    })
}

// Load the labeled face data
async function loadLabeledImages() {
    // Fetch the face image data: image URL + label
    const data = await $.get('/FaceLibs/GetImgData');
    // Group the images by label
    const labels = [...new Set(data.map(item => item.Label))]
    console.log(labels);
    return Promise.all(
        labels.map(async label => {
            const descriptions = []
            const imgs = data.filter(item => item.Label == label);
            for (let i = 0; i < imgs.length; i++) {
                const item = imgs[i];
                const img = await faceapi.fetchImage(`${item.ImgUrl}`)
                //console.log(item.ImgUrl, img);
                //const detections = await faceapi.detectSingleFace(img).withFaceLandmarks().withFaceDescriptor()
                // Detection options
                const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.38 })
                //const options = new faceapi.TinyFaceDetectorOptions()
                //const options = new faceapi.MtcnnOptions()
                // Detect the single face in the image and compute its descriptor
                const detections = await faceapi.detectSingleFace(img, options).withFaceLandmarks().withFaceDescriptor()
                console.log(detections);
                if (detections) {
                    descriptions.push(detections.descriptor)
                } else {
                    console.warn('Unrecognizable face')
                }
            }
            console.log(label, descriptions);
            return new faceapi.LabeledFaceDescriptors(label, descriptions)
        })
    )
}
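For reference, loadLabeledImages above only assumes that /FaceLibs/GetImgData returns an array of records, each with a label and an image URL. The shape below is a hypothetical example of that response; the actual labels and paths will differ:

// Hypothetical response shape consumed by loadLabeledImages():
// one entry per stored face image, grouped later by Label.
const exampleResponse = [
    { Label: 'Iron Man',       ImgUrl: '/FaceLibs/Imgs/ironman/1.jpg' },
    { Label: 'Iron Man',       ImgUrl: '/FaceLibs/Imgs/ironman/2.jpg' },
    { Label: 'Rocket Raccoon', ImgUrl: '/FaceLibs/Imgs/rocket/1.jpg' }
]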
face-api has several very important methods; the notes below all come from the introduction at https://github.com/justadudewhohacks/face-api.js/.
Before using these methods you must first load the pretrained models; there is no need to train on your own photos. face-api.js appears to be adapted from tensorflow.js, so these pretrained models should also be usable with the Python version of TensorFlow. All available models can be found at https://github.com/justadudewhohacks/face-api.js/tree/master/weights.
// Load the pretrained models (weights, biases)
// ageGenderNet          estimates age and gender
// faceExpressionNet     recognizes expressions: happy, sad, neutral, ...
// faceLandmark68Net     detects facial landmarks, used with the mobilenet detector
// faceLandmark68TinyNet detects facial landmarks, used with the tiny detector
// faceRecognitionNet    computes face descriptors for recognition
// ssdMobilenetv1        face detector based on Google's open-source SSD MobileNet V1
// tinyFaceDetector      lighter and a bit faster than Google's mobilenet
// mtcnn                 multi-task CNN; it froze the browser as soon as it started in my tests
// tinyYolov2            detects body outlines; I have not figured out how to use it
Promise.all([
    faceapi.nets.faceRecognitionNet.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
    faceapi.nets.faceLandmark68Net.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
    faceapi.nets.faceLandmark68TinyNet.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
    faceapi.nets.ssdMobilenetv1.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
    faceapi.nets.tinyFaceDetector.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
    faceapi.nets.mtcnn.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights'),
    //faceapi.nets.tinyYolov.loadFromUri('https://raw.githubusercontent.com/justadudewhohacks/face-api.js/master/weights')
]).then(async () => {})
These parameter settings are very important; they help a lot when optimizing recognition performance and matching accuracy, but they do require gradual fine-tuning.
SsdMobilenetv1Options

export interface ISsdMobilenetv1Options {
    // minimum confidence threshold
    // default: 0.5
    minConfidence?: number

    // maximum number of faces to return
    // default: 100
    maxResults?: number
}

// example
const options = new faceapi.SsdMobilenetv1Options({ minConfidence: 0.8 })

TinyFaceDetectorOptions

export interface ITinyFaceDetectorOptions {
    // size at which image is processed, the smaller the faster,
    // but less precise in detecting smaller faces, must be divisible
    // by 32, common sizes are 128, 160, 224, 320, 416, 512, 608,
    // for face tracking via webcam I would recommend using smaller sizes,
    // e.g. 128, 160, for detecting smaller faces use larger sizes, e.g. 512, 608
    // default: 416
    inputSize?: number

    // minimum confidence threshold
    // default: 0.5
    scoreThreshold?: number
}

// example
const options = new faceapi.TinyFaceDetectorOptions({ inputSize: 320 })

MtcnnOptions

export interface IMtcnnOptions {
    // minimum face size to expect, the higher the faster processing will be,
    // but smaller faces won't be detected
    // default: 20
    minFaceSize?: number

    // the score threshold values used to filter the bounding
    // boxes of stage 1, 2 and 3
    // default: [0.6, 0.7, 0.7]
    scoreThresholds?: number[]

    // scale factor used to calculate the scale steps of the image
    // pyramid used in stage 1
    // default: 0.709
    scaleFactor?: number

    // number of scaled versions of the input image passed through the CNN
    // of the first stage, lower numbers will result in lower inference time,
    // but will also be less accurate
    // default: 10
    maxNumScales?: number

    // instead of specifying scaleFactor and maxNumScales you can also
    // set the scaleSteps manually
    scaleSteps?: number[]
}

// example
const options = new faceapi.MtcnnOptions({ minFaceSize: 100, scaleFactor: 0.8 })
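As a rough illustration of the tuning mentioned above (the two threshold values and the 'testpic' element id are just examples, not the project's final settings), you can run the same detector twice with different minConfidence values and compare how many faces come back:

// Sketch: see how minConfidence affects the number of detected faces.
async function compareThresholds() {
    const image = document.getElementById('testpic')
    const strict = await faceapi.detectAllFaces(image, new faceapi.SsdMobilenetv1Options({ minConfidence: 0.5 }))
    const loose  = await faceapi.detectAllFaces(image, new faceapi.SsdMobilenetv1Options({ minConfidence: 0.38 }))
    // A lower threshold usually finds more faces, at the cost of more false positives
    console.log(`minConfidence 0.5 -> ${strict.length} faces, 0.38 -> ${loose.length} faces`)
}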
The most commonly used detection methods are listed below; just chain on the corresponding call for whatever you want to detect.
// all faces
await faceapi.detectAllFaces(input)
await faceapi.detectAllFaces(input).withFaceExpressions()
await faceapi.detectAllFaces(input).withFaceLandmarks()
await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions()
await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions().withFaceDescriptors()
await faceapi.detectAllFaces(input).withFaceLandmarks().withAgeAndGender().withFaceDescriptors()
await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceExpressions().withAgeAndGender().withFaceDescriptors()

// single face
await faceapi.detectSingleFace(input)
await faceapi.detectSingleFace(input).withFaceExpressions()
await faceapi.detectSingleFace(input).withFaceLandmarks()
await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions()
await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions().withFaceDescriptor()
await faceapi.detectSingleFace(input).withFaceLandmarks().withAgeAndGender().withFaceDescriptor()
await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceExpressions().withAgeAndGender().withFaceDescriptor()
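To show what these chained calls give back, here is a small sketch that reads the extra fields from a single-face result. It assumes ageGenderNet and faceExpressionNet have been loaded in addition to the detector and landmark models, and 'testpic' is just an example element id:

// Sketch: inspect the fields added by withFaceExpressions() and withAgeAndGender().
async function describeFace() {
    const input = document.getElementById('testpic')
    const result = await faceapi
        .detectSingleFace(input, new faceapi.SsdMobilenetv1Options({ minConfidence: 0.38 }))
        .withFaceLandmarks()
        .withFaceExpressions()
        .withAgeAndGender()
    if (!result) return console.warn('no face found')
    // age is a number, gender a string, expressions a map of per-expression probabilities
    console.log(Math.round(result.age), result.gender, result.expressions)
}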
ml5js.org (https://ml5js.org/) has many nicely wrapped, detailed examples. Highly recommended.
Next I plan to build the second part of the functionality: quickly recognizing faces through a camera to make a face-based attendance application. There should not be much work left; it mostly comes down to wiring up the camera.
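As a rough sketch of what that camera step might look like (the element ids, the interval, and the TinyFaceDetector options are placeholder assumptions, not the finished attendance code), the same detection pipeline can be pointed at a video element fed by getUserMedia:

// Sketch: run the detection loop against a webcam <video id="video"> element.
const video = document.getElementById('video')
navigator.mediaDevices.getUserMedia({ video: {} })
    .then(stream => { video.srcObject = stream })
    .catch(err => console.error('camera error', err))

video.addEventListener('play', () => {
    const canvas = faceapi.createCanvasFromMedia(video)
    document.body.append(canvas)
    const displaySize = { width: video.width, height: video.height }
    faceapi.matchDimensions(canvas, displaySize)
    setInterval(async () => {
        // TinyFaceDetector is usually fast enough for webcam tracking
        const detections = await faceapi
            .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
            .withFaceLandmarks()
        const resized = faceapi.resizeResults(detections, displaySize)
        canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
        faceapi.draw.drawDetections(canvas, resized)
    }, 200)
})

For attendance, the per-frame detections could additionally be chained with withFaceDescriptors() and matched against the same FaceMatcher used for still images above.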