Motion Detection
Using motion detection for near-field recognition is one of the most interesting approaches. The basic principle of motion detection is to establish an initial baseline RGB image, then compare every frame captured by the camera against that baseline. If a difference is found, we can conclude that something has entered the camera's field of view.
It is not hard to see that this strategy has flaws. In real life, objects move. In a room, someone may nudge a piece of furniture; outdoors, a car may start up, or the wind may sway small trees back and forth. In these scenarios there is no continuous motion, yet the state of the scene has still changed, and under the strategy above the system would judge incorrectly. In such cases, therefore, we need to refresh the baseline image from time to time.
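To make the idea concrete before bringing in Emgu, here is a minimal, library-free sketch of frame differencing. Frames are modeled as grayscale byte arrays; the class name, array sizes, and both thresholds are illustrative assumptions, not part of the Emgu code used later in this article.

```csharp
using System;

// Minimal frame-differencing sketch: compare each frame against a
// baseline and report the fraction of pixels that changed noticeably.
static class FrameDiffDemo
{
    // Returns the fraction of pixels whose absolute difference from
    // the baseline exceeds pixelThreshold.
    public static double ChangedFraction(byte[] baseline, byte[] frame, int pixelThreshold)
    {
        int changed = 0;
        for (int i = 0; i < baseline.Length; i++)
            if (Math.Abs(baseline[i] - frame[i]) > pixelThreshold)
                changed++;
        return (double)changed / baseline.Length;
    }

    static void Main()
    {
        var baseline = new byte[100];                // all-black baseline image
        var frame = new byte[100];
        for (int i = 0; i < 30; i++) frame[i] = 200; // 30% of pixels "move"

        double fraction = ChangedFraction(baseline, frame, pixelThreshold: 50);
        bool motionDetected = fraction > 0.05;       // scene-level threshold
        Console.WriteLine($"changed fraction: {fraction}, motion: {motionDetected}");
    }
}
```

Refreshing the baseline, as described above, would simply mean replacing `baseline` with a recent frame every so often so that slow scene changes stop counting as motion.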
The official website of the EmguCV project is http://www.emgu.com/wiki/index.php/Main_Page; the actual source code and installers are hosted on SourceForge (http://sourceforge.net/projects/emgucv/files/ ). The Emgu version used in this article is 2.3.0. Installing Emgu is simple and straightforward: just run the downloaded executable. One thing to note, however, is that EmguCV seems to run best on x86 machines. If you develop on a 64-bit machine, it is best to set the target platform for the Emgu projects to x86, as shown in the figure below (you can also download the source from the official site and build it yourself for x64).
To use the Emgu library, add references to the following three DLLs: Emgu.CV, Emgu.CV.UI, and Emgu.Util.
Because Emgu is a .NET wrapper around a C++ library, a number of additional unmanaged DLLs must be placed where Emgu can find them. Emgu looks for these DLLs in the application's execution directory: bin/Debug in debug mode, bin/Release in release mode. Eleven unmanaged C++ DLLs must be placed there: opencv_calib3d231.dll, opencv_contrib231.dll, opencv_core231.dll, opencv_features2d231.dll, opencv_ffmpeg.dll, opencv_highgui231.dll, opencv_imgproc231.dll, opencv_legacy231.dll, opencv_ml231.dll, opencv_objectdetect231.dll, and opencv_video231.dll. All of them can be found in Emgu's installation directory; for convenience you can simply copy every DLL whose name starts with opencv_.
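One way to avoid copying these DLLs by hand after every clean build is a post-build event. The snippet below is only a sketch: the Emgu install path is an assumption that must be adjusted to your machine, while `$(TargetDir)` is the standard Visual Studio macro for the project's output directory.

```shell
rem Hypothetical post-build event (Project Properties > Build Events):
rem copy Emgu's unmanaged OpenCV DLLs next to the built executable.
rem Adjust the source path to your actual Emgu installation.
xcopy /y /d "C:\Emgu\emgucv-windows-x86 2.3.0\bin\opencv_*.dll" "$(TargetDir)"
```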
Our extension method library needs a few additional helpers. As discussed in the previous article, each image library has its own core image type. In Emgu, this core type is the generic Image&lt;TColor, TDepth&gt; class, which implements the Emgu.CV.IImage interface. The code below shows extension methods that convert between the image formats we are already familiar with and Emgu's own format. Create a new static class in a file named EmguExtensions.cs and set its namespace to ImageManipulationExtensionMethods, the same namespace as our earlier ImageExtensions class, so that all extension methods live in a single namespace. This class handles conversions among three image types: from Microsoft.Kinect.ColorImageFrame to Emgu.CV.Image&lt;TColor, TDepth&gt;, from System.Drawing.Bitmap to Emgu.CV.Image&lt;TColor, TDepth&gt;, and from Emgu.CV.Image&lt;TColor, TDepth&gt; to System.Windows.Media.Imaging.BitmapSource.
To implement motion detection with the Emgu library we will use the polling model described in an earlier article, rather than the event-based mechanism, to obtain data. This is because image processing consumes a great deal of CPU and memory, and we want to be able to throttle the processing rate, which only polling allows. Note that this example is only meant to demonstrate motion detection, so it favors readable code over performance; once you understand it, feel free to improve it.
The color image stream is used both to update the Image control's source and to perform the motion detection; as the full listing's title notes, only RGB data is used for motion tracking in this example. As discussed in earlier articles, the CompositionTarget.Rendering event is typically used to pull frames from the color stream. Here, instead, we create a BackgroundWorker object to do the work: it calls a Pulse method that pulls a frame of image data and performs the computationally expensive processing. When the BackgroundWorker completes one cycle, it pulls the next frame from the stream and processes it in turn. The code declares two Emgu member variables, of types MotionHistory and IBGFGDetector; used together, they continually update the baseline image and compare incoming frames against it to detect motion.
KinectSensor _kinectSensor;
private MotionHistory _motionHistory;
private IBGFGDetector<Bgr> _forgroundDetector;
bool _isTracking = false;

public MainWindow()
{
    InitializeComponent();

    this.Unloaded += delegate
    {
        _kinectSensor.ColorStream.Disable();
    };

    this.Loaded += delegate
    {
        _motionHistory = new MotionHistory(
            1.0,   // in seconds, the duration of motion history you want to keep
            0.05,  // in seconds, parameter for cvCalcMotionGradient
            0.5);  // in seconds, parameter for cvCalcMotionGradient

        _kinectSensor = KinectSensor.KinectSensors[0];
        _kinectSensor.ColorStream.Enable();
        _kinectSensor.Start();

        BackgroundWorker bw = new BackgroundWorker();
        bw.DoWork += (a, b) => Pulse();
        bw.RunWorkerCompleted += (c, d) => bw.RunWorkerAsync();
        bw.RunWorkerAsync();
    };
}
The code below is the heart of the image processing that performs the motion detection; it is adapted from Emgu's sample code. The first task in the Pulse method is to convert the ColorImageFrame produced by the color stream into an image type that Emgu can process. The _forgroundDetector object is used to update _motionHistory, the container holding the continually updated baseline image; it is also used to compare the current frame against that baseline to determine whether anything has changed. When the image obtained from the color stream differs from the baseline, an image is created to capture the difference between the two, and that difference is then decomposed into a sequence of smaller motion components. We iterate over these components to see whether any exceeds our motion threshold. If the motion is significant we display the video image on screen; otherwise we display nothing.
private void Pulse()
{
    using (ColorImageFrame imageFrame = _kinectSensor.ColorStream.OpenNextFrame(200))
    {
        if (imageFrame == null)
            return;

        using (Image<Bgr, byte> image = imageFrame.ToOpenCVImage<Bgr, byte>())
        using (MemStorage storage = new MemStorage()) // create storage for motion components
        {
            if (_forgroundDetector == null)
            {
                _forgroundDetector = new BGStatModel<Bgr>(image,
                    Emgu.CV.CvEnum.BG_STAT_TYPE.GAUSSIAN_BG_MODEL);
            }

            _forgroundDetector.Update(image);

            // update the motion history
            _motionHistory.Update(_forgroundDetector.ForegroundMask);

            // get a copy of the motion mask and enhance its color
            double[] minValues, maxValues;
            System.Drawing.Point[] minLoc, maxLoc;
            _motionHistory.Mask.MinMax(out minValues, out maxValues, out minLoc, out maxLoc);
            Image<Gray, Byte> motionMask = _motionHistory.Mask.Mul(255.0 / maxValues[0]);

            // create the motion image
            Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
            motionImage[0] = motionMask;

            // threshold to define a motion area;
            // reduce the value to detect smaller motion
            double minArea = 100;

            storage.Clear(); // clear the storage
            Seq<MCvConnectedComp> motionComponents = _motionHistory.GetMotionComponents(storage);
            bool isMotionDetected = false;

            // iterate through each of the motion components
            for (int c = 0; c < motionComponents.Count(); c++)
            {
                MCvConnectedComp comp = motionComponents[c];
                // reject the components that have a small area
                if (comp.area < minArea)
                    continue;

                OnDetection();
                isMotionDetected = true;
                break;
            }

            if (isMotionDetected == false)
            {
                OnDetectionStopped();
                this.Dispatcher.Invoke(new Action(() => rgbImage.Source = null));
                return;
            }

            this.Dispatcher.Invoke(
                new Action(() => rgbImage.Source = imageFrame.ToBitmapSource()));
        }
    }
}

private void OnDetection()
{
    if (!_isTracking)
        _isTracking = true;
}

private void OnDetectionStopped()
{
    _isTracking = false;
}
Motion templates: motion detection (using RGB information only)
<Window x:Class="KinectMovementDetection.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="400" Width="525">
    <Grid>
        <Image Name="rgbImage" Stretch="Fill"/>
    </Grid>
</Window>
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using Microsoft.Kinect;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;
using System.Windows;
using System.IO;

namespace ImageManipulationExtensionMethods
{
    public static class EmguImageExtensions
    {
        public static Image<TColor, TDepth> ToOpenCVImage<TColor, TDepth>(this ColorImageFrame image)
            where TColor : struct, IColor
            where TDepth : new()
        {
            var bitmap = image.ToBitmap();
            return new Image<TColor, TDepth>(bitmap);
        }

        public static Image<TColor, TDepth> ToOpenCVImage<TColor, TDepth>(this Bitmap bitmap)
            where TColor : struct, IColor
            where TDepth : new()
        {
            return new Image<TColor, TDepth>(bitmap);
        }

        public static System.Windows.Media.Imaging.BitmapSource ToBitmapSource(this IImage image)
        {
            var source = image.Bitmap.ToBitmapSource();
            return source;
        }
    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.VideoSurveillance;
using Microsoft.Kinect;
using System.ComponentModel;
using ImageManipulationExtensionMethods;

namespace KinectMovementDetection
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        KinectSensor _kinectSensor;
        private MotionHistory _motionHistory;           // motion history template
        private IBGFGDetector<Bgr> _forgroundDetector;
        bool _isTracking = false;

        public MainWindow()
        {
            InitializeComponent();

            this.Unloaded += delegate
            {
                _kinectSensor.ColorStream.Disable();
            };

            this.Loaded += delegate
            {
                _motionHistory = new MotionHistory(
                    1.0,   // in seconds, the duration of motion history you want to keep
                    0.05,  // in seconds, parameter for cvCalcMotionGradient
                    0.5);  // in seconds, parameter for cvCalcMotionGradient

                _kinectSensor = KinectSensor.KinectSensors[0];
                _kinectSensor.ColorStream.Enable();
                _kinectSensor.Start();

                BackgroundWorker bw = new BackgroundWorker(); // runs the processing on a separate thread
                bw.DoWork += (a, b) => Pulse();
                bw.RunWorkerCompleted += (c, d) => bw.RunWorkerAsync();
                bw.RunWorkerAsync();
            };
        }

        private void Pulse()
        {
            using (ColorImageFrame imageFrame = _kinectSensor.ColorStream.OpenNextFrame(200))
            {
                if (imageFrame == null)
                    return;

                using (Image<Bgr, byte> image = imageFrame.ToOpenCVImage<Bgr, byte>())
                using (MemStorage storage = new MemStorage()) // create storage for motion components
                {
                    if (_forgroundDetector == null)
                    {
                        _forgroundDetector = new BGStatModel<Bgr>(image,
                            Emgu.CV.CvEnum.BG_STAT_TYPE.GAUSSIAN_BG_MODEL);
                    }

                    _forgroundDetector.Update(image);

                    // update the motion history
                    _motionHistory.Update(_forgroundDetector.ForegroundMask);

                    // get a copy of the motion mask and enhance its color
                    double[] minValues, maxValues;
                    System.Drawing.Point[] minLoc, maxLoc;
                    _motionHistory.Mask.MinMax(out minValues, out maxValues, out minLoc, out maxLoc);
                    Image<Gray, Byte> motionMask = _motionHistory.Mask.Mul(255.0 / maxValues[0]);

                    // create the motion image
                    Image<Bgr, Byte> motionImage = new Image<Bgr, byte>(motionMask.Size);
                    motionImage[0] = motionMask;

                    // threshold to define a motion area;
                    // reduce the value to detect smaller motion
                    double minArea = 100;

                    storage.Clear(); // clear the storage
                    Seq<MCvConnectedComp> motionComponents = _motionHistory.GetMotionComponents(storage);
                    bool isMotionDetected = false;

                    // iterate through each of the motion components
                    for (int c = 0; c < motionComponents.Count(); c++)
                    {
                        MCvConnectedComp comp = motionComponents[c];
                        // reject the components that have a small area
                        if (comp.area < minArea)
                            continue;

                        OnDetection();
                        isMotionDetected = true;
                        break;
                    }

                    if (isMotionDetected == false)
                    {
                        OnDetectionStopped();
                        this.Dispatcher.Invoke(new Action(() => rgbImage.Source = null));
                        return;
                    }

                    this.Dispatcher.Invoke(
                        new Action(() => rgbImage.Source = imageFrame.ToBitmapSource()));
                }
            }
        }

        private void OnDetection()
        {
            if (!_isTracking)
                _isTracking = true;
        }

        private void OnDetectionStopped()
        {
            _isTracking = false;
        }
    }
}