Reposted by the author. Original post: http://blog.csdn.net/nmlh7448...
Microsoft once promoted the Kinect heavily, but I could never bring myself to buy one. My first hands-on contact was at a hackathon, where my team built an "air mouse" project on top of a Kinect, and the device left a strong impression on me. Recently I finally managed to borrow a first-generation Kinect for Windows, together with Microsoft's official development book (《Kinect应用开发实战——用最自然的方式与机器对话》), and spent some time studying Kinect development.
There is plenty of material about the Kinect itself online, so I won't repeat it here. Since we are developing for Microsoft's own hardware, the full Microsoft stack is the natural choice: VS2015 (C#) + SDK v1.8 + Developer Toolkit v1.8. The SDK can be downloaded straight from Microsoft's website; there are also third-party SDKs, but I don't know them well enough to comment. The first-generation Kinect comes in two versions, Kinect for Windows and Kinect for Xbox 360: the Windows version is labeled "Kinect" on the front, the Xbox version is labeled "XBOX 360", and the Xbox version needs an adapter cable to connect to a PC. Strangely, I once plugged an Xbox-version Kinect directly into a PC and it still worked. I also initially installed SDK v2.0 and it managed to drive a Kinect v1, even though SDK v2.0 is only supposed to support the second-generation Kinect; perhaps Microsoft kept some support for the older hardware after all. To be safe, though, I installed SDK v1.8 and used a Kinect for Windows.
After connecting the Kinect to the PC, open the Developer Toolkit Browser and run one of the demos to check that the sensor works. Under normal conditions the LED on the front of the Kinect stays solid green. I do have to complain about the quality of the Kinect power cable: both times I have handled a Kinect, the power cable was the faulty part. In that situation the sensor only gets USB power, the voltage is insufficient, and the LED stays solid red; replacing the power cable fixes it.
With the environment ready, open VS2015, create a new WPF application solution, add a reference to Kinect v1.8, and put using Microsoft.Kinect; at the top of the code file. On the imaging side, Kinect development covers three main features: capturing color data, capturing depth data, and skeleton tracking; on top of that, the microphone array can capture audio.
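Before writing any real code, it can be useful to check from code whether a sensor is actually detected and fully powered (this ties in with the power-cable problem above). The following is a minimal sketch of my own rather than something from the SDK samples; it only relies on the KinectSensor.KinectSensors collection, its StatusChanged event and the KinectStatus enum, while the KinectStatusWatcher class name is purely hypothetical.

using System.Windows;
using Microsoft.Kinect;

namespace KinectWpfApplication1
{
    // Hypothetical helper: reports sensor status changes so a missing or
    // under-powered Kinect (e.g. a bad power cable) is easy to spot.
    public static class KinectStatusWatcher
    {
        public static void Start()
        {
            // Fires whenever a sensor is plugged in, unplugged, or changes state.
            KinectSensor.KinectSensors.StatusChanged += OnStatusChanged;

            // Also report the state of sensors that are already connected.
            foreach (KinectSensor sensor in KinectSensor.KinectSensors)
            {
                Report(sensor.Status);
            }
        }

        private static void OnStatusChanged(object sender, StatusChangedEventArgs e)
        {
            Report(e.Status);
        }

        private static void Report(KinectStatus status)
        {
            // NotPowered matches the "solid red light" case described above:
            // the sensor sees USB power only and the external supply is missing.
            if (status == KinectStatus.NotPowered)
                MessageBox.Show("Kinect detected but not powered - check the power cable.");
            else if (status == KinectStatus.Connected)
                MessageBox.Show("Kinect connected and ready.");
        }
    }
}

Calling KinectStatusWatcher.Start() once, for example from the window constructor, is enough; it is just a diagnostic aid and not required for the rest of the walkthrough.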
The Kinect has two cameras, a color camera and a depth camera, so the first program simply displays the color stream and the depth stream they produce. In MainWindow.xaml, drag two Image controls from the toolbox onto the window, each 640*480 and not overlapping, and name them depthImage and colorImage; then add the attributes Loaded="Window_Loaded" and Closed="Window_Closed" to the Window tag. The final XAML looks like this:
<Window x:Class="KinectWpfApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:local="clr-namespace:KinectWpfApplication1"
        xmlns:WpfViewers="clr-namespace:Microsoft.Samples.Kinect.WpfViewers;assembly=Microsoft.Samples.Kinect.WpfViewers"
        mc:Ignorable="d"
        Title="MainWindow" Height="590" Width="1296"
        Loaded="Window_Loaded" Closed="Window_Closed">
    <Grid>
        <Image x:Name="depthImage" HorizontalAlignment="Left" Height="480" Margin="650,0,-0.4,0" VerticalAlignment="Top" Width="640"/>
        <Image x:Name="colorImage" HorizontalAlignment="Left" Height="480" VerticalAlignment="Top" Width="640"/>
    </Grid>
</Window>
The Kinect is accessed through the ready-made KinectSensor class, which manages the sensor's resources. The class also supports several Kinects working at the same time, but since I could only get hold of one unit, I won't cover the multi-sensor case. Declare a field KinectSensor _kinect; call StartKinect() from Window_Loaded(); and define StartKinect() as follows:
private void StartKinect()
{
    if (KinectSensor.KinectSensors.Count <= 0)
    {
        MessageBox.Show("No Kinect device found!");
        return;
    }
    _kinect = KinectSensor.KinectSensors[0];
    //MessageBox.Show("Status:" + _kinect.Status);
    _kinect.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    _kinect.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    _kinect.ColorFrameReady += new EventHandler<ColorImageFrameReadyEventArgs>(KinectColorFrameReady);
    _kinect.DepthFrameReady += new EventHandler<DepthImageFrameReadyEventArgs>(KinectDepthFrameReady);
    //_kinect.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(_kinect_AllFrameReady);
    _kinect.Start();
}
The initial if tells us how many Kinects are available and shows a message if there are none; otherwise we take the first sensor. Next we enable the color and depth streams and choose their format: color encoding, resolution and frame rate. The first-generation color stream only offers 640*480 at 30 FPS and 1280*960 at 12 FPS; the second generation beats it in both resolution and frame rate. Then we register the event handlers, which is a good moment to introduce the two programming models Kinect supports: the event model and the polling model. The event model is what the code above uses: once a color frame has been captured, the ColorFrameReady event fires and the frame is processed through its event args; depth frames and skeleton tracking work the same way. Besides the per-stream events there is AllFramesReady, which fires after all three frame types are ready, but handling everything there made my program extremely sluggish, so I don't use it. The polling model is the opposite of the event model's "wait for the Kinect to hand us data": the application actively asks the Kinect for the next frame. It is faster and better suited to multithreading; I will cover it in a later post, and a minimal sketch appears after the walkthrough below. The advantage of the event model is readability: the code simply looks more elegant. The handlers registered for the color and depth data look like this:
private void KinectColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorImageFrame = e.OpenColorImageFrame())
    {
        if (colorImageFrame == null)
            return;
        byte[] pixels = new byte[colorImageFrame.PixelDataLength];
        colorImageFrame.CopyPixelDataTo(pixels);
        int stride = colorImageFrame.Width * 4;
        colorImage.Source = BitmapSource.Create(colorImageFrame.Width, colorImageFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
    }
}

private void KinectDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthImageFrame = e.OpenDepthImageFrame())
    {
        if (depthImageFrame == null)
            return;
        short[] depthPixelData = new short[depthImageFrame.PixelDataLength];
        depthImageFrame.CopyPixelDataTo(depthPixelData);
        byte[] pixels = ConvertDepthFrameToColorFrame(depthPixelData, ((KinectSensor)sender).DepthStream);
        int stride = depthImageFrame.Width * 4;
        depthImage.Source = BitmapSource.Create(depthImageFrame.Width, depthImageFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
    }
}
ConvertDepthFrameToColorFrame() converts the depth data into color pixel data so that it can be shown in the Image control.
/// <summary>
/// Converts the 16-bit grayscale depth frame into a 32-bit color depth image.
/// </summary>
/// <param name="depthImageFrame">16-bit grayscale depth data</param>
/// <param name="depthImageStream">used to read properties of the depth stream</param>
/// <returns></returns>
private byte[] ConvertDepthFrameToColorFrame(short[] depthImageFrame, DepthImageStream depthImageStream)
{
    byte[] depthFrame32 = new byte[depthImageStream.FrameWidth * depthImageStream.FrameHeight * bgr32BytesPerPixel];

    // Read the valid depth range from the stream instead of hard-coding it.
    int tooNearDepth = depthImageStream.TooNearDepth;
    int tooFarDepth = depthImageStream.TooFarDepth;
    int unknowDepth = depthImageStream.UnknownDepth;

    for (int i16 = 0, i32 = 0; i16 < depthImageFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
    {
        int player = depthImageFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
        int realDepth = depthImageFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        // Use bit shifting to squeeze the 13-bit depth value into 8 bits.
        byte intensity = (byte)(~(realDepth >> 4));

        if (player == 0 && realDepth == 0)
        {
            depthFrame32[i32 + redIndex] = 255;
            depthFrame32[i32 + greenIndex] = 255;
            depthFrame32[i32 + blueIndex] = 255;
        }
        else if (player == 0 && realDepth == tooFarDepth)
        {
            // Dark purple
            depthFrame32[i32 + redIndex] = 66;
            depthFrame32[i32 + greenIndex] = 0;
            depthFrame32[i32 + blueIndex] = 66;
        }
        else if (player == 0 && realDepth == unknowDepth)
        {
            // Dark brown
            depthFrame32[i32 + redIndex] = 66;
            depthFrame32[i32 + greenIndex] = 66;
            depthFrame32[i32 + blueIndex] = 33;
        }
        else
        {
            depthFrame32[i32 + redIndex] = (byte)(intensity >> intensityShiftByPlayerR[player]);
            depthFrame32[i32 + greenIndex] = (byte)(intensity >> intensityShiftByPlayerG[player]);
            depthFrame32[i32 + blueIndex] = (byte)(intensity >> intensityShiftByPlayerB[player]);
        }
    }
    return depthFrame32;
}
Here player is the player index the Kinect derives from the depth data, which tells us how many people are in view and which pixels belong to them; pixels belonging to a person are drawn in vivid colors.
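For reference, each 16-bit depth sample packs a 3-bit player index in the low bits and the distance in millimeters in the remaining 13 bits. The snippet below is only an illustrative breakdown with a made-up raw value, not part of the project code:

// Illustration only: decode one raw 16-bit depth sample by hand.
// In SDK v1.x, PlayerIndexBitmask is 0x0007 and PlayerIndexBitmaskWidth is 3.
short raw = 0x3A81;                                            // made-up sample value
int player = raw & DepthImageFrame.PlayerIndexBitmask;         // 0x3A81 & 0x0007 = 1 -> player 1
int depthMm = raw >> DepthImageFrame.PlayerIndexBitmaskWidth;  // 0x3A81 >> 3 = 1872 -> about 1.87 m
Console.WriteLine("player " + player + ", depth " + depthMm + " mm");

Note that, as far as I know, the player index only becomes non-zero when skeleton tracking is enabled; with only the depth stream enabled, every pixel reports player 0.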
Finally, shut down the Kinect in Window_Closed:
private void Window_Closed(object sender, EventArgs e)
{
    if (_kinect != null)
    {
        if (_kinect.Status == KinectStatus.Connected)
        {
            _kinect.Stop();
        }
    }
}
With that done, the project can be built and run. The complete code is attached at the end of the article.
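As promised above, here is a minimal sketch of the polling model. It is my own illustration rather than code from this project: instead of handling DepthFrameReady, a background thread keeps asking the depth stream for its next frame through OpenNextFrame. The 100 ms timeout and the ProcessDepthData placeholder are arbitrary choices of mine.

using System.Threading;
using Microsoft.Kinect;

// Illustration of the polling model: a worker thread repeatedly pulls
// depth frames instead of waiting for the DepthFrameReady event.
public class DepthPollingLoop
{
    private readonly KinectSensor _sensor;
    private volatile bool _running;

    public DepthPollingLoop(KinectSensor sensor)
    {
        _sensor = sensor;
    }

    public void Start()
    {
        _running = true;
        new Thread(PollLoop) { IsBackground = true }.Start();
    }

    public void Stop()
    {
        _running = false;
    }

    private void PollLoop()
    {
        while (_running)
        {
            // Wait up to 100 ms (an arbitrary value) for the next depth frame.
            using (DepthImageFrame frame = _sensor.DepthStream.OpenNextFrame(100))
            {
                if (frame == null)
                    continue; // timed out, try again

                short[] data = new short[frame.PixelDataLength];
                frame.CopyPixelDataTo(data);
                // Hand the raw data to whatever processing is needed (placeholder).
                ProcessDepthData(data, frame.Width, frame.Height);
            }
        }
    }

    private void ProcessDepthData(short[] data, int width, int height)
    {
        // Placeholder: convert to an image, run detection, and so on.
    }
}

The depth stream still has to be enabled and the sensor started before polling, and as far as I remember the SDK does not let you mix polling and a FrameReady handler on the same stream, so it is one model or the other per stream.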
The Kinect SDK supports development in both C++ and C#. Because C# is simpler, and VS2015 is smart enough that the purpose of many methods is obvious from their names, I chose C#. Microsoft introduces the concept of NUI (natural user interface) in that book, and between what I have learned about Kinect development and the recent release of HoloLens, I feel that Kinect + HoloLens would be a perfect match: one handles data processing and display, the other handles human-computer interaction, bridging the real and virtual worlds. NUI seems bound to be the future, and 90% of it will likely be realized through the Kinect or other devices with similar capabilities. Even though the Kinect's market share is tiny and applications are scarce, I can't help suspecting that Microsoft is playing a long game here, using it to define the operating system of the future.
The code also contains a method that converts the depth data into a 256-level grayscale image and marks the human body region in light green. For some reason this method becomes extremely laggy as soon as a person is recognized; if anyone reading this knows why, please let me know.
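I have not tracked the stall down, but one commonly cited culprit for this kind of UI lag is allocating a brand-new BitmapSource (plus a large byte array) on the UI thread for every frame. A frequently suggested mitigation, sketched below purely as an assumption and not a confirmed fix for this particular problem, is to create a single WriteableBitmap once and overwrite its pixels each frame; the _depthBitmap field and ShowDepthPixels helper are hypothetical names.

// Sketch (untested against this particular stall): reuse one WriteableBitmap
// for the depth image instead of calling BitmapSource.Create on every frame.
private WriteableBitmap _depthBitmap;   // hypothetical field in MainWindow

private void ShowDepthPixels(byte[] pixels, int width, int height)
{
    if (_depthBitmap == null)
    {
        _depthBitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgr32, null);
        depthImage.Source = _depthBitmap;   // assign once; the control keeps the same source
    }
    // Copy the new frame into the existing bitmap; no per-frame BitmapSource allocation.
    _depthBitmap.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 4, 0);
}

It could be called from KinectDepthFrameReady in place of the BitmapSource.Create line. The complete listing mentioned above follows: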
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Microsoft.Kinect;
using System.Threading;

namespace KinectWpfApplication1
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        private KinectSensor _kinect;

        const float maxDepthDistance = 4095;
        const float minDepthDistance = 850;
        const float maxDepthDistanceOffset = maxDepthDistance - minDepthDistance;

        private const int redIndex = 2;
        private const int greenIndex = 1;
        private const int blueIndex = 0;

        private static readonly int[] intensityShiftByPlayerR = { 1, 2, 0, 2, 0, 0, 2, 0 };
        private static readonly int[] intensityShiftByPlayerG = { 1, 2, 2, 0, 2, 0, 0, 1 };
        private static readonly int[] intensityShiftByPlayerB = { 1, 0, 2, 2, 0, 2, 0, 2 };

        private static readonly int bgr32BytesPerPixel = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;

        public MainWindow()
        {
            InitializeComponent();
        }

        private void Window_Loaded(object sender, RoutedEventArgs e)
        {
            StartKinect();
        }

        private void StartKinect()
        {
            if (KinectSensor.KinectSensors.Count <= 0)
            {
                MessageBox.Show("No Kinect device found!");
                return;
            }
            _kinect = KinectSensor.KinectSensors[0];
            //MessageBox.Show("Status:" + _kinect.Status);
            _kinect.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            _kinect.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            _kinect.ColorFrameReady += new EventHandler<ColorImageFrameReadyEventArgs>(KinectColorFrameReady);
            _kinect.DepthFrameReady += new EventHandler<DepthImageFrameReadyEventArgs>(KinectDepthFrameReady);
            //_kinect.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(_kinect_AllFrameReady);
            _kinect.Start();
        }

        private void KinectColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
        {
            using (ColorImageFrame colorImageFrame = e.OpenColorImageFrame())
            {
                if (colorImageFrame == null)
                    return;
                byte[] pixels = new byte[colorImageFrame.PixelDataLength];
                colorImageFrame.CopyPixelDataTo(pixels);
                int stride = colorImageFrame.Width * 4;
                colorImage.Source = BitmapSource.Create(colorImageFrame.Width, colorImageFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
            }
        }

        private void KinectDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame depthImageFrame = e.OpenDepthImageFrame())
            {
                if (depthImageFrame == null)
                    return;
                short[] depthPixelData = new short[depthImageFrame.PixelDataLength];
                depthImageFrame.CopyPixelDataTo(depthPixelData);
                byte[] pixels = ConvertDepthFrameToColorFrame(depthPixelData, ((KinectSensor)sender).DepthStream);
                int stride = depthImageFrame.Width * 4;
                depthImage.Source = BitmapSource.Create(depthImageFrame.Width, depthImageFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
            }
            /*
            using (DepthImageFrame depthImageFrame = e.OpenDepthImageFrame())
            {
                if (depthImageFrame == null)
                    return;
                byte[] pixels = ConvertDepthFrameToGrayFrame(depthImageFrame);
                int stride = depthImageFrame.Width * 4;
                depthImage.Source = BitmapSource.Create(depthImageFrame.Width, depthImageFrame.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
            }*/
        }

        /// <summary>
        /// Monochrome histogram formula: returns a 256-level gray value; the darker the pixel, the farther away it is.
        /// </summary>
        /// <param name="dis">depth value; valid range is ......</param>
        /// <returns></returns>
        private static byte CalculateIntensityFromDepth(int dis)
        {
            return (byte)(255 - (255 * Math.Max(dis - minDepthDistance, 0) / maxDepthDistanceOffset));
        }

        /// <summary>
        /// Builds a BGR32-format pixel byte array from the depth frame.
        /// </summary>
        /// <param name="depthImageFrame"></param>
        /// <returns></returns>
        private byte[] ConvertDepthFrameToGrayFrame(DepthImageFrame depthImageFrame)
        {
            short[] rawDepthData = new short[depthImageFrame.PixelDataLength];
            depthImageFrame.CopyPixelDataTo(rawDepthData);

            byte[] pixels = new byte[depthImageFrame.Height * depthImageFrame.Width * 4];

            for (int depthIndex = 0, colorIndex = 0; depthIndex < rawDepthData.Length && colorIndex < pixels.Length; depthIndex++, colorIndex += 4)
            {
                int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
                int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

                if (depth <= 900)
                {
                    // Very close to the Kinect
                    pixels[colorIndex + blueIndex] = 255;
                    pixels[colorIndex + greenIndex] = 0;
                    pixels[colorIndex + redIndex] = 0;
                }
                else if (depth > 900 && depth < 2000)
                {
                    pixels[colorIndex + blueIndex] = 0;
                    pixels[colorIndex + greenIndex] = 255;
                    pixels[colorIndex + redIndex] = 0;
                }
                else if (depth >= 2000)
                {
                    // More than 2 meters from the Kinect
                    pixels[colorIndex + blueIndex] = 0;
                    pixels[colorIndex + greenIndex] = 0;
                    pixels[colorIndex + redIndex] = 255;
                }

                // Monochrome histogram shading
                byte intensity = CalculateIntensityFromDepth(depth);
                pixels[colorIndex + blueIndex] = intensity;
                pixels[colorIndex + greenIndex] = intensity;
                pixels[colorIndex + redIndex] = intensity;

                // If this pixel belongs to a person, mark it light green
                /*if (player > 0)
                {
                    pixels[colorIndex + blueIndex] = Colors.LightGreen.B;
                    pixels[colorIndex + greenIndex] = Colors.LightGreen.G;
                    pixels[colorIndex + redIndex] = Colors.LightGreen.R;
                }*/
            }
            return pixels;
        }

        /// <summary>
        /// Converts the 16-bit grayscale depth frame into a 32-bit color depth image.
        /// </summary>
        /// <param name="depthImageFrame">16-bit grayscale depth data</param>
        /// <param name="depthImageStream">used to read properties of the depth stream</param>
        /// <returns></returns>
        private byte[] ConvertDepthFrameToColorFrame(short[] depthImageFrame, DepthImageStream depthImageStream)
        {
            byte[] depthFrame32 = new byte[depthImageStream.FrameWidth * depthImageStream.FrameHeight * bgr32BytesPerPixel];

            // Read the valid depth range from the stream instead of hard-coding it.
            int tooNearDepth = depthImageStream.TooNearDepth;
            int tooFarDepth = depthImageStream.TooFarDepth;
            int unknowDepth = depthImageStream.UnknownDepth;

            for (int i16 = 0, i32 = 0; i16 < depthImageFrame.Length && i32 < depthFrame32.Length; i16++, i32 += 4)
            {
                int player = depthImageFrame[i16] & DepthImageFrame.PlayerIndexBitmask;
                int realDepth = depthImageFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
                // Use bit shifting to squeeze the 13-bit depth value into 8 bits.
                byte intensity = (byte)(~(realDepth >> 4));

                if (player == 0 && realDepth == 0)
                {
                    depthFrame32[i32 + redIndex] = 255;
                    depthFrame32[i32 + greenIndex] = 255;
                    depthFrame32[i32 + blueIndex] = 255;
                }
                else if (player == 0 && realDepth == tooFarDepth)
                {
                    // Dark purple
                    depthFrame32[i32 + redIndex] = 66;
                    depthFrame32[i32 + greenIndex] = 0;
                    depthFrame32[i32 + blueIndex] = 66;
                }
                else if (player == 0 && realDepth == unknowDepth)
                {
                    // Dark brown
                    depthFrame32[i32 + redIndex] = 66;
                    depthFrame32[i32 + greenIndex] = 66;
                    depthFrame32[i32 + blueIndex] = 33;
                }
                else
                {
                    depthFrame32[i32 + redIndex] = (byte)(intensity >> intensityShiftByPlayerR[player]);
                    depthFrame32[i32 + greenIndex] = (byte)(intensity >> intensityShiftByPlayerG[player]);
                    depthFrame32[i32 + blueIndex] = (byte)(intensity >> intensityShiftByPlayerB[player]);
                }
            }
            return depthFrame32;
        }

        private void Window_Closed(object sender, EventArgs e)
        {
            if (_kinect != null)
            {
                if (_kinect.Status == KinectStatus.Connected)
                {
                    _kinect.Stop();
                }
            }
        }
    }
}