The KITTI dataset already ships with a MATLAB interface; this post explains its usage in detail and shows how to connect it with PCL. PCL is a C++ point cloud processing library; see http://pointclouds.org/ for details.
The code can be downloaded from the official KITTI website, or from my GitHub: https://github.com/ZouCheng321/fusion_kitti. To run it, you must first run make.m to compile.
The LiDAR-camera demo is run_demoVelodyne.m.
As an example, this post reads the first frame of the 2011_09_26_drive_0005_sync sequence.
First, set the data paths and basic parameters:
if nargin < 1
  base_dir = './data/2011_09_26_drive_0005_sync'; % sequence directory
end
if nargin < 2
  calib_dir = './data/2011_09_26';                % calibration directory
end
cam   = 2; % camera index 2 (left color camera)
frame = 0; % frame index (0-based)
Load the calibration files:
% load calibration
calib = loadCalibrationCamToCam(fullfile(calib_dir,'calib_cam_to_cam.txt'));
Tr_velo_to_cam = loadCalibrationRigid(fullfile(calib_dir,'calib_velo_to_cam.txt'));

% compute projection matrix velodyne->image plane
R_cam_to_rect = eye(4);
R_cam_to_rect(1:3,1:3) = calib.R_rect{1};
P_velo_to_img = calib.P_rect{cam+1}*R_cam_to_rect*Tr_velo_to_cam; % intrinsics * rectification * extrinsics
Here P_velo_to_img is the projection matrix K·R·T_cl described earlier: Tr_velo_to_cam is the rigid transform R·T_cl from the Velodyne frame to the camera frame, R_cam_to_rect is the rectifying rotation of the reference camera, and calib.P_rect{cam+1} holds the rectified camera intrinsics.
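To make the matrix product concrete, here is a minimal sketch of how a single Velodyne point in homogeneous coordinates maps to pixel coordinates (the point X below is made up for illustration); this is essentially what the project function used later does for the whole cloud:

% Project one hypothetical Velodyne point (10 m ahead of the sensor).
X = [10; 0; 0; 1];      % homogeneous LiDAR coordinates [x; y; z; 1]
p = P_velo_to_img * X;  % 3x1 homogeneous image coordinates
u = p(1) / p(3);        % pixel column
v = p(2) / p(3);        % pixel row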
Next, load the image and the Velodyne scan, and remove LiDAR points behind the image plane:
% load and display image
img = imread(sprintf('%s/image_%02d/data/%010d.png',base_dir,cam,frame));
fig = figure('Position',[20 100 size(img,2) size(img,1)]);
axes('Position',[0 0 1 1]);
imshow(img); hold on;

% load velodyne points (x, y, z, reflectance)
fid = fopen(sprintf('%s/velodyne_points/data/%010d.bin',base_dir,frame),'rb');
velo = fread(fid,[4 inf],'single')';
velo = velo(1:5:end,:); % keep only every 5th point for display speed
fclose(fid);

% remove all points behind image plane (approximation)
idx = velo(:,1)<5;
velo(idx,:) = [];
Finally, the projection itself:
velo_img = project(velo(:,1:3),P_velo_to_img);
Display the points on the image, with distance encoded as color:
cols = jet; % colormap used to encode depth (the index formula below assumes 64 rows)
for i=1:size(velo_img,1)
  col_idx = round(64*5/velo(i,1)); % closer points get warmer colors; points nearer than 5 m were removed above
  plot(velo_img(i,1),velo_img(i,2),'o','LineWidth',4,'MarkerSize',1,'Color',cols(col_idx,:));
end
Next, I will show how to build and save a colored point cloud, in preparation for later algorithms.
First, remove points that project outside the image boundaries:
X_plane = round(velo_img(:,2)); % pixel row
Y_plane = round(velo_img(:,1)); % pixel column
cloud   = velo(:,1:3);

% drop points whose projection falls outside the image
indice = find(X_plane > size(img,1));
X_plane(indice) = []; Y_plane(indice) = []; cloud(indice,:) = [];
indice = find(X_plane < 1);
X_plane(indice) = []; Y_plane(indice) = []; cloud(indice,:) = [];
indice = find(Y_plane > size(img,2));
X_plane(indice) = []; Y_plane(indice) = []; cloud(indice,:) = [];
indice = find(Y_plane < 1);
X_plane(indice) = []; Y_plane(indice) = []; cloud(indice,:) = [];
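The same boundary check can also be written more compactly with a single logical mask; a minimal equivalent sketch, assuming the variable names above:

% Keep only points whose rounded pixel coordinates fall inside the image.
valid = X_plane >= 1 & X_plane <= size(img,1) & ...
        Y_plane >= 1 & Y_plane <= size(img,2);
X_plane = X_plane(valid);
Y_plane = Y_plane(valid);
cloud   = cloud(valid,:);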
Then look up the RGB value of the image pixel that each point falls on:
R = img(:,:,1);
G = img(:,:,2);
B = img(:,:,3);
induv = sub2ind(size(R),X_plane,Y_plane); % linear indices of the projected pixels
cloud(:,4) = double(R(induv))/255+1;
cloud(:,5) = double(G(induv))/255+1;
cloud(:,6) = double(B(induv))/255+1;
Finally, save the point cloud in PCD format with the external helper savepcd.m:
savepcd('color_cloud.pcd',cloud');
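If the Computer Vision Toolbox is installed, the colored cloud can also be sanity-checked directly in MATLAB before switching to PCL; a small sketch, assuming the cloud matrix built above (the colors are shifted back from the [1,2] range used earlier to standard uint8):

% Quick visual check of the colored point cloud (requires Computer Vision Toolbox).
rgb = uint8(255 * (cloud(:,4:6) - 1)); % undo the +1 offset, rescale to 0-255
pc  = pointCloud(cloud(:,1:3), 'Color', rgb);
pcshow(pc);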
Next, display the point cloud with the PCL library; make sure PCL is installed on your machine:
cd view
mkdir build
cd build
cmake ..
make
./cloud_viewer ../../color_cloud.pcd
You can now see the fused colored point cloud:
How to build a map from all the frames will be covered in a later post.