Running RGBDSLAMv2 on Ubuntu 14.04 (ROS Indigo): on datasets and in real time with Kinect

Part 1: Running RGBDSLAMv2 on a .bag dataset

RGBDSLAMv2 is the RGB-D camera SLAM system published by Felix Endres et al. in a 2014 paper; it builds 3D point-cloud or octree (OctoMap) maps.

The installation mainly follows the original GitHub repository: https://github.com/felixendres/rgbdslam_v2

My desktop hardware:

Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz;
RAM: 16.0 GB;
GPU: NVIDIA GeForce GTX 1060 6GB.

 

1. Install ROS Indigo on Ubuntu 14.04; see: http://wiki.ros.org/cn/indigo/Installation/Ubuntu

2. Install OpenCV 2.4.9; see: http://www.samontab.com/web/2014/06/installing-opencv-2-4-9-in-ubuntu-14-04-lts/

               http://blog.csdn.net/baoke485800/article/details/51236198

Update the system

sudo apt-get update
sudo apt-get upgrade

Install the build dependencies

sudo apt-get install build-essential libgtk2.0-dev libjpeg-dev libtiff4-dev libjasper-dev libopenexr-dev cmake python-dev python-numpy python-tk libtbb-dev libeigen3-dev yasm libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev libqt4-dev libqt4-opengl-dev sphinx-common texlive-latex-extra libv4l-dev libdc1394-22-dev libavcodec-dev libavformat-dev libswscale-dev default-jdk ant libvtk5-qt4-dev

Fetch the OpenCV 2.4.9 source with wget, then unzip it once the download finishes

wget http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.4.9/opencv-2.4.9.zip
unzip opencv-2.4.9.zip
cd opencv-2.4.9

Build and install OpenCV with CMake

mkdir build
cd build
cmake -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_VTK=ON ..
make -j4
sudo make install

Configure the OpenCV library path

sudo gedit /etc/ld.so.conf.d/opencv.conf

Add the following line to the opened file (it may be empty) and save

/usr/local/lib

Then run

sudo ldconfig

Open another file

sudo gedit /etc/bash.bashrc

Append the following at the end of the file, then save and exit

PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH

Check that OpenCV installed correctly

cd ~/opencv-2.4.9/samples/c
chmod +x build_all.sh
./build_all.sh
Old C interface:
./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --scale=1.5 lena.jpg

./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --nested-cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_eye.xml" --scale=1.5 lena.jpg

New C++ interface:

~/opencv-2.4.9/build/bin/cpp-example-grabcut ~/opencv-2.4.9/samples/cpp/lena.jpg

OK, the tests pass. (See the reference links above for more tests.)

OpenCV Unix downloads: https://sourceforge.net/projects/opencvlibrary/files/opencv-unix/

3. Install PCL 1.7.2 from the GitHub sources: https://github.com/PointCloudLibrary/pcl

4. Create a catkin workspace:

# Create a dedicated catkin workspace for rgbdslam

mkdir rgbdslam_catkin_ws
cd rgbdslam_catkin_ws
mkdir src
cd ~/rgbdslam_catkin_ws/src

# Initialize it as the workspace's source folder

catkin_init_workspace

# Go to the workspace root

cd ~/rgbdslam_catkin_ws/

# Build the new workspace; this generates the build and devel folders, completing the catkin workspace

catkin_make

# Source the workspace's setup file

source devel/setup.bash

5. Install g2o from source; see the original GitHub page: https://github.com/felixendres/rgbdslam_v2

6. Build and install RGBDSLAMv2

# Go to the workspace's source folder

cd ~/rgbdslam_catkin_ws/src

# Download the ROS Indigo branch of the rgbdslam source from GitHub

wget -q http://github.com/felixendres/rgbdslam_v2/archive/indigo.zip

# Unzip

unzip -q indigo.zip

# Go to the workspace root

cd ~/rgbdslam_catkin_ws/

# Update the ROS dependency database

rosdep update

Sample output:

yuanlibin@yuanlibin:~/rgbdslam_catkin_ws$ rosdep update
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/osx-homebrew.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/base.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/python.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/ruby.yaml
Hit https://raw.githubusercontent.com/ros/rosdistro/master/releases/fuerte.yaml
Query rosdistro index https://raw.githubusercontent.com/ros/rosdistro/master/index.yaml
Add distro "groovy"
Add distro "hydro"
Add distro "indigo"
Add distro "jade"
Add distro "kinetic"
Add distro "lunar"
updated cache in /home/yuanlibin/.ros/rosdep/sources.cache

# Install rgbdslam's dependencies

rosdep install rgbdslam

On success this ends with: #All required rosdeps installed successfully

# Build rgbdslam

catkin_make

On success the build ends with: [100%] Built target rgbdslam

source devel/setup.bash

Finally, run

roslaunch rgbdslam rgbdslam.launch

which fails with the following error:

NODES
  /
    rgbdslam (rgbdslam/rgbdslam)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
ERROR: cannot launch node of type [rgbdslam/rgbdslam]: rgbdslam
ROS path [0]=/opt/ros/indigo/share/ros
ROS path [1]=/opt/ros/indigo/share
ROS path [2]=/opt/ros/indigo/stacks
No processes to monitor
shutting down processing monitor...
... shutting down processing monitor complete

The fix is to add the workspace's setup file to .bashrc, e.g. on this machine:

echo "source /home/yuanlibin/rgbdslam_catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
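One caveat: `echo ... >> ~/.bashrc` appends the line again every time you run it. As a small stdlib-only Python sketch, here is an idempotent variant (the `append_once` helper is mine for illustration, not a ROS tool; the path is the one from this machine):

```python
import os

def append_once(path, line):
    """Append `line` to `path` only if it is not already present."""
    if os.path.exists(path):
        with open(path) as f:
            if line in (l.rstrip("\n") for l in f):
                return False  # already there, nothing to do
    with open(path, "a") as f:
        f.write(line + "\n")
    return True  # line was appended

# Hypothetical usage (path as on this machine):
# append_once(os.path.expanduser("~/.bashrc"),
#             "source /home/yuanlibin/rgbdslam_catkin_ws/devel/setup.bash")
```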

At this point, RGBDSLAMv2 is built and installed.

7. Download a TUM .bag dataset from: https://vision.in.tum.de/data/datasets/rgbd-dataset/download

For example: rgbd_dataset_freiburg1_xyz.bag

Inspect the dataset's topics:

Terminal 1

roscore

Terminal 2

rosbag play rgbd_dataset_freiburg1_xyz.bag

Terminal 3

rostopic info

For the last command, do not press Enter; press Tab to list the available topics (alternatively, rostopic list prints them all):

yuanlibin@yuanlibin:~$ rostopic info /
/camera/depth/camera_info  /cortex_marker_array
/camera/depth/image        /imu
/camera/rgb/camera_info    /rosout
/camera/rgb/image_color    /rosout_agg
/clock                     /tf
yuanlibin@yuanlibin:~$

Then edit the rgbdslam.launch file under /home/yuanlibin/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/launch.

Lines 8 and 9 set the input topics:

   <param name="config/topic_image_mono"              value="/camera/rgb/image_color"/> 
    <param name="config/topic_image_depth"             value="/camera/depth_registered/sw_registered/image_rect_raw"/>

They must be changed to match the dataset's topics listed above, as follows:

   <param name="config/topic_image_mono"              value="/camera/rgb/image_color"/> 
    <param name="config/topic_image_depth"             value="/camera/depth/image"/>
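If you switch between datasets often, this substitution can be scripted. A minimal sketch using Python's `xml.etree` (the `set_topic` helper and the inline string are illustrative only, not part of RGBDSLAMv2; in practice you would read and write the real launch file):

```python
import xml.etree.ElementTree as ET

# Abridged copy of the two input-topic params from rgbdslam.launch.
LAUNCH = """<launch>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam">
  <param name="config/topic_image_mono"  value="/camera/rgb/image_color"/>
  <param name="config/topic_image_depth" value="/camera/depth_registered/sw_registered/image_rect_raw"/>
</node>
</launch>"""

def set_topic(root, param_name, topic):
    """Rewrite the value of the <param> whose name matches param_name."""
    for param in root.iter("param"):
        if param.get("name") == param_name:
            param.set("value", topic)

root = ET.fromstring(LAUNCH)
set_topic(root, "config/topic_image_depth", "/camera/depth/image")
print(ET.tostring(root).decode())
```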

In this file you can also change the feature type used by the system:

SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB.

8. Run RGBDSLAMv2 on the dataset

Terminal 1

roscore

Terminal 2

rosbag play rgbd_dataset_freiburg1_xyz.bag

Terminal 3

roslaunch rgbdslam rgbdslam.launch

Finally, you can watch RGBDSLAMv2 reconstruct a 3D point cloud from the dataset.

Part 2: Running RGBDSLAMv2 in real time with Kinect v1

1. Install and test the Kinect v1 driver under ROS Indigo; see: http://www.cnblogs.com/yuanlibin/p/8608190.html

2. Run the following commands:

Terminal 1

roscore

Terminal 2

roslaunch rgbdslam openni+rgbdslam.launch

3. Move the Kinect v1 around and you will see the 3D point cloud reconstructed in real time.

Part 3: Running RGBDSLAMv2 in real time with Kinect v2

1. Start the Kinect v2 and inspect its output topics:

Terminal 1

roslaunch kinect2_bridge kinect2_bridge.launch

Terminal 2 (after typing rostopic info, do not press Enter; press Tab to list the topics)

yuanlibin@yuanlibin:~$ rostopic info /
/kinect2/bond
/kinect2/hd/camera_info
/kinect2/hd/image_color
/kinect2/hd/image_color/compressed
/kinect2/hd/image_color_rect
/kinect2/hd/image_color_rect/compressed
/kinect2/hd/image_depth_rect
/kinect2/hd/image_depth_rect/compressed
/kinect2/hd/image_mono
/kinect2/hd/image_mono/compressed
/kinect2/hd/image_mono_rect
/kinect2/hd/image_mono_rect/compressed
/kinect2/hd/points
/kinect2/qhd/camera_info
/kinect2/qhd/image_color
/kinect2/qhd/image_color/compressed
/kinect2/qhd/image_color_rect
/kinect2/qhd/image_color_rect/compressed
/kinect2/qhd/image_depth_rect
/kinect2/qhd/image_depth_rect/compressed
/kinect2/qhd/image_mono
/kinect2/qhd/image_mono/compressed
/kinect2/qhd/image_mono_rect
--More--

2. Create a new file rgbdslam_kinect2.launch under /home/yuanlibin/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/launch with the following content:

<launch>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam" cwd="node" required="true" output="screen"> 
<!-- Input data settings-->
<param name="config/topic_image_mono"              value="/kinect2/qhd/image_color_rect"/>  
<param name="config/camera_info_topic"             value="/kinect2/qhd/camera_info"/>

<param name="config/topic_image_depth"             value="/kinect2/qhd/image_depth_rect"/>

<param name="config/topic_points"                  value=""/> <!--if empty, pointcloud will be reconstructed from image and depth -->

<!-- These are the default values of some important parameters -->
<param name="config/feature_extractor_type"        value="ORB"/><!-- also available: SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB. -->
<param name="config/feature_detector_type"         value="ORB"/><!-- also available: SIFT, SURF, GFTT (good features to track), ORB. -->
<param name="config/detector_grid_resolution"      value="3"/><!-- detect on a 3x3 grid (to spread ORB keypoints and parallelize SIFT and SURF) -->

<param name="config/optimizer_skip_step"           value="15"/><!-- optimize only every n-th frame -->
<param name="config/cloud_creation_skip_step"      value="2"/><!-- subsample the images' pixels (in both, width and height), when creating the cloud (and therefore reduce memory consumption) -->

<param name="config/backend_solver"                value="csparse"/><!-- pcg is faster and good for continuous online optimization, cholmod and csparse are better for offline optimization (without good initial guess)-->

<param name="config/pose_relative_to"              value="first"/><!-- optimize only a subset of the graph: "largest_loop" = Everything from the earliest matched frame to the current one. Use "first" to optimize the full graph, "inaffected" to optimize only the frames that were matched (not those inbetween for loops) -->

<param name="config/maximum_depth"           value="2"/>
<param name="config/subscriber_queue_size"         value="20"/>

<param name="config/min_sampled_candidates"        value="30"/><!-- Frame-to-frame comparisons to random frames (big loop closures) -->
<param name="config/predecessor_candidates"        value="20"/><!-- Frame-to-frame comparisons to sequential frames-->
<param name="config/neighbor_candidates"           value="20"/><!-- Frame-to-frame comparisons to graph neighbor frames-->
<param name="config/ransac_iterations"             value="140"/>

<param name="config/g2o_transformation_refinement"           value="1"/>
<param name="config/icp_method"           value="gicp"/>  <!-- icp, gicp ... -->

<!--
<param name="config/max_rotation_degree"           value="20"/>
<param name="config/max_translation_meter"           value="0.5"/>

<param name="config/min_matches"           value="30"/>   

<param name="config/min_translation_meter"           value="0.05"/>
<param name="config/min_rotation_degree"           value="3"/>
<param name="config/g2o_transformation_refinement"           value="2"/>
<param name="config/min_rotation_degree"           value="10"/>

<param name="config/matcher_type"         value="ORB"/>
 -->
</node>
</launch>

Note that the input-topic settings on lines 3, 4, 5, and 7 must match the topics listed above.
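A quick way to catch a mismatch is to compare the configured input topics against the list published by kinect2_bridge. A stdlib-only sketch (the `missing_topics` helper and the abridged strings are mine, for illustration; in practice the live topic list would come from rostopic list):

```python
import xml.etree.ElementTree as ET

# Topics published by kinect2_bridge (abridged from the listing above).
KINECT2_TOPICS = {
    "/kinect2/qhd/image_color_rect",
    "/kinect2/qhd/camera_info",
    "/kinect2/qhd/image_depth_rect",
}

# The three input params from rgbdslam_kinect2.launch (abridged).
LAUNCH = """<launch>
<node pkg="rgbdslam" type="rgbdslam" name="rgbdslam">
  <param name="config/topic_image_mono"  value="/kinect2/qhd/image_color_rect"/>
  <param name="config/camera_info_topic" value="/kinect2/qhd/camera_info"/>
  <param name="config/topic_image_depth" value="/kinect2/qhd/image_depth_rect"/>
</node>
</launch>"""

def missing_topics(launch_xml, live_topics):
    """Return configured input topics that the sensor does not publish."""
    root = ET.fromstring(launch_xml)
    configured = [p.get("value") for p in root.iter("param")
                  if p.get("name", "").startswith("config/") and p.get("value")]
    return [t for t in configured if t not in live_topics]

print(missing_topics(LAUNCH, KINECT2_TOPICS))  # [] when every topic matches
```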

In this file you can also change the feature type used by the system:

SIFT, SIFTGPU, SURF, SURF128 (extended SURF), ORB.

3. Finally, run RGBDSLAMv2 in real time with the Kinect v2

Terminal 1

roslaunch rgbdslam rgbdslam_kinect2.launch

Terminal 2

roslaunch kinect2_bridge kinect2_bridge.launch

Slowly move the Kinect v2 and you will see the 3D point cloud reconstructed in real time. Screenshots of my own reconstruction are below:

Figure 1 shows a panoramic 3D point-cloud map of my lab workstation;

Figure 2 shows a side view at the red dot in the panorama.

Figure 1. Panoramic 3D point cloud of the lab workstation

Figure 2. Side view at the red dot in the panorama
