Environment: Ubuntu 14.04 + ROS Indigo + ORB-SLAM2 (ThinkPad T460s)
1. Installing ORB-SLAM
Pangolin
Pangolin has a few dependencies; install them as prompted during configuration.
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake ..
make -j
OpenCV
Versions 2.4.8 and 2.4.11 both work; 3.2 has not been tested but should work as well.
Note that OpenCV compatibility is a frequent source of problems, and the header file paths also change between versions.
It is therefore better to build OpenCV from source. You can keep several commonly used versions built on the same machine: to remove one, run sudo make uninstall in its build directory; to install one, run sudo make install in its build directory. This makes switching between versions fairly quick.
Eigen
sudo apt-get install libeigen3-dev
Eigen is a header-only library, installed by default under /usr/include/eigen3/. Because Eigen's location often causes CMakeLists.txt to fail to find it, ORB-SLAM ships a FindEigen3.cmake file that helps locate Eigen3; you can also reuse this file in your own projects to find the Eigen headers.
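Reusing that file in your own project can be sketched as follows; the cmake_modules/ directory name is an assumption here, mirroring where ORB-SLAM2 keeps FindEigen3.cmake:

```cmake
# Sketch: letting CMake find Eigen via ORB-SLAM2's FindEigen3.cmake.
# Assumes FindEigen3.cmake was copied into a cmake_modules/ folder next to this file.
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)
find_package(Eigen3 REQUIRED)
include_directories(${EIGEN3_INCLUDE_DIR})
```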
DBoW2 and g2o
Both are provided in ORB-SLAM's Thirdparty directory; after downloading the ORB-SLAM source code, just use the build script it provides.
Install ORB-SLAM inside the ROS workspace catkin_ws. If you do not understand how ROS works, go through the Beginner Level tutorials on the ROS website first.
cd catkin_ws/src
git clone https://github.com/raulmur/ORB_SLAM2.git
Run the build.sh script in the ORB-SLAM directory:
cd ORB_SLAM2
./build.sh
# build.sh
echo "Configuring and building Thirdparty/DBoW2 ..."
cd Thirdparty/DBoW2
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
cd ../../g2o
echo "Configuring and building Thirdparty/g2o ..."
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
cd ../../../
echo "Uncompress vocabulary ..."
cd Vocabulary
tar -xf ORBvoc.txt.tar.gz
cd ..
echo "Configuring and building ORB_SLAM2 ..."
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j
This builds DBoW2, g2o, and ORB-SLAM, and uncompresses the DBoW2 vocabulary file. ORB-SLAM also loads this 100+ MB file at startup, which takes a while.
2. Installing the laptop camera driver and calibrating the camera
1. Use Bosch's usb_cam: A ROS Driver for V4L USB Cameras
cd catkin_ws/src
git clone https://github.com/bosch-ros-pkg/usb_cam.git
cd ../
catkin_make
Download a black-and-white calibration checkerboard, print it, and stick it on a flat board.
2. Build the ROS camera calibration package
rosdep install camera_calibration
rosmake camera_calibration
3. Start usb_cam to get the image stream from the laptop camera
# Optional: if the usb_cam driver is not installed yet, run sudo apt-get install ros-indigo-usb-cam
roslaunch usb_cam usb_cam-test.launch
4. Start the calibration program
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/usb_cam/image_raw camera:=/usb_cam
Once the calibration window appears, move the checkerboard along x (left/right), y (up/down), size (toward/away from the camera), and skew (tilt) until the x, y, size, and skew progress bars all turn green.
Then press the CALIBRATE button and wait a while for the calibration to complete.
After it completes, press Commit; the terminal will print the path of the resulting calibration YAML file. Open it, rewrite it following the format of TUM1.yaml, name it mycam.yaml, and copy it to /home/shang/catkin_ws/src/ORB_SLAM2/Examples/Monocular/.
The only values you need to add are the image dimensions, Camera.width and Camera.height.
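The mapping from the calibrator's output to ORB-SLAM's Camera.* keys can be sketched in a few shell commands. The here-document below stands in for the file the calibrator writes, with example numbers; in the row-major 3x3 camera matrix, fx, cx, fy, and cy are the 1st, 3rd, 5th, and 6th entries of the data list, and the distortion list is [k1, k2, p1, p2, k3]:

```shell
# Sketch: pulling the intrinsics out of the calibrator's YAML.
# The here-doc stands in for the real output file; values are examples.
cat > /tmp/head_camera.yaml <<'EOF'
camera_matrix:
  rows: 3
  cols: 3
  data: [626.31, 0, 280.83, 0, 624.09, 234.96, 0, 0, 1]
EOF
# Strip brackets/commas from the camera_matrix data line, then split the numbers
K=$(awk '/camera_matrix/{f=1} f && /data/ {gsub(/[][,]/," "); sub(/.*data: */,""); print; exit}' /tmp/head_camera.yaml)
set -- $K
echo "Camera.fx: $1"
echo "Camera.cx: $3"
echo "Camera.fy: $5"
echo "Camera.cy: $6"
```

Copying the numbers over by hand works just as well; the point is only which positions in the matrix the four parameters come from.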
My T460s camera's calibration result and ORB-SLAM parameters are:
%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV)
Camera.fx: 626.3131886043523
Camera.fy: 624.0872390416225
Camera.cx: 280.8331825622062
Camera.cy: 234.9590765749035

Camera.k1: 0.1226796723026339
Camera.k2: -0.1753096021786491
Camera.p1: 0.003319071389844154
Camera.p2: -0.01267716347709299
Camera.k3: 0

Camera.width: 640
Camera.height: 480

# Camera frames per second
Camera.fps: 30.0

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1000

# ORB Extractor: Scale factor between levels in the scale pyramid
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500
3. Running ORB-SLAM with the laptop camera
At this point the preparation is complete.
1. Add ORB-SLAM's ROS package path to the environment variable
# Change /home/shang/catkin_ws to your catkin workspace
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/shang/catkin_ws/src/ORB_SLAM2/Examples/ROS
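To check that the variable was extended correctly, you can print it one entry per line (the path below is the example path from above):

```shell
# List each ROS_PACKAGE_PATH entry on its own line and confirm the ORB_SLAM2 entry is there
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:/home/shang/catkin_ws/src/ORB_SLAM2/Examples/ROS
echo "$ROS_PACKAGE_PATH" | tr ':' '\n' | grep ORB_SLAM2
```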
2. Build ORB-SLAM's ROS node
cd src/ORB_SLAM2/Examples/ROS/ORB_SLAM2
mkdir build
cd build
cmake .. -DROS_BUILD_TYPE=Release
make -j
3. This step is the most important!
The topic the ORB ROS node subscribes to and the topic usb_cam publishes to have different names!
There are two ways to deal with this. The first takes more work but helps you understand how ROS operates; the second is very simple: change the subscribed topic in the ORB_SLAM code and rebuild.
Approach 1:
Write a custom ROS package so that ORB-SLAM's ROS node receives the topic on which the laptop camera publishes its images.
The problem is that the ORB-SLAM ROS node subscribes to /camera/image_raw, while the laptop camera publishes its image stream on /usb_cam/image_raw; you can see this with rostopic list -v or rosnode list.
So we need to write a ROS node of our own that connects the two topics; we choose to define a new ROS package.
cd catkin_ws/src
catkin_create_pkg orb_image_transport image_transport cv_bridge
cd ..
catkin_make
cd src/orb_image_transport
gedit orb_image_converter.cpp
orb_image_converter.cpp subscribes to the laptop camera images and republishes them on a topic that ORB-SLAM subscribes to.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>  // headers for OpenCV's image processing and GUI modules
#include <opencv2/highgui/highgui.hpp>

static const std::string OPENCV_WINDOW = "Image window";  // name of the display window

class ImageConverter
{
  ros::NodeHandle nh_;                  // node handle
  image_transport::ImageTransport it_;  // used to create the publisher and subscriber
  image_transport::Subscriber image_sub_;
  image_transport::Publisher image_pub_;

public:
  ImageConverter() : it_(nh_)
  {
    // Subscribe to the input video feed and publish the output video feed
    image_sub_ = it_.subscribe("/usb_cam/image_raw", 1, &ImageConverter::imageCb, this);
    //image_pub_ = it_.advertise("/image_converter/output_video", 1);
    image_pub_ = it_.advertise("/camera/image_raw", 1);

    cv::namedWindow(OPENCV_WINDOW);  // OpenCV HighGUI call to create a display window on start-up
  }

  ~ImageConverter()
  {
    cv::destroyWindow(OPENCV_WINDOW);  // destroy the display window on shutdown
  }

  void imageCb(const sensor_msgs::ImageConstPtr& msg)
  {
    cv_bridge::CvImagePtr cv_ptr;
    try
    {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }

    cv::imshow(OPENCV_WINDOW, cv_ptr->image);
    cv::waitKey(3);

    // Republish the video stream
    image_pub_.publish(cv_ptr->toImageMsg());
  }
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_converter");
  ImageConverter ic;
  ros::spin();
  return 0;
}
Then add the following at the end of CMakeLists.txt:
add_executable(orb_image_converter orb_image_converter.cpp)
target_link_libraries(orb_image_converter ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
After catkin_make, everything is in place.
Note that no custom message types are used here, so no other changes to package.xml or CMakeLists.txt are needed.
Finally, run everything to get ORB-SLAM working on the laptop camera:
roslaunch usb_cam usb_cam-test.launch
rosrun orb_image_transport orb_image_converter
rosrun ORB_SLAM2 Mono /home/shang/catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/shang/catkin_ws/src/ORB_SLAM2/Examples/Monocular/mycam.yaml
# change /home/shang to your directory
You can also run all the nodes from a single script:
demo.sh
gnome-terminal -x bash -c "rosrun orb_image_transport orb_image_converter; exec $SHELL"
gnome-terminal -x bash -c "rosrun ORB_SLAM2 Mono /home/shang/catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/shang/catkin_ws/src/ORB_SLAM2/Examples/Monocular/mycam.yaml; exec $SHELL"
roslaunch usb_cam usb_cam-test.launch
Just run ./demo.sh.
Approach 2:
In hindsight, the first approach is clumsy. Once Bosch's usb_cam ROS camera driver package is installed, the camera images are published on /usb_cam/image_raw, so it is enough to change the topic ORB subscribes to from /camera/image_raw to /usb_cam/image_raw in the code. The change goes in ros_mono.cc under the ROS directory; the stereo, RGB-D, and AR demos work the same way.
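This edit can be sketched with sed. The ORB_SLAM2_DIR variable is just an example way to point at your checkout, and the command assumes the string /camera/image_raw appears verbatim in ros_mono.cc:

```shell
# Swap the subscribed topic in ros_mono.cc (assumes the literal string is present)
ORB_SLAM2_DIR=${ORB_SLAM2_DIR:-$HOME/catkin_ws/src/ORB_SLAM2}   # adjust to your checkout
sed -i 's|/camera/image_raw|/usb_cam/image_raw|g' "$ORB_SLAM2_DIR/Examples/ROS/ORB_SLAM2/src/ros_mono.cc"
```

After the change, rebuild the ROS node (the cmake/make steps above).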
This way, only the following two commands are needed:
roslaunch usb_cam usb_cam-test.launch
rosrun ORB_SLAM2 Mono /home/shang/catkin_ws/src/ORB_SLAM2/Vocabulary/ORBvoc.txt /home/shang/catkin_ws/src/ORB_SLAM2/Examples/ROS/ORB_SLAM2/mycam.yaml