This section uses a ROS+Gazebo environment to capture a laser point cloud and processes it with PCL and OpenCV. Source code: https://github.com/ZouCheng321/5_laser_camera_sim
Because the laser's field of view is much wider than a single camera's, we use five cameras to capture images, similar to a Ladybug camera.
The five images captured by the cameras:
Next, we use them to build a colored point cloud.
The transform between each camera and the laser is easy to derive, since the cameras are arranged in a regular pentagon:
Eigen::Matrix4f rt0, rt1, rt2, rt3, rt4;
rt0 << 0, 0, -1, 0,
       0, 1, 0, 0,
       1, 0, 0, 0,
       0, 0, 0, 1;
rt1 << 0, 0, -1, 0,
       -0.95105651629, 0.30901699437, 0, 0,
       0.30901699437, 0.95105651629, 0, 0,
       0, 0, 0, 1;
rt2 << 0, 0, -1, 0,
       -0.58778525229, -0.80901699437, 0, 0,
       -0.80901699437, 0.58778525229, 0, 0,
       0, 0, 0, 1;
rt3 << 0, 0, -1, 0,
       0.58778525229, -0.80901699437, 0, 0,
       -0.80901699437, -0.58778525229, 0, 0,
       0, 0, 0, 1;
rt4 << 0, 0, -1, 0,
       0.95105651629, 0.30901699437, 0, 0,
       0.30901699437, -0.95105651629, 0, 0,
       0, 0, 0, 1;
Eigen::Matrix4f inv0 = rt0.inverse(), inv1 = rt1.inverse(), inv2 = rt2.inverse(),
                inv3 = rt3.inverse(), inv4 = rt4.inverse();
RT.push_back(rt0); RT.push_back(rt1); RT.push_back(rt2); RT.push_back(rt3); RT.push_back(rt4);
INV.push_back(inv0); INV.push_back(inv1); INV.push_back(inv2); INV.push_back(inv3); INV.push_back(inv4);
The camera intrinsics, as set in the simulator:
std::vector<cv::Point2d> imagePoints;
cv::Mat intrisicMat = (cv::Mat_<double>(3, 3) <<
    476.715669286, 0,             400,
    0,             476.715669286, 400,
    0,             0,             1);                      // Intrinsic matrix (fx = fy, cx = cy = 400)
cv::Mat rVec = (cv::Mat_<double>(3, 1) << 0, 0, 0);        // Rotation vector
cv::Mat tVec = (cv::Mat_<double>(3, 1) << 0.4, 0, -0.1);   // Translation vector
cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) << 0, 0, 0, 0, 0); // Distortion vector (no distortion)
Remove the points behind the camera:
std::vector<cv::Point3d> Generate3DPoints(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud, int num)
{
    std::vector<cv::Point3d> points;
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_f(new pcl::PointCloud<pcl::PointXYZ>);
    // Transform the cloud into camera num's frame
    pcl::transformPointCloud(*cloud, *cloud_f, RT[num]);
    // Keep only points in front of the camera (0 < z < 10 m)
    pcl::PassThrough<pcl::PointXYZ> pass;
    pass.setInputCloud(cloud_f);
    pass.setFilterFieldName("z");
    pass.setFilterLimits(0.0, 10);
    //pass.setFilterLimitsNegative (true);
    pass.filter(*cloud);
    cout << "size:" << cloud->size() << endl;
    for (size_t i = 0; i < cloud->points.size(); i++)   // note: < (the original <= read one past the end)
    {
        points.push_back(cv::Point3d(cloud->points[i].x, cloud->points[i].y, cloud->points[i].z));
    }
    return points;
}
Project the points in front of the camera onto the image plane, using OpenCV's built-in projectPoints function:
cv::projectPoints(objectPoints, rVec, tVec, intrisicMat, distCoeffs, imagePoints);
Keep only the points that fall inside the image:
for (size_t i = 0; i < imagePoints.size(); i++)
{
    int col = (int)round(imagePoints[i].x);   // u is the horizontal coordinate -> column
    int row = (int)round(imagePoints[i].y);   // v is the vertical coordinate -> row
    // Bounds-check after rounding, so e.g. u = 799.7 (rounds to 800) is rejected
    if (col >= 0 && col < 800 && row >= 0 && row < 800)
    {
        pcl::PointXYZRGB point;
        point.x = cloud->points[i].x;
        point.y = cloud->points[i].y;
        point.z = cloud->points[i].z;
        // cv::Mat is indexed (row, col), and OpenCV stores channels as BGR
        point.r = _I(row, col)[2];
        point.g = _I(row, col)[1];
        point.b = _I(row, col)[0];
        colored_cloud->points.push_back(point);
    }
}
Finally, display the full colored point cloud:
pcl::visualization::PCLVisualizer viewer("Cloud viewer");
viewer.addPointCloud(colored_cloud_sum, "sample cloud");
viewer.setBackgroundColor(0, 0, 0);
while (!viewer.wasStopped())
    viewer.spinOnce(100);
To build the project:
cd 5_laser_camera_sim
mkdir build
cd build
cmake ..
make
./color
You should see the following: