As a lightweight handheld sensor, the Kinect is flexible and efficient for indoor scene recovery and model reconstruction. Unlike most reconstruction algorithms that rely solely on color images or solely on depth images, this paper proposes a point cloud registration algorithm that combines color and depth images, and applies it to indoor model reconstruction. The process consists of registration between adjacent frames followed by global optimization. With the Kinect accurately calibrated beforehand, the epipolar constraints formed by corresponding points from color image matching are combined with the point-to-plane constraints from iterative closest point (ICP) registration of the depth images, improving both the accuracy and the robustness of adjacent-frame registration. A coplanarity constraint on corresponding points across four consecutive frames is then used to globally optimize the pairwise registration results, further improving the accuracy of the reconstructed model. Building on the theoretical analysis, experiments verify that the proposed algorithm remains robust in scenes where Kinect Fusion fails to track and reconstruct, and that the point cloud registration and modeling accuracy is consistent with the Kinect's observation accuracy.
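To make the two constraint types concrete, the following is a minimal sketch (not the paper's implementation) of the residuals involved: point-to-plane ICP residuals from the depth data, epipolar residuals `x2ᵀ E x1` from matched color-image points (with `E = [t]ₓ R` for normalized image coordinates), and a determinant-based coplanarity measure of the kind a four-frame constraint could build on. All function names and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def point_to_plane_residuals(R, t, src, dst, normals):
    """Point-to-plane ICP residuals: n_i . (R p_i + t - q_i)."""
    transformed = src @ R.T + t
    return np.einsum('ij,ij->i', normals, transformed - dst)

def epipolar_residuals(R, t, x1, x2):
    """Epipolar residuals x2^T E x1 with E = [t]_x R,
    for normalized homogeneous image coordinates x1, x2 (N x 3)."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    E = tx @ R
    return np.einsum('ij,jk,ik->i', x2, E, x1)

def combined_cost(R, t, src, dst, normals, x1, x2, w=1.0):
    """Hypothetical weighted sum of squared depth and color constraints."""
    r_geo = point_to_plane_residuals(R, t, src, dst, normals)
    r_epi = epipolar_residuals(R, t, x1, x2)
    return float(np.sum(r_geo**2) + w * np.sum(r_epi**2))

def coplanarity_residual(p1, p2, p3, p4):
    """Scalar coplanarity measure: determinant of the three edge
    vectors spanned by four 3-D points (zero iff they are coplanar)."""
    return float(np.linalg.det(np.stack([p2 - p1, p3 - p1, p4 - p1])))
```

In a registration pipeline of this kind, both residual types would typically be stacked into one nonlinear least-squares problem over `(R, t)` and minimized with a Gauss-Newton-style solver.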