Implicit neural representations and 3D Gaussian splatting (3DGS) have shown great potential for scene reconstruction. Recent studies have extended their application to autonomous reconstruction through task-assignment methods. However, these methods are largely limited to a single robot, and rapid reconstruction of large-scale scenes remains challenging. Moreover, task-driven planning based on surface uncertainty is prone to becoming trapped in local optima. To this end, we propose the first 3DGS-based centralized multi-robot framework for autonomous 3D reconstruction. To further reduce the time cost of task generation and to improve reconstruction quality, we integrate online open-vocabulary semantic segmentation with the surface uncertainty of 3DGS, focusing view sampling on regions of high instance uncertainty. Finally, we develop a multi-robot collaboration strategy with mode and task assignment that improves reconstruction quality while ensuring planning efficiency. Our method achieves the highest reconstruction quality among all compared planning methods and superior planning efficiency over existing multi-robot methods. We deploy our method on multiple robots, and the results show that it can effectively plan view paths and reconstruct scenes with high quality.