Autonomous agents often require accurate methods for detecting and localizing changes in their environment, particularly when observations are captured from unconstrained and inconsistent viewpoints. We propose a novel label-free, pose-agnostic change detection method that integrates information from multiple viewpoints to construct a change-aware 3D Gaussian Splatting (3DGS) representation of the scene. With as few as 5 images of the post-change scene, our approach can learn additional change channels in a 3DGS and produce change masks that outperform single-view techniques. Our change-aware 3D scene representation additionally enables the generation of accurate change masks for unseen viewpoints. Experimental results demonstrate state-of-the-art performance in complex multi-object scenes, achieving a 1.7× improvement in mean Intersection over Union and a 1.6× improvement in F1 score over other baselines. We also contribute a new real-world dataset to benchmark change detection in diverse, challenging scenes in the presence of lighting variations.
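To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of attaching a learnable per-Gaussian change channel to a frozen 3DGS and fitting it from a handful of post-change views. The names `blend_weights`, `pseudo_masks`, and `change_logit` are illustrative assumptions: the per-pixel blending weights would normally come from the 3DGS rasterizer, and the pseudo labels from a label-free comparison of pre-change renders against post-change images; here both are random stand-ins so the snippet runs on its own.

```python
# Minimal sketch: per-Gaussian change channel rendered by alpha compositing.
# All tensors are synthetic stand-ins; only the optimization pattern is shown.
import torch

torch.manual_seed(0)
num_gaussians, num_views, h, w = 2000, 5, 32, 32

# Stand-in for compositing weights of Gaussian j at pixel i for each view,
# normally produced by the frozen 3DGS rasterizer.
blend_weights = torch.rand(num_views, h * w, num_gaussians)
blend_weights = blend_weights / blend_weights.sum(dim=-1, keepdim=True)

# Stand-in pseudo change labels per training view (label-free in the paper).
pseudo_masks = (torch.rand(num_views, h * w) > 0.8).float()

# The extra channel: one learnable change logit per Gaussian.
change_logit = torch.zeros(num_gaussians, requires_grad=True)
opt = torch.optim.Adam([change_logit], lr=0.05)

for step in range(200):
    opt.zero_grad()
    # Render the change channel as the alpha-weighted sum of per-Gaussian values.
    rendered = blend_weights @ torch.sigmoid(change_logit)  # (views, pixels)
    loss = torch.nn.functional.binary_cross_entropy(
        rendered.clamp(1e-6, 1 - 1e-6), pseudo_masks)
    loss.backward()
    opt.step()

# A change mask for an unseen viewpoint reuses the same per-Gaussian channel
# with that view's (here also synthetic) blending weights.
novel_weights = torch.rand(h * w, num_gaussians)
novel_weights = novel_weights / novel_weights.sum(dim=-1, keepdim=True)
novel_mask = (novel_weights @ torch.sigmoid(change_logit)).reshape(h, w) > 0.5
print("novel-view change mask:", novel_mask.shape, "final loss:", float(loss))
```

The design point this sketch captures is that, once the change channel lives on the 3D Gaussians rather than in image space, change masks for arbitrary viewpoints fall out of the same rendering pass used for color.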