In this paper, we present a novel algorithm for probabilistically updating and rasterizing semantic maps within 3D Gaussian Splatting (3D-GS). Although previous methods have introduced algorithms that learn to rasterize features in 3D-GS for enhanced scene understanding, 3D-GS can fail without warning, which presents a challenge for safety-critical robotic applications. To address this gap, we propose a method that extends continuous semantic mapping from voxels to ellipsoids, combining the precise structure of 3D-GS with the uncertainty quantification of probabilistic robotic maps. Given a set of images, our algorithm performs a probabilistic semantic update directly on the 3D ellipsoids, using conjugate priors to obtain an expectation and a variance. We also propose a probabilistic rasterization that returns per-pixel segmentation predictions with quantifiable uncertainty. We compare our method with similar probabilistic voxel-based methods to verify our extension to 3D ellipsoids, and perform ablation studies on uncertainty quantification and temporal smoothing.
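The conjugate-prior semantic update mentioned above can be illustrated with a Dirichlet–Categorical model: each ellipsoid carries a Dirichlet concentration vector that is updated with per-class observation counts, and the posterior yields both an expected class distribution and a per-class variance. The following is a minimal sketch under that assumption; the function name, array shapes, and the choice of a uniform prior are illustrative and not taken from the paper.

```python
import numpy as np

def dirichlet_semantic_update(alpha, class_counts):
    """Conjugate Dirichlet-Categorical update for one ellipsoid's semantics.

    alpha        : prior concentration vector, shape (K,)
    class_counts : observed per-class counts from new images, shape (K,)
    Returns the posterior concentrations, the expected class
    probabilities, and the per-class marginal variance.
    """
    # Conjugacy: posterior concentrations are prior plus observed counts.
    alpha_post = alpha + class_counts
    a0 = alpha_post.sum()
    # Dirichlet mean: E[p_k] = alpha_k / alpha_0.
    mean = alpha_post / a0
    # Dirichlet marginal variance: Var[p_k] = alpha_k (alpha_0 - alpha_k)
    #                                         / (alpha_0^2 (alpha_0 + 1)).
    var = alpha_post * (a0 - alpha_post) / (a0**2 * (a0 + 1.0))
    return alpha_post, mean, var

# Hypothetical usage: uniform prior over 3 classes, then 6 observations.
alpha_post, mean, var = dirichlet_semantic_update(
    np.ones(3), np.array([5.0, 1.0, 0.0]))
```

As more observations accumulate, `a0` grows and the variance shrinks, which is how this style of update quantifies the confidence of each ellipsoid's semantic label.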