Gaussian splatting has achieved impressive results in both novel-view synthesis and surface reconstruction from multi-view images. However, current methods still struggle to reconstruct high-quality surfaces from only sparse input views. In this paper, we propose a novel method, SolidGS, to address this problem. We observe that the reconstructed geometry can be severely inconsistent across views, owing to the properties of the Gaussian kernel used in geometry rendering. This motivates us to consolidate all Gaussians by adopting a more solid kernel function, which effectively improves surface reconstruction quality. With the additional help of geometric regularization and monocular normal estimation, our method outperforms all Gaussian splatting and neural field methods on sparse-view surface reconstruction on the widely used DTU, Tanks-and-Temples, and LLFF datasets.
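To illustrate the intuition behind a "more solid" kernel, the sketch below contrasts a standard Gaussian falloff with a hypothetical flattened (generalized Gaussian) variant. This is an assumption for illustration only, not the paper's exact formulation: raising the squared distance to a power `beta` flattens the kernel's interior toward 1 and sharpens its boundary, so the surface depth implied by the kernel varies less across viewing directions.

```python
import numpy as np

def gaussian_kernel(d2):
    """Standard Gaussian falloff used in splatting: exp(-d^2 / 2),
    where d2 is the (Mahalanobis) squared distance to the Gaussian center."""
    return np.exp(-0.5 * d2)

def solid_kernel(d2, beta=4.0):
    """Hypothetical 'solidified' kernel for illustration: a generalized
    Gaussian exp(-(d^2 / 2)^beta). Larger beta keeps the interior close
    to 1 and makes the falloff near the boundary much steeper, approaching
    an indicator (solid) function as beta grows."""
    return np.exp(-(0.5 * d2) ** beta)

# Compare the two kernels over a range of squared distances.
d2 = np.linspace(0.0, 4.0, 9)
standard = gaussian_kernel(d2)
solid = solid_kernel(d2)
# Inside the support the solid kernel stays near 1; outside it decays faster,
# which is the qualitative behavior motivating the consolidation of Gaussians.
```

Under this assumed form, the solid kernel dominates the Gaussian near the center and falls below it in the tails, giving a sharper, more view-consistent effective surface.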