3D Gaussian Splatting (3DGS) combines the strengths of primitive-based representations and volumetric rendering techniques, enabling real-time, high-quality rendering. However, 3DGS models typically overfit to single-scene training and are highly sensitive to the initialization of Gaussian ellipsoids, which is heuristically derived from Structure-from-Motion (SfM) point clouds; this limits both generalization and practicality. To address these limitations, we propose GS-Net, a generalizable, plug-and-play 3DGS module that densifies Gaussian ellipsoids from sparse SfM point clouds, enhancing geometric structure representation. To the best of our knowledge, GS-Net is the first plug-and-play 3DGS module with cross-scene generalization capability. Additionally, we introduce the CARLA-NVS dataset, which incorporates additional camera viewpoints to thoroughly evaluate reconstruction and rendering quality. Extensive experiments demonstrate that applying GS-Net to 3DGS yields a PSNR improvement of 2.08 dB for conventional viewpoints and 1.86 dB for novel viewpoints, confirming the method's effectiveness and robustness.
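To put the reported gains in perspective, PSNR is a logarithmic measure, so a fixed dB improvement corresponds to a multiplicative reduction in mean squared error. The sketch below is not part of GS-Net; it is a minimal, standard PSNR computation (assuming images normalized to [0, 1]) plus the arithmetic showing that a 2.08 dB gain implies roughly a 1.6x reduction in MSE.

```python
import numpy as np

def psnr(img_a, img_b, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A PSNR gain of d dB means MSE shrinks by a factor of 10**(d / 10):
gain_db = 2.08  # reported conventional-viewpoint improvement
mse_reduction_factor = 10 ** (gain_db / 10)  # ~1.61x lower MSE
```

So the 2.08 dB improvement corresponds to roughly a 38% reduction in per-pixel mean squared error relative to the 3DGS baseline.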