# Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be adapted to feed-forward models, it may not be ideally suited for generalized scenarios. In this paper, we propose Generative Densification, an efficient and generalizable method to densify Gaussians generated by feed-forward models. Unlike the 3D-GS densification strategy, which iteratively splits and clones raw Gaussian parameters, our method up-samples feature representations from the feed-forward models and generates the corresponding fine Gaussians in a single forward pass, leveraging the embedded prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.

Beyond improving the performance of feed-forward models under sparse-view conditions, this approach also offers a general solution for efficient, detail-rich 3D reconstruction.
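
As a rough illustration of the single-pass idea described in the abstract, the sketch below up-samples each coarse Gaussian's feature vector into K fine Gaussians with a small MLP decoder. This is a minimal sketch of the mechanism, not the authors' architecture: all names (`GenerativeDensifier`, `feat_dim`, the 0.05 offset scale, the 14-channel Gaussian parameterization) are hypothetical assumptions for the sake of the example.

```python
# Minimal sketch of generative densification: up-sample each coarse
# Gaussian's feature into K fine Gaussians in one forward pass.
# All module and parameter names are hypothetical, not from the paper.
import torch
import torch.nn as nn

class GenerativeDensifier(nn.Module):
    def __init__(self, feat_dim: int = 128, k: int = 4, gauss_dim: int = 14):
        # gauss_dim = 3 (position offset) + 3 (scale) + 4 (rotation quaternion)
        #           + 1 (opacity) + 3 (RGB), a common 3D-GS parameterization.
        super().__init__()
        self.k = k
        # Up-sampler: one coarse feature -> K fine features.
        self.upsample = nn.Sequential(
            nn.Linear(feat_dim, feat_dim * k),
            nn.GELU(),
        )
        # Decoder: fine feature -> fine Gaussian parameters.
        self.decode = nn.Linear(feat_dim, gauss_dim)

    def forward(self, feats: torch.Tensor, positions: torch.Tensor):
        # feats:     (N, feat_dim) per-Gaussian features from the feed-forward model
        # positions: (N, 3)        coarse Gaussian centers
        n = feats.shape[0]
        fine_feats = self.upsample(feats).view(n * self.k, -1)  # (N*K, feat_dim)
        params = self.decode(fine_feats)                        # (N*K, gauss_dim)
        offset, rest = params[:, :3], params[:, 3:]
        # Each fine Gaussian is placed near its parent's center; the 0.05
        # bound on the offset is an arbitrary choice for this sketch.
        centers = positions.repeat_interleave(self.k, dim=0) + 0.05 * torch.tanh(offset)
        return centers, rest

# Usage: densify 1024 coarse Gaussians into 4096 fine ones in a single pass.
densifier = GenerativeDensifier()
feats = torch.randn(1024, 128)
pos = torch.randn(1024, 3)
centers, params = densifier(feats, pos)
print(centers.shape, params.shape)  # torch.Size([4096, 3]) torch.Size([4096, 11])
```

Because the up-sampler operates on learned features rather than raw Gaussian parameters, the fine Gaussians can inherit the prior knowledge embedded in the feed-forward backbone, which is what distinguishes this scheme from 3D-GS split-and-clone heuristics.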