High dynamic range (HDR) novel view synthesis (NVS) aims to create photorealistic images from novel viewpoints using HDR imaging techniques. Rendered HDR images capture a wider range of brightness levels and thus preserve more scene detail than ordinary low dynamic range (LDR) images. Existing HDR NVS methods are mainly based on NeRF and suffer from long training times and slow inference. In this paper, we propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS), which can efficiently render novel HDR views and reconstruct LDR images at a user-specified exposure time. Specifically, we design a Dual Dynamic Range (DDR) Gaussian point cloud model that uses spherical harmonics to fit HDR color and employs an MLP-based tone-mapper to render LDR color. The HDR and LDR colors are then fed into two Parallel Differentiable Rasterization (PDR) processes to reconstruct the HDR and LDR views. To establish a data foundation for research on 3D Gaussian splatting-based methods in HDR NVS, we recalibrate the camera parameters and compute the initial positions of the Gaussian point clouds. Experiments demonstrate that our HDR-GS surpasses the state-of-the-art NeRF-based method by 3.84 dB and 1.91 dB on LDR and HDR NVS, respectively, while enjoying 1,000× faster inference and requiring only 6.3% of the training time.
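The DDR idea described above can be illustrated with a minimal NumPy sketch: each Gaussian stores spherical-harmonic coefficients that are evaluated along the view direction to give an HDR color, and a small MLP tone-mapper maps that HDR color plus a log exposure to an LDR color in [0, 1]. All names (`sh_basis_deg1`, `hdr_color`, `ToneMapperMLP`), the degree-1 SH truncation, and the untrained random MLP weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sh_basis_deg1(dirs):
    # Real spherical harmonics up to degree 1 (4 basis values per direction).
    # dirs: (N, 3) unit view directions.
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.28209479177387814   # Y_0^0 constant
    c1 = 0.4886025119029199    # degree-1 constant
    return np.stack([np.full_like(x, c0), -c1 * y, c1 * z, -c1 * x], axis=1)

def hdr_color(sh_coeffs, dirs):
    """View-dependent HDR color from per-Gaussian SH coefficients.
    sh_coeffs: (N, 4, 3) -- 4 SH coefficients per RGB channel."""
    basis = sh_basis_deg1(dirs)                        # (N, 4)
    return np.einsum('nk,nkc->nc', basis, sh_coeffs)   # (N, 3) HDR radiance

class ToneMapperMLP:
    """Tiny MLP mapping (log HDR color, log exposure) -> LDR color in [0, 1].
    Weights here are random placeholders; in training they would be learned."""
    def __init__(self, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (4, hidden))  # 3 HDR channels + log exposure
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, 3))
        self.b2 = np.zeros(3)

    def __call__(self, hdr, log_exposure):
        # Work in the log domain; clamp to keep log() well-defined.
        log_hdr = np.log(np.maximum(hdr, 1e-6))
        x = np.concatenate(
            [log_hdr, np.full((hdr.shape[0], 1), log_exposure)], axis=1)
        h = np.maximum(0.0, x @ self.W1 + self.b1)                 # ReLU
        return 1.0 / (1.0 + np.exp(-(h @ self.W2 + self.b2)))     # sigmoid
```

In the full pipeline, the HDR color and the tone-mapped LDR color would each be supplied to a differentiable rasterizer, giving the two parallel HDR/LDR rendering branches the abstract describes.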