Relighting radiance fields is severely underconstrained for multi-view data, which is most often captured under a single illumination condition; it is especially hard for full scenes containing multiple objects. We introduce a method to create relightable radiance fields from such single-illumination data by exploiting priors extracted from 2D image diffusion models. We first fine-tune a 2D diffusion model on a multi-illumination dataset, conditioned on light direction, allowing us to augment a single-illumination capture into a realistic -- but possibly inconsistent -- multi-illumination dataset with directly defined light directions. We use this augmented data to create a relightable radiance field represented by 3D Gaussian splats. To allow direct control of light direction for low-frequency lighting, we represent appearance with a multi-layer perceptron parameterized on light direction. To enforce multi-view consistency and overcome inaccuracies, we optimize a per-image auxiliary feature vector. We show results on synthetic and real multi-view data captured under single illumination, demonstrating that our method successfully exploits 2D diffusion model priors to enable realistic 3D relighting of complete scenes.
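The appearance model described above can be sketched as a small MLP that maps a per-Gaussian feature, a light direction, and a per-image auxiliary latent to an RGB color. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, the single hidden layer, and the function and variable names (`relightable_color`, `FEAT_DIM`, `AUX_DIM`, etc.) are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical sizes -- the abstract does not specify architecture details.
FEAT_DIM, LIGHT_DIM, AUX_DIM, HIDDEN = 32, 3, 8, 64

rng = np.random.default_rng(0)
W1 = rng.standard_normal((HIDDEN, FEAT_DIM + LIGHT_DIM + AUX_DIM)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((3, HIDDEN)) * 0.1
b2 = np.zeros(3)

def relightable_color(feat, light_dir, aux):
    """Map a per-Gaussian appearance feature, a normalized light direction,
    and a per-image auxiliary latent to an RGB color in [0, 1]."""
    x = np.concatenate([feat, light_dir, aux])
    h = np.maximum(0.0, W1 @ x + b1)             # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid keeps RGB in [0, 1]

# One Gaussian lit from two different directions: conditioning the MLP on
# light direction lets the lighting directly modulate predicted appearance.
feat = rng.standard_normal(FEAT_DIM)
aux = rng.standard_normal(AUX_DIM)  # in the method, optimized per training image
rgb_top = relightable_color(feat, np.array([0.0, 0.0, 1.0]), aux)
rgb_side = relightable_color(feat, np.array([1.0, 0.0, 0.0]), aux)
```

During training, the auxiliary latent `aux` would be optimized jointly with the MLP, absorbing the per-image inconsistencies of the diffusion-generated relit images so the shared appearance model stays multi-view consistent.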