Joanna Waczynska, Piotr Borycki, Joanna Kaleta, Slawomir Tadeja, Przemysław Spurek
This repository contains the official authors' implementation associated with the paper "D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup".
Abstract: Over the past years, we have observed an abundance of approaches for modeling dynamic 3D scenes using Gaussian Splatting (GS). Such solutions use GS to represent the scene's structure and a neural network to model dynamics. They allow fast rendering and extracting each element of such a dynamic scene. However, modifying such objects over time is challenging. SC-GS (Sparse Controlled Gaussian Splatting), enhanced with Deformed Control Points, partially solves this issue. However, this approach necessitates selecting elements that need to be kept fixed, as well as centroids that should be adjusted throughout editing. Moreover, this task poses additional difficulties regarding the reproducibility of such editing. To address this, we propose Dynamic Multi-Gaussian Soup (D-MiSo), which allows us to model a mesh-inspired representation of dynamic GS. Additionally, we propose a strategy of linking parameterized Gaussian splats, forming a Triangle Soup with the estimated mesh. Consequently, we can separately construct new trajectories for the 3D objects composing the scene. Thus, we can make the scene's dynamics editable over time, or edit them while maintaining partial dynamics.
Project page available under the LINK.
Check us out if you want to have fun with animations:
Multiple animations are possible:
Since the software is based on the original Gaussian Splatting repository, we kindly direct you to 3DGS for details regarding requirements. Here we present the most important information.
- Conda (recommended)
- CUDA-ready GPU with Compute Capability 7.0+
- CUDA toolkit 11 for PyTorch extensions (we used 11.8)
# SSH
git clone [email protected]:waczjoan/D-MiSo.git --recursive
or
# HTTPS
git clone https://github.com/waczjoan/D-MiSo.git --recursive
To install the required Python packages we used Python 3.7 and 3.8 and conda v. 24.1.0:
conda env create --file environment.yml
conda activate dmiso
Common issues:
- Are you sure you downloaded the repository with the --recursive flag?
- Please note that this process assumes that you have CUDA SDK 11 installed, not 12. If you encounter a problem, please refer to the 3DGS repository.
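After activating the environment, you can quickly verify that PyTorch sees a CUDA-capable GPU with the required compute capability; a minimal sketch, assuming a CUDA-enabled PyTorch build:

```python
# Quick sanity check of the CUDA setup (assumes a CUDA-enabled PyTorch build).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print("compute capability:", f"{major}.{minor}")            # 7.0+ is required
    print("CUDA toolkit used by PyTorch:", torch.version.cuda)  # we used 11.8
```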
Download a dataset and put it in the data directory.
- D-NeRF Datasets: the dataset is available under the link.
- NeRF-DS: we used the dataset available under the link.
- PanopticSports Datasets: find the scenes under the link.
If you would only like to check the renders, we share two pretrained models for jumpingjacks from the D-NeRF Datasets:
- with black background: link
- with white background: link

Additionally, here we share two modified triangle soups needed to render the modifications.
In this section we describe more details and show, step by step, how to train and render D-MiSo.
- Go to the D-NeRF Datasets, download the jumpingjacks dataset and put it into the data directory. For example:
<D-MiSo>
|---data
| |---<jumpingjacks>
| |---<mutant>
| |---...
|---train.py
|---metrics.py
|---...
- Train the model:
train.py --eval -s "data/jumpingjacks" -m "output/jumpingjacks" --iterations 80000 --warm_up 2000 --densify_until_iter 5000 --num_gauss 100000 --num_splat 25 --batch_size 10 -r 2 --is_blender
Tip 1: use -w if you want a white background.
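If you would like to train several D-NeRF scenes with the same settings, one option is to loop over them from Python; a minimal sketch (the scene list and the subprocess wrapper are our own convenience, not part of the repository):

```python
# Convenience sketch: launch train.py for several D-NeRF scenes with the flags shown above.
import subprocess

scenes = ["jumpingjacks", "mutant"]  # scene folders placed under data/
for scene in scenes:
    subprocess.run(
        [
            "python", "train.py", "--eval",
            "-s", f"data/{scene}", "-m", f"output/{scene}",
            "--iterations", "80000", "--warm_up", "2000", "--densify_until_iter", "5000",
            "--num_gauss", "100000", "--num_splat", "25",
            "--batch_size", "10", "-r", "2", "--is_blender",
        ],
        check=True,  # stop if one training run fails
    )
```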
In output/jumpingjacks you should find:
<D-MiSo>
|---data
| |---<jumpingjacks>
| | |---transforms_train.json
| | |---...
|---output
| |---<jumpingjacks>
| | |---deform
| | |---time_net
| | |---point_cloud
| | |---xyz
| | |---cfg_args
| | |---...
|---train.py
|---metrics.py
|---...
- Evaluation:
First, let's check renders in the initial position. In this scenario, run:
render.py -m output/jumpingjacks
Use --skip_train if you would like to skip the train dataset during rendering.
Then, let's calculate metrics:
python metrics.py -m output/jumpingjacks
In output/jumpingjacks you should find:
<D-MiSo>
|---output
| |---<jumpingjacks>
| | |---point_cloud
| | |---cfg_args
| | |---test
| | | |---<ours_best>
| | |---results.json
| | |---...
|---metrics.py
|---...
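The aggregated metrics end up in results.json. A minimal sketch for printing them (we assume the file follows the 3DGS convention of one entry per evaluated checkpoint with PSNR/SSIM/LPIPS values):

```python
# Print the metrics stored in results.json (structure assumed to follow the 3DGS format).
import json

with open("output/jumpingjacks/results.json") as f:
    results = json.load(f)

for checkpoint, metrics in results.items():
    print(checkpoint, {name: round(value, 4) for name, value in metrics.items()})
```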
- Save Triangle-Soups.
Since the modifications are based on creating triangle soups, we will need a Blender-compatible .obj. Use the script:
scripts/render_pseudomesh.py -m "output/jumpingjacks"
Note: if necessary, modify it according to your needs!
In output/jumpingjacks you should find:
<D-MiSo>
|---output
| |---<jumpingjacks>
| | |---point_cloud
| | |---cfg_args
| | |---triangle_soups
| | | |---<ours_best>
| | | | |---core_triangle_soups
| | | | |---sub_triangle_soups
| | |---results.json
| | |---...
|---metrics.py
|---...
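Before editing a soup, you may want to sanity-check an exported .obj, e.g. by counting its triangles. A sketch assuming the trimesh package is installed; the file path below is only an example and should be adjusted to your folder names:

```python
# Inspect an exported triangle soup (assumes `pip install trimesh`; the path is an example).
import trimesh

soup = trimesh.load(
    "output/jumpingjacks/triangle_soups/ours_best/sub_triangle_soups/sub_triangle_soup_0.obj",
    force="mesh",    # merge everything into a single mesh for counting
    process=False,   # keep the duplicated soup vertices intact
)
print("vertices:", len(soup.vertices))
print("triangles:", len(soup.faces))
```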
- Own modification (for Blender users):
You can prepare your own, more realistic transformation. Open the Blender app (you can download it from https://www.blender.org/) and import the created objects from the sub_triangle_soups folder. Create a modification and save it: File -> Export -> Wavefront (.obj).
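If you prefer scripting the import/export instead of using the menus, here is a rough sketch using Blender's Python API (run it from Blender's scripting tab; the paths are examples, and the operator names differ between Blender 3.x and 4.x):

```python
# Import the soup .obj files into Blender, then export the edited result.
# Paths are examples; adjust them to your output directory.
import glob
import bpy

soup_dir = "output/jumpingjacks/triangle_soups/ours_best/sub_triangle_soups"
for path in glob.glob(f"{soup_dir}/*.obj"):
    bpy.ops.wm.obj_import(filepath=path)  # Blender 4.x; on 3.x use bpy.ops.import_scene.obj

# ... edit the imported objects in the viewport or via the API ...

bpy.ops.wm.obj_export(filepath="sub_triangle_soup_modification_1.obj")  # Blender 4.x exporter
```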
NOTE: For a first use of our code we prepared a pre-trained model for jumpingjacks. Download it from the link and save it as follows:
<D-MiSo>
|---output
| |---<jumpingjacks_pre_trained_white_bg>
| |---<jumpingjacks_pre_trained_black_bg>
|---metrics.py
|---...
Please also download transforms_renders.json from the link and put it in the data/<dataset> directory, for example:
<D-MiSo>
|---data
| |---<jumpingjacks>
| | |---transforms_renders.json
| | |---transforms_train.json
| | |---...
| |---<mutant>
| |---...
|---train.py
|---metrics.py
|---...
We prepared a modification for jumpingjacks_pre_trained_white_bg; please find it here:
<D-MiSo>
|---output
| |---<jumpingjacks_pre_trained_white_bg>
| | |---triangle_soups
| | | |---selected
| | | | |---sub_triangle_soup_example_time_0.2300.obj
| | | | |---sub_triangle_soup_modification_1.obj
| | | | |---sub_triangle_soup_modification_2.obj
|---metrics.py
|---...
sub_triangle_soup_example_time_0.2300.obj is a selected .obj created using scripts/render_pseudomesh.py.
- Render modification:
To create renders based on the created .obj, run:
scripts/render_based_on_obj.py -m "output/jumpingjacks_pre_trained_white_bg" --objpath "output/jumpingjacks_pre_trained_white_bg/triangle_soups/selected/sub_triangle_soup_modification_1.obj"
Please also check sub_triangle_soup_modification_2.obj:
- sub_triangle_soup_modification_1 -- head rotation
- sub_triangle_soup_modification_2 -- hand and leg movement
NOTE! The script scripts/render_based_on_obj.py uses transforms_renders.json to define views. Please adjust it according to your needs.
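If you are unsure what transforms_renders.json contains before adjusting it, a small sketch to peek at its structure (we assume a NeRF/Blender-style camera file with a list of frames; check the file you downloaded):

```python
# Peek at transforms_renders.json before editing the views it defines.
import json

with open("data/jumpingjacks/transforms_renders.json") as f:
    transforms = json.load(f)

print("top-level keys:", list(transforms.keys()))
if "frames" in transforms:  # NeRF/Blender-style layout (assumption)
    print("number of views:", len(transforms["frames"]))
    print("first frame keys:", list(transforms["frames"][0].keys()))
```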
In output/jumpingjacks you should find additional_views with the new renders:
<D-MiSo>
|---output
| |---<jumpingjacks>
| | |---point_cloud
| | |---cfg_args
| | |---additional_views
| | |---results.json
| | |---...
|---metrics.py
|---...
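To turn the renders in additional_views into an animation, one option is to stack the frames into a GIF; a sketch assuming the imageio package and PNG frames (the output file names may differ, so adjust the glob pattern):

```python
# Assemble rendered frames into a GIF (assumes `pip install imageio`; adjust the glob pattern).
import glob
import imageio

frame_paths = sorted(glob.glob("output/jumpingjacks/additional_views/*.png"))
frames = [imageio.imread(p) for p in frame_paths]
imageio.mimsave("jumpingjacks_modification.gif", frames, duration=0.05)  # ~20 fps
```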
@Article{waczyńska2024dmiso,
author = {Joanna Waczyńska and Piotr Borycki and Joanna Kaleta and Sławomir Tadeja and Przemysław Spurek},
title = {D-MiSo: Editing Dynamic 3D Scenes using Multi-Gaussians Soup},
year = {2024},
eprint = {2405.14276},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
@Article{kerbl3Dgaussians,
author = {Kerbl, Bernhard and Kopanas, Georgios and Leimk{\"u}hler, Thomas and Drettakis, George},
title = {3D Gaussian Splatting for Real-Time Radiance Field Rendering},
journal = {ACM Transactions on Graphics},
number = {4},
volume = {42},
month = {July},
year = {2023},
url = {https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/}
}
@Article{huang2023sc,
author = {Huang, Yi-Hua and Sun, Yang-Tian and Yang, Ziyi and Lyu, Xiaoyang and Cao, Yan-Pei and Qi, Xiaojuan},
title = {SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes},
journal = {arXiv preprint arXiv:2312.14937},
year = {2023}
}