| Project Page | Paper |
This part is the same as the original threestudio. Skip it if you have already installed the environment.
See installation.md for additional information, including installation via Docker.
- We test our code on an NVIDIA graphics card with 80GB VRAM and with CUDA installed.
- Install Python >= 3.8.
- (Optional, Recommended) Create a virtual environment:
python3 -m virtualenv sketchDream
. sketchDream/bin/activate
# Newer pip versions, e.g. pip-23.x, can be much faster than old versions, e.g. pip-20.x.
# For instance, it caches the wheels of git packages to avoid unnecessarily rebuilding them later.
python3 -m pip install --upgrade pip
- Install PyTorch >= 1.12. We have tested on torch1.12.1+cu113, but other versions should also work fine.
# torch1.12.1+cu113
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
- (Optional, Recommended) Install ninja to speed up the compilation of CUDA extensions:
pip install ninja
- Install dependencies:
pip install -r requirements.txt
Download the pretrained models from the HuggingFace Model Page and put them into the folder "models".
We provide three test examples. Simply run:
./scripts/golden_fish.sh
If you want to test your own sketches, see ./tools/depth_predict.py
and ./scripts/depth_predict.sh
for generating the corresponding depth maps. You may need to try different seeds to generate satisfactory depth maps.
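When sweeping seeds, it can help to script the runs. The sketch below only prints the commands it would execute; the sketch filename and the `--seed` flag are assumptions for illustration, so check ./scripts/depth_predict.sh and ./tools/depth_predict.py for the real arguments:

```shell
# Hypothetical seed sweep for depth-map prediction; the sketch path and
# the --seed flag are placeholders, not the script's confirmed interface.
SKETCH=my_sketch.png
for seed in 0 42 123 2024; do
    cmd="bash ./scripts/depth_predict.sh ${SKETCH} --seed ${seed}"
    echo "would run: ${cmd}"    # replace echo with ${cmd} to actually run
done
```

Inspect the resulting depth maps and keep the seed that best matches your sketch.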
We provide two test examples for the refine editing stage. Download the data from Google_Drive and unzip it into ./asserts/. Then, run:
./scripts/bark_editing/run_refine_editing.sh
If you want to get the coarse editing results, simply run:
./scripts/bark_editing/run_coarse_editing.sh
We will release more examples soon.
- For sketch-based generation, the hyperparameters can be tuned to generate the best results. If you prefer higher-quality objects over sketch faithfulness, increase "Diffusion_2D_prob" and reduce "lambda_mask". If you prefer more sketch-faithful results, do the opposite, or set "four_view" to False.
- If GPU memory is limited, you can turn off the soft shading as in MVDream-project and reduce the resolution.
- We are sorry that the current complete editing process is complicated for your own 3D models. You need to manually draw the editing sketches and predict the depth maps. The model_mask_edit.obj (extracted from the coarse editing process) should be manually corrected in MeshLab if you want the best refine-editing results. For the refinement stage, the parameters should be carefully checked, as underscored in the yaml file.
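As a concrete starting point for the tuning above, a more sketch-faithful configuration might look like the fragment below. The parameter names come from the tips in this section, but their exact position in the yaml hierarchy and the example values are assumptions; check the shipped config files for the real layout.

```yaml
# Hypothetical config excerpt; nesting and values are illustrative only.
system:
  Diffusion_2D_prob: 0.3   # lower leans toward sketch faithfulness, higher toward quality
  lambda_mask: 100.0       # higher leans toward sketch faithfulness, lower toward quality
  four_view: true          # set to false for even stronger sketch adherence
```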
This code is built on the threestudio-project, MVDream-project, and pose-warping. Thanks to the maintainers for their contribution to the community!
If you find SketchDream helpful, please consider citing:
@article{SketchDream2024,
  author  = {Liu, Feng-Lin and Fu, Hongbo and Lai, Yu-Kun and Gao, Lin},
  title   = {SketchDream: Sketch-based Text-to-3D Generation and Editing},
  journal = {ACM Transactions on Graphics (Proceedings of ACM SIGGRAPH 2024)},
  year    = {2024},
  volume  = {43},
  number  = {4}
}