In this repo, we release our data, code, and pre-trained models for the Neural Shape Compiler [project page].

We have released our data, model files, pre-trained models, and some of the inference code. More inference, testing, and training code is on the way.

Please follow steps 1-4 below:
1. Clone the repo and create a conda environment:

```bash
git clone --recurse-submodules https://github.com/tiangeluo/ShapeCompiler.git
conda create --name shapecompiler python=3.8
conda activate shapecompiler
```
2. Install PyTorch and PyTorch3D (a quick sanity check is sketched after these steps):

```bash
# my install commands
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```
3. Install this repo and compile the CUDA operators:

```bash
cd ShapeCompiler
python setup.py install
# my gcc version is 8.2.0 for compiling cuda operators
bash compile.sh
```
4. Download the pre-trained model `shapecompiler.pt` from Google Drive, and move it to `ShapeCompiler/outputs/shapecompiler_models`.
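After finishing the steps above, you may want to confirm that PyTorch sees your GPU and that PyTorch3D imports cleanly. This is a minimal sanity-check sketch, assuming the CUDA 11.1 builds from step 2; it is not part of the repo's own scripts.

```python
# Minimal install sanity check (assumes the CUDA 11.1 builds from step 2).
import torch
import pytorch3d

print(torch.__version__)          # expect 1.9.0+cu111
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine
print(pytorch3d.__version__)
```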
```bash
# to generate point clouds conditional on text
# results will be saved in ./outputs/shapecompiler_outputs/text2pts_test1
python generate_pts_condtext.py --model_path ./shapecompiler.pt --text 'a chair has armrests, with slats between legs' --save_name 'test1'
```
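To eyeball a generated shape, a quick scatter plot is often enough. This is a hedged sketch only: it assumes the output directory contains a point cloud saved as a `.ply` file (check `./outputs/shapecompiler_outputs/text2pts_test1` for the actual file names and formats; `sample.ply` below is a hypothetical name) and that `trimesh` and `matplotlib` are installed, neither of which is a stated dependency of this repo.

```python
# Hypothetical visualization of a generated point cloud; the file name
# 'sample.ply' is a placeholder -- check the output directory for real names.
import numpy as np
import trimesh
import matplotlib.pyplot as plt

pc = np.asarray(trimesh.load('./outputs/shapecompiler_outputs/text2pts_test1/sample.ply').vertices)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(pc[:, 0], pc[:, 1], pc[:, 2], s=1)
plt.show()
```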
```bash
# to generate text conditional on point clouds
# results will be saved in ./outputs/shapecompiler_outputs/pts2text_test1
python generate_text_condpts.py --model_path ./shapecompiler.pt --pts_path './assets/example_chair.ply' --save_name 'test1'

# note that point clouds extracted from ShapeNet have a different orientation
# than the one ShapeCompiler was trained on. Assuming pc.shape = [2048, 3],
# you need to apply pc[:, 2] = -1 * pc[:, 2]. You can add the flag --inverse
# to your command line to perform this flip. If you are not confident that the
# shape orientation is correct, please visualize ./assets/example_chair.ply
```
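For reference, here is a minimal sketch of that z-flip in Python. It assumes `trimesh` is available for loading the `.ply` (trimesh is not a stated dependency of this repo; any point cloud loader works the same way).

```python
# Minimal sketch of the z-axis flip described above (what --inverse does).
# Assumes `trimesh` is installed; any .ply loader works similarly.
import numpy as np
import trimesh

pc = np.asarray(trimesh.load('./assets/example_chair.ply').vertices)  # pc.shape = [2048, 3]
pc[:, 2] = -1 * pc[:, 2]  # flip z to match the training orientation
```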
```bash
# to generate programs conditional on point clouds
# generated program parameters, program text, voxels, and extracted point
# clouds will be saved in ./outputs/shapecompiler_outputs/pts2pgm_test1
python generate_pgm_condpts.py --model_path ./shapecompiler.pt --pts_path './assets/example_chair.ply' --save_name 'test1'
```
| Description | Link |
|---|---|
| Shape Compiler, trained with all the data mentioned in the paper | Download (1.49 GB) |
| PointVQVAE, trained with ABO, ShapeNet, and Program objects | Download (107.3 MB) |
| PointVQVAE, trained with ShapeNet objects | Stay tuned |
| PointVQVAE, trained with ABO, ShapeNet, Program, and Objaverse objects | Stay tuned |
Our [shape, structural description] paired data is stored under `/data` as pickle files and can be loaded via `data = pickle.load(open('abo_text_train.pkl','rb'))`. Each pickle file contains a number of indexed entries and a `pcs_name` field. You can access a text annotation by index (e.g., `data[10]`) and its corresponding point cloud file name (e.g., `data['pcs_name'][10]`), as sketched below.
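A minimal loading sketch, built only from the access pattern described above (the file name `abo_text_train.pkl` and the `data[10]` / `data['pcs_name'][10]` accessors come from this README; adjust the path to wherever `/data` lives in your checkout):

```python
# Minimal sketch of the access pattern described above.
import pickle

with open('data/abo_text_train.pkl', 'rb') as f:
    data = pickle.load(f)

caption = data[10]               # text annotation for entry 10
pc_file = data['pcs_name'][10]   # corresponding point cloud file name
print(caption, pc_file)
```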
Please also cite ABO, ShapeNet, Text2Shape, and ShapeGlot if you use our caption data along with the objects provided in their datasets.
- Text->3D
With code: Text2Shape, DreamField, Shape IMLE, CLIPForge, MeshDiffusion
No official code release: DreamFusion, Magic3D, ShapeCrafter, Shape2VecSet, TAPS3D, 3DGen
- 3D->Program
With code: ShapeProgram, ShapeAssembly, LegoAssembly
No official code release: ProgramViaImplicitPart
- 3D->Text
With code: Scan2Cap
We thank the following open-source projects and codebases.
- PyTorch and PyTorch3D.
- Our codebase builds heavily on https://github.com/lucidrains/DALLE-pytorch.
- Our PointVQVAE implementation is built on Shaper, developed by Jiayuan.
- We follow this script to render our point clouds with Mitsuba.
If you find our work or this repo helpful, we would be happy to receive a citation.
```bibtex
@article{luo2022neural,
  title={Neural Shape Compiler: A Unified Framework for Transforming between Text, Point Cloud, and Program},
  author={Luo, Tiange and Lee, Honglak and Johnson, Justin},
  journal={arXiv preprint arXiv:2212.12952},
  year={2022}
}
```