Commit

Merge branch 'main' into dev/mattstrong/ds
peasant98 authored Jul 26, 2024
2 parents 8801f9a + 702886b commit 7b699eb
Showing 115 changed files with 749 additions and 398 deletions.
8 changes: 4 additions & 4 deletions Dockerfile
@@ -127,18 +127,18 @@ WORKDIR /home/${USERNAME}
ENV PATH="${PATH}:/home/${USERNAME}/.local/bin"

# Upgrade pip and install packages.
RUN python3.10 -m pip install --no-cache-dir --upgrade pip setuptools pathtools promise pybind11 omegaconf
RUN python3.10 -m pip install --no-cache-dir --upgrade pip setuptools==69.5.1 pathtools promise pybind11 omegaconf

# Install pytorch and submodules
# echo "${CUDA_VERSION}" | sed 's/.$//' | tr -d '.' -- CUDA_VERSION -> delete last digit -> delete all '.'
RUN CUDA_VER=$(echo "${CUDA_VERSION}" | sed 's/.$//' | tr -d '.') && python3.10 -m pip install --no-cache-dir \
torch==2.0.1+cu${CUDA_VER} \
torchvision==0.15.2+cu${CUDA_VER} \
torch==2.1.2+cu${CUDA_VER} \
torchvision==0.16.2+cu${CUDA_VER} \
--extra-index-url https://download.pytorch.org/whl/cu${CUDA_VER}

# Install tiny-cuda-nn (we need to set the target architectures as environment variable first).
ENV TCNN_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
RUN python3.10 -m pip install --no-cache-dir git+https://github.com/NVlabs/tiny-cuda-nn.git@v1.6#subdirectory=bindings/torch
RUN python3.10 -m pip install --no-cache-dir git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch

# Install pycolmap, required by hloc.
RUN git clone --branch v0.4.0 --recursive https://github.com/colmap/pycolmap.git && \
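The comment in the hunk above documents how `CUDA_VER` is derived from `CUDA_VERSION`. A minimal sketch of that pipeline, assuming an example value of `11.8.0` (not taken from the actual base image):

```bash
# Assumed example value; in the Dockerfile the base image provides CUDA_VERSION.
CUDA_VERSION=11.8.0
# Drop the last character, then strip the dots: "11.8.0" -> "11.8." -> "118"
CUDA_VER=$(echo "${CUDA_VERSION}" | sed 's/.$//' | tr -d '.')
echo "torch==2.1.2+cu${CUDA_VER}"   # -> torch==2.1.2+cu118
```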
1 change: 1 addition & 0 deletions docs/nerfology/methods/index.md
@@ -28,6 +28,7 @@ The following methods are supported in nerfstudio:
:maxdepth: 1
Instant-NGP<instant_ngp.md>
Splatfacto<splat.md>
Splatfacto-W<splatw.md>
Instruct-NeRF2NeRF<in2n.md>
Instruct-GS2GS<igs2gs.md>
SIGNeRF<signerf.md>
4 changes: 3 additions & 1 deletion docs/nerfology/methods/splat.md
@@ -41,7 +41,9 @@ We provide a few additional variants:
| `depth-splatfacto` | Default Model, Depth Supervision | ~6GB | Fast |
| `splatfacto-big` | More Gaussians, Higher Quality | ~12GB | Slower |

A full evaluation of Nerfstudio's implementation of Gaussian Splatting against the original Inria method can be found [here](https://docs.gsplat.studio/tests/eval.html).

A full evaluation of Nerfstudio's implementation of Gaussian Splatting against the original Inria method can be found [here](https://docs.gsplat.studio/main/tests/eval.html).
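A quick usage sketch for the variants in the table above, assuming an already-processed dataset at `data/nerfstudio/poster` (the path is illustrative):

```bash
# Train the larger, higher-quality variant (dataset path assumed).
ns-train splatfacto-big --data data/nerfstudio/poster
```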


#### Quality and Regularization

51 changes: 51 additions & 0 deletions docs/nerfology/methods/splatw.md
@@ -0,0 +1,51 @@
# Splatfacto in the Wild

This is the implementation of [Splatfacto in the Wild: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections](https://kevinxu02.github.io/splatfactow). The official code can be found [here](https://github.com/KevinXu02/splatfacto-w).

<video id="teaser" muted autoplay playsinline loop controls width="100%">
<source id="mp4" src="https://github.com/KevinXu02/splatfactow/blob/main/static/videos/interp_fountain2.mp4" type="video/mp4">

</video>

## Installation
This repository follows the nerfstudio method [template](https://github.com/nerfstudio-project/nerfstudio-method-template/tree/main).

### 1. Install Nerfstudio dependencies
Please follow the Nerfstudio [installation guide](https://docs.nerf.studio/quickstart/installation.html) to create an environment and install dependencies.

### 2. Install the repository
Run the following commands:
`pip install git+https://github.com/KevinXu02/splatfacto-w`

Then, run `ns-install-cli`.

### 3. Check installation
Run `ns-train splatfacto-w --help`. You should see the help message for the splatfacto-w method.

## Downloading data
You can download the phototourism dataset by running:
```
ns-download-data phototourism --capture-name <capture_name>
```

## Running Splatfacto-w
To train with it, download the train/test tsv file from the bottom of [nerf-w](https://nerf-w.github.io/) and put it under the data folder (or copy them from `./splatfacto-w/dataset_split`). For instance, for Brandenburg Gate the path would be `your-data-folder/brandenburg_gate/brandenburg.tsv`. You should have the following structure in your data folder:
```
|---brandenburg_gate
| |---dense
| | |---images
| | |---sparse
| | |---stereo
| |---brandenburg.tsv
```

Then, run the command:
```
ns-train splatfacto-w --data [PATH]
```

If you want to train on datasets without nerf-w's train/test split, or on your own datasets, we provide a light-weight version of the method for general cases. To train with it, run the following command:
```
ns-train splatfacto-w-light --data [PATH] [dataparser]
```
For phototourism, the `dataparser` should be `colmap`, and you need to change the colmap path through the CLI because the phototourism dataparser does not load 3D points.
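Putting the steps above together, a minimal end-to-end sketch; the folder names, the tsv location, and the dataparser overrides are assumptions for illustration:

```bash
# Assumed layout matching the structure shown above (your-data-folder/brandenburg_gate/...).
cp ./splatfacto-w/dataset_split/brandenburg.tsv your-data-folder/brandenburg_gate/brandenburg.tsv

# Full model, using the nerf-w train/test split:
ns-train splatfacto-w --data your-data-folder/brandenburg_gate

# Light-weight variant with the colmap dataparser; the --colmap-path and
# --images-path overrides are assumed to point at phototourism's dense/ outputs.
ns-train splatfacto-w-light --data your-data-folder/brandenburg_gate colmap \
    --colmap-path dense/sparse --images-path dense/images
```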
8 changes: 8 additions & 0 deletions docs/quickstart/existing_dataset.md
@@ -17,6 +17,7 @@ ns-download-data blender
ns-download-data nerfstudio --capture-name nerfstudio-dataset

# Download a few room-scale scenes from the EyefulTower dataset at different resolutions
pip install awscli # Install `awscli` for EyefulTower downloads.
ns-download-data eyefultower --capture-name riverview seating_area apartment --resolution-name jpeg_1k jpeg_2k

# Download the full D-NeRF dataset of dynamic synthetic scenes
@@ -87,3 +88,10 @@ In the tables below, each dataset was placed into a bucket based on the table's
[record3d]: https://record3d.app/
[sdfstudio]: https://github.com/autonomousvision/sdfstudio/blob/master/docs/sdfstudio-data.md#Existing-dataset
[sitcoms3d]: https://github.com/ethanweber/sitcoms3D/blob/master/METADATA.md

### Eyeful Tower
Downloading Eyeful Tower scenes requires installing the AWS CLI, an optional dependency. To do so, run:
```bash
conda activate nerfstudio
pip install awscli
```
67 changes: 62 additions & 5 deletions docs/quickstart/installation.md
@@ -14,6 +14,8 @@ Install [Git](https://git-scm.com/downloads).

Install Visual Studio 2022. This must be done before installing CUDA. The necessary components are included in the `Desktop Development with C++` workflow (also called `C++ Build Tools` in the BuildTools edition).

Install Visual Studio Build Tools. If MSVC 143 does not work (it usually fails when your Visual Studio version is > 17.10), you may also need to install MSVC 142 for Visual Studio 2019. Ensure your CUDA environment is set up properly.

Nerfstudio requires `python >= 3.8`. We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/en/latest/miniconda.html) before proceeding.

:::::
@@ -76,14 +78,55 @@ conda install -c "nvidia/label/cuda-11.7.1" cuda-toolkit
:::
::::

### tiny-cuda-nn
### Install tiny-cuda-nn/gsplat

After pytorch and ninja, install the torch bindings for tiny-cuda-nn:
::::::{tab-set}
:::::{tab-item} Linux

After pytorch and ninja, install the torch bindings for tiny-cuda-nn:
```bash
pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```

:::::
:::::{tab-item} Windows

Activate your Visual C++ environment:
Navigate to the directory where `vcvars64.bat` is located. This path might vary depending on your installation. A common path is:

```
C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build
```

Run the following command:
```bash
./vcvars64.bat
```

If the above command does not work, try activating an older version of VC:
```bash
./vcvarsall.bat x64 -vcvars_ver=<your_VC++_compiler_toolset_version>
```
Replace `<your_VC++_compiler_toolset_version>` with the version of your VC++ compiler toolset. The version number should appear in the same folder.

For example:
```bash
./vcvarsall.bat x64 -vcvars_ver=14.29
```

Install `gsplat` from source:
```bash
pip install git+https://github.com/nerfstudio-project/gsplat.git
```

Install the torch bindings for tiny-cuda-nn:
```bash
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```

:::::
::::::

## Installing nerfstudio

**From pip**
@@ -137,19 +180,33 @@ curl -fsSL https://pixi.sh/install.sh | bash
### Install Pixi Environment
After Pixi is installed, you can run
```bash
git clone https://github.com/nerfstudio-project/nerfstudio.git
cd nerfstudio
pixi run post-install
pixi shell
```
This will install all enviroment dependancies including colmap, tinycudann and hloc, and the active the conda environment
This will fetch the latest Nerfstudio code, install all environment dependencies including colmap, tinycudann and hloc, and then activate the pixi environment (similar to conda).
From now on, each time you want to run Nerfstudio in a new shell, you have to navigate to the nerfstudio folder and run `pixi shell` again.
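For example, a new shell session might look like this (the clone location is an assumption):

```bash
cd ~/nerfstudio            # wherever you cloned the repository
pixi shell                 # re-activate the pixi environment
ns-train nerfacto --help   # nerfstudio CLI commands are now available
```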

you could also run
You could also run

```bash
pixi run post-install
pixi run train-example-nerf
```

to download an example dataset and run nerfacto straight away
to download an example dataset and run nerfacto straight away.

Note that this method gets you the very latest upstream Nerfstudio version. If you want to use a specific release, first check out a specific version or commit in the nerfstudio folder, e.g.:
```
git checkout tags/v1.1.3
```

Similarly, if you want to update, pull the latest changes in the git repo in your nerfstudio folder:
```
git pull
```
Remember that if you previously checked out a specific tag, you have to check out a new tag or run `git checkout main` to see the new changes.

## Use docker image

1 change: 1 addition & 0 deletions nerfstudio/cameras/cameras.py
@@ -15,6 +15,7 @@
"""
Camera Models
"""

import base64
import math
from dataclasses import dataclass
1 change: 1 addition & 0 deletions nerfstudio/cameras/lie_groups.py
@@ -15,6 +15,7 @@
"""
Helper for Lie group operations. Currently only used for pose optimization.
"""

import torch
from jaxtyping import Float
from torch import Tensor
7 changes: 3 additions & 4 deletions nerfstudio/cameras/rays.py
@@ -15,6 +15,7 @@
"""
Some ray datastructures.
"""

import random
from dataclasses import dataclass, field
from typing import Callable, Dict, Literal, Optional, Tuple, Union, overload
@@ -153,15 +154,13 @@ def get_weights(self, densities: Float[Tensor, "*batch num_samples 1"]) -> Float
@staticmethod
def get_weights_and_transmittance_from_alphas(
alphas: Float[Tensor, "*batch num_samples 1"], weights_only: Literal[True]
) -> Float[Tensor, "*batch num_samples 1"]:
...
) -> Float[Tensor, "*batch num_samples 1"]: ...

@overload
@staticmethod
def get_weights_and_transmittance_from_alphas(
alphas: Float[Tensor, "*batch num_samples 1"], weights_only: Literal[False] = False
) -> Tuple[Float[Tensor, "*batch num_samples 1"], Float[Tensor, "*batch num_samples 1"]]:
...
) -> Tuple[Float[Tensor, "*batch num_samples 1"], Float[Tensor, "*batch num_samples 1"]]: ...

@staticmethod
def get_weights_and_transmittance_from_alphas(
1 change: 0 additions & 1 deletion nerfstudio/configs/base_config.py
@@ -14,7 +14,6 @@

"""Base Configs"""


from __future__ import annotations

from dataclasses import dataclass, field
15 changes: 15 additions & 0 deletions nerfstudio/configs/external_methods.py
@@ -268,6 +268,21 @@ class ExternalMethod:
)
)

# Splatfacto-W
external_methods.append(
ExternalMethod(
"""[bold yellow]Splatfacto-W[/bold yellow]
For more information visit: https://docs.nerf.studio/nerfology/methods/splatw.html
To enable Splatfacto-W, you must install it first by running:
[grey]pip install git+https://github.com/KevinXu02/splatfacto-w[/grey]""",
configurations=[
("splatfacto-w", "Splatfacto in the wild"),
],
pip_package="git+https://github.com/KevinXu02/splatfacto-w",
)
)


@dataclass
class ExternalMethodDummyTrainerConfig:
4 changes: 2 additions & 2 deletions nerfstudio/configs/method_configs.py
@@ -69,6 +69,7 @@
method_configs: Dict[str, Union[TrainerConfig, ExternalMethodDummyTrainerConfig]] = {}
descriptions = {
"nerfacto": "Recommended real-time model tuned for real captures. This model will be continually updated.",
"nerfacto-huge": "Larger version of Nerfacto with higher quality.",
"depth-nerfacto": "Nerfacto with depth supervision.",
"instant-ngp": "Implementation of Instant-NGP. Recommended real-time model for unbounded scenes.",
"instant-ngp-bounded": "Implementation of Instant-NGP. Recommended for bounded real and synthetic scenes",
@@ -83,6 +84,7 @@
"neus-facto": "Implementation of NeuS-Facto. (slow)",
"splatfacto": "Gaussian Splatting model",
"depth-splatfacto": "Depth supervised Gaussian Splatting model",
"splatfacto-big": "Larger version of Splatfacto with higher quality.",
}

method_configs["nerfacto"] = TrainerConfig(
@@ -301,8 +303,6 @@
viewer=ViewerConfig(num_rays_per_chunk=1 << 12),
vis="viewer",
)
#
#
method_configs["mipnerf"] = TrainerConfig(
method_name="mipnerf",
pipeline=VanillaPipelineConfig(