Official repository of the paper "Adaptive and Temporally Consistent Gaussian Surfels for Multi-view Dynamic Reconstruction" (WACV 2025).
Tested on: Ubuntu 22.04, CUDA 11.8, Python 3.10, PyTorch 2.3.1.

Create the conda environment:

```shell
conda env create --file environment.yml
conda activate AT-GS
```
The pretrained RAFT model for optical flow estimation can be downloaded from Google Drive. Put the model at `models/raft-things.pth`.
To test on the NHR and DNA-Rendering datasets, please refer to 4K4D's guide for downloading them. Alternatively, you can test on your own custom datasets.

After downloading the datasets, as with most Gaussian Splatting based methods, they need to be converted to the COLMAP format:
```
<frame_000000>
|---images
|   |---<image 0>
|   |---<image 1>
|   |---...
|---masks
|   |---<mask 0>
|   |---<mask 1>
|   |---...
|---sparse
|   |---0
|       |---cameras.bin
|       |---images.bin
|       |---points3D.bin
<frame_000001>
...
```
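Before training, it can be useful to verify that every per-frame folder matches the layout above. The following is a minimal sketch of such a check; it is a hypothetical helper, not part of this repository, and assumes only the directory structure shown:

```python
from pathlib import Path

# Hypothetical sanity check (not part of this repo): verify that each
# per-frame folder follows the COLMAP-style layout described above.
REQUIRED = [
    "images",
    "masks",
    "sparse/0/cameras.bin",
    "sparse/0/images.bin",
    "sparse/0/points3D.bin",
]

def check_frame(frame_dir: Path) -> list[str]:
    """Return the required entries missing from one frame folder."""
    return [rel for rel in REQUIRED if not (frame_dir / rel).exists()]

def check_dataset(root: Path) -> dict[str, list[str]]:
    """Map each frame_* folder under root to its list of missing entries."""
    return {d.name: check_frame(d)
            for d in sorted(root.glob("frame_*")) if d.is_dir()}
```

A frame folder passes when `check_frame` returns an empty list; any non-empty result names exactly what is missing.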
- Prepare a config file to specify parameters such as the input folder, output folder, and more; see `configs/sport_1.json` for an example. Refer to `arguments/__init__.py` for a comprehensive list of configurable hyper-parameters.
- Train the first frame separately:

  ```shell
  python train_static.py --config_path {cfg_file}
  ```
- Following 3DGStream, we initialize the NTC by:

  ```shell
  python cache_warmup.py --config_path {cfg_file}
  ```
- Train the full sequence:

  ```shell
  python train.py --config_path {cfg_file}
  ```
- Render images and extract dynamic meshes from the trained models:

  ```shell
  python render.py --config_path {cfg_file}
  ```

  The meshes are saved in `{output_path}/meshes/`.
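The four commands above can be chained into a single driver script. The sketch below is hypothetical (not part of this repository); it assumes only that the config is a JSON file such as `configs/sport_1.json` and that an `output_path` key may exist (the key name is an assumption):

```python
import json
import subprocess
import sys

# Hypothetical driver (not part of this repo): run the AT-GS pipeline
# for one config file by chaining the commands listed above, in order.
STEPS = ["train_static.py", "cache_warmup.py", "train.py", "render.py"]

def run_pipeline(cfg_file: str) -> None:
    """Run all pipeline stages on one config, stopping on the first failure."""
    with open(cfg_file) as f:
        cfg = json.load(f)                       # e.g. configs/sport_1.json
    print("output path:", cfg.get("output_path", "?"))  # key name assumed
    for script in STEPS:
        # Each stage takes the same --config_path flag as shown above.
        subprocess.run([sys.executable, script, "--config_path", cfg_file],
                       check=True)
```

Using `check=True` makes a failed stage raise immediately, so a crash in `train_static.py` never silently cascades into the later stages.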
This project is built upon gaussian_surfels and 3DGStream. We thank all the authors for their great work and repos.
If you find our code or paper helpful, please consider citing it:
```bibtex
@article{chen2024adaptive,
  title={Adaptive and Temporally Consistent Gaussian Surfels for Multi-view Dynamic Reconstruction},
  author={Chen, Decai and Oberson, Brianne and Feldmann, Ingo and Schreer, Oliver and Hilsmann, Anna and Eisert, Peter},
  journal={arXiv preprint arXiv:2411.06602},
  year={2024}
}
```