[Paper] [Project Page] [Jittor Version] [Demo]
If our work is helpful to you or gives you some inspiration, please star this project and cite our paper. Thank you!
- Source code of AR123.
- Pretrained weights of AR123.
- Rendered dataset under the Zero123++ setting.
We recommend using Python>=3.10, PyTorch>=2.1.0, and CUDA>=12.1.
conda create --name ar123 python=3.10
conda activate ar123
pip install -U pip
# Ensure Ninja is installed
conda install Ninja
# Install the correct version of CUDA
conda install cuda -c nvidia/label/cuda-12.1.0
# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7
# For Linux users: Install Triton
pip install triton
# Install other requirements
pip install -r requirements.txt
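After installation, it can help to confirm that the CUDA toolkit conda installed is actually on your PATH. The snippet below is a minimal sanity check, not part of the project's scripts; it prints the `nvcc` version when available and a hint otherwise.

```shell
# Check whether the CUDA 12.1 toolkit installed above is visible
if command -v nvcc >/dev/null 2>&1; then
  cuda_info=$(nvcc --version)
else
  cuda_info="nvcc not found; check the cuda-12.1 conda install above"
fi
echo "$cuda_info"
```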
We provide our rendered Objaverse subset under the Zero123++ configuration to facilitate reproducibility and further research. Please download it and place it into zero123plus_renders.
Download the checkpoints and put them into ckpts.
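Both download targets are plain directories in the repository root. A small sketch to create them up front (directory names taken from the instructions above; adjust if you keep data elsewhere):

```shell
# Create the data and checkpoint directories referenced above
mkdir -p ckpts zero123plus_renders
ls -d ckpts zero123plus_renders
```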
To synthesize multiple novel-view images from a single-view image, please run:
CUDA_VISIBLE_DEVICES=0 python infer.py --input_path examples/c912d471c4714ca29ed7cf40bc5b1717_0.png --mode nvs
To generate a 3D asset from the synthesized novel views, please run:
CUDA_VISIBLE_DEVICES=0 python infer.py --config_file configs/reconstruction.yaml --input_path examples/c912d471c4714ca29ed7cf40bc5b1717_0.png --mode mvto3d
You can also directly obtain a 3D asset from a single-view image by running:
CUDA_VISIBLE_DEVICES=0 python infer.py --config_file configs/reconstruction.yaml --input_path examples/c912d471c4714ca29ed7cf40bc5b1717_0.png --mode ito3d
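To process several inputs in a row, the command above can be wrapped in a loop. This is a hypothetical batch helper, not a script shipped with the repository; the infer.py flags are exactly those documented above, and the `[ -e ]` guard simply skips the loop when the glob matches nothing.

```shell
# Run single-image-to-3D on every example PNG (hypothetical batch helper)
count=0
for img in examples/*.png; do
  [ -e "$img" ] || continue   # skip when the glob matched no files
  CUDA_VISIBLE_DEVICES=0 python infer.py \
    --config_file configs/reconstruction.yaml \
    --input_path "$img" --mode ito3d
  count=$((count + 1))
done
echo "processed $count image(s)"
```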
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py --base configs/ar123.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
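For a quick debugging run, the same launch can be scaled down to one GPU; the flags are unchanged from the command above, only the device list shrinks. The `[ -f train.py ]` guard is just a reminder that this must be run from the repository root.

```shell
# Single-node, single-GPU variant of the launch above
status="skipped (train.py not found; run from the repository root)"
if [ -f train.py ]; then
  CUDA_VISIBLE_DEVICES=0 python train.py --base configs/ar123.yaml --gpus 0 --num_nodes 1
  status="launched"
fi
echo "$status"
```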
Please refer to eval_2d.py.
Please refer to eval_3d.py.
For beginners unfamiliar with the Blender software, we also provide mesh-rendering scripts that run automatically from the command line. Please refer to the render README for more details.
We thank the authors of the following projects for their excellent contributions to 3D generative AI!
In addition, we would like to express our sincere thanks to Jiale Xu for his invaluable assistance.