This repo contains PyTorch model definitions, pre-trained weights, and training/sampling code for DLDMs.
- Clone this repo
- Install the required packages and activate the environment
$ git clone git@github.com:Yoonho-Na/DLDM.git
$ cd DLDM
$ conda env create -f environment.yaml
$ conda activate dldm
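To verify the setup, a quick check can be run from the activated environment. This is only a sketch and assumes the environment installs PyTorch with CUDA support, which the repo implies but is not stated explicitly:

```python
# Quick sanity check after `conda activate dldm`.
# Assumes the environment ships PyTorch with CUDA support.
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.cuda.is_available())   # True if a CUDA-capable GPU is visible
```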
We provide pretrained weights:
$ python scripts/pretrained_dldm.py
$ python sample.py
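To inspect the generated samples, something like the snippet below can be used. The `outputs/` directory and the `.npy` format are assumptions for illustration only; check `sample.py` for where and how it actually writes its results:

```python
# Load and display one generated sample.
# "outputs/" and the .npy format are assumptions; adjust to sample.py's
# actual output location and file type.
import glob

import matplotlib.pyplot as plt
import numpy as np

sample_paths = sorted(glob.glob("outputs/*.npy"))  # hypothetical output path
sample = np.load(sample_paths[0])
print(sample.shape, sample.dtype)

plt.imshow(sample.squeeze(), cmap="gray")
plt.axis("off")
plt.show()
```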
- Put your files (`.jpg`, `.npy`, `.png`, ...) in a folder `custom_folder`
- Create two text files, `xx_train.txt` and `xx_valid.txt`, that point to the files in your training and validation set respectively (a minimal example of consuming these lists is sketched after the folder layout below):

$ find $(pwd)/custom_folder/train -name "*.npy" > xx_train.txt
$ find $(pwd)/custom_folder/valid -name "*.npy" > xx_valid.txt
${pwd}/custom_folder/train/
├── class1
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── class2
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── ...

${pwd}/custom_folder/valid/
├── class1
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── class2
│   ├── filename1.npy
│   ├── filename2.npy
│   ├── ...
├── ...
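For reference, here is a minimal sketch of how a flat path list such as `xx_train.txt` can be consumed by a PyTorch dataset. The class name and preprocessing are illustrative assumptions; the repo's own dataset classes, wired up through the config, do the real loading:

```python
# Minimal sketch of consuming a newline-separated list of absolute .npy paths.
# FileListDataset is a hypothetical name used only for illustration.
import numpy as np
import torch
from torch.utils.data import Dataset


class FileListDataset(Dataset):
    """Loads .npy arrays listed one-per-line in a text file."""

    def __init__(self, list_file):
        with open(list_file) as f:
            self.paths = [line.strip() for line in f if line.strip()]

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        arr = np.load(self.paths[idx]).astype(np.float32)
        return torch.from_numpy(arr)


# e.g. train_ds = FileListDataset("xx_train.txt")
```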
- Adapt `configs/custom_DAE.yaml` to point to these two files
- Run

$ python main.py --base configs/custom_DAE.yaml -t True --gpus 0,1

to train on two GPUs. Use `--gpus 0,` (with a trailing comma) to train on a single GPU.
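To double-check that the adapted config points at the two list files, it can be inspected before training. This is only a sketch; whether the repo uses OmegaConf and the exact key layout of `custom_DAE.yaml` are assumptions, so adjust the key path to whatever the file actually defines:

```python
# Hypothetical sanity check of the adapted config. Assumes omegaconf is
# available in the environment; the "data" key is an assumption, so inspect
# configs/custom_DAE.yaml for its actual structure.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/custom_DAE.yaml")
# Print the data section (or the whole config) and confirm that
# xx_train.txt / xx_valid.txt appear in it.
print(OmegaConf.to_yaml(cfg.get("data", cfg)))
```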