- Refer to INSTALL.md for instructions on preparing the environment and dependencies.
- You can also use your own smartphone-captured images for inference.
  - MUST use the smartphone camera's Pro Mode. 👉 How to use Pro Mode?
  - MUST save both a RAW and a standard JPG file with the SAME prefix name.
    Specifically, you save 2 files: the RAW for denoising input, and the standard JPG for unknown ISP color alignment.
    E.g. RAW: 'Xiaomi_0001.dng' and standard JPG: 'Xiaomi_0001.jpg'.
- Put training datasets in './datasets/fivek/Raw' and testing datasets in './datasets/real_capture', following:

      datasets/
      ├── fivek/
      │   ├── list_file/
      │   │   ├── invalid_list.txt
      │   │   ├── train_list.txt
      │   │   └── val_list.txt
      │   └── Raw/
      │       ├── a0001-jmac_DSC1459.dng
      │       ├── a0002-dgw_005.dng
      │       ├── ...
      │       └── a5000-kme_0204.dng
      └── real_capture/
          ├── list_file/
          │   └── val_list.txt
          ├── Raw/
          │   ├── Xiaomi_0001.dng
          │   └── ...
          └── ref_sRGB/
              ├── Xiaomi_0001.jpg
              └── ...
- Run: `python train_dualdn.py -opt ./options/DualDn_Big.yml`
- Unlike the training strategy used for Synthetic evaluation, we utilize the entire MIT-Adobe FiveK dataset (nearly 5,000 RAW images) to train our model for 300,000 iterations, enhancing the model's generalization ability when dealing with unseen real-captured images.
- It's worth noting that previous SOTA denoising models on the DND benchmark, such as CycleISP and UPI, were trained on 1,000,000 RAW images. By comparison, the volume of RAW training images we use here is relatively small.
- For fast validation, we validate on 20 synthetic images instead of the real-captured images every 50,000 iterations, since real-captured images are typically 4K or 8K resolution.
- If you'd like to validate directly on real-captured images, open the DualDn_Big.yml file, set `mode` in `datasets/val/val_datasets/Real_captured` to `true`, and set `mode` in `datasets/val/val_datasets/Synthetic` to `false`, as sketched below.
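  A minimal sketch of the two `mode` switches (assuming the nesting follows the option paths above; check your copy of DualDn_Big.yml for the exact layout):

  ```yaml
  datasets:
    val:
      val_datasets:
        Synthetic:
          mode: false        # skip synthetic validation
        Real_captured:
          mode: true         # validate on real-captured images instead
  ```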
- We recommend evaluating on real-captured images after training, using the following test or inference code.
- Find the training results in './experiments'.
- After training, you can test DualDn on various testing sets; here we test Real_captured images as an example.
- Run: `python test_dualdn.py -opt [exp_option_path] --num_iters [iters] --val_datasets ['Synthetic', 'Real_captured', 'DND']`
  E.g. if you trained DualDn with 'DualDn_Big.yml' for `300000` iterations and want to test it on `Real_captured` datasets: `python test_dualdn.py -opt ./experiments/DualDn_Big/DualDn_Big.yml --num_iters 300000 --val_datasets Real_captured`
- Find the testing results in './results'.
- For fast inference, you can use the pre-trained DualDn models to process your own noisy images captured by smartphones.
- Download the pre-trained model from OneDrive and place it in './pretrained_model'.
- Put your original raw files in './datasets/real_capture/Raw' and the corresponding JPEG files in './datasets/real_capture/ref_sRGB'. Each RAW and JPEG pair must have the same prefix name.
  - MUST use the smartphone camera's Pro Mode. 👉 How to use Pro Mode?
  - MUST save both a RAW and a standard JPG file with the SAME prefix name.
    Specifically, you save 2 files: the RAW for denoising input, and the standard JPG for unknown ISP color alignment.
    E.g. RAW: 'Xiaomi_0001.dng' and standard JPG: 'Xiaomi_0001.jpg'.
- Add the correct filenames to './datasets/real_capture/list_file/val_list.txt', with one filename per line.
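  For example, if './datasets/real_capture/Raw' contains 'Xiaomi_0001.dng' and a second (hypothetical) capture 'Xiaomi_0002.dng', val_list.txt would read:

  ```text
  Xiaomi_0001.dng
  Xiaomi_0002.dng
  ```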
- Run: `python inference_dualdn.py -opt ./options/DualDn_Big.yml --pretrained_model ./pretrained_model/DualDn_Big.pth --val_datasets Real_captured`
- Find the inference results in './results'.
- Toy example for inference:
  I captured a scene using my smartphone's Pro Mode with the RAW format option enabled. This generated two files: `0001.dng` and `0002.jpg`. I renamed them to the SAME prefix name, placed `0001.dng` in './datasets/real_capture/Raw' and `0001.jpg` in './datasets/real_capture/ref_sRGB', then added a line with `0001.dng` to './datasets/real_capture/list_file/val_list.txt' and ran the inference code.
🌟TIPS🌟:
- Due to limited funding, we cannot test DualDn on the latest smartphones, which may have different EXIF data in their raw files.
  If your results seem worse than the ref_sRGB (smartphone results), or you encounter issues like abnormal colors or overly dark images, please open an issue on our GitHub with the original raw and JPEG files.
  Your data is valuable to us, and we're always here to help! 😊
- You may encounter small black holes in certain areas. That's because we use BGU during inference for color alignment, which downsamples the original images by a default 8x ratio, potentially neglecting local areas.
  To fix this, open './options/DualDn_Big.yml' and set `bgu_ratio` in `network` to `4` or even `1`, though this will slow down inference to a certain extent. You can also speed up DualDn inference by disabling BGU: open './options/DualDn_Big.yml' and set `BGU` in `datasets/val/Real_captured` to `false`, as sketched below.
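  A minimal sketch of the two settings (assuming the nesting follows the option paths above; the exact layout in your copy of 'DualDn_Big.yml' may differ):

  ```yaml
  network:
    bgu_ratio: 4         # default is 8; smaller values reduce black holes but slow inference

  datasets:
    val:
      Real_captured:
        BGU: false       # disable BGU entirely for the fastest inference
  ```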