Regarding training time #59

Closed
ShixuanGu opened this issue Nov 1, 2024 · 4 comments

Comments

@ShixuanGu

Many thanks for the great work! I'm wondering how long it takes to train a single scene.

I'm training locally on a Windows machine (RTX 4080S, CUDA 12.4). On the "flowers" scene, training slows down sharply around 9%-10%:

Computing 3D filter [01/11 16:13:43]
Training progress: 7%|██▉ | 2100/30000 [00:25<08:09, 56.96it/s, Loss=0.1321220]Computing 3D filter [01/11 16:13:45]
Training progress: 7%|███ | 2200/30000 [00:27<08:26, 54.89it/s, Loss=0.1205735]Computing 3D filter [01/11 16:13:47]
Training progress: 8%|███▏ | 2300/30000 [00:29<08:48, 52.41it/s, Loss=0.0876850]Computing 3D filter [01/11 16:13:49]
Training progress: 8%|███▎ | 2400/30000 [00:31<09:05, 50.57it/s, Loss=0.1155078]Computing 3D filter [01/11 16:13:51]
Training progress: 8%|███▌ | 2500/30000 [00:33<09:32, 48.04it/s, Loss=0.1140291]Computing 3D filter [01/11 16:13:53]
Training progress: 9%|███▋ | 2600/30000 [00:37<15:38, 29.20it/s, Loss=0.0955869]Computing 3D filter [01/11 16:13:58]
Training progress: 9%|███▌ | 2700/30000 [03:09<10:30:46, 1.39s/it, Loss=0.1132683]Computing 3D filter [01/11 16:16:33]
Training progress: 9%|███▋ | 2800/30000 [06:56<17:12:19, 2.28s/it, Loss=0.1008146]Computing 3D filter [01/11 16:20:21]
Training progress: 10%|███▋ | 2860/30000 [09:39<20:47:34, 2.76s/it, Loss=0.1026932]
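The slowdown is visible in the timestamps of the "Computing 3D filter" lines. A minimal sketch (plain Python, timestamps copied from the log above) of extracting the wall time per 100 iterations:

```python
from datetime import datetime

# Timestamps of consecutive "Computing 3D filter" lines from the log above,
# each separated by 100 training iterations
stamps = ["16:13:43", "16:13:45", "16:13:47", "16:13:49", "16:13:51",
          "16:13:53", "16:13:58", "16:16:33", "16:20:21"]
times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]

# Wall-clock seconds per 100 iterations
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(deltas)  # jumps from ~2 s per 100 iterations to 155 s, then 228 s
```

So throughput drops by roughly two orders of magnitude between iterations 2600 and 2800, which is consistent with the it/s figures in the progress bar.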

Is this normal?

@niujinshuchong
Member

Hi, this is strange. Training should finish in less than 1 hour. Please refer to #54. Could you please check how many Gaussians are being used?

@ShixuanGu
Author

Many thanks for the rapid reply.

I'm training on the mipnerf360 flowers scene with factor = 8.

I followed the suggestions in #54 and removed the line selected_pts_mask = torch.logical_or(selected_pts_mask, selected_pts_mask_abs)

For the densification setting, I set the opacity threshold to 0.5: gaussians.densify_and_prune(opt.densify_grad_threshold, 0.5, scene.cameras_extent, size_threshold)

I checked the number of points: at initialization it's 38,347; at iteration 5100/30000 it's 866,647; at 5200/30000 it's 884,200. Is that normal?
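For reference, a quick back-of-the-envelope check on these counts (plain Python, numbers taken from the figures above):

```python
# Point counts reported above
n_init = 38_347      # at initialization
n_5100 = 866_647     # at iteration 5100/30000
n_5200 = 884_200     # at iteration 5200/30000

# Growth relative to initialization, and rate of new points per 100 iterations
growth = n_5100 / n_init
added_per_100 = n_5200 - n_5100

print(f"{growth:.1f}x growth by iteration 5100")   # ~22.6x
print(f"+{added_per_100} points between 5100 and 5200")  # +17553
```

A ~23x blow-up in point count this early, still adding ~17.5k points per 100 iterations, would plausibly explain the per-iteration slowdown seen in the log.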

@ShixuanGu
Author

Yet the code works fine on nerf_synthetic. Do you have any idea what the potential issue could be?

@niujinshuchong
Member

Hi, does this happen only on the flowers scene, or also on other mip-nerf 360 scenes?
