Questions about using pre-optimized models #53
Comments
Hi, could you try this file? (You'll need to save it as opts.log in the same directory as the pre-trained models.) I realized the one provided in the pre-trained model link is for quadruped animals. This runs without problems for me. |
Many thanks for your reply! I tried the opts-human.log you provided, but the result is the same as above. Ah, I know what causes the difference: it's due to the missing 'init-cam' folder. If 'init-cam' is absent, the inference results look like the ones above. So could you please provide the 'init-cam' files matching the pre-trained models? |
It should work without the init-cam folder; it is not used at inference time. Was there a warning with the following message when executing the code? It would be best if you attached a log from the console. Here's mine: Were you able to run the cat example without a problem? |
Many thanks for your reply! Yes, there is a warning as you mentioned: "!!!deleting video specific dicts due to size mismatch!!!" I got the same warning and the same strange results when running the cat example. I've attached my log here: log.txt. I checked the warning code here. In my case, states['near_far'].shape[0] is 583, while self.model.near_far.shape[0] is 582. I don't know what causes this mismatch. Could you please tell me the potential reasons? |
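For reference, a guard that emits this kind of warning presumably looks something like the sketch below. This is an illustrative reconstruction, not the repo's actual code: the `Tensor` stand-in, `filter_video_specific`, and its arguments are all hypothetical names chosen to mirror the log message, where `near_far` stores one row per video frame and the checkpoint's frame count (583) disagrees with the current model's (582).

```python
# Hypothetical sketch of a checkpoint-loading guard that drops
# video-specific tensors whose leading dimension (frame count)
# disagrees with the current model's. Not the repo's actual code.

class Tensor:
    """Stand-in for a tensor: only carries a .shape tuple."""
    def __init__(self, shape):
        self.shape = shape

def filter_video_specific(states, model_shapes, keys=("near_far",)):
    """Return a copy of `states` with mismatched per-video entries removed.

    states       -- dict of checkpoint tensors, e.g. {"near_far": Tensor((583, 2))}
    model_shapes -- dict of the current model's expected shapes
    keys         -- which entries are video-specific and may be dropped
    """
    cleaned = dict(states)
    for key in keys:
        if key in cleaned and cleaned[key].shape[0] != model_shapes[key][0]:
            print("!!!deleting video specific dicts due to size mismatch!!!")
            del cleaned[key]
    return cleaned
```

With this kind of guard, a checkpoint trained on 583 frames loaded into a model built for 582 frames would silently fall back to the model's (randomly initialized) values for the dropped entries, which would explain degraded renders rather than a hard crash.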
Oh! It works now if I replace the human-cap006 datasets with the corresponding datasets you provided. For the absent image issue, I retried the preprocessing command below, but it still extracts only 00000-00037.jpg from human-cap_6.MOV, with 00038.jpg missing (using ffmpeg 3.4.8, pytorch 1.7.1, cuda 11.0). I am confused about what causes this difference. Could you please retry the command above to see if 00038.jpg can be extracted normally? |
I can confirm it works for me. In the preprocessing script, we first use ffmpeg to extract frames, and then the frames are moved to tmp/$seqname/images. Attaching the log of running mask.py below. Can you confirm whether ffmpeg and mask.py are producing the expected outputs? |
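One quick way to sanity-check the extraction step is to list the numbered frames and report any gaps. The helper below is a hypothetical snippet (not part of the repo); it assumes frames are named with five-digit indices like 00000.jpg, as seen in the thread:

```python
# Hypothetical helper: scan a frame directory for gaps in the
# zero-padded numbering (00000.jpg, 00001.jpg, ...). Not repo code.
import re
from pathlib import Path

def find_frame_gaps(image_dir):
    """Return (present_indices, missing_indices) for frames in image_dir.

    A frame like 00038.jpg that ffmpeg failed to produce would show up
    in missing_indices if any later frame exists, or simply be absent
    from present_indices if it was the last expected frame.
    """
    indices = sorted(
        int(p.stem)
        for p in Path(image_dir).glob("*.jpg")
        if re.fullmatch(r"\d{5}", p.stem)
    )
    if not indices:
        return [], []
    missing = sorted(set(range(indices[-1] + 1)) - set(indices))
    return indices, missing
```

Note that a frame missing at the very end of the sequence (as with 00038.jpg here) cannot be detected from the files alone; comparing `len(present_indices)` against the frame count ffmpeg reports in its log is the more reliable check.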
Dear GengShan,
Hi! Many thanks for your awesome work! I am trying to use your pre-optimized models for evaluation:
bash scripts/render_mgpu.sh 0 human-cap logdir/human-cap/human-cap.pth "0" 64
However, the results seem very strange, as shown below, in both the viewing directions and the model motions.
Could you please tell me what might be causing these issues?
Thanks!