Good work! How can I run the demo on Internet videos? #8

Open · jihg88 opened this issue May 8, 2024 · 5 comments
@jihg88 commented May 8, 2024

Thanks for releasing such outstanding work! I wonder how to test it on Internet videos.
I can already get the init_motion from the WHAM project (containing world_pose_root/cam_pose_root, body_pose, transl_cam/trans_world, and OpenPose-25 joints), and I want to use your work to refine the init_motion (reduce foot sliding and jitter, improve 2D keypoint consistency). Will you provide demo code for Internet videos?
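
For reference, here is roughly the structure I have (a hypothetical sketch: the key names follow my description above, and WHAM's actual output may be organized differently):

```python
# Hypothetical layout of the init_motion described above; key names mirror
# my description, not WHAM's actual output format.
import numpy as np

T = 120  # number of frames (arbitrary, for illustration)
init_motion = {
    "world_pose_root": np.zeros((T, 3)),      # root orientation in world frame (axis-angle)
    "cam_pose_root":   np.zeros((T, 3)),      # root orientation in camera frame
    "body_pose":       np.zeros((T, 63)),     # 21 body joints x 3 (axis-angle)
    "transl_cam":      np.zeros((T, 3)),      # root translation in camera frame
    "trans_world":     np.zeros((T, 3)),      # root translation in world frame
    "joints2d":        np.zeros((T, 25, 3)),  # OpenPose-25 keypoints (x, y, confidence)
}
```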

@sanweiliti (Owner) commented

Hi,

Currently we do not have plans for demo code. To test on Internet videos, you can use any off-the-shelf method for the initialization, then format the initialized motion sequence following the sample sequences' data format provided under 'Test and evaluate on PROX/EgoBody' in the README, with the z (or y) axis as the up axis.
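
For example, a minimal sketch of that conversion, assuming NumPy .npz files; the paths are placeholders, and the zero-filled arrays stand in for your real initialization:

```python
import numpy as np

# Inspect one of the provided sample sequences to learn the expected
# field names and shapes (path is a placeholder).
sample = np.load("path/to/sample_sequence.npz")
print({k: sample[k].shape for k in sample.files})

# Fill the same fields from your own initialization (e.g. WHAM output),
# converted to the same world frame (z- or y-up) and rotation convention.
my_init = {k: np.zeros_like(sample[k]) for k in sample.files}  # replace with real data
np.savez("my_internet_video_init.npz", **my_init)
```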

@Era-Dorta commented

I'm trying to run this method on a custom dataset of RGB-only videos. Preparing the dataset to run your model is becoming quite a challenge. Please reconsider releasing a demo script; easier reproducibility means more citations, after all ;)

@areiner222 commented

Thank you for this impressive work!

+1 on some demo code. A documented Colab would be SO useful, even if it starts from something like pre-computed SMPL-X shape, pose, and translation sequences (from LEMO or GT data, perhaps) without directly relying on images.

An idea for a possible flow that I'd personally find helpful!

  1. Prepare an input sequence
    a. show how to take per-frame inputs from GT datasets and add noise, occlusions, etc. (see the sketch after this list)
    b. and/or use an in-the-wild inference example
  2. Explain / demonstrate the pertinent inference modules and how they work together to produce a final smoothed output trajectory
    a. Usage breakdown of PoseNet, TrajNet, SpacedDiffusionPoseNet, SpacedDiffusionTrajNet, gaussian_diffusion_posenet, gaussian_diffusion_trajnet, create_gaussian_diffusion (or the pertinent subset for running the example!)
    b. Discuss / show how to convert to the motion representation (i.e., X = (R, P))
    c. Perform inference
  3. Visualize input vs. output sequences
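
To make step 1a concrete, here's a rough sketch of corrupting a GT sequence (assuming per-frame body poses as a (T, 63) axis-angle array; the function name and noise scales are purely illustrative):

```python
import numpy as np

def corrupt_sequence(body_pose, noise_std=0.05, occlude_prob=0.2, seed=0):
    """Add Gaussian noise to GT poses and mark random frames as occluded."""
    rng = np.random.default_rng(seed)
    noisy = body_pose + rng.normal(0.0, noise_std, body_pose.shape)
    visible = rng.random(len(body_pose)) > occlude_prob  # per-frame visibility mask
    return noisy, visible

T = 120
gt_pose = np.zeros((T, 63))  # stand-in for real GT per-frame poses
noisy_pose, mask = corrupt_sequence(gt_pose)
```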

@bring-nirachornkul commented

Hi, we are stuck on downloading the SMPL-X version of the AMASS dataset as well.
The instructions are quite ambiguous. Can you tell us which datasets we can use so far?

@MelihDarcanxyz commented

> I'm trying to run this method on a custom dataset of RGB-only videos. Preparing the dataset to run your model is becoming quite a challenge. Please reconsider releasing a demo script; easier reproducibility means more citations, after all ;)

I'm having the same difficulties. A demo would be much appreciated.
