
Evaluation Error #21

Open
kentfuji opened this issue Nov 5, 2022 · 4 comments

Comments

@kentfuji

kentfuji commented Nov 5, 2022

Hi, thanks for sharing the great work!

I tried to run the evaluation procedure following the instructions in install.md and the Evaluation section.
However, the process fails at the FID calculation after reporting a NaN value.
(The matching score is also NaN for the ground-truth data.)
I did download all the pretrained models and placed them where they should be.
Do you know what the cause could be? Thank you in advance!

@kentfuji
Author

kentfuji commented Nov 5, 2022

P.S. This only seems to be an issue with the t2m dataset; KIT seems to work fine.

@leinace1001

Have you solved it yet? I am encountering this problem, too.

@fyyakaxyy

Me too. By printing gt, gt_mu, cov, and gt_cov, I found they are all NaN, so I guess the problem is the pretrained models!
Other projects with HumanML3D also can't evaluate FID.
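
For anyone debugging this, here is a minimal sketch using only NumPy (`diagnose_nans` and the array name `gt_activations` are hypothetical; it assumes the evaluator collects ground-truth features into a `(num_samples, feat_dim)` array before computing the FID statistics) to locate where the NaNs first appear:

```python
import numpy as np

def diagnose_nans(act, name="gt_activations"):
    """Report NaN rows in a (num_samples, feat_dim) activation array
    before the FID mean/covariance are computed from it."""
    act = np.asarray(act, dtype=np.float64)
    nan_rows = np.isnan(act).any(axis=1)   # rows containing at least one NaN
    print(f"{name}: {nan_rows.sum()} of {len(act)} rows contain NaN")
    clean = act[~nan_rows]                 # drop NaN rows for the sanity check
    mu = clean.mean(axis=0)
    cov = np.cov(clean, rowvar=False)
    print(f"{name}: mu has NaN -> {np.isnan(mu).any()}, "
          f"cov has NaN -> {np.isnan(cov).any()}")
    return mu, cov
```

If the raw feature rows already contain NaN, the problem is upstream of the FID code (feature extraction or data loading) rather than in the mean/covariance computation itself.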

