torch.export.export_for_inference not available in current stable PyTorch #1514
Comments
Hm, if you can do without export, you can sidestep this for now, or you could wait for the next stable release. Unfortunately I can't offer more than that. So essentially you'd need to comment out this code locally for yourself, or guard it on the version (see lines 366 to 374 in 5a0d662), along the lines of the sketch below.
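Not code from the repo, just a minimal sketch of what such a guard could look like; TinyModel and example_args are made-up placeholders, and a hasattr feature check avoids hard-coding the exact version in which export_for_inference lands:

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x + 1

model = TinyModel()
example_args = (torch.randn(2, 3),)

# Feature-check instead of pinning a version string:
# export_for_inference only exists in nightly builds at the time of writing.
if hasattr(torch.export, "export_for_inference"):
    exported = torch.export.export_for_inference(model, example_args)
else:
    # Stable PyTorch (e.g. 2.5.1): skip the export path entirely.
    exported = None
```

Feature-checking degrades gracefully on stable releases and picks up the new API automatically once it reaches a stable build.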
@dongxiaolong Thank you for your interest in the project and code by the way :)
Thanks for your suggestions on version handling. I've already resolved the CUDA 12.4 requirement issue for the pre-release version.
@dongxiaolong Thanks! There's some documentation on what fast and furious mean under https://github.com/pytorch/ao/blob/49961013b2abc0c500c3cb516b00866d64938043/examples/sam2_amg_server/README.md
Hi @cpuhrsch,
I noticed that the code uses torch.export.export_for_inference, which is not available in the current stable version of PyTorch (2.5.1). This might cause compatibility issues for users who haven't switched to nightly builds yet.

Current behavior: referencing torch.export.export_for_inference on stable PyTorch fails, since the function only exists in nightly builds.
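For reference, a minimal repro on stable 2.5.1 (the error message in the comment is the standard Python message for a missing module attribute, not copied from a real run):

```python
import torch

print(torch.__version__)           # e.g. 2.5.1 on the current stable release
torch.export.export_for_inference  # AttributeError: module 'torch.export' has no attribute 'export_for_inference'
```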
Suggested solutions: guard the call on the installed PyTorch version (or feature-check for the attribute), or document that a nightly build is required until the next stable release ships.
Would appreciate your thoughts on how to best handle this compatibility issue.
Related PR: #1468