
torch.export.export_for_inference not available in current stable PyTorch #1514

Open
dongxiaolong opened this issue Jan 7, 2025 · 4 comments

@dongxiaolong
Contributor

Hi @cpuhrsch,

I noticed that the code uses torch.export.export_for_inference, which is not available in the current stable version of PyTorch (2.5.1). This could cause compatibility issues for users who haven't switched to nightly builds yet.

Steps to reproduce:

python server.py ../../../ large --port 6006 --furious --fast --save-fast "./"

Saving compiled models under directory ./

Saving at path=PosixPath('sam2_image_encoder.pt2')

Traceback (most recent call last):
  File "/root/ao/examples/sam2_amg_server/server.py", line 578, in <module>
    fire.Fire(main)
  File "/root/miniconda3/envs/sam2/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/root/miniconda3/envs/sam2/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/root/miniconda3/envs/sam2/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/root/autodl-tmp/ao/examples/sam2_amg_server/server.py", line 437, in main
    export_model(mask_generator,
  File "/root/autodl-tmp/ao/examples/sam2_amg_server/compile_export_utils.py", line 153, in export_model
    aot_compile(model_directory,
  File "/root/ao/examples/sam2_amg_server/compile_export_utils.py", line 109, in aot_compile
    from torch.export import export_for_inference
ImportError: cannot import name 'export_for_inference' from 'torch.export' (/root/miniconda3/envs/sam2/lib/python3.10/site-packages/torch/export/__init__.py)

Current behavior:

  • The code assumes availability of torch.export.export_for_inference
  • This feature is only available in PyTorch nightly builds

Suggested solutions:

  1. Add a version check and fallback behavior (see the sketch below)
  2. Document the PyTorch version requirement
  3. Consider supporting both stable and nightly versions
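
For option 1, a minimal sketch of guarding the import (HAS_EXPORT_FOR_INFERENCE and aot_compile_if_supported are placeholder names for illustration, not existing torchao code):

    # Guard the nightly-only import so stable PyTorch (<= 2.5.x) still imports cleanly.
    try:
        from torch.export import export_for_inference
        HAS_EXPORT_FOR_INFERENCE = True
    except ImportError:
        export_for_inference = None
        HAS_EXPORT_FOR_INFERENCE = False

    def aot_compile_if_supported(model, example_args):
        # Fall back to the uncompiled model when the API is missing.
        if not HAS_EXPORT_FOR_INFERENCE:
            return model
        # Assumes export_for_inference takes (model, example_args) like torch.export.export.
        return export_for_inference(model, example_args)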

Would appreciate your thoughts on how to best handle this compatibility issue.

Related PR: #1468

@cpuhrsch
Contributor

cpuhrsch commented Jan 8, 2025

Hm, if you can do without export, you can sidestep this for now, or you could wait for the next stable release. Unfortunately I can't offer more than that. So essentially you'd need to either comment out this code locally or guard it on the version using:

ao/torchao/utils.py, lines 366 to 374 at 5a0d662:

    def torch_version_at_least(min_version):
        return is_fbcode() or compare_versions(torch.__version__, min_version) >= 0

    TORCH_VERSION_AT_LEAST_2_6 = torch_version_at_least("2.6.0")
    TORCH_VERSION_AT_LEAST_2_5 = torch_version_at_least("2.5.0")
    TORCH_VERSION_AT_LEAST_2_4 = torch_version_at_least("2.4.0")
    TORCH_VERSION_AT_LEAST_2_3 = torch_version_at_least("2.3.0")
    TORCH_VERSION_AT_LEAST_2_2 = torch_version_at_least("2.2.0")
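
For example, a rough sketch of using that guard around the export (model and example_args stand in for the actual call site in aot_compile, and export_for_inference is assumed to take the same arguments as torch.export.export):

    from torchao.utils import TORCH_VERSION_AT_LEAST_2_6

    if TORCH_VERSION_AT_LEAST_2_6:
        # Nightly / 2.6+ path: the AOT export is available.
        from torch.export import export_for_inference
        exported = export_for_inference(model, example_args)
    else:
        # Stable (2.5.x and earlier) path: skip the export step.
        exported = None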

@cpuhrsch
Contributor

cpuhrsch commented Jan 8, 2025

@dongxiaolong Thank you for your interest in the project and code by the way :)

@jcaip added the triaged label Jan 8, 2025
@dongxiaolong
Contributor Author

@dongxiaolong Thank you for your interest in the project and code by the way :)

Thanks for your suggestions on version handling. I've already resolved the CUDA 12.4 requirement issue for the pre-release version.

The current challenge I'm facing is the lack of documentation for the AOT features, particularly the --load-fast and --save-fast parameters. Would it be possible to add some documentation about their usage and functionality?

Also, I really appreciate your help and this project - it has been incredibly helpful for my work. Thanks for maintaining it!

@cpuhrsch
Contributor

cpuhrsch commented Jan 8, 2025

@dongxiaolong Thanks! There's some documentation on what fast and furious mean under https://github.com/pytorch/ao/blob/49961013b2abc0c500c3cb516b00866d64938043/examples/sam2_amg_server/README.md
