Hi, VITA-1.5 looks great! When I run `python -m web_demo.server --model_path demo_VITA_ckpt --ip 0.0.0.0 --port 8081`, I expect two GPUs to be used (loading two models), but it appears that only one is being used. Can you help me with that?
Besides, there is another minor issue I ran into, which I sort of fixed myself:
`web_ability_demo.py` and `server.py` have the line `config_path = os.path.join(model_path, 'origin_config.json')`, but there is no `origin_config.json` in the checkpoint directory. I guess one just needs to rename the original config to `origin_config.json` and copy in the vLLM config, as in the sketch below.
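Here is a minimal sketch of the workaround I used, assuming the checkpoint directory is `demo_VITA_ckpt` and that you know where your vLLM-compatible config lives (the `vllm_config` path below is a placeholder for my local setup, not an official location):

```python
import os
import shutil

model_path = "demo_VITA_ckpt"
vllm_config = "path/to/vllm_config.json"  # placeholder: point this at your vLLM-compatible config

# Keep a copy of the original Hugging Face config under the name the demo expects.
orig_config = os.path.join(model_path, "config.json")
renamed_config = os.path.join(model_path, "origin_config.json")
if not os.path.exists(renamed_config):
    shutil.copy(orig_config, renamed_config)

# Replace config.json with the vLLM config so the server can load the model.
shutil.copy(vllm_config, orig_config)
```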