I’ve been using Onyx to chat with documents and recently integrated my self-hosted Qwen-2.5VL-7B model, which is running on vLLM’s OpenAI-compatible server. Everything works fine for processing PDFs and other document formats, but I’m running into issues when trying to upload images in the chatbot interface.
What’s Happening?
When I upload an image, I get an error from onyx/web/src/app/chat/ChatPage.tsx.
I tried adding qwen to MODEL_NAMES_SUPPORTING_IMAGE_INPUT in onyx/web/src/lib/llm/utils.ts, hoping that would enable image support.
But I am still getting the error.
Is adding qwen to MODEL_NAMES_SUPPORTING_IMAGE_INPUT the right approach? Does Onyx support image inputs for self-hosted models running on vLLM, or do I need to configure something else?
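To rule out Onyx, here is how I verified that the vLLM backend itself accepts OpenAI-style image input. This is only a sketch under assumptions: the endpoint URL, API key, and model name are placeholders for my deployment, and actually sending the request requires the `openai` package and a running vLLM server.

```python
import base64

def build_image_message(image_bytes: bytes, prompt: str) -> dict:
    """Build an OpenAI-style multimodal chat message with a base64 data-URL image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{b64}"},
            },
        ],
    }

# Sending it against the vLLM OpenAI-compatible server (placeholder URL/model):
# from openai import OpenAI
# client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
# resp = client.chat.completions.create(
#     model="Qwen/Qwen2-VL-7B-Instruct-AWQ",
#     messages=[build_image_message(open("test.png", "rb").read(), "Describe this image.")],
# )
```

If a request shaped like this works directly against vLLM, the problem is presumably in how Onyx decides whether the configured model supports image input.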
I am also facing a runtime error:
RuntimeError: No litellm entry found for openai/Qwen/Qwen2-VL-7B-Instruct-AWQ
How can I avoid the "No litellm entry found" error?
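For context, here is the workaround I tried for the missing-entry error: registering the model with LiteLLM via `litellm.register_model`, which adds an entry to its model cost/capability map. The parameter values (context length, costs, `supports_vision`) are my assumptions for a self-hosted deployment, not anything Onyx documents.

```python
# Assumed entry for a self-hosted vLLM model; all values below are guesses
# for my own deployment and should be adjusted.
custom_model_entry = {
    "openai/Qwen/Qwen2-VL-7B-Instruct-AWQ": {
        "max_tokens": 32768,           # assumed context length
        "input_cost_per_token": 0.0,   # self-hosted: no per-token billing
        "output_cost_per_token": 0.0,
        "litellm_provider": "openai",  # vLLM exposes an OpenAI-compatible API
        "mode": "chat",
        "supports_vision": True,       # advertise image-input support
    }
}

try:
    import litellm
    litellm.register_model(custom_model_entry)  # merges into litellm.model_cost
except ImportError:
    pass  # litellm not installed in this environment
```

I am not sure whether Onyx picks up entries registered this way, or whether it needs to happen at a specific point during startup.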
Would really appreciate any guidance on this! Thanks in advance.