Hello, I have never used LM Studio. If it is based on llama.cpp and can use GGUF for inference, then note that my changes have not been merged upstream yet; you may need to wait for the merge.
I am using the downloaded GGUF version, and LM Studio ships with llama.cpp built in. The vision model of minicpm-v-2.6 works fine, and the language model of minicpm-o-2.6 also works fine; only the vision model of minicpm-o-2.6 has the issue.
Is there an existing issue / discussion for this?
Is there an existing answer for this in FAQ?
Current Behavior
Without the -DLLAMA_METAL=OFF option, backend compilation problems appear; after bypassing the Metal backend build failure with the -DLLAMA_METAL=OFF option, the build instead reports that libraries cannot be found.
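For context, a minimal sketch of the build steps being described, assuming a standard out-of-tree CMake build; only the -DLLAMA_METAL=OFF flag itself comes from this report:

```bash
# Plain configure/build fails in the Metal backend on this machine.
cmake -B build
cmake --build build --config Release

# Bypassing Metal with the flag named in the report then surfaces
# "library not found" errors at configure/link time instead.
cmake -B build -DLLAMA_METAL=OFF
cmake --build build --config Release
```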
Comparing with MiniCPM-V-2.6 shows that a segmentation fault occurs during load: after switching to the mmproj-model-f16.gguf that MiniCPM-V-2.6 uses, inference works normally. Also, the minicpmv-cli at llama.cpp commit a813bad can run inference normally using this vision model together with MiniCPM-o.
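For comparison, a hedged sketch of the kind of minicpmv-cli invocation referred to above; the binary and flag names follow the llama.cpp llava examples and may differ by commit, and all file paths and the prompt are placeholders:

```bash
# Hypothetical invocation: -m points at the language model GGUF,
# --mmproj at the vision projector, --image at a test picture.
./minicpmv-cli \
  -m ./model/language-model-q4_k_m.gguf \
  --mmproj ./model/mmproj-model-f16.gguf \
  --image ./test.jpg \
  -p "Describe this image."
```

Expected Behavior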
On my machine, adding
list(APPEND CMAKE_PREFIX_PATH "/path/to/ffmpeg")
to the build configuration resolves this dependency problem.
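If editing CMakeLists.txt is inconvenient, the same search hint can, as far as I know, also be passed at configure time; the ffmpeg path is a placeholder:

```bash
# Setting CMAKE_PREFIX_PATH on the command line has the same effect as
# the list(APPEND ...) edit above: CMake searches this prefix for packages.
cmake -B build -DLLAMA_METAL=OFF -DCMAKE_PREFIX_PATH=/path/to/ffmpeg
```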
3. Check whether the mmproj-model-f16.gguf file is corrupted (possibly damaged by my own mishandling, to be confirmed; it seems this has also been raised in other issues); a hash check is sketched below.
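One quick way to test the corruption hypothesis is to hash the local file and compare it with the checksum published on the model's download page, assuming one is listed there:

```bash
# A mismatch against the published SHA-256 would confirm the file is
# corrupted or truncated; a match rules this explanation out.
# (On Linux, `sha256sum` works in place of `shasum -a 256`.)
shasum -a 256 mmproj-model-f16.gguf
```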
Steps To Reproduce

No response
Environment
Anything else?
No response