Issues: unslothai/unsloth
#1633: I cannot run unsloth/qwen2.5-7b-bnb-4bit because it is not supported in my current Unsloth version (opened Feb 7, 2025 by NguyenTrinh3008)
#1629: ValueError: Some modules are dispatched on the CPU or the disk (opened Feb 7, 2025 by Sweaterdog)
#1619: ORPO trainer reports loss without taking batch size/gradient accumulation into account (opened Feb 6, 2025 by Nazzaroth2) [label: currently fixing]
#1616: ModuleNotFoundError: No module named 'torch', even though the module really is installed (opened Feb 4, 2025 by lexasub)
#1615: [Docs Improvement] Improve documentation on how to export a model from Colab (opened Feb 4, 2025 by gaspardc-met) [labels: feature request, good first issue, help wanted]
#1614: [Question/Docs] How to use "Raw Corpus" text for the data prep (opened Feb 4, 2025 by gaspardc-met)
#1606: Weights only load failed when target_modules contain "embed_tokens", "lm_head" (opened Feb 3, 2025 by anhnh2002)
#1601: Should the quantization_config section of the merged_16bit model be removed? Otherwise it causes errors (opened Feb 2, 2025 by fzyzcjy)
#1599: Mistral Small: ValueError: Blockwise quantization only supports 16/32-bit floats, but got torch.uint8 (opened Jan 31, 2025 by DaddyCodesAlot)
#1594: [REQUEST] Is a 1.58-bit version of DeepSeek-R1-Distill-Qwen-32B-GGUF possible? (opened Jan 30, 2025 by Greatz08)