[FEATURE]: Is it Possible to integrate Liger-Kernel? #6047
Comments
I think this is a good attempt.
How does it compare with Apex's implementation? We've integrated some Apex CUDA kernels, and some of them are also implemented in Liger-Kernel.
Any good news? Thanks a lot.
I think they are short-handed wrapping up Zero Bubble and hybrid seq parallel, and then they will focus on acceleration integration?
Describe the feature
https://github.com/linkedin/Liger-Kernel
Liger Kernel is a collection of Triton kernels designed specifically for LLM training. It can effectively increase multi-GPU training throughput by 20% and reduce memory usage by 60%. They have implemented Hugging Face-compatible RMSNorm, RoPE, SwiGLU, CrossEntropy, FusedLinearCrossEntropy, and more to come.
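For reference, here is a minimal sketch of how Liger-Kernel's kernels can be applied to a Hugging Face model, based on the monkey-patching usage shown in the Liger-Kernel README (`apply_liger_kernel_to_llama`); exact function names and the model path are illustrative and may differ by Liger-Kernel version:

```python
# Sketch based on the patching API from the Liger-Kernel README; the function
# name and model path below are assumptions, not this project's integration.
import transformers
from liger_kernel.transformers import apply_liger_kernel_to_llama

# Monkey-patch the LLaMA modules (RMSNorm, RoPE, SwiGLU, CrossEntropy, ...)
# with Liger's fused Triton kernels before instantiating the model.
apply_liger_kernel_to_llama()

# Any LLaMA model loaded afterwards uses the patched kernels.
model = transformers.AutoModelForCausalLM.from_pretrained("path/to/llama-model")
```

An integration here would presumably wrap a patching step like this inside the existing model/kernel loading path, similar to how the Apex kernels are currently hooked in.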