docs: Added LLM example #1545
Conversation
Hi @ytjhai, thanks a lot for the PR. Would you be open to adding linux-64 to the platforms?
platforms = ["osx-arm64", "linux-64"]
# setting this avoids building scikit_learn from source (on arm)
[system-requirements]
macos = "12.0"
Even better would be having separate features and environments, where the system-requirement only applies to the macOS feature.
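A minimal sketch of what that split could look like in pixi.toml, assuming pixi's feature and environment syntax; the project, feature, and environment names are illustrative, not the ones used in the final example:

```toml
[project]
name = "llm-example"                  # hypothetical name
channels = ["conda-forge"]
platforms = ["osx-arm64", "linux-64"]

# macOS-only feature: the system-requirement is scoped here, so it does not
# affect the linux-64 solve, and scikit-learn wheels can be used instead of
# building from source on ARM
[feature.mac]
platforms = ["osx-arm64"]

[feature.mac.system-requirements]
macos = "12.0"

# Linux-only feature
[feature.linux]
platforms = ["linux-64"]

[environments]
mac = ["mac"]
linux = ["linux"]
```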
avoid building sk-learn from source for ARM
I don't currently have access to a Linux machine, so I can't 100% verify that this lock file would work on linux-64. Regarding older OSX environments, GPU strength was very patchy, and my personal use case was running some LLMs locally, since the M1 has good GPU support even in base models (like the Mac Mini). Even llama.cpp doesn't support that architecture on Macs, AFAIK.
I tested it with an AMD GPU (I assume it is using the CPU though) and it seems to work fine :)
Add linux-64 as a supported architecture
Updated description
Great! I made the above requested changes, adding linux-64 as a supported platform and the macOS system-requirement.
Awesome thanks!
One last tiny thing: could you maybe add a
Don't worry, I've added it :)
Thanks!
I noticed there had been a few issues, like #206 and #261, asking for a demo of Pixi with an ML stack. This example shows that it's possible with the current feature set to run Llama-index with llama.cpp on a local machine, serving a Mistral LLM with GPU support on an ARM M1 Mac. Hopefully this helps someone else get started with local LLM inference.
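For reference, a rough sketch of the kind of manifest the example describes, assuming the LLM stack is available from conda-forge; the exact package names, versions, and task definitions in the merged example may differ:

```toml
[project]
name = "llama-index-cpp"              # hypothetical; the merged example may use a different name
channels = ["conda-forge"]
platforms = ["osx-arm64", "linux-64"]

# pinning macOS avoids building scikit-learn from source on ARM
[system-requirements]
macos = "12.0"

[dependencies]
python = "3.11.*"
llama-index = "*"
llama-cpp-python = "*"

[tasks]
# hypothetical task name and entry point
start = "python main.py"
```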