Implement LLM pipeline at the AI runner side [60 LPT] #41
Comments
Hi, I'm interested in taking on this task. I've reviewed the spec thoroughly, and it seems achievable for me. I have a strong background in Python and am very familiar with FastAPI. Thanks.
Hey @benya7, Thanks a lot for your interest in our bounties! 🚀 This particular bounty has already been taken on by one of our core contributors as part of a larger initiative. You can check out the details here: AI Worker PR #137 and Go Livepeer. However, we will be posting more bounties soon, so stay tuned! 👍🏻 In the meantime, don't forget to join our Discord. It's an excellent place to stay updated and connect with other developers in our open-source ecosystem. Also, we have a Livepeer Grants program that features some larger tasks (i.e., >$5k) which might be of interest to an experienced developer like yourself. Thanks again, and we look forward to your contributions 🙏🏻!
This feature was implemented by @kyriediculous in livepeer/ai-worker#137, who spearheaded the creation of a Service Provider Entity (SPE) to drive Large Language Model (LLM) development on the Livepeer network. This SPE has just been successfully funded, as seen on the Livepeer Explorer here.
Overview
Important
This can be viewed as a retroactive bounty since both this one and the subsequent go-livepeer bounty were already completed by Nico after a discussion in the community chats. Unfortunately, I didn't have time to post the bounty earlier.
To enhance the capabilities of our AI subnet, we aim to implement LLM (Large Language Model) capabilities. With the recent release of the fully open-source Llama 3.1 model, this is the perfect opportunity to introduce advanced language processing functionality. Implementing Llama 3.1 will allow applications to leverage sophisticated language processing on a fully permissionless and open decentralized network, benefiting various applications within the ecosystem.
As the core AI team is currently focused on video-centric pipelines and core network improvements, we are calling on the community to help implement this crucial pipeline on the AI-worker side of the AI subnet. This implementation will not only provide a new pipeline but also enhance existing functionalities by incorporating language understanding into our pipelines. We are excited to see this capability on the Livepeer network 🚀.
Required Skillset
Bounty Requirements
Implement a /llm route and pipeline in the AI worker repository. This pipeline should be accessible on port 6007.
Scope Exclusions
The go-livepeer side of the integration is out of scope here; it is covered by a separate bounty valued at 30-40 LPT.
Implementation Tips
To understand how to create a new AI worker pipeline, you can refer to recent pull requests where new pipelines were added.
Additionally, make sure to:
- Run the runner/gen_openapi.py file to generate the updated OpenAPI spec.
- Run the make command to generate the necessary go-livepeer bindings, ensuring your implementation works seamlessly with the go-livepeer repository.
How to Apply
Thank you for your interest in contributing to our project 💛!
Warning
Please wait for the issue to be assigned to you before starting work. To prevent duplication of effort, submissions for unassigned issues will not be accepted.