Why watch the haystack when LLMs can find the needle for you!
The motivation behind this project came from a personal need: pinpointing where, in an hour-long video, the thing I am looking for actually appears. With that, my weekend plan was set, and I built this!
It saves you from watching an hour-long video just to find the one thing you were looking for, by putting a summary on top of a YouTube video.
📢 You do not need an OpenAI API key, but you do need one from HuggingFace.
This project uses the HuggingFace Inference API for LLMs.
- Create a `.env` file in the root of the repository
- Add `HF_TOKEN` to it (get your access token from your HuggingFace account settings)
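To sanity-check your setup, here is a minimal sketch of how the token can be consumed. This is an assumed wiring using `python-dotenv` and `huggingface_hub`; the repository's own code may organize this differently, and the model name is only an example.

```python
# Minimal sketch (assumption): load HF_TOKEN from .env and call the Inference API.
import os

from dotenv import load_dotenv              # python-dotenv
from huggingface_hub import InferenceClient

load_dotenv()  # reads HF_TOKEN from the .env file in the repository root

client = InferenceClient(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example model; any hosted model works
    token=os.environ["HF_TOKEN"],
)

# A one-off generation call to confirm the token is valid.
print(client.text_generation("Reply with the single word: ready", max_new_tokens=5))
```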
```sh
docker-compose build
docker-compose up
```
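Once the containers are up, the Streamlit dashboard should be reachable in your browser, typically at http://localhost:8501 (Streamlit's default port), assuming the compose file maps that port.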
- LangChain for everything RAG + LLM wrapper
- Streamlit for frontend dashboard
- HuggingFace/MistralAI for LLMs via Inference API
- Docker Compose for development / deployment
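To show how these pieces typically fit together, here is a sketch of a LangChain summarization flow over a YouTube transcript. It is an illustration only: the loader, splitter settings, model, and chain type are assumptions and not necessarily what this repository uses.

```python
# Illustrative sketch, not the project's actual pipeline.
import os

from dotenv import load_dotenv
from langchain_community.document_loaders import YoutubeLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEndpoint
from langchain.chains.summarize import load_summarize_chain

load_dotenv()

# 1. Pull the transcript of the video (requires youtube-transcript-api).
video_url = "https://www.youtube.com/watch?v=VIDEO_ID"  # replace with the video you want summarized
docs = YoutubeLoader.from_youtube_url(video_url).load()

# 2. Split the transcript so each chunk fits in the model's context window.
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# 3. LLM served by the HuggingFace Inference API (Mistral as an example).
llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    huggingfacehub_api_token=os.environ["HF_TOKEN"],
    max_new_tokens=512,
)

# 4. Map-reduce summarization over the transcript chunks.
chain = load_summarize_chain(llm, chain_type="map_reduce")
print(chain.invoke({"input_documents": chunks})["output_text"])
```

In the app itself, a flow like this would sit behind the Streamlit dashboard, with the video URL coming from user input instead of being hard-coded.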
I keep my work open source, but plagiarism is not acceptable. I spent a non-trivial amount of effort building and designing this, and I am proud of it! Forking this repository is welcome; I only ask for proper attribution by referencing https://techyaditya.xyz. Thank you!
You can report bugs on the issue tracker.
Built with ♥ by Aditya Kaushik under MIT License. If this code helps you in