LLM Weaver is a flexible library designed to interact with any LLM, with an emphasis on managing long conversations that exceed a model's maximum token limit, ensuring a continuous and coherent user experience.
This library is a Rust implementation of OpenAI's tactic for handling long conversations with a context-window-bound LLM.
Once a configured threshold of context tokens is reached, the library summarizes the entire conversation and begins a new conversation with the summarized context appended to the system instructions.
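To illustrate the idea, here is a minimal, self-contained sketch of that summarize-and-restart tactic. All names here (`Conversation`, `push`, the character-based token estimate, the stub summarizer) are illustrative assumptions for this sketch, not LLM Weaver's actual API; a real summarizer would call an LLM rather than count messages.

```rust
/// Illustrative conversation state: system instructions plus pending messages.
/// NOTE: hypothetical types/names, not LLM Weaver's real API.
struct Conversation {
    system: String,
    messages: Vec<String>,
    max_tokens: usize,
}

impl Conversation {
    fn new(system: &str, max_tokens: usize) -> Self {
        Self { system: system.to_string(), messages: Vec::new(), max_tokens }
    }

    /// Very rough token estimate: roughly one token per four characters.
    fn token_count(&self) -> usize {
        self.messages.iter().map(|m| m.len()).sum::<usize>() / 4
    }

    /// Append a message; once the token threshold is crossed, summarize the
    /// whole conversation, fold the summary into the system instructions,
    /// and start the message history fresh.
    fn push(&mut self, msg: &str, summarize: impl Fn(&[String]) -> String) {
        self.messages.push(msg.to_string());
        if self.token_count() > self.max_tokens {
            let summary = summarize(&self.messages);
            self.system = format!(
                "{}\nSummary of prior conversation: {}",
                self.system, summary
            );
            self.messages.clear();
        }
    }
}

fn main() {
    // Stub "summarizer" that just reports a count; a real one would call an LLM.
    let summarize = |msgs: &[String]| format!("{} prior messages", msgs.len());
    let mut convo = Conversation::new("You are a helpful assistant.", 20);
    for i in 0..10 {
        convo.push(&format!("message number {i} with some content"), summarize);
    }
    println!("messages retained: {}", convo.messages.len());
    println!("system: {}", convo.system);
}
```

The key design point is that the summary is appended to the system instructions rather than kept as an ordinary message, so it survives every subsequent reset and the model never loses the gist of earlier turns.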
Follow the crate level documentation for a detailed explanation of how to use the library.
If you are passionate about this project, please feel free to fork the repository and submit pull requests for enhancements, bug fixes, or additional features.
LLM Weaver is distributed under the MIT License, ensuring maximum freedom for using and sharing it in your projects.