How do I use it in vllm deployment #3
Thank you for bringing this to our attention. Unfortunately, the current version of vLLM does not support returning attention scores. However, we are pleased to say that this functionality is planned for the next release. In the meantime, we are working diligently to implement paged attention (the key feature of vLLM) as well as flash decoding. These enhancements aim to accelerate generation and decrease the GPU memory consumption of the KV cache. We appreciate your patience while we work on these developments.
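(For context, here is a rough back-of-the-envelope estimate of why the KV cache dominates GPU memory at long context. This is just an illustrative sketch; the Llama-2-7B-like dimensions and fp16 precision below are assumptions, not taken from this repo.)

```python
# Rough KV-cache size estimate (assumed Llama-2-7B-like dimensions, fp16).
num_layers   = 32
num_kv_heads = 32
head_dim     = 128
bytes_fp16   = 2

# Both keys and values are cached, hence the factor of 2.
bytes_per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_fp16
seq_len = 32_768  # e.g. a long-context prompt

print(f"KV cache per token: {bytes_per_token / 2**20:.2f} MiB")        # ~0.5 MiB
print(f"KV cache at {seq_len} tokens: {bytes_per_token * seq_len / 2**30:.1f} GiB")  # ~16 GiB
```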
@ChenxinAn-fdu OK, thanks for your response.
I have pushed the code for flash decoding, and it significantly decreases the memory consumption for decoding with the KV cache. It may be helpful for you.
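(This is not the repo's actual implementation, just a minimal PyTorch sketch of the flash-decoding idea: split the KV cache into chunks, attend to each chunk independently, then merge the partial outputs using their log-sum-exp normalizers.)

```python
import torch

def flash_decode_attention(q, K, V, chunk_size=1024):
    """Decode-time attention for a single query token over a long KV cache.

    q: (num_heads, head_dim)            current query
    K, V: (num_heads, seq_len, head_dim) cached keys / values
    """
    num_heads, seq_len, head_dim = K.shape
    scale = head_dim ** -0.5
    partial_out, partial_lse = [], []

    # Attend to each KV chunk independently (these can run in parallel on GPU).
    for start in range(0, seq_len, chunk_size):
        k = K[:, start:start + chunk_size]                  # (H, C, D)
        v = V[:, start:start + chunk_size]                  # (H, C, D)
        scores = torch.einsum("hd,hcd->hc", q, k) * scale   # (H, C)
        partial_lse.append(torch.logsumexp(scores, dim=-1)) # (H,)
        probs = torch.softmax(scores, dim=-1)                # (H, C)
        partial_out.append(torch.einsum("hc,hcd->hd", probs, v))  # (H, D)

    lse = torch.stack(partial_lse, dim=0)   # (num_chunks, H)
    out = torch.stack(partial_out, dim=0)   # (num_chunks, H, D)

    # Merge: weight each chunk's output by its share of the global softmax
    # normalizer, recovered from the per-chunk log-sum-exp values.
    weights = torch.softmax(lse, dim=0)                  # (num_chunks, H)
    return (weights.unsqueeze(-1) * out).sum(dim=0)      # (H, D)
```

The real kernel fuses these steps on the GPU; the sketch only shows the chunk-then-merge math that lets the cache be processed piecewise instead of materializing one huge attention row.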
Looking forward to the support in vLLM!
@ChenxinAn-fdu Does vLLM support DCA now? We'd like to use this feature in deployment.
@Shuai-Xie Hi, I left an issue in their official repo, but it seems that the current version of vLLM only supports returning the output tensor, without attention scores. If you do not need continuous batching, the current repo has implemented flash decoding.
How can I use this approach in vLLM deployment without training? Can you give me a specific example? Thanks.
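(Not an official answer, and not a vLLM integration, which the thread says is not available yet. A minimal training-free sketch with Hugging Face transformers, assuming a ChunkLlama-style monkey-patch helper; the module name `chunkllama_attn_replace`, the function `replace_with_chunkllama`, and the `pretraining_length` argument are assumptions about this repo's interface and may differ from the actual code.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed patch helper from this repo: swaps the model's standard attention
# for Dual Chunk Attention before the weights are loaded (no training needed).
from chunkllama_attn_replace import replace_with_chunkllama

replace_with_chunkllama(pretraining_length=4096)  # original context window of the base model

model_name = "meta-llama/Llama-2-7b-chat-hf"  # example checkpoint, replace as needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Summarize the following document: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```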