
[Bug]: ollama_chat status code error gives stringified generator object as error #7699

Open
retrodaredevil opened this issue Jan 12, 2025 · 0 comments
Labels
bug Something isn't working

What happened?

I'm doing some weird stuff with putting authorization in front of my Ollama endpoint, but that's beside the point.

The point is that, with my current configuration, the endpoint always returns a 401 status code (that's my problem to fix). When a status code other than 200 is returned, ollama_chat.py passes response.iter_lines() as the message parameter of OllamaError. You can see the problem here:

status_code=response.status_code, message=response.iter_lines()
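
(For context, here's a plain-Python illustration, not LiteLLM code, of why this produces an unreadable error: interpolating a generator object into a string just gives you its repr.)

def lines():
    yield "401 Unauthorized"
    yield "request was rejected"

# Passing the generator object straight into the message stringifies the object itself:
print(f"error: {lines()}")            # error: <generator object lines at 0x...>
# Exhausting it first gives the readable content:
print(f"error: {' '.join(lines())}")  # error: 401 Unauthorized request was rejected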

I think this is probably an easy fix: you just need to actually exhaust the iterator and convert its output into a readable string. However, I figured it's worth having some discussion around how to do that. Imagine that the response body is actually an entire web page (for whatever reason); we may not want to include all of that in the error message.
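
Something along these lines is what I have in mind. This is just a sketch; the helper name and the truncation limit are made up, not anything that exists in LiteLLM today:

def _readable_error_body(response, limit=2048):
    # Exhaust the iterator so the error message contains the body text,
    # not the repr of a generator object. Depending on the HTTP client,
    # iter_lines() may yield bytes or str, so handle both.
    parts = []
    for line in response.iter_lines():
        if isinstance(line, bytes):
            line = line.decode("utf-8", errors="replace")
        parts.append(line)
    text = "\n".join(parts)
    # Guard against huge bodies (e.g. a full HTML error page) flooding the message.
    if len(text) > limit:
        text = text[:limit] + "... (truncated)"
    return text

# The raise site could then become something like:
# raise OllamaError(
#     status_code=response.status_code,
#     message=_readable_error_body(response),
# )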

Additionally, this bug isn't really a big deal for me, but I figured someone out there might like a crack at an easy PR to merge in.

Relevant log output

Traceback (most recent call last):
  File "/home/lavender/.local/pipx/venvs/aider-chat/lib/python3.10/site-packages/aider/coders/base_coder.py", line 1255, in send_message
    yield from self.send(messages, functions=self.functions)
  File "/home/lavender/.local/pipx/venvs/aider-chat/lib/python3.10/site-packages/aider/coders/base_coder.py", line 1575, in send
    yield from self.show_send_output_stream(completion)
  File "/home/lavender/.local/pipx/venvs/aider-chat/lib/python3.10/site-packages/aider/coders/base_coder.py", line 1650, in show_send_output_stream
    for chunk in completion:
  File "/home/lavender/.local/pipx/venvs/aider-chat/lib/python3.10/site-packages/litellm/llms/ollama_chat.py", line 442, in ollama_completion_stream
    raise e
  File "/home/lavender/.local/pipx/venvs/aider-chat/lib/python3.10/site-packages/litellm/llms/ollama_chat.py", line 395, in ollama_completion_stream
    raise OllamaError(
litellm.llms.ollama_chat.OllamaError: <generator object Response.iter_lines at 0x7d42b036f7d0>

<generator object Response.iter_lines at 0x7d42b036f7d0>

Are you an ML Ops Team?

No

What LiteLLM version are you on?

Whatever version aider-chat (installed through pipx as of 2025-01-11) pulls in as its dependency

Twitter / LinkedIn details

No response

@retrodaredevil retrodaredevil added the bug Something isn't working label Jan 12, 2025