Integrate OpenAI #103
Conversation
Here I think it would be better if we could add all the other libraries and frameworks in.
def test_openai_chat_completion():
    trace_id = create_uuid()
    completion = openai.ChatCompletion.create(
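For context, a minimal sketch of how such a test might continue. This is illustrative, not the actual diff: it assumes create_uuid() is a test helper from this repo, reuses the call pattern quoted later in this thread, and the assertion is a placeholder.

    # Sketch only: trace_id would presumably be passed to the wrapper or used
    # to look up the trace afterwards; that part is omitted here.
    def test_openai_chat_completion():
        trace_id = create_uuid()
        completion = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "1 + 1 = "}],
            temperature=0,
        )
        assert len(completion.choices) > 0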
I just tested this out locally. openai is shown on my end as type any in the IDE, whereas when using the original openai SDK, I get autocompletions. Is there a way to solve that?
I didn't understand exactly. What's the issue, @maxdeichmann?
Here it's the same for me, @maxdeichmann, so how do I fix it?
setattr(api_resource_class, method, self.langfuse_modified(create_method))

modifier = OpenAILangfuse()
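For readers following along: the setattr line above is a monkeypatching pattern. A self-contained sketch of the idea follows; the method name replace_create is hypothetical, and the _log_result hook stands in for the PR's actual logging logic.

    import functools

    class OpenAILangfuse:
        # ... __init__ sets up self.langfuse as shown elsewhere in this PR ...

        def langfuse_modified(self, create_method):
            # Wrap the original create method so every call is logged to Langfuse.
            @functools.wraps(create_method)
            def wrapper(*args, **kwargs):
                result = create_method(*args, **kwargs)
                # self._log_result(result)  # logging hook, as discussed in this PR
                return result
            return wrapper

        def replace_create(self, api_resource_class, method="create"):
            # Grab the original (bound) classmethod, then swap in the wrapper,
            # so openai.ChatCompletion.create(...) transparently goes through it.
            create_method = getattr(api_resource_class, method)
            setattr(api_resource_class, method, self.langfuse_modified(create_method))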
Do you think we can instantiate this as a singleton? I fear our users will import this multiple times and then create multiple Langfuse objects under the hood. I think one instantiation of the Langfuse object would be ideal. Could we also expose the flush function from there? If yes, we can remove it above in the _log_result method and thereby save latency for our users.
Other than this, I think we are good to go. I just tested it on my machine as well - seems to work well :)
Thanks a lot for the change on the singleton! What do you think about concurrency? I just read this: https://stackabuse.com/creating-a-singleton-in-python/.
Also, can you add flush functionality to the module? Otherwise we hide the Langfuse object and our users are not able to flush before the program exits.
This could help: https://medium.com/analytics-vidhya/how-to-create-a-thread-safe-singleton-class-in-python-822e1170a7f6. I have made the change.
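For reference, a minimal sketch of the thread-safe singleton pattern from that article, applied here under the assumption that OpenAILangfuse owns the Langfuse client, that the client can be constructed from environment variables, and that a module-level flush() is the exposure discussed above; names are illustrative.

    import threading

    from langfuse import Langfuse

    class OpenAILangfuse:
        _instance = None
        _lock = threading.Lock()

        def __new__(cls):
            # Double-checked locking: the lock is only taken while the first
            # instance is being created, so concurrent callers always get the
            # same object (and therefore a single Langfuse worker thread).
            if cls._instance is None:
                with cls._lock:
                    if cls._instance is None:
                        cls._instance = super().__new__(cls)
                        cls._instance.langfuse = Langfuse()  # assumes env-based config
            return cls._instance

        def flush(self):
            self.langfuse.flush()

    # Module-level flush so users never have to touch the hidden instance.
    modifier = OpenAILangfuse()

    def flush():
        modifier.flush()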
I don't find a way to call flush directly using openai. In order to use it, we need to create an object of OpenAILangfuse. Do we need to do that?
Sorry @maxdeichmann, I didn't see the above comment. It's a useful approach, but the langfuse inside the openai wrapper will flush the inner processes, while the outer langfuse will handle processes defined outside the class, e.g. spans like span = langfuse.span(...) as you showed. So there won't be any conflict with flush() being used by two different instances of langfuse, right?
No worries :)
Yes, in theory you can create two different Langfuse objects; they won't interact with each other. However, each of them will create its own worker thread to take care of network requests, and I don't think that would be great for the performance of our users' applications.
Also, the above approach gives us much more flexibility in using the integration. Users could create generations in sub-spans of their trace, which would not be possible otherwise.
@maxdeichmann Sure, then we could go with this approach! There won't be any change in the OpenAILangfuse.
I think before merging we should have that in here. I can take that over on the weekend, if you want to.
langfuse = Langfuse()
span = langfuse.span(...)
completion = openai.ChatCompletion.create(
    langfuse=span, model="gpt-3.5-turbo", messages=[{"role": "user", "content": "1 + 1 = "}], temperature=0
)
langfuse.flush()
Sure, let me know if you need my help.
langfuse/integrations.py
class OpenAILangfuse:
    def __init__(self):
        self.langfuse = Langfuse(os.environ["LF_PK"], os.environ["LF_SK"], os.environ["HOST"])
This will throw an error if the key is not present, afaik. Is this intended? Rather use getenv() and provide fallback logic in case the key is not present? It's a personal preference, so feel free to ignore.
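A sketch of the getenv() variant suggested above; the environment variable names mirror the snippet under review, while the default host and the ValueError are illustrative choices, not part of this PR.

    import os

    from langfuse import Langfuse

    class OpenAILangfuse:
        def __init__(self):
            public_key = os.getenv("LF_PK")
            secret_key = os.getenv("LF_SK")
            # Hypothetical fallback: default to the hosted endpoint if HOST is unset.
            host = os.getenv("HOST", "https://cloud.langfuse.com")
            if not public_key or not secret_key:
                raise ValueError("LF_PK and LF_SK must be set for the OpenAI integration")
            self.langfuse = Langfuse(public_key, secret_key, host)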
@maxdeichmann Tests are failing because the OpenAI key is not found.
And @maxdeichmann, please do have a look at this as well. Thanks!
Yes, currently I'm running all tests that need missing API keys only locally. You can check the same behavior in the langchain tests and do the same.
@maxdeichmann I have made the changes. Can you please run the tests again? Thanks!
Co-authored-by: Max Deichmann <[email protected]>
Resolves langfuse/langfuse#187