Integrate OpenAI #103
This will throw an error if the key is not present, afaik. Is this intended? Should we rather use `getenv()` and provide fallback logic in case the key is not present? It's a personal preference, so feel free to ignore.
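The distinction the reviewer is pointing at: `os.environ[name]` raises `KeyError` for a missing variable, while `os.getenv` returns `None` (or a caller-supplied default), leaving room for fallback logic. A minimal sketch (the variable name `LANGFUSE_PUBLIC_KEY` is used only as an illustration, not taken from the PR's code):

```python
import os

def read_key(name, default=None):
    """Read an environment variable without raising on absence.

    os.environ[name] would raise KeyError for a missing variable;
    os.getenv lets the caller provide fallback logic instead.
    """
    return os.getenv(name, default)

# e.g. read_key("LANGFUSE_PUBLIC_KEY") is None when the key is unset,
# so the integration can decide how to handle the missing credential.
```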
Hi, if the keys are not present, that is handled in the `Langfuse` class.
Do you think we can instantiate this as a singleton? I fear our users will import this multiple times and then create multiple `Langfuse` objects under the hood.
I think one instantiation of the `Langfuse` object would be ideal. Could we also expose the `flush` function from there? If yes, we can remove it from the `_log_result` method and thereby save latency for our users.
Other than this, I think we are good to go. I just tested it on my machine as well - seems to work well :)
Thanks a lot for the change on the singleton! What do you think about concurrency? I just read this: https://stackabuse.com/creating-a-singleton-in-python/.
Also, can you add flush functionality to the module? Otherwise we hide the `langfuse` object and our users are not able to flush before the program exits.
This could help: https://medium.com/analytics-vidhya/how-to-create-a-thread-safe-singleton-class-in-python-822e1170a7f6. I have made the change.
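The linked article describes the double-checked-locking pattern. A minimal sketch of what such a thread-safe singleton could look like (the class body here is illustrative, not the PR's actual `OpenAILangfuse` implementation):

```python
import threading

class OpenAILangfuse:
    """Sketch of a thread-safe singleton via double-checked locking."""

    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:          # fast path: no lock once created
            with cls._lock:                # slow path: serialize first creation
                if cls._instance is None:  # re-check under the lock
                    cls._instance = super().__new__(cls)
        return cls._instance
```

The second `is None` check under the lock is what prevents two threads that both passed the first check from each creating an instance.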
I don't find a way to directly call `flush` using openai. In order to use it we need to create an object of `OpenAILangfuse`. Do we need to do that?
Sorry @maxdeichmann, I didn't see the above comment. It's a useful approach, but the `langfuse` instance inside the openai wrapper will flush the inner processes, while the outer `langfuse` will handle processes defined outside the class, e.g. the one you showed here: `span = langfuse.span(...)`. So there won't be any conflict with `flush()` being used by two different instances of `langfuse`, right?
No worries :)
Yes, in theory you can create two different `Langfuse` objects; they won't interact with each other. However, each of them will create its own worker thread to take care of network requests, and I don't think that would be great for the performance of our users' applications. Also, the above approach gives us much more flexibility in using the integration: users could create generations in sub-spans of their trace, which would not be possible otherwise.
@maxdeichmann Sure, then we could go with this approach!
There won't be any change in `OpenAILangfuse`.
I think before merging we should have that in here. I can take that over on the weekend, if you want to.
Sure, let me know if you need my help.
I just tested this out locally. `openai` is shown on my end as type `Any` in the IDE, whereas when using the original openai SDK I get autocompletions. Is there a way to solve that?
I didn't understand exactly what the issue is, @maxdeichmann.
`create` is of type `Any` here. If you `import openai` into `integrations.py`, this should be typed. Could you add that?
Here @maxdeichmann, for me it's the same - how do I fix it?
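One common way to keep IDE autocompletion when wrapping SDK calls is to wrap functions with `functools.wraps`, which copies the original's name, docstring, and `__wrapped__` reference so tools can recover the real signature instead of seeing `Any`. A sketch with a stand-in function (the `create` function below is a dummy, not the actual openai SDK, and the wrapping approach is a general suggestion, not the fix the PR ultimately used):

```python
import functools

def traced(fn):
    """Wrap a callable while preserving its metadata.

    functools.wraps copies __name__, __doc__, and __wrapped__ from fn,
    so IDEs and type checkers can still resolve the original signature.
    """
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # ... tracing/logging to Langfuse would happen here (omitted) ...
        return fn(*args, **kwargs)
    return wrapper

# Stand-in for the SDK call being wrapped (illustrative only).
def create(model, messages):
    """Pretend SDK call."""
    return {"model": model, "n": len(messages)}

create = traced(create)
```

After wrapping, `create.__name__` is still `"create"` and the original docstring survives, which is what keeps tooling from degrading to `Any`.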