
Integrate OpenAI #103

Merged: 26 commits into langfuse:main, Oct 22, 2023

Conversation

Dev-Khant (Contributor Author) commented Sep 23, 2023:

Here I think it would be better if we could add all the other libraries and frameworks in integrations.py. Let me know if it needs to be changed.


def test_openai_chat_completion():
    trace_id = create_uuid()
    completion = openai.ChatCompletion.create(
maxdeichmann (Member):

I just tested this out locally. openai shows up on my end as type Any in the IDE, whereas when using the original openai SDK, I get autocompletions. Is there a way to solve that?

Dev-Khant (Contributor Author):

I didn't exactly understand. What's the issue, @maxdeichmann?

maxdeichmann (Member):

[screenshot: create resolved as type Any in the IDE]

create is of type Any here. If you import openai into integrations.py, this should be typed. Could you add that?
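In other words (a sketch of the suggestion only, not the PR's final layout), integrations.py would import the real SDK at module level so the name it re-exports keeps its type information:

# integrations.py
import openai  # importing the genuine SDK keeps its type stubs visible to IDEs

# ... the setattr-based patching of openai.ChatCompletion.create from this PR
# would happen here; because `openai` is the real module, IDEs still resolve
# create with its original signature instead of Any.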

Dev-Khant (Contributor Author):

Here @maxdeichmann, for me it's the same, so how do I fix it?
[screenshots]

setattr(api_resource_class, method, self.langfuse_modified(create_method))


modifier = OpenAILangfuse()
maxdeichmann (Member):

Do you think we can instantiate this as a singleton? I fear our users will import this multiple times and then create multiple Langfuse objects under the hood.
I think one instantiation of the Langfuse object would be ideal. Could we also expose the flush function from there? If so, we can remove it above in the _log_result method and thereby save latency for our users.
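A minimal sketch of that idea (assuming the env-based Langfuse constructor used elsewhere in this PR; not the code that actually landed):

import os

from langfuse import Langfuse


class OpenAILangfuse:
    _instance = None  # shared across every import of the module

    def __new__(cls):
        # Reuse the existing instance so repeated imports don't create
        # multiple Langfuse clients (and worker threads) under the hood.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.langfuse = Langfuse(
                os.environ["LF_PK"], os.environ["LF_SK"], os.environ["HOST"]
            )
        return cls._instance

    def flush(self):
        # Re-expose flush so callers never need the inner client directly.
        self.langfuse.flush()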

maxdeichmann (Member):

Other than this, I think we are good to go. I just tested it on my machine as well; it seems to work well :)

maxdeichmann (Member):

Thanks a lot for the change on the singleton! What do you think about concurrency? I just read this: https://stackabuse.com/creating-a-singleton-in-python/.
Also, can you add flush functionality to the module? Otherwise we hide the Langfuse object and our users are not able to flush before the program exits.
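The linked article's concern is that checking for an existing instance without a lock is not thread-safe. A common remedy (again a sketch, not necessarily what the PR merged) is double-checked locking, plus a module-level flush helper so users never touch the hidden client:

import os
import threading

from langfuse import Langfuse


class OpenAILangfuse:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: the fast path skips the lock once the
        # instance exists; the lock stops two threads from racing through
        # the first check and creating two Langfuse clients.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    instance = super().__new__(cls)
                    instance.langfuse = Langfuse(
                        os.environ["LF_PK"], os.environ["LF_SK"], os.environ["HOST"]
                    )
                    cls._instance = instance
        return cls._instance


def flush_langfuse():
    # Hypothetical module-level helper: flush pending events before the
    # program exits without exposing the wrapped client itself.
    OpenAILangfuse().langfuse.flush()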

Dev-Khant (Contributor Author):

I don't see a way to call flush directly through openai. To use it, we would need to create an object of OpenAILangfuse. Do you want me to do that?

Dev-Khant (Contributor Author):

Sorry @maxdeichmann, I didn't see the above comment. It's a useful approach, but the Langfuse instance inside the openai wrapper will flush the inner processes, while the outer Langfuse instance will handle processes defined outside the class, e.g. span = langfuse.span(...) as you showed here. So there won't be any conflict with flush() being used by two different instances of Langfuse, right?

maxdeichmann (Member):

No worries :)

Yes, in theory you can create two different Langfuse objects; they won't interact with each other. However, each of them will create its own worker thread to take care of network requests, and I don't think that would be great for the performance of our users' applications.
Also, the above approach gives us much more flexibility in using the integration. Users could create generations in sub-spans of their trace, which would not be possible otherwise.

Dev-Khant (Contributor Author) commented Oct 19, 2023:

@maxdeichmann Sure, then we can go with this approach!
There won't be any change to OpenAILangfuse.

maxdeichmann (Member) commented Oct 20, 2023:

I think we should have that in here before merging. I can take it over on the weekend, if you want me to.

    langfuse = Langfuse()

    span = langfuse.span(...)

    completion = openai.ChatCompletion.create(
        langfuse=span, model="gpt-3.5-turbo", messages=[{"role": "user", "content": "1 + 1 = "}], temperature=0
    )
    langfuse.flush()
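On the wrapper side, this implies popping the langfuse keyword argument before forwarding the call to OpenAI. A rough sketch under that assumption, extending the singleton sketch above (the _log_result call is a hypothetical stand-in for the PR's logging helper):

class OpenAILangfuse:
    def langfuse_modified(self, create_method):
        def wrapper(*args, **kwargs):
            # Pop the extra kwarg so the real OpenAI SDK never sees it.
            parent_span = kwargs.pop("langfuse", None)
            result = create_method(*args, **kwargs)
            # Attach the generation to the caller's span when one is given,
            # so it lands inside their trace; otherwise fall back to the
            # wrapper's own client.
            target = parent_span if parent_span is not None else self.langfuse
            self._log_result(target, result)  # hypothetical helper signature
            return result

        return wrapper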

Dev-Khant (Contributor Author):

Sure, let me know if you need my help.


class OpenAILangfuse:
    def __init__(self):
        self.langfuse = Langfuse(os.environ["LF_PK"], os.environ["LF_SK"], os.environ["HOST"])
A reviewer commented:

This will throw an error if the key is not present, afaik. Is this intended? Rather use getenv() and provide fallback logic in case the key is not present? It's a personal preference, so feel free to ignore.
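For reference, a sketch of the getenv() alternative (the fallback host is an illustrative assumption, not a documented default):

import os

# os.environ["LF_PK"] raises KeyError when the variable is missing;
# os.getenv returns None (or a supplied default) instead.
public_key = os.getenv("LF_PK")
secret_key = os.getenv("LF_SK")
host = os.getenv("HOST", "https://cloud.langfuse.com")  # assumed fallback

if public_key is None or secret_key is None:
    raise ValueError("LF_PK and LF_SK must be set for the OpenAI integration")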

Dev-Khant (Contributor Author):

Hi, if the keys are not present, that is handled in the Langfuse class.
[screenshot]

Dev-Khant (Contributor Author):

@maxdeichmann Tests are failing because the OpenAI key is not found.

Dev-Khant (Contributor Author):

> @maxdeichmann Tests are failing because the OpenAI key is not found.

And @maxdeichmann, please have a look at this as well. Thanks.

maxdeichmann (Member):

> @maxdeichmann Tests are failing because the OpenAI key is not found.
>
> And @maxdeichmann, please have a look at this as well. Thanks.

Yes, tests that require API keys are currently run only locally. You can check the same behavior in the langchain tests and do the same here.
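One common way to express that (a sketch assuming pytest; not necessarily how the langchain tests do it):

import os

import pytest


# Skip API-dependent tests when the key is absent, so CI without secrets
# stays green while local runs still exercise the real call.
@pytest.mark.skipif("OPENAI_API_KEY" not in os.environ, reason="OpenAI key not set")
def test_openai_chat_completion():
    ...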

Dev-Khant (Contributor Author):

@maxdeichmann I have made the changes. Can you please run the tests again? Thanks.

maxdeichmann enabled auto-merge (squash) on October 22, 2023, 20:48
maxdeichmann merged commit ccd07ed into langfuse:main on Oct 22, 2023
6 checks passed
maxdeichmann added a commit that referenced this pull request on Oct 23, 2023

Co-authored-by: Max Deichmann <[email protected]>
Successfully merging this pull request may close these issues:

[LFC-179] [Python-SDK] OpenAI SDK Wrapper/Integration
3 participants