
Change thread queue to 100 and fix headers parsing bug #265

Merged

Conversation

amitgalitz
Member

Description

I changed the provision thread pool queue size to 100 because the current queue size of 10 is too low. Right now, if all the threads are busy and an 11th request comes in, the request will just be lost completely if we have no extra queue or re-route logic of our own. We currently have no re-route logic, but I think expanding the queue to 100 to allow more provisioning should be sufficient for now. We should revisit this after doing more intensive load testing.
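
For reference, a minimal sketch of how a fixed pool with the larger queue could be registered, assuming the plugin uses OpenSearch's FixedExecutorBuilder; the pool name, thread count, and settings prefix below are placeholders, not the plugin's actual values:

    import java.util.List;

    import org.opensearch.common.settings.Settings;
    import org.opensearch.common.util.concurrent.OpenSearchExecutors;
    import org.opensearch.threadpool.ExecutorBuilder;
    import org.opensearch.threadpool.FixedExecutorBuilder;

    // Hypothetical registration of the provision pool with a bounded queue of 100;
    // the pool name, thread count, and settings prefix are placeholders.
    public class ProvisionThreadPoolSketch {
        public static List<ExecutorBuilder<?>> executorBuilders(Settings settings) {
            return List.of(
                new FixedExecutorBuilder(
                    settings,
                    "flow_framework_provision",                        // assumed pool name
                    OpenSearchExecutors.allocatedProcessors(settings), // worker thread count
                    100,                                               // queue size raised from 10
                    "thread_pool.flow_framework_provision"             // assumed settings prefix
                )
            );
        }
    }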

I also added a String-to-Object map parser and builder so we can handle cases where we encounter an Object[] whose elements can be either Map<String, String> or Map<String, Map<String, String>>, as in the actions of a create connector request shown below (a parsing sketch follows the example):

    "actions": [
        {
            "action_type": "predict",
            "method": "POST",
            "url": "https://${parameters.endpoint}/v1/chat/completions",
            "headers": {
                "Authorization": "Bearer ${credential.openAI_key}"
            },
            "request_body": "{ \"model\": \"${parameters.model}\", \"messages\": ${parameters.messages} }"
        }
    ]
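
For illustration, a rough sketch of the kind of conversion this adds, using plain Java maps; the class and method names here are hypothetical, not the exact ones introduced in this PR:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical helper: converts one element of an Object[] into a
    // Map<String, Object> whose values are either plain strings (e.g. "method")
    // or nested Map<String, String> values (e.g. "headers").
    public class StringToObjectMapSketch {

        @SuppressWarnings("unchecked")
        public static Map<String, Object> toStringToObjectMap(Object element) {
            Map<String, Object> result = new HashMap<>();
            for (Map.Entry<?, ?> entry : ((Map<?, ?>) element).entrySet()) {
                Object value = entry.getValue();
                if (value instanceof Map) {
                    // nested case, e.g. "headers": { "Authorization": "Bearer ..." }
                    result.put(entry.getKey().toString(), new HashMap<>((Map<String, String>) value));
                } else {
                    // flat case, e.g. "method": "POST"
                    result.put(entry.getKey().toString(), value.toString());
                }
            }
            return result;
        }
    }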

I tested manually through API calls; however, I didn't add a unit test for the String-to-Object parsing because I didn't want to use reflection to check the object type here, since it's a generic object.

Issues Resolved

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@dbwiddis
Member

dbwiddis commented Dec 9, 2023

the request will just be lost completely if we have no extra queue or re-route logic of our own.

What does this mean? Does it fail silently? I thought an ExecutorService just blocked until a thread was available, in which case these should eventually time out, right? Or am I missing something?
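
For context, a plain ThreadPoolExecutor with a bounded queue does not block the caller by default: once the queue is full, the default AbortPolicy throws a RejectedExecutionException, and OpenSearch's fixed thread pools likewise reject rather than block. A small standalone sketch of that behavior:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            // 1 worker thread, queue capacity 1: the third submission has nowhere to go.
            ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new ArrayBlockingQueue<>(1));

            Runnable slowTask = () -> {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            };

            executor.execute(slowTask); // runs on the single worker
            executor.execute(slowTask); // sits in the queue
            try {
                executor.execute(slowTask); // queue full: rejected immediately, not blocked
            } catch (RejectedExecutionException e) {
                System.out.println("third task rejected: " + e);
            }

            executor.shutdown();
            executor.awaitTermination(5, TimeUnit.SECONDS);
        }
    }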

@joshpalis
Member

Attaching the related issue: #61

@amitgalitz amitgalitz merged commit bde10f5 into opensearch-project:feature/agent_framework Dec 12, 2023
10 checks passed
dbwiddis pushed a commit to dbwiddis/flow-framework that referenced this pull request Dec 15, 2023
…oject#265)

change thread queue to 100 and fix headers bug

Signed-off-by: Amit Galitzky <[email protected]>
dbwiddis pushed a commit that referenced this pull request Dec 18, 2023
change thread queue to 100 and fix headers bug

Signed-off-by: Amit Galitzky <[email protected]>
dbwiddis pushed a commit to dbwiddis/flow-framework that referenced this pull request Dec 18, 2023
…oject#265)

change thread queue to 100 and fix headers bug

Signed-off-by: Amit Galitzky <[email protected]>