From e59db2bb10e2b98d9194f86d10c5cbf8279706f9 Mon Sep 17 00:00:00 2001
From: Silvio Vasiljevic
Date: Thu, 14 Nov 2024 16:37:13 +0100
Subject: [PATCH] Add new bedrock docs

---
 content/en/references/configuration.md     |  3 ++-
 content/en/user-guide/aws/bedrock/index.md | 17 ++++++++++++++---
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/content/en/references/configuration.md b/content/en/references/configuration.md
index 0c124dc119..5fecb0be49 100644
--- a/content/en/references/configuration.md
+++ b/content/en/references/configuration.md
@@ -94,7 +94,8 @@ This section covers configuration options that are specific to certain AWS servi
 
 | Variable | Example Values | Description |
 | - | - | - |
-| `LOCALSTACK_ENABLE_BEDROCK` | `1` | Use the Bedrock provider |
+| `BEDROCK_PREWARM` | `0` (default) \| `1` | Pre-warm the Bedrock engine on LocalStack startup instead of on demand. |
+| `DEFAULT_BEDROCK_MODEL` | `qwen2.5:0.5b` (default) | The model used to handle text model invocations in Bedrock. Any text-based model available for Ollama can be used. |
 
 ### BigData (EMR, Athena, Glue)
 
diff --git a/content/en/user-guide/aws/bedrock/index.md b/content/en/user-guide/aws/bedrock/index.md
index ef642e157c..4329d72bbb 100644
--- a/content/en/user-guide/aws/bedrock/index.md
+++ b/content/en/user-guide/aws/bedrock/index.md
@@ -25,7 +25,11 @@ We will demonstrate how to use Bedrock by following these steps:
 ### List available foundation models
 
 You can view all available foundation models using the [`ListFoundationModels`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_ListFoundationModels.html) API.
-This will show you which models are available for use in your local environment.
+This will show you which models are available on AWS Bedrock.
+{{< callout "note" >}}
+The model actually used for emulation will differ from the ones defined in this list.
+You can select the model to use with the `DEFAULT_BEDROCK_MODEL` environment variable.
+{{< / callout >}}
 
 Run the following command:
 
@@ -33,10 +37,17 @@ Run the following command:
 $ awslocal bedrock list-foundation-models
 {{< / command >}}
 
+### Pre-warming the Bedrock engine
+
+Starting the Bedrock engine can take some time.
+By default, it is only started once you send a request to one of the `bedrock-runtime` APIs.
+If you want the engine to start when LocalStack starts, to avoid a long wait on your first request, set the `BEDROCK_PREWARM` flag.
+
 ### Invoke a model
 
 You can use the [`InvokeModel`](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModel.html) API to send requests to a specific model.
-In this example, we'll use the Llama 3 model to process a simple prompt.
+In this example, we select the Llama 3 model to process a simple prompt.
+However, the request is actually handled by the model defined in the `DEFAULT_BEDROCK_MODEL` environment variable.
 
 Run the following command:
 
@@ -75,5 +86,5 @@ $ awslocal bedrock-runtime converse \
 
 ## Limitations
 
-* LocalStack Bedrock implementation is mock-only and does not run any LLM model locally.
+* The LocalStack Bedrock implementation currently supports only text-based models.
 * Currently, GPU models are not supported by the LocalStack Bedrock implementation.
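The two configuration options introduced in this patch could be exercised together as follows. This is a sketch, not part of the patch: it assumes a LocalStack image with Bedrock support, the `awslocal` wrapper installed, and uses `meta.llama3-8b-instruct-v1:0` purely as an illustrative AWS-side model ID.

```shell
# Start LocalStack with the Bedrock engine pre-warmed and an explicit
# default model (any text-based model available for Ollama should work).
BEDROCK_PREWARM=1 \
DEFAULT_BEDROCK_MODEL=qwen2.5:0.5b \
localstack start

# In another terminal: list the foundation models known to the emulator.
awslocal bedrock list-foundation-models

# Send a prompt. The model ID below is the AWS-side identifier; the
# request is actually served by the model set in DEFAULT_BEDROCK_MODEL.
awslocal bedrock-runtime converse \
  --model-id "meta.llama3-8b-instruct-v1:0" \
  --messages '[{"role": "user", "content": [{"text": "Say hello!"}]}]'
```

Because the response always comes from the locally configured model, the `--model-id` only has to be a syntactically valid Bedrock identifier, not the model you expect to answer.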