Incorrect Model Selection in Anthropic API Integration #124
Comments
The issue you're encountering is that your system is ignoring the user-specified model (claude-3-haiku) and silently using a more expensive default instead.

1. Ensure Proper Model Selection in API Request

The most common reason this issue occurs is incorrect configuration or incorrect API request formatting. You should verify that you're correctly passing the desired model to the Anthropic API request.
Example API Request:

When making an API request, the model parameter should look like this:

import requests
headers = {
    'x-api-key': 'YOUR_API_KEY',
    'anthropic-version': '2023-06-01',
    'content-type': 'application/json',
}
data = {
    'model': 'claude-3-haiku-20240307',  # Ensure this is the model you want to use
    'max_tokens': 100,
    'messages': [
        {'role': 'user', 'content': 'Your input prompt goes here...'},
    ],
}
response = requests.post('https://api.anthropic.com/v1/messages', json=data, headers=headers)
print(response.json())
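If you want to confirm at runtime which model actually handled the call, the Messages API response body echoes the model that served the request, so you can check it directly. A minimal sketch, reusing the response object from the snippet above (the claude-3-haiku prefix check is just an illustration):

result = response.json()
served_model = result.get('model', '')
if not served_model.startswith('claude-3-haiku'):
    # The request was served by a different model than the one you configured
    print(f"Warning: request served by {served_model!r} instead of the configured model.")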
2. Check for Hardcoded Defaults in Your Code

If the model is still being overridden, some part of your code, or the library you're using, may default to claude-3-opus regardless of what the caller requests.
Example:

# Check if the model is set correctly in all cases
model = config.get('model', 'claude-3-haiku-20240307')  # This will use claude-3-haiku if no model is passed
# Ensure this is passed correctly in the API request
data = {
    'model': model,
    'max_tokens': 100,
    'messages': [{'role': 'user', 'content': 'your input prompt'}],
}

3. Audit Your API Usage in the Anthropic Console

Since you confirmed the issue through the Anthropic console, make sure you are correctly tracking API usage and metrics. Double-check that the model shown in the usage dashboard matches the model you intended to request.
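One way to make this audit easier is to log the model name alongside every outgoing request in your own application, so your logs can be correlated with the usage metrics shown in the Anthropic console. A minimal sketch, with an illustrative helper name:

import logging

logger = logging.getLogger('anthropic_client')

def log_outgoing_request(model: str, max_tokens: int) -> None:
    # Record which model each request is about to use, so application logs
    # can be compared against the console's per-model usage metrics.
    logger.info('Sending Anthropic request: model=%s max_tokens=%d', model, max_tokens)

# Usage, just before the requests.post(...) call shown earlier:
# log_outgoing_request(data['model'], data['max_tokens'])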
4. Add Warnings for Expensive Model Use

To avoid unexpected costs in the future, it's a good idea to add a mechanism in your code to alert the user whenever a more expensive model (such as claude-3-opus) is selected.

Example:

if model.startswith('claude-3-opus'):
print("Warning: You are using the more expensive model (claude-3-opus). Consider switching to claude-3-haiku for lower costs.") Alternatively, you could raise an alert or log the event so users are aware. 5. Implement Configuration Options for Default ModelIf you want to make the default model configurable (i.e., use Example:# Configuration file or environment variable to set default model
DEFAULT_MODEL = 'claude-3-haiku-20240307'

def get_model_from_config():
    # Retrieve model from config or environment
    return os.getenv('MODEL', DEFAULT_MODEL)
# Make sure the correct model is being passed to the API
model = get_model_from_config()
data = {
    'model': model,
    'max_tokens': 100,
    'messages': [{'role': 'user', 'content': 'Your input prompt...'}],
}
response = requests.post('https://api.anthropic.com/v1/messages', json=data, headers=headers)

In this example, you can configure the default model in an environment variable or a configuration file, so claude-3-haiku is used unless someone explicitly opts into a different model.

6. Monitor Usage for Unexpected Costs

After implementing the fix, it's important to keep monitoring the API usage to ensure that the cheaper model is actually the one being billed (a minimal monitoring sketch follows at the end of this comment).

By following these steps, you should be able to fix the issue of incorrect model selection and prevent unexpected higher costs due to the use of the more expensive claude-3-opus model.
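As a concrete starting point for the monitoring suggested in step 6, here is a minimal sketch that reports per-request token usage from the Messages API response body; the opus warning is purely illustrative:

def report_usage(response_json: dict) -> None:
    # The Messages API response includes the model that served the request
    # and a usage block with input/output token counts.
    model = response_json.get('model', 'unknown')
    usage = response_json.get('usage', {})
    print(f"model={model} "
          f"input_tokens={usage.get('input_tokens', 0)} "
          f"output_tokens={usage.get('output_tokens', 0)}")
    if model.startswith('claude-3-opus'):
        print('Warning: this request was served by the more expensive opus model.')

# Usage:
# report_usage(response.json())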
Hey @micedevai, you can see the PR here
Hi @diekotto, thanks for pointing this out!
* feat: Add model selection support for Anthropic handler

  The Anthropic handler currently uses a hardcoded model (claude-3-opus-20240229). This change allows model selection through configuration, matching the behavior of the OpenAI handler.

  Changes:
  - Remove hardcoded model string
  - Use config loader to get model name from configuration
  - Update model settings initialization

* Fix anthropic tests
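For anyone reading along, here is a rough sketch of what a change like this can look like; all names here (AnthropicHandler, the config keys, the default model) are illustrative assumptions, not the actual code from the PR:

class AnthropicHandler:
    """Illustrative sketch only, not the repository's real handler."""

    DEFAULT_MODEL = 'claude-3-haiku-20240307'  # assumed fallback, not necessarily the project's default

    def __init__(self, config: dict):
        # Before the fix: the model was hardcoded, e.g. 'claude-3-opus-20240229'.
        # After the fix: the model comes from configuration, mirroring the OpenAI handler.
        self.model = config.get('anthropic', {}).get('model', self.DEFAULT_MODEL)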
@micedevai @diekotto the fix is merged, thanks all!
Incorrect Model Selection in Anthropic API Integration
Issue Description
During testing with Anthropic API integration, I encountered two critical issues:
- The Anthropic handler ignores the user-specified model and always calls a hardcoded model (claude-3-opus-20240229).
- This leads to unexpectedly higher costs, since the hardcoded model is much more expensive than the intended claude-3-haiku.
Technical Details
The Anthropic handler hardcodes the model string claude-3-opus-20240229 instead of reading it from configuration, unlike the OpenAI handler, which respects the configured model.
Impact
API usage is billed against the more expensive claude-3-opus model, producing unexpected costs and usage metrics that do not match the requested model.
Suggested Fixes
Remove the hardcoded model string and select the model through configuration, matching the behavior of the OpenAI handler. Optionally, warn when a more expensive model is selected.
Additional Context
This issue was discovered through API usage monitoring in the Anthropic console, where the usage metrics showed unexpected model utilization.