ImportError: "Please install google-generativeai and 'vertexai' to use Google's API" Despite Both Libraries Being Installed. #5132
Comments
We just released 0.4.3, where does the 0.7 come from? 😉 Please uninstall it. Also, Gemini has an OpenAI-compatible endpoint, so it should be available through OpenAIChatCompletionClient, see the example: https://microsoft.github.io/autogen/dev/reference/python/autogen_ext.models.openai.html#autogen_ext.models.openai.OpenAIChatCompletionClient For your use case, you can use a two-agent chat with RoundRobinGroupChat instead.
`pip show pyautogen` confirmed it was installed. I then uninstalled pyautogen and installed autogen (otherwise it throws a module 'autogen' not found error). I also upgraded autogen-agentchat and ran `pip show autogen-agentchat`. Still getting the same error.
A couple of things. First, we don't publish to pyautogen. Second, your code is using the v0.2 API, so you need to pin your autogen-agentchat version to v0.2 (`autogen-agentchat~=0.2`):

```python
import autogen
from autogen import AssistantAgent

config_list = [
    {
        "model": "gemini-1.5-flash",
        "api_type": "openai",
        "base_url": "https://generativelanguage.googleapis.com/v1beta/openai/",
    }
]

code_generator = AssistantAgent(
    name="code_generator",
    llm_config={
        "config_list": config_list,
        "seed": 42,
        "temperature": 0.7
    },
    system_message="""You are an expert code generator. Generate clean and well-documented code based on the requirements provided.""",
    max_consecutive_auto_reply=2
)
```

We are using the OpenAI-compatible endpoint here. The native Gemini support in v0.2 is currently broken, as you mentioned, and we don't have the cycles to fix it at the moment. If you would like to contribute a fix, that would be great.

For the v0.4 API, update your code to the following:

```python
import asyncio
import os

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="gemini-1.5-flash",
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
        api_key=os.environ["GEMINI_API_KEY"],
        model_info={
            "vision": True,
            "function_calling": True,
            "json_output": True,
            "family": "unknown",
        },
    )
    agent = AssistantAgent(
        "assistant",
        model_client=model_client,
    )
    print(await agent.run(task="Say 'Hello World!'"))


asyncio.run(main())
```

See Gemini's OpenAI compatibility: https://ai.google.dev/gemini-api/docs/openai

Native Gemini client support in v0.4 is underway; follow #5118.

Migration guide: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/migration-guide.html. Migrating to v0.4 is recommended.
What happened?
While running the script autogen_code_reviewer.py, which uses the autogen library, the following error occurred:

```
ImportError: Please install google-generativeai and 'vertexai' to use Google's API.
```

This error persists even after installing both google-generativeai and vertexai in the virtual environment. I verified both libraries using the `pip show` command, and the script was run in a fresh virtual environment with all packages installed. The same code was running two days ago, but it suddenly started throwing the above error.
What did you expect to happen?
The script should have executed without throwing the ImportError, since the necessary dependencies (google-generativeai and vertexai) were installed successfully.
How can we reproduce it (as minimally and precisely as possible)?
1. Activate the virtual environment.
2. Install the necessary dependencies:

   ```shell
   pip install pyautogen google-generativeai vertexai autogen-agentchat~=0.2
   ```

3. Run the following script:

   ```python
   import autogen
   from autogen import AssistantAgent, UserProxyAgent

   config_list = [
       {
           "model": "gemini-1.5-pro-002",
           "api_type": "google",
           "project_id": "my_project_id",
           "location": "my_location"
       }
   ]

   code_generator = AssistantAgent(
       name="code_generator",
       llm_config={
           "config_list": config_list,
           "seed": 42,
           "temperature": 0.7
       },
       system_message="""You are an expert code generator. Generate clean and well-documented code based on the requirements provided.""",
       max_consecutive_auto_reply=2
   )
   ```
Note: I authenticated with GCP using `gcloud auth application-default login` to use the Gemini model.
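As an aside not from the original report: a quick way to confirm whether the imports the error message complains about are actually resolvable in the active environment is to probe them with `importlib`. A minimal sketch (the module names in the final call are illustrative; for this issue you would check `google.generativeai` and `vertexai`):

```python
import importlib.util


def missing_modules(names: list[str]) -> list[str]:
    """Return the names that cannot be resolved to an importable module."""
    missing = []
    for name in names:
        try:
            found = importlib.util.find_spec(name) is not None
        except ModuleNotFoundError:
            # find_spec raises for dotted names whose parent package is absent
            found = False
        if not found:
            missing.append(name)
    return missing


print(missing_modules(["json", "no_such_module_abc"]))  # ['no_such_module_abc']
```

If `missing_modules(["google.generativeai", "vertexai"])` comes back empty while the ImportError still fires, the problem is more likely a version check or a shadowed `autogen` package than a genuinely missing dependency.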
AutoGen version
pyautogen version: 0.7.1
Which package was this bug in
Core
Model used
gemini-1.5-pro-002
Python version
3.12.5
Operating system
Windows 11
Any additional info you think would be helpful for fixing this bug
No response