Commit

Doc/update readme (#115)
* chore: modify CrewAI LLM settings

* fix: fix the ModelSelect component and ensure the API key is no longer exposed in frontend data

* feat(crewai): integrate CrewAI node with model providers

- Add CrewAI node implementation with sequential/hierarchical process support
- Add init_crewai_model to model providers for CrewAI LLM initialization
- Remove temperature parameter from CrewAI model initialization
- Align model provider configuration with existing LLM node pattern
- Use consistent model info retrieval from database

* fix: 'tuple' object has no attribute 'ai_model_name'

* doc: update readme

* doc: add tools.md to show "How to Add Custom Tools"

* doc: highlight the tools introduction

* doc: add Add_New_Model_Providers_Guide

* doc: update layout
Onelevenvy authored Nov 11, 2024
1 parent 434656b commit 8fbacf8
Showing 5 changed files with 470 additions and 14 deletions.
221 changes: 221 additions & 0 deletions Add_New_Model_Providers_Guide.md
# 🤖 Models Guide

## Supported Models

Flock currently supports various model providers and their models:

### OpenAI
- GPT-4 Series
  - gpt-4
  - gpt-4-0314
  - gpt-4-32k
  - gpt-4-32k-0314
- GPT-3.5 Series
  - gpt-3.5-turbo
  - gpt-3.5-turbo-16k
- Others
  - gpt-4o-mini

### ZhipuAI
- GLM-4 Series
  - glm-4-alltools
  - glm-4-flash
  - glm-4-0520
  - glm-4-plus
  - glm-4
- Vision Models
  - glm-4v-plus
  - glm-4v
- Embedding Models
  - embedding-3

### Qwen
- Chat Models
  - qwen2-57b-a14b-instruct
  - qwen2-72b-instruct
- Vision Models
  - qwen-vl-plus
- Embedding Models
  - text-embedding-v1/v2/v3

### Siliconflow
- Qwen Series
  - Qwen/Qwen2-7B-Instruct

### Ollama
- Llama Series
  - llama3.1:8b

## How to Add New Model Support

You can easily add support for new model providers by following these steps:

### 1. Create Provider Directory

Create a new directory under `backend/app/core/model_providers/` with your provider name:

```bash
mkdir backend/app/core/model_providers/your_provider_name
```

### 2. Create Configuration File

Inside your provider directory, create a `config.py` file:

```python
from langchain_openai import ChatOpenAI  # or another appropriate base class
from crewai import LLM

from app.models import ModelCategory, ModelCapability

# Basic provider configuration
PROVIDER_CONFIG = {
    "provider_name": "Your Provider Name",
    "base_url": "https://api.your-provider.com/v1",
    "api_key": "fake_api_key",  # Default placeholder
    "icon": "provider_icon",
    "description": "Your Provider Description",
}

# Define supported models
SUPPORTED_MODELS = [
    {
        "name": "model-name-1",
        "categories": [ModelCategory.LLM, ModelCategory.CHAT],
        "capabilities": [],
    },
    {
        "name": "model-name-2",
        "categories": [ModelCategory.LLM, ModelCategory.CHAT],
        "capabilities": [ModelCapability.VISION],  # For models with vision capabilities
    },
    {
        "name": "embedding-model",
        "categories": [ModelCategory.TEXT_EMBEDDING],
        "capabilities": [],
    },
]


def init_model(model: str, temperature: float, openai_api_key: str, openai_api_base: str, **kwargs):
    """Initialize a model for standard use."""
    model_info = next((m for m in SUPPORTED_MODELS if m["name"] == model), None)
    if model_info and ModelCategory.CHAT in model_info["categories"]:
        return ChatOpenAI(
            model=model,
            temperature=temperature,
            openai_api_key=openai_api_key,
            openai_api_base=openai_api_base,
            **kwargs,
        )
    else:
        raise ValueError(f"Model {model} is not supported as a chat model.")


def init_crewai_model(model: str, openai_api_key: str, openai_api_base: str, **kwargs):
    """Initialize a model for CrewAI use."""
    model_info = next((m for m in SUPPORTED_MODELS if m["name"] == model), None)
    if model_info and ModelCategory.CHAT in model_info["categories"]:
        return LLM(
            model=f"provider_name/{model}",  # Format: provider/model
            base_url=openai_api_base,
            api_key=openai_api_key,
            **kwargs,
        )
    else:
        raise ValueError(f"Model {model} is not supported as a chat model.")
```

### 3. Model Categories and Capabilities

Available model categories:
```python
from enum import Enum

class ModelCategory(str, Enum):
    LLM = "llm"
    CHAT = "chat"
    TEXT_EMBEDDING = "text-embedding"
    RERANK = "rerank"
    SPEECH_TO_TEXT = "speech-to-text"
    TEXT_TO_SPEECH = "text-to-speech"
```

Available capabilities:
```python
class ModelCapability(str, Enum):
    VISION = "vision"
```
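For illustration, here is how a provider's `SUPPORTED_MODELS` list (from step 2) can be filtered by these enums. This snippet is not part of Flock itself; it just uses the `SUPPORTED_MODELS`, `ModelCategory`, and `ModelCapability` names from the examples above:

```python
# Illustrative only: filter the SUPPORTED_MODELS list from step 2
# by category and capability.
chat_models = [
    m["name"]
    for m in SUPPORTED_MODELS
    if ModelCategory.CHAT in m["categories"]
]
vision_models = [
    m["name"]
    for m in SUPPORTED_MODELS
    if ModelCapability.VISION in m["capabilities"]
]
```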

### 4. Auto-Registration

The `ModelProviderManager` will automatically discover and register your new provider when Flock starts up. It:
- Scans the model_providers directory
- Loads provider configurations
- Registers initialization functions
- Makes models available in the system
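The exact manager code lives in the Flock backend; the following is only a minimal sketch of what that discovery step could look like, assuming each provider package exposes the `PROVIDER_CONFIG`, `SUPPORTED_MODELS`, `init_model`, and `init_crewai_model` names shown above:

```python
# A minimal sketch of provider auto-discovery — not Flock's actual code.
import importlib
import pkgutil


def discover_providers(package_name: str = "app.core.model_providers") -> dict:
    """Scan the providers package and collect each provider's configuration."""
    providers = {}
    package = importlib.import_module(package_name)
    for module_info in pkgutil.iter_modules(package.__path__):
        if not module_info.ispkg:
            continue  # providers are directories, not plain modules
        config = importlib.import_module(f"{package_name}.{module_info.name}.config")
        providers[config.PROVIDER_CONFIG["provider_name"]] = {
            "config": config.PROVIDER_CONFIG,
            "models": config.SUPPORTED_MODELS,
            "init_model": config.init_model,
            "init_crewai_model": config.init_crewai_model,
        }
    return providers
```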

### Best Practices

1. **Configuration**: Keep provider-specific configuration in the `config.py` file
2. **Model Support**: Clearly define which models are supported and their capabilities
3. **Error Handling**: Include proper error handling in initialization functions
4. **Documentation**: Provide clear descriptions for your provider and models
5. **Testing**: Test both standard and CrewAI initialization paths

### Example Implementation

Here's a complete example for a new provider:

```python
from langchain_openai import ChatOpenAI
from crewai import LLM

from app.models import ModelCategory, ModelCapability

PROVIDER_CONFIG = {
    "provider_name": "NewAI",
    "base_url": "https://api.newai.com/v1",
    "api_key": "fake_api_key",
    "icon": "newai_icon",
    "description": "NewAI - Next Generation Language Models",
}

SUPPORTED_MODELS = [
    {
        "name": "newai-chat-large",
        "categories": [ModelCategory.LLM, ModelCategory.CHAT],
        "capabilities": [],
    },
    {
        "name": "newai-vision",
        "categories": [ModelCategory.LLM, ModelCategory.CHAT],
        "capabilities": [ModelCapability.VISION],
    },
]


def init_model(model: str, temperature: float, openai_api_key: str, openai_api_base: str, **kwargs):
    model_info = next((m for m in SUPPORTED_MODELS if m["name"] == model), None)
    if not model_info:
        raise ValueError(f"Model {model} is not supported")

    if ModelCategory.CHAT in model_info["categories"]:
        return ChatOpenAI(
            model=model,
            temperature=temperature,
            openai_api_key=openai_api_key,
            openai_api_base=openai_api_base,
            **kwargs,
        )
    else:
        raise ValueError(f"Model {model} is not supported as a chat model")


def init_crewai_model(model: str, openai_api_key: str, openai_api_base: str, **kwargs):
    model_info = next((m for m in SUPPORTED_MODELS if m["name"] == model), None)
    if not model_info:
        raise ValueError(f"Model {model} is not supported")

    if ModelCategory.CHAT in model_info["categories"]:
        return LLM(
            model=f"newai/{model}",
            base_url=openai_api_base,
            api_key=openai_api_key,
            **kwargs,
        )
    else:
        raise ValueError(f"Model {model} is not supported as a chat model")
```
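For reference, a hypothetical call site for these two functions (the key and base URL below are placeholders; in Flock the real values come from the model configuration stored in the database):

```python
# Hypothetical usage — the key and base URL are placeholders.
chat_model = init_model(
    model="newai-chat-large",
    temperature=0.7,
    openai_api_key="your-real-key",
    openai_api_base="https://api.newai.com/v1",
)
crew_llm = init_crewai_model(
    model="newai-chat-large",
    openai_api_key="your-real-key",
    openai_api_base="https://api.newai.com/v1",
)
```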
174 changes: 174 additions & 0 deletions Add_New_Tools_Guide.md
# 🛠️ Tools Guide

## Built-in Tools

Flock comes with several built-in tools:

### AI Service Tools
- **Web Search Pro**: Search the internet using ZhipuAI's web search capabilities
- **Qingyan Assistant**: A versatile AI assistant that can help with various tasks including:
  - Data analysis
  - Creating flowcharts
  - Mind mapping
  - Prompt engineering
  - AI drawing
  - AI search
- **Image Understanding**: Analyze and understand images using ZhipuAI's vision capabilities

### Image Generation Tools
- **Spark Image Generation**: Generate images using Spark's API
- **Siliconflow Image Generation**: Generate images using Siliconflow's API

### Utility Tools
- **Math Calculator**: Perform mathematical calculations locally using NumExpr
- **Google Translate**: Translate text between languages using Google Translate
- **Open Weather**: Get weather information for any city
- **Ask Human**: Request human intervention or input during execution

### External Search Tools
- **DuckDuckGo Search**: Web search using DuckDuckGo
- **Wikipedia**: Search and retrieve information from Wikipedia

## How to Add Custom Tools

You can easily add new tools to Flock by following these steps:

### 1. Create Tool Directory

Create a new directory under `backend/app/core/tools/` with your tool name:

```bash
mkdir backend/app/core/tools/your_tool_name
```

### 2. Create Tool Files

Inside your tool directory, create these files:

#### 2.1. `__init__.py`
```python
from .your_tool import your_tool_instance

__all__ = ["your_tool_instance"]
```

#### 2.2. `your_tool.py`
```python
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import StructuredTool


class YourToolInput(BaseModel):
    """Input schema for your tool."""

    param1: str = Field(description="Description of parameter 1")
    param2: int = Field(description="Description of parameter 2")


def your_tool_function(param1: str, param2: int) -> str:
    """Your tool's main functionality."""
    # Implement your tool's logic here
    result = f"Processed {param1} with {param2}"
    return result


your_tool_instance = StructuredTool.from_function(
    func=your_tool_function,
    name="Your Tool Name",
    description="Description of what your tool does",
    args_schema=YourToolInput,
    return_direct=True,
)
```
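Once defined, the tool can be exercised directly for a quick local test (a hypothetical call; `StructuredTool` validates the input against `YourToolInput` before invoking your function):

```python
# Hypothetical invocation for local testing.
result = your_tool_instance.invoke({"param1": "hello", "param2": 3})
print(result)  # Processed hello with 3
```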

#### 2.3. `credentials.py` (Optional)
If your tool requires API keys or other credentials:

```python
from typing import Any, Dict

YOUR_TOOL_CREDENTIALS = {
    "API_KEY": {
        "type": "string",
        "description": "API key for your service",
        "value": "",
    },
    "API_SECRET": {
        "type": "string",
        "description": "API secret for your service",
        "value": "",
    },
}


def get_credentials() -> Dict[str, Any]:
    return YOUR_TOOL_CREDENTIALS
```

### 3. Access Credentials in Your Tool

If your tool needs to use credentials:

```python
from app.core.tools.utils import get_credential_value


def your_tool_function(param1: str, param2: int) -> str:
    api_key = get_credential_value("Your Tool Name", "API_KEY")
    api_secret = get_credential_value("Your Tool Name", "API_SECRET")

    if not api_key or not api_secret:
        return "Error: Required credentials are not set."

    # Use the credentials in your implementation
    ...
```
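The real `get_credential_value` lives in `app.core.tools.utils`; purely for intuition, here is one plausible shape it could take, assuming user-provided credentials end up in a per-tool mapping with the structure shown in `credentials.py`:

```python
# NOT Flock's actual implementation — a plausible sketch only, assuming
# user-set credentials are kept in a per-tool mapping.
from typing import Optional

TOOL_CREDENTIALS: dict = {}  # e.g. {"Your Tool Name": YOUR_TOOL_CREDENTIALS}


def get_credential_value(tool_name: str, key: str) -> Optional[str]:
    """Return a credential's stored value, or None if it was never set."""
    value = TOOL_CREDENTIALS.get(tool_name, {}).get(key, {}).get("value")
    return value or None
```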

### 4. Tool Registration

Your tool will be automatically registered when Flock starts up, thanks to the tool manager system. The tool manager:
- Scans the tools directory
- Loads all tools with proper `__all__` exports
- Makes them available in the system
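As a rough sketch (not the actual manager code), discovery based on the `__all__` convention from step 2.1 could look like this:

```python
# A minimal sketch of tool auto-discovery — not Flock's actual code.
import importlib
import pkgutil

from langchain.tools import StructuredTool


def discover_tools(package_name: str = "app.core.tools") -> dict:
    """Collect every StructuredTool re-exported via __all__ in each tool package."""
    tools = {}
    package = importlib.import_module(package_name)
    for module_info in pkgutil.iter_modules(package.__path__):
        if not module_info.ispkg:
            continue  # tools live in directories, not plain modules
        module = importlib.import_module(f"{package_name}.{module_info.name}")
        for export_name in getattr(module, "__all__", []):
            candidate = getattr(module, export_name)
            if isinstance(candidate, StructuredTool):
                tools[candidate.name] = candidate
    return tools
```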

### Best Practices

1. **Input Validation**: Use Pydantic models to validate input parameters
2. **Error Handling**: Always include proper error handling in your tool
3. **Documentation**: Provide clear descriptions for your tool and its parameters
4. **Credentials**: If your tool requires API keys, use the credentials system
5. **Return Values**: Return clear, structured responses that can be easily processed

### Example Tool Implementation

Here's a complete example of a simple weather tool:

```python
import requests
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import StructuredTool

from app.core.tools.utils import get_credential_value


class WeatherInput(BaseModel):
    """Input for the weather tool."""

    city: str = Field(description="Name of the city")


def get_weather(city: str) -> str:
    """Get weather information for a city."""
    api_key = get_credential_value("Weather Tool", "API_KEY")

    if not api_key:
        return "Error: Weather API Key is not set."

    try:
        response = requests.get(
            "https://api.weather.com/data",  # placeholder endpoint
            params={"city": city, "key": api_key},
            timeout=10,
        )
        # The declared return type is str, so serialize the JSON payload.
        return str(response.json())
    except Exception as e:
        return f"Error getting weather data: {str(e)}"


weather_tool = StructuredTool.from_function(
    func=get_weather,
    name="Weather Tool",
    description="Get weather information for any city",
    args_schema=WeatherInput,
    return_direct=True,
)
```
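
A quick smoke test (hypothetical, since the endpoint above is a placeholder):

```python
# Hypothetical call; the output depends on the real weather API you wire in.
print(weather_tool.invoke({"city": "Beijing"}))
```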
