Merge pull request #2277 from kqlio67/main
Async Client Refactor, Documentation Updates, and Provider Fixes
xtekky authored Oct 17, 2024
2 parents 2dcdce5 + d9892ce · commit 66a3059
Showing 14 changed files with 679 additions and 143 deletions.
36 changes: 19 additions & 17 deletions README.md
@@ -9,7 +9,7 @@ Written by [@xtekky](https://github.com/xtekky)
<div id="top"></div>

> [!IMPORTANT]
- > By using this repository or any code related to it, you agree to the [legal notice](https://github.com/xtekky/gpt4free/blob/main/LEGAL_NOTICE.md). The author is **not responsible for the usage of this repository nor endorses it**, nor is the author responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
+ > By using this repository or any code related to it, you agree to the [legal notice](LEGAL_NOTICE.md). The author is **not responsible for the usage of this repository nor endorses it**, nor is the author responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
> [!WARNING]
> _"gpt4free"_ serves as a **PoC** (proof of concept), demonstrating the development of an API package with multi-provider requests, with features like timeouts, load balance and flow control.
@@ -126,13 +126,13 @@ By following these steps, you should be able to successfully install and run the

Run the **Webview UI** on other Platforms:

- - [/docs/guides/webview](https://github.com/xtekky/gpt4free/blob/main/docs/webview.md)
+ - [/docs/guides/webview](docs/webview.md)

##### Use your smartphone:

Run the Web UI on Your Smartphone:

- - [/docs/guides/phone](https://github.com/xtekky/gpt4free/blob/main/docs/guides/phone.md)
+ - [/docs/guides/phone](docs/guides/phone.md)

#### Use python

@@ -148,17 +148,17 @@ pip install -U g4f[all]
```

How do I install only parts or disable parts?
- Use partial requirements: [/docs/requirements](https://github.com/xtekky/gpt4free/blob/main/docs/requirements.md)
+ Use partial requirements: [/docs/requirements](docs/requirements.md)

##### Install from source:

How do I load the project using git and install the project requirements?
- Read this tutorial and follow it step by step: [/docs/git](https://github.com/xtekky/gpt4free/blob/main/docs/git.md)
+ Read this tutorial and follow it step by step: [/docs/git](docs/git.md)

##### Install using Docker:

How do I build and run the Docker Compose image from source?
- Use docker-compose: [/docs/docker](https://github.com/xtekky/gpt4free/blob/main/docs/docker.md)
+ Use docker-compose: [/docs/docker](docs/docker.md)

## 💡 Usage

@@ -171,7 +171,7 @@ client = Client()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
-     ...
+     # Add any other necessary parameters
)
print(response.choices[0].message.content)
```
@@ -187,20 +187,22 @@ from g4f.client import Client

client = Client()
response = client.images.generate(
-     model="gemini",
-     prompt="a white siamese cat",
-     ...
+     model="dall-e-3",
+     prompt="a white siamese cat",
+     # Add any other necessary parameters
)

image_url = response.data[0].url
print(f"Generated image URL: {image_url}")
```

- [![Image with cat](/docs/cat.jpeg)](https://github.com/xtekky/gpt4free/blob/main/docs/client.md)
+ [![Image with cat](/docs/cat.jpeg)](docs/client.md)

**Full Documentation for Python API**

- - New AsyncClient API from G4F: [/docs/async_client](https://github.com/xtekky/gpt4free/blob/main/docs/async_client.md)
- - Client API like the OpenAI Python library: [/docs/client](https://github.com/xtekky/gpt4free/blob/main/docs/client.md)
- - Legacy API with python modules: [/docs/legacy](https://github.com/xtekky/gpt4free/blob/main/docs/legacy.md)
+ - AsyncClient API from G4F: [/docs/async_client](docs/async_client.md)
+ - Client API like the OpenAI Python library: [/docs/client](docs/client.md)
+ - Legacy API with python modules: [/docs/legacy](docs/legacy.md)

#### Web UI

@@ -221,7 +223,7 @@ python -m g4f.cli gui -port 8080 -debug

You can use the Interference API to serve other OpenAI integrations with G4F.

- See docs: [/docs/interference](https://github.com/xtekky/gpt4free/blob/main/docs/interference.md)
+ See docs: [/docs/interference](docs/interference.md)

Access with: http://localhost:1337/v1

@@ -781,11 +783,11 @@ We welcome contributions from the community. Whether you're adding new providers

###### Guide: How do I create a new Provider?

- - Read: [/docs/guides/create_provider](https://github.com/xtekky/gpt4free/blob/main/docs/guides/create_provider.md)
+ - Read: [/docs/guides/create_provider](docs/guides/create_provider.md)

###### Guide: How can AI help me with writing code?

- - Read: [/docs/guides/help_me](https://github.com/xtekky/gpt4free/blob/main/docs/guides/help_me.md)
+ - Read: [/docs/guides/help_me](docs/guides/help_me.md)

## 🙌 Contributors

2 changes: 1 addition & 1 deletion docs/async_client.md
@@ -187,7 +187,7 @@ async def main():
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": "Say this is a test"}],
)
task2 = client.images.generate(
task2 = client.images.async_generate(
model="dall-e-3",
prompt="a white siamese cat",
)
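This hunk swaps `client.images.generate` for `client.images.async_generate` so the image request can be awaited. Below is a minimal sketch of the surrounding pattern, assuming the chat call (outside the hunk shown) uses the analogous `async_create` helper:

```python
import asyncio
from g4f.client import Client

async def main():
    client = Client()

    # Start both requests without awaiting, then gather them concurrently.
    task1 = client.chat.completions.async_create(  # assumed name; see docs/async_client.md
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say this is a test"}],
    )
    task2 = client.images.async_generate(
        model="dall-e-3",
        prompt="a white siamese cat",
    )

    chat_response, image_response = await asyncio.gather(task1, task2)
    print(chat_response.choices[0].message.content)
    print(image_response.data[0].url)

asyncio.run(main())
```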
3 changes: 1 addition & 2 deletions docs/client.md
@@ -61,8 +61,8 @@ You can use the `ChatCompletions` endpoint to generate text completions as follows:

```python
from g4f.client import Client
- client = Client()

+ client = Client()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
@@ -77,7 +77,6 @@ Also streaming is supported:
from g4f.client import Client

client = Client()
-
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
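The streaming example is cut off at the hunk boundary. A minimal sketch of the full pattern, assuming the chunks expose the OpenAI-style `choices[0].delta.content` shape used by the g4f client docs:

```python
from g4f.client import Client

client = Client()
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)

# Each chunk carries an incremental delta; print tokens as they arrive.
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```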
4 changes: 2 additions & 2 deletions docs/interference.md
@@ -54,7 +54,7 @@ Send the POST request to /v1/chat/completions with body containing the `model` method
import requests
url = "http://localhost:1337/v1/chat/completions"
body = {
-     "model": "gpt-3.5-turbo-16k",
+     "model": "gpt-3.5-turbo",
    "stream": False,
    "messages": [
        {"role": "assistant", "content": "What can you do?"}
@@ -66,4 +66,4 @@ for choice in json_response:
    print(choice.get('message', {}).get('content', ''))
```

- [Return to Home](/)
+ [Return to Home](/)
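Since the Interference API mimics the OpenAI REST interface, the official `openai` Python package (v1+) can typically be pointed at it directly. A minimal sketch under that assumption; the API key is a placeholder and is assumed to be ignored by the local server:

```python
from openai import OpenAI

# Point the official OpenAI client at the local interference server.
client = OpenAI(
    api_key="not-needed",  # placeholder; assumed to be ignored locally
    base_url="http://localhost:1337/v1",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What can you do?"}],
)
print(response.choices[0].message.content)
```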
70 changes: 70 additions & 0 deletions g4f/Provider/Ai4Chat.py
@@ -0,0 +1,70 @@
from __future__ import annotations

from aiohttp import ClientSession
import re

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt


class Ai4Chat(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://www.ai4chat.co"
    api_endpoint = "https://www.ai4chat.co/generate-response"
    working = True
    supports_gpt_4 = False
    supports_stream = False
    supports_system_message = True
    supports_message_history = True

    default_model = 'gpt-4'

    @classmethod
    def get_model(cls, model: str) -> str:
        # The provider exposes a single model, so any requested name maps to it
        return cls.default_model

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)

        headers = {
            'accept': '*/*',
            'accept-language': 'en-US,en;q=0.9',
            'cache-control': 'no-cache',
            'content-type': 'application/json',
            'cookie': 'messageCount=2',
            'origin': 'https://www.ai4chat.co',
            'pragma': 'no-cache',
            'priority': 'u=1, i',
            'referer': 'https://www.ai4chat.co/gpt/talkdirtytome',
            'sec-ch-ua': '"Chromium";v="129", "Not=A?Brand";v="8"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Linux"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36'
        }

        async with ClientSession(headers=headers) as session:
            # The whole conversation is flattened into a single user message
            payload = {
                "messages": [
                    {
                        "role": "user",
                        "content": format_prompt(messages)
                    }
                ]
            }

            async with session.post(cls.api_endpoint, json=payload, proxy=proxy) as response:
                response.raise_for_status()
                response_data = await response.json()
                message = response_data.get('message', '')
                # Strip HTML tags and surrounding whitespace before yielding
                clean_message = re.sub('<[^<]+?>', '', message).strip()
                yield clean_message
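Note that `get_model` ignores the requested name and always returns `gpt-4`, and `format_prompt` flattens the entire conversation into one user message. A minimal usage sketch, assuming `Client` accepts a `provider` argument as elsewhere in g4f:

```python
from g4f.client import Client
from g4f.Provider import Ai4Chat

# Route the request through the new provider explicitly.
client = Client(provider=Ai4Chat)
response = client.chat.completions.create(
    model="gpt-4",  # any value resolves to the provider's default
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```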
78 changes: 78 additions & 0 deletions g4f/Provider/AiMathGPT.py
@@ -0,0 +1,78 @@
from __future__ import annotations

from aiohttp import ClientSession

from ..typing import AsyncResult, Messages
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt

class AiMathGPT(AsyncGeneratorProvider, ProviderModelMixin):
    url = "https://aimathgpt.forit.ai"
    api_endpoint = "https://aimathgpt.forit.ai/api/ai"
    working = True
    supports_stream = False
    supports_system_message = True
    supports_message_history = True

    default_model = 'llama3'
    models = ['llama3']

    model_aliases = {"llama-3.1-70b": "llama3",}

    @classmethod
    def get_model(cls, model: str) -> str:
        # Resolve an exact model name first, then an alias, then the default
        if model in cls.models:
            return model
        elif model in cls.model_aliases:
            return cls.model_aliases[model]
        else:
            return cls.default_model

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: Messages,
        proxy: str = None,
        **kwargs
    ) -> AsyncResult:
        model = cls.get_model(model)

        headers = {
            'accept': '*/*',
            'accept-language': 'en-US,en;q=0.9',
            'cache-control': 'no-cache',
            'content-type': 'application/json',
            'origin': cls.url,
            'pragma': 'no-cache',
            'priority': 'u=1, i',
            'referer': f'{cls.url}/',
            'sec-ch-ua': '"Chromium";v="129", "Not=A?Brand";v="8"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Linux"',
            'sec-fetch-dest': 'empty',
            'sec-fetch-mode': 'cors',
            'sec-fetch-site': 'same-origin',
            'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/129.0.0.0 Safari/537.36'
        }

        async with ClientSession(headers=headers) as session:
            # An empty system message is sent alongside the flattened conversation
            data = {
                "messages": [
                    {
                        "role": "system",
                        "content": ""
                    },
                    {
                        "role": "user",
                        "content": format_prompt(messages)
                    }
                ],
                "model": model
            }

            async with session.post(cls.api_endpoint, json=data, proxy=proxy) as response:
                response.raise_for_status()
                response_data = await response.json()
                filtered_response = response_data['result']['response']
                yield filtered_response
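The `get_model` resolution order above (exact name, then alias, then default) can be sanity-checked without any network traffic:

```python
from g4f.Provider import AiMathGPT

assert AiMathGPT.get_model("llama3") == "llama3"           # exact model name
assert AiMathGPT.get_model("llama-3.1-70b") == "llama3"    # alias mapping
assert AiMathGPT.get_model("anything-else") == "llama3"    # default fallback
```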
