Update model configurations, provider implementations, and documentation #2577
Conversation
- Updated model names and aliases for Qwen QVQ 72B and Qwen 2 72B (@TheFirstNoob)
- Revised HuggingSpace class configuration, added default_image_model
- Added llama-3.2-70b alias for Llama 3.2 70B model in AutonomousAI
- Removed BlackboxCreateAgent class
- Added gpt-4o alias for Copilot model
- Moved api_key to Mhystical class attribute
- Added models property with default_model value for Free2GPT
- Simplified Jmuz class implementation
- Improved image generation and model handling in DeepInfra
- Standardized default models and removed aliases in Gemini
- Replaced model aliases with direct model list in GlhfChat (@TheFirstNoob)
- Removed trailing slash from image generation URL in PollinationsAI (xtekky#2571)
- Updated llama and qwen model configurations
- Enhanced provider documentation and model details
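Most of these changes touch the same class-attribute pattern that g4f providers use for model selection (`default_model`, `models`, `model_aliases`). Below is a minimal sketch of that pattern, assuming the usual g4f conventions; the class name and model strings are illustrative, not the exact code from this PR.

```python
# Illustrative sketch of the g4f provider class-attribute pattern.
# Class and model names are placeholders, not code from this PR.
class ExampleProvider:
    # Model served when the caller does not specify one
    default_model = "llama-3.3-70b"

    # Models this provider exposes directly
    models = [default_model, "qwen-2-72b", "qwen-qvq-72b"]

    # Friendly aliases resolved to the provider's real model names
    model_aliases = {
        "llama-3.2-70b": "llama-3.2-70b-instruct",
        "gpt-4o": "copilot-gpt-4o",
    }

    @classmethod
    def get_model(cls, model: str) -> str:
        """Resolve an alias or fall back to the default model."""
        if not model:
            return cls.default_model
        if model in cls.models:
            return model
        if model in cls.model_aliases:
            return cls.model_aliases[model]
        raise ValueError(f"Model not supported: {model}")
```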
…rror 'ResponseStatusError: Response 429: 文字过长,请删减后重试。' ("Text too long; please shorten it and retry.")
…o DDG provider
- Add custom exception classes for rate limits, timeouts, and conversation limits
- Implement rate limiting with sleep between requests (0.75s minimum delay)
- Add model validation method to check supported models
- Add proper error handling for API responses with custom exceptions
- Improve session cookie handling for conversation persistence
- Clean up User-Agent string and remove redundant code
- Add proper error propagation through async generator

Breaking changes:
- New custom exceptions may require updates to error handling code
- Rate limiting affects request timing and throughput
- Model validation is now stricter

Related:
- Adds error handling similar to standard API clients
- Improves reliability and robustness of chat interactions
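For readers following along, here is a minimal sketch of the exception and rate-limiting pattern this commit describes. The exception names, the `RateLimiter` helper, and the validation function are assumptions for illustration, not the exact code added to the DDG provider.

```python
import asyncio
import time


# Hypothetical exception names; the PR's actual classes may differ.
class RateLimitError(Exception):
    """Raised when the API responds with HTTP 429."""


class RequestTimeoutError(Exception):
    """Raised when a request exceeds its time budget."""


class ConversationLimitError(Exception):
    """Raised when the provider's conversation length cap is hit."""


class RateLimiter:
    """Enforces a minimum delay between consecutive requests (0.75s here)."""

    def __init__(self, min_delay: float = 0.75) -> None:
        self.min_delay = min_delay
        self._last_request = 0.0

    async def wait(self) -> None:
        # Sleep just long enough to keep at least min_delay between calls.
        elapsed = time.monotonic() - self._last_request
        if elapsed < self.min_delay:
            await asyncio.sleep(self.min_delay - elapsed)
        self._last_request = time.monotonic()


def validate_model(model: str, supported: list[str]) -> str:
    """Reject unsupported models, reflecting the stricter validation."""
    if model not in supported:
        raise ValueError(f"Model not supported: {model}")
    return model
```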
…r/DeepInfraChat.py)
Hi @TheFirstNoob! 👋 Thank you for bringing this to my attention. I've investigated the issues you mentioned and would like to provide some clarification:
Here's the complete list of currently available models in my G4F-GUI fork:
All these models, including qwen and llama, are working properly in my implementation. I've personally tested each one to ensure functionality. I'm looking forward to having these fixes merged into the main project branch, which will resolve the issues with the PollinationsAI provider.
…Blackbox' provider
Hey! I checked the latest update. A few minor points for improving the docs and info.
Thanks for your work <3
@TheFirstNoob, Thank you for your detailed feedback! Let me address each point:

1. Regarding the DeepInfraChat provider:
Here's my test code and results:

```python
from g4f.client import Client
from g4f.Provider import DeepInfraChat

def test_deepinfra_chat():
    client = Client(provider=DeepInfraChat)
    models = DeepInfraChat.models
    # Send a short prompt to every model the provider advertises
    for model in models:
        print(f"Testing model: {model}")
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": "Say this is a test"}],
                web_search=False
            )
            print("Response:")
            print(response.choices[0].message.content)
            print("\n")
        except Exception as e:
            print(f"Error: {str(e)}\n")

test_deepinfra_chat()
```

Test Results:
2. About PollinationsAI:
3. Documentation improvements:
4. Conversation memory:
Thank you for your contributions and suggestions! They're helping make the project better and more user-friendly.
@kqlio67 Hi, HuggingChat has a new model with thinking support. Updated: P.S.
@TheFirstNoob maybe we can't do everything in this PR. @kqlio67 can you fix the unit tests and the code review comments?
@hlohaus Hi. I'm not asking for this to be done right now :) My point was that this could be implemented in principle, since this provider is the only one offering direct thinking output for free. The idea itself seems good to me and would give users a new way of experiencing AI. As far as I understand, other providers that have thinking models use them as regular chat models, which is clearly not what the authors of these models intended. That's why I suggested this idea for the plans. @kqlio67 Before the PR, if possible, add the model above to HuggingChat, or add it later :)
Hey @TheFirstNoob, I'm thinking of adding this feature. I'll add a "thinking response" type and prompt it in the web UI. |
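As a rough idea of what such a "thinking response" type could look like in Python: the chunk type names and the `render` helper below are hypothetical, not the API that was ultimately added.

```python
from __future__ import annotations

from dataclasses import dataclass


# Hypothetical chunk types; the actual implementation may differ.
@dataclass
class ThinkingChunk:
    """Intermediate reasoning text, streamed before the final answer."""
    text: str


@dataclass
class AnswerChunk:
    """Final answer text shown to the user."""
    text: str


def render(chunk: ThinkingChunk | AnswerChunk) -> str:
    # The web UI could style thinking output differently,
    # e.g. collapsed or greyed out.
    if isinstance(chunk, ThinkingChunk):
        return f"[thinking] {chunk.text}"
    return chunk.text
```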
…nt.py g4f/models.py g4f/Provider/.
@hlohaus I've fixed the unittests, but I don't see any review comments in the PR. Could you please clarify which review comments need to be addressed? Thanks!
Thank you
Various updates to model configurations, provider implementations, and documentation.
These changes include model name updates, removal of deprecated classes and aliases, addition of new models and providers, improvements in image handling, and documentation updates.