Run any open-source LLM, such as Llama or Mistral, as an OpenAI-compatible API endpoint in the cloud.
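Several of the projects listed here expose self-hosted models behind an "OpenAI-compatible" endpoint, meaning they accept the same request schema as OpenAI's `/v1/chat/completions` route. A minimal sketch of that request shape, using only the standard library (the base URL and model name are placeholders, not tied to any specific project above):

```python
import json

# Hypothetical self-hosted endpoint; any server that implements the
# OpenAI chat-completions schema can be used as a drop-in replacement.
BASE_URL = "http://localhost:8000/v1"

# Body for POST {BASE_URL}/chat/completions
payload = {
    "model": "llama-3-8b-instruct",  # whichever model the server serves
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

body = json.dumps(payload)
```

Because the schema is shared, existing OpenAI client libraries can usually be pointed at such a server simply by overriding their base URL.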
A new DSL and server for AI agents and multi-step tasks
RAG (Retrieval-Augmented Generation) framework for building modular, open-source applications for production, by TrueFoundry
AutoRAG: an open-source framework for Retrieval-Augmented Generation (RAG) evaluation and optimization with AutoML-style automation
AIConfig is a config-based framework to build generative AI applications.
The collaborative spreadsheet for AI. Chain cells into powerful pipelines, experiment with prompts and models, and evaluate LLM responses in real-time. Work together seamlessly to build and iterate on AI applications.
Python SDK for running evaluations on LLM-generated responses
An end-to-end LLM reference implementation providing a Q&A interface for Airflow and Astronomer
Cluster/scheduler health monitoring for GPU jobs on Kubernetes
Friendli: the fastest serving engine for generative AI
Miscellaneous code and writings on MLOps
The prompt engineering, prompt management, and prompt evaluation tool for TypeScript, JavaScript, and Node.js.
YAML-based infrastructure manager for LLM providers and endpoints.
🚀 Python framework for orchestrating AI agents across 100+ LLM APIs (OpenAI, Claude, Gemini, Bedrock, etc.) with a unified interface. Build anything from simple chatbots to complex agent swarms!