A collection of awesome AI projects that you can use in your products as an API, framework, platform, or self-hosted service.
- Image
- Audio
- Video
- Run models locally
- Language Models (Text/Chat)
- Code Completion
- Embeddings
- Platforms
- GPU Hosts
- Model Chaining

## Image

- OpenAI Image - Given a prompt and/or an input image, the model will generate a new image (see the sketch after this list).
- Leonardo.Ai - Create production-quality visual assets for your projects with unprecedented quality, speed, and style-consistency.
- Stability AI - Image-generation models and APIs from the makers of Stable Diffusion.
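
Most hosted image APIs follow the same request shape: send a text prompt, get back a URL or base64 payload for the generated image. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt, and size are illustrative assumptions, so check the provider's documentation for current values.

```python
# Minimal image-generation sketch with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",   # assumed model name; substitute whatever you use
    prompt="A watercolor illustration of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```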

## Language Models (Text/Chat)

- Google PaLM 2 for Text - Foundation models optimized for a variety of natural language tasks such as sentiment analysis, entity extraction, and content creation. They can produce document summaries, answers to questions, and labels that classify content.
- Google PaLM 2 for Chat - A foundation large language model (LLM) that excels at language understanding, language generation, and conversation. The chat model is fine-tuned for natural multi-turn conversations and is well suited to text tasks about code that require back-and-forth interaction.
- Perplexity - Documentation and examples to help you get started with blazingly fast LLM inference.
- OpenAI Chat - Given a list of messages comprising a conversation, the model will return a response (see the sketch after this list).
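
The chat APIs listed above share the same basic contract: you send a list of role-tagged messages and get the assistant's next message back. Here is a minimal sketch with the OpenAI Python SDK, assuming an API key in the environment and an illustrative model name.

```python
# Minimal chat-completion sketch with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; the model name is an assumption.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what an embedding is in one sentence."},
    ],
)

print(response.choices[0].message.content)
```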

## Audio

- ElevenLabs - Elevate your projects with the fastest and most powerful text-to-speech and voice API. Quickly generate AI voices in multiple languages for your chatbots, agents, LLMs, websites, apps, and more.
- DupDub - Open text-to-speech API.
- OpenAI Text-to-Speech - Turn text into spoken audio (see the sketch after this list).
- OpenAI Speech-to-Text - Transcribe audio into text.
- Bark with voice clone - 🔊 Text-prompted generative audio model, with the ability to clone voices.
- Piper - A fast, local neural text-to-speech system.
- Whisper.cpp - Port of OpenAI's Whisper model in C/C++
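
For the hosted text-to-speech and speech-to-text entries, the round trip usually looks like the sketch below, shown with the OpenAI Python SDK. The model and voice names are assumptions; the other providers expose similar endpoints with their own parameters.

```python
# Minimal TTS + STT round trip with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; model and voice names are assumptions.
from openai import OpenAI

client = OpenAI()

# Text-to-speech: synthesize a short clip and save the raw audio bytes.
speech = client.audio.speech.create(
    model="tts-1",    # assumed TTS model name
    voice="alloy",    # assumed voice name
    input="Hello from a text-to-speech sketch.",
)
with open("hello.mp3", "wb") as f:
    f.write(speech.content)

# Speech-to-text: transcribe the file we just wrote.
with open("hello.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",   # assumed STT model name
        file=audio_file,
    )
print(transcript.text)
```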

## Run models locally

- GPT4All - A free-to-use, locally running, privacy-aware chatbot. No GPU or internet required.
- llama.cpp - Inference of Meta's LLaMA model (and others) in pure C/C++
- LM Studio - Run LLaMA, Falcon, MPT, Gemma, Replit, GPT-NeoX, and other GGUF models from Hugging Face.
- Ollama - Run Llama 2, Code Llama, and other models. Customize and create your own (see the sketch after this list).
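
Most of the local runners also expose an HTTP API, which makes them easy to wire into existing code. For example, Ollama serves a REST endpoint on port 11434; the sketch below assumes the server is running and `ollama pull llama2` has already been done.

```python
# Minimal sketch of querying a local Ollama server over its REST API.
# Assumes Ollama is running on the default port and `llama2` has been pulled.
import json
import urllib.request

payload = {
    "model": "llama2",                        # any locally pulled model
    "prompt": "Explain the GGUF format in one sentence.",
    "stream": False,                          # return one JSON object, not a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read().decode("utf-8"))

print(body["response"])  # the generated text
```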

## Platforms

Platforms that provide more than just one API.
- Hugging Face - The platform where the machine learning community collaborates on models, datasets, and applications.
- fal.ai - The fastest way to serve open-source ML models to millions.
- Infermatic
- Replicate - Run AI with an API. Run and fine-tune open-source models. Deploy custom models at scale. All with one line of code (see the sketch after this list).
- stability.ai - Open models in every modality, for everyone, everywhere.
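
As an example of how little code these platforms tend to require, here is a sketch using Replicate's Python client. The model reference and input fields are hypothetical placeholders; copy the exact identifier and input schema from the model's page on Replicate.

```python
# Minimal sketch of running a hosted model via Replicate's Python client.
# Requires `pip install replicate` and REPLICATE_API_TOKEN in the environment.
import replicate

output = replicate.run(
    "owner/some-image-model:version-id",   # hypothetical model reference
    input={"prompt": "a tiny robot watering a bonsai tree"},  # schema varies per model
)

print(output)  # typically a URL or list of URLs to the generated output
```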

## GPU Hosts

- Banana - GPUs for Scale. Inference hosting for AI teams who ship fast and scale faster.
- Cloudflare Workers AI - Run machine learning models, powered by serverless GPUs, on Cloudflare's global network.
- Crusoe - GPU Compute.
- LambdaLabs - The GPU Cloud for AI. On-demand & reserved cloud GPUs for AI training & inference.
- RunPod - The Cloud Built for AI. Globally distributed GPU cloud built for production.

## Model Chaining

- LlamaIndex - Turn your enterprise data into production-ready LLM applications.
- LangChain - A framework for building applications that can reason, by composing LLMs with prompts, tools, and data (see the sketch after this list).
- CrewAI - Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
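
To make the chaining idea concrete, here is a minimal LangChain sketch that pipes a prompt template into a chat model and parses the result to a plain string. It assumes the langchain-openai package and an OPENAI_API_KEY; the model name is an assumption, and LlamaIndex or CrewAI would structure the equivalent flow differently.

```python
# Minimal LangChain sketch: prompt template -> chat model -> string output.
# Assumes `pip install langchain-openai` and OPENAI_API_KEY; model name is assumed.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)
llm = ChatOpenAI(model="gpt-4o-mini")       # assumed model name
chain = prompt | llm | StrOutputParser()    # compose the steps into one runnable

result = chain.invoke(
    {"text": "LangChain composes prompts, models, and output parsers into reusable chains."}
)
print(result)
```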