A lean open-source Python framework for building AI-powered automation workflows that run on your machine. Built for marketers who want to automate research, monitoring, and content tasks without cloud dependencies or complex setups.
Think of it as Zapier or n8n, but running on your own machine and designed specifically for marketing workflows.
Pynions helps marketers automate:
- Content research and analysis
- SERP monitoring and tracking
- Content extraction and processing
- AI-powered content generation
- Marketing workflow automation
Why Pynions:
- Start small, ship fast
- Easy API connections to your existing tools
- AI-first but not AI-only
- Zero bloat, minimal dependencies
- Built for real marketing workflows
- Quick to prototype and iterate
- Local-first, no cloud dependencies
Built on a lean stack:
- Python for all code
- Pytest for testing
- LiteLLM for unified LLM access
- Jina AI for content extraction
- Serper for SERP analysis
- Playwright for web automation
- dotenv for configuration
- httpx for HTTP requests
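Since LiteLLM provides the model layer, here is a rough standalone sketch of what it handles (this calls LiteLLM directly rather than through Pynions; it assumes `OPENAI_API_KEY` is set in your environment and uses `gpt-4o-mini`, the framework's default model):

```python
# Direct LiteLLM call, independent of Pynions, showing the unified LLM API.
# Requires OPENAI_API_KEY in your environment.
from litellm import completion

response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me 3 blog title ideas about SERP tracking."}],
)
print(response.choices[0].message.content)
```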
```bash
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install Pynions
pip install .

# The installer will automatically:
# 1. Create .env from .env.example
# 2. Create pynions.json from pynions.example.json

# Add your API keys to .env
nano .env
```
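To sanity-check the install, a minimal smoke test (using the `load_config` helper shown in the Quick Start below) might look like:

```python
# Smoke test: import Pynions and load the config created by the installer.
from pynions.core.config import load_config

config = load_config()
print("Config loaded:", bool(config))
```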
```python
import asyncio

from pynions.core import Workflow, WorkflowStep
from pynions.core.config import load_config
from pynions.plugins import SerperWebSearch, JinaAIReader

# Adjust this import path to match where save_result lives in your version.
from pynions.core import save_result


async def main():
    # Load configuration (automatically reads from root .env and pynions.json)
    config = load_config()

    # Initialize plugins
    serper = SerperWebSearch()  # automatically uses API key from .env
    jina = JinaAIReader()       # automatically uses API key from .env

    # Create workflow
    workflow = Workflow(
        name="content_research",
        description="Research and analyze content"
    )

    # Add steps
    workflow.add_step(WorkflowStep(
        plugin=serper,
        name="search",
        description="Search for relevant content"
    ))
    workflow.add_step(WorkflowStep(
        plugin=jina,
        name="extract",
        description="Extract clean content"
    ))

    # Execute workflow
    results = await workflow.execute({
        "query": "marketing automation trends 2024"
    })

    # Save results
    save_result(
        content=results,
        project_name="trends_research",
        status="research"
    )


if __name__ == "__main__":
    asyncio.run(main())
```
Available plugins:
- SerperWebSearch: Google SERP data extraction via the Serper.dev API
- JinaAIReader: Clean content extraction from web pages
- LiteLLMPlugin: Unified access to various LLM APIs
- FraseAPI: NLP-powered content analysis and metrics extraction
- PlaywrightPlugin: Web scraping and automation
- StatsPlugin: Track and display request statistics
- More plugins coming soon!
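Plugins can also be run on their own. The sketch below is an assumption based on the worker example at the bottom of this page (an async `execute()` taking a dict of inputs); check the Plugin Development guide for the actual interface:

```python
import asyncio

from pynions.plugins import SerperWebSearch


async def quick_serp_check():
    serper = SerperWebSearch()  # reads SERPER_API_KEY from .env
    # Assumed interface: async execute() taking a dict of inputs,
    # mirroring the worker example further down this page.
    return await serper.execute({"query": "marketing automation trends 2024"})


if __name__ == "__main__":
    print(asyncio.run(quick_serp_check()))
```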
Documentation:
- Project Structure
- Installation Guide
- Configuration Guide
- Plugin Development
- Workflow Creation
- Debugging Guide
Requirements:
- Python 3.8 or higher
- pip and venv
- Required API keys:
  - OpenAI API key
  - Serper.dev API key
  - Perplexity API key (optional)
Required:
- `OPENAI_API_KEY`: Your OpenAI API key

Optional:
- `SERPER_API_KEY`: For search functionality
- `ANTHROPIC_API_KEY`: For Claude models
- `JINA_API_KEY`: For embeddings
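For reference, a minimal `.env` might look like this (placeholder values; keys go in without quotes, per the troubleshooting notes below):

```
OPENAI_API_KEY=sk-your-key-here
SERPER_API_KEY=your-serper-key
ANTHROPIC_API_KEY=your-anthropic-key
JINA_API_KEY=your-jina-key
```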
See pynions.example.json for all available options.
- Use the "Don't Repeat Yourself" (DRY) principle
- Smart and safe defaults
- OpenAI's "gpt-4o-mini" is the default LLM
- Serper is the default search tool
- Perplexity is the default research tool
- Not AI-only: always a human in the loop
- Minimal dependencies
- No cloud dependencies
- All tools are local
- No need to sign up for anything beyond the API keys above (OpenAI, Serper.dev, and optionally Perplexity)
- No proprietary formats
- No tracking
- No telemetry
- No bullshit
- Module not found errors: run `pip install -r requirements.txt`
- API key errors:
  - Check that the `.env` file exists
  - Verify your API keys are correct
  - Remove quotes from API keys in `.env`
- Permission errors: run `chmod 755 data`
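If key errors persist, a quick check with python-dotenv (already part of the stack) shows whether your keys are actually loading:

```python
# Verify that .env is found and keys are loaded; prints True/False only,
# never the key values themselves.
import os

from dotenv import load_dotenv

load_dotenv()
for key in ("OPENAI_API_KEY", "SERPER_API_KEY"):
    print(f"{key} set: {bool(os.getenv(key))}")
```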
See Project Structure for:
- Code organization
- Testing requirements
- Documentation standards
MIT License - see LICENSE for details
If you encounter issues:
- Check the Debugging Guide
- Review relevant documentation sections
- Test components in isolation
- Use provided debugging tools
- Check common issues section
Standing on the shoulders of open-source giants, built with ☕️ and dedication by a marketer who codes.
Workers are standalone task executors that combine multiple plugins for specific data extraction needs. Perfect for automated research and monitoring tasks.
- PricingResearchWorker: Extracts structured pricing data from any SaaS website
```python
import asyncio
import json

from pynions.workers import PricingResearchWorker


async def analyze_pricing():
    worker = PricingResearchWorker()
    result = await worker.execute({"domain": "example.com"})
    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    asyncio.run(analyze_pricing())
```
Workers provide:
- Task-specific implementations
- Automated data extraction
- Structured output
- Plugin integration
- Efficient processing
See Workers Documentation for more details.