Discord | Website | ⭐ the repo!
Prediction Prophet is an agent that specializes in making informed predictions based on web research. To try it yourself, head to predictionprophet.ai, or build and run it from source by following these setup instructions.
Join our Discord community for support and discussions.
If you have questions or encounter issues, please don't hesitate to create a new issue to get support.
To elaborate further, given a question like "Will Twitter implement a new misinformation policy before the 2024 elections?", Prophet will:

1. Generate `n` web search queries and re-rank them using an LLM call, selecting only the most relevant ones
2. Search the web for each query, using Tavily
3. Scrape and sanitize the content of each result's website
4. Use Langchain's `RecursiveCharacterTextSplitter` to split the content of all pages into chunks and create embeddings. All chunks are stored with the content as metadata.
5. Iterate over the queries selected in step 1 and vector-search for the most relevant chunks created in step 4
6. Aggregate all relevant chunks and prepare a report
7. Make a prediction
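The retrieval steps above (4 and 5) can be sketched in plain Python. This is a toy illustration, not Prophet's actual code: the splitter is a fixed-size stand-in for LangChain's `RecursiveCharacterTextSplitter`, the "embeddings" are hashed bag-of-words vectors rather than a real embedding model, and the function names are hypothetical.

```python
import math

def embed(text: str, dims: int = 256) -> list[float]:
    # Toy stand-in for a real embedding model: hashed bag-of-words.
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def split_into_chunks(text: str, size: int = 100) -> list[str]:
    # Fixed-size stand-in for RecursiveCharacterTextSplitter (step 4).
    return [text[i:i + size] for i in range(0, len(text), size)]

def research(queries: list[str], pages: list[str], top_k: int = 3) -> list[str]:
    # Steps 4-5: chunk and embed every scraped page, then vector-search
    # the most relevant chunks for each selected query.
    chunk_vecs = [(c, embed(c)) for page in pages for c in split_into_chunks(page)]
    relevant: list[str] = []
    for query in queries:
        qv = embed(query)
        ranked = sorted(chunk_vecs, key=lambda cv: cosine(qv, cv[1]), reverse=True)
        for chunk, _ in ranked[:top_k]:
            if chunk not in relevant:
                relevant.append(chunk)
    return relevant  # step 6 would aggregate these chunks into a report
```

The real pipeline stores each chunk with its source content as metadata; here the chunks themselves are returned directly for simplicity.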
1. Clone the repository

```bash
git clone https://github.com/agentcoinorg/predictionprophet
```

2. Copy the `.env.template` file and rename it to `.env`

```bash
cp .env.template .env
```

3. Find the line that says `OPENAI_API_KEY=`, and add your unique OpenAI API key

```bash
OPENAI_API_KEY=sk-...
```

4. Find the line that says `TAVILY_API_KEY=`, and add your unique Tavily API key

```bash
TAVILY_API_KEY=tvly-...
```

5. Install all dependencies

```bash
poetry install
```

6. Enter the Python environment

```bash
poetry shell
```
Now you're ready to go!

Make a prediction:

```bash
poetry run predict "Will Twitter implement a new misinformation policy before the 2024 elections?"
```

Run the research step on its own:

```bash
poetry run research "Will Twitter implement a new misinformation policy before the 2024 elections?"
```

Launch the Streamlit app:

```bash
poetry run streamlit run ./prediction_prophet/app.py
```
- Use LLM re-ranking, as Cursor does, to optimize context-space and reduce noise
- Use self-consistency: generate several reports and compare them to choose the best, or even merge information across them
- Plan research using more complex techniques like tree of thoughts
- Implement a research loop, where research is performed and then evaluated. If the evaluation scores fall under a certain threshold, iterate again to gather missing information, consult different sources, etc.
- Perform web searches under different topic or category focuses, as Tavily does. For example, some questions benefit more from "social media focused" research: gathering information from Twitter threads and blog articles. Others benefit more from prioritizing scientific papers, institutional statements, and so on.
- Identify strong claims and perform sub-searches to verify them. This is the basis of AI-powered fact-checkers like https://fullfact.org/
- Evaluate sources' credibility
- Further iterate over chunking and vector-search strategies
- Use HyDE
- Use self-consistency to generate several scores and choose the most repeated one
- Enhance the evaluation and reduce its biases through more advanced techniques, like the ones described in https://arxiv.org/pdf/2307.03025.pdf and https://arxiv.org/pdf/2305.17926.pdf
- Further evaluate biases towards writing style, length, and others described in https://arxiv.org/pdf/2308.02575.pdf, and mitigate them
- Evaluate using different evaluation criteria
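The self-consistency idea above (sample several scores, keep the most repeated one) can be sketched as a simple majority vote. This is only an illustration: `sample_score` stands in for a repeated, stochastic LLM scoring call and is purely hypothetical.

```python
from collections import Counter
from typing import Callable

def self_consistent_score(sample_score: Callable[[], float], n: int = 7) -> float:
    # Sample the (stochastic) scorer n times and keep the most frequent
    # answer; on a tie, Counter returns the first value encountered.
    samples = [sample_score() for _ in range(n)]
    return Counter(samples).most_common(1)[0][0]

# Example with five pre-recorded scores; 0.8 is the modal answer.
scores = iter([0.8, 0.7, 0.8, 0.9, 0.8])
print(self_consistent_score(lambda: next(scores), n=5))  # → 0.8
```

A real implementation would sample with a nonzero temperature so the LLM's answers actually vary, and might vote over a discretized score scale rather than raw floats.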