Deploy a Streamlit app to train, evaluate and optimize a Prophet forecasting model visually
Test the app online with shared computing resources, and read the introductory article.
If you plan to use the app regularly, you should install the package and run it locally:
pip install -U streamlit_prophet
streamlit_prophet deploy dashboard
- Main supported version: Python 3.7
- Other supported versions: Python 3.8 & 3.9
Please make sure you have one of these versions installed to be able to run the app on your machine.
Windows users have to install WSL2 to run the package, because of an incompatibility between Windows and pystan, Prophet's main dependency. Other operating systems should work fine.
We strongly advise creating and activating a new virtual environment, to avoid any dependency issues.
For example with conda (assuming conda is already installed, e.g. via Miniconda):
conda create -n streamlit_prophet python=3.7; conda activate streamlit_prophet
Or with virtualenv:
pip install virtualenv; python3.7 -m virtualenv streamlit_prophet; source streamlit_prophet/bin/activate
Install the package from PyPI (it should take a few minutes):
pip install -U streamlit_prophet
Or from the main branch of this repository:
pip install git+https://github.com/artefactory-global/streamlit_prophet.git@main
Once installed, run the following command from the CLI to open the app in your default web browser:
streamlit_prophet deploy dashboard
Now you can train, evaluate and optimize forecasting models in a few clicks. All you have to do is upload a time series dataset: a CSV file containing a date column, a target column and, optionally, some feature columns.
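For illustration, a minimal dataset with the expected structure could be generated like this (the file name, column names and values are hypothetical examples, not requirements of the app):

```python
import numpy as np
import pandas as pd

# Build a small daily time series with a date column, a target column
# and one optional feature column, then save it as a CSV file ready
# to be uploaded in the app.
dates = pd.date_range("2021-01-01", periods=90, freq="D")
df = pd.DataFrame(
    {
        "date": dates,
        "sales": np.random.default_rng(0).poisson(100, size=90),   # target
        "temperature": 20 + 5 * np.sin(np.arange(90) / 7),         # optional feature
    }
)
df.to_csv("example_dataset.csv", index=False)
```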
Then, follow the guidelines in the sidebar to:
- Prepare data: Filter, aggregate, resample and/or clean your dataset.
- Choose model parameters: Default parameters are available, but you can tune them. Look at the tooltips to understand how each parameter impacts forecasts.
- Select evaluation method: Define the evaluation process, the metrics and the granularity to assess your model performance.
- Make a forecast: Make a forecast on future dates that are not included in your dataset, with the model previously trained.
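As a rough sketch of what the "Prepare data" step does behind the scenes, here is the same kind of filtering, cleaning and resampling done manually with pandas (column names and operations are illustrative assumptions, not the app's exact implementation):

```python
import numpy as np
import pandas as pd

# Fabricated daily series with one missing value to clean.
rng = np.random.default_rng(1)
df = pd.DataFrame(
    {
        "date": pd.date_range("2021-01-01", periods=60, freq="D"),
        "sales": rng.poisson(100, size=60).astype(float),
    }
)
df.loc[20, "sales"] = np.nan  # simulate a missing value

df = df[df["date"] >= "2021-01-10"]                          # filter a date range
df["sales"] = df["sales"].interpolate()                      # clean missing values
weekly = df.set_index("date").resample("W")["sales"].sum()   # resample to weekly
print(weekly.head())
```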
Once you are satisfied, click on "save experiment" to download all plots and data locally.
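For reference, the evaluation step relies on standard forecasting metrics such as RMSE and MAPE; a minimal sketch of their textbook formulas is shown below (the app's exact implementation may differ):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors quadratically."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent (undefined if y_true has zeros)."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

y_true = [100, 110, 120, 130]
y_pred = [98, 112, 119, 133]
print(rmse(y_true, y_pred))
print(mape(y_true, y_pred))
```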
All contributions, ideas and bug reports are welcome! We encourage you to open an issue for any change you would like to make on this project.
For more information, see the CONTRIBUTING instructions.
If you wish to containerize the app, see the DOCKER instructions.