This project aims to provide a Django workload based on a real-world large-scale production workload that serves mobile clients.
The project can be set up on a single machine, or on a cluster of machines to spread the load and to make it easier to gauge the impact of the Django workload on both Python and the hardware it runs on.
Documentation for setting up each component of the cluster is provided in its subdirectory. You'll need to follow the README.md file in each of the following locations:
- 3 services:
  - Cassandra - services/cassandra/README.md
  - Memcached - services/memcached/README.md
  - Monitoring - services/monitoring/README.md
- Django and uWSGI - django-workload/README.md
- A load generator - client/README.md
Once set up, access http://[uwsgi_host:uwsgi_port]/ to see an overview of the offered endpoints, or use the load generator to produce a high request load on the server.
The workload can also be deployed using Docker containers. The instructions can be found in docker-scripts/README.md.
Please note that running the workload in Docker containers may deliver lower performance (transactions/second) than the bare-metal configuration, with more run-to-run variation. For the most accurate performance comparison, run the workload on bare metal.
The default benchmarking parameters used for Siege, Memcached and uWSGI are suitable for driving high CPU utilization (>80%) in server environments:

- uWSGI concurrent workers: 88
- Memcached threads: 16
- Siege concurrency: 185
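As a rough illustration (the exact files and option spellings used by this repo's setup scripts may differ), these defaults correspond to the standard uWSGI, Memcached, and Siege settings shown below:

```
# uwsgi.ini (uWSGI) -- 88 concurrent worker processes
[uwsgi]
processes = 88

# Memcached command line -- 16 worker threads
#   memcached -t 16

# ~/.siege/siege.conf (Siege) -- 185 concurrent simulated users
#   concurrent = 185
```

Tune these values to your hardware: fewer uWSGI workers on machines with fewer cores, and lower Siege concurrency if the client machine itself becomes the bottleneck.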
See the CONTRIBUTING file for how to help out.
Django Workload is BSD-licensed. We also provide an additional patent grant.