Deployment Questions #1
Wow... just wow, and thanks for the really kind words @SpencerPinegar. I hope all this helps you build stuff faster. To answer your question: some of this is pretty opinionated and will likely keep changing, because it's how I develop and try to reduce overhead when building django/celery/python stacks. This repo and response may or may not be very useful for the django app/project deployment footprint you're looking for. Here's my generalized deployment workflow and some background on how I iterated to get here:

### Background

This django application is the latest version of an evolving django project that started many years ago. This newest django 2.x app is an upgrade from the original django project: https://github.com/jay-johnson/docker-django-nginx-slack-sphinx

About a year ago I got tired of worrying about python 3, so I started with Jose Padilla's django 2.0+ repo: https://github.com/jpadilla/django-project-template and rolled the AntiNex Django REST API from it (which works for Django and Django REST Framework).

### Application Deployment Strategy - Docker Containers

I am pretty sold on docker containers for everything, so I always try to containerize as soon as possible to prevent post-launch integration pain for a project/product. Docker also solves a lot of new-user issues, which helps when I am working with a team.

### Automate Container Builds - Travis

I try to focus on getting an initial small, baseline feature set functional outside of docker first, and then roll the first Dockerfile. I'm still on GitHub for the moment because I do not know how (or have not taken the time) to port the travis.yml file that handles the docker builds over to GitLab's tooling (or something like Travis on there). Another note: this repo's travis.yml file does not handle PR and PR-merge webhooks as gracefully as the one I'm using on the Stock Analysis Engine's travis.yml... I should fix that haha.

### Phase 1 - Functional

Now that the container images are auto-pushed to docker hub (or a private registry), I start on the first full-container integration sandbox environment. I use Docker Compose files for managing my stack locally. I usually start with a general deployment use case like the local sandbox sketched below:
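To make that concrete, here is a minimal sketch of the kind of local compose file I mean for a django + postgres + redis sandbox (the image name, env file path and port are illustrative, not necessarily what this repo ships with):

```yaml
# Illustrative local sandbox stack (not the repo's actual compose file):
# django API + postgres + redis, with the source mounted as a volume
# so code changes show up without rebuilding the image.
version: "2"

services:
  api:
    image: jayjohnson/antinex-api:latest   # assumed image name - check docker hub
    env_file: ./envs/dev.env               # assumed env file path
    volumes:
      - .:/opt/antinex/api                 # mount the repo for fast iteration
    ports:
      - "8010:8010"                        # example port
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:10
    environment:
      POSTGRES_USER: antinex
      POSTGRES_PASSWORD: antinex
      POSTGRES_DB: webapp

  redis:
    image: redis:4.0-alpine
```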
### Moving components into Docker one at a time

With the confidence that the app is container-ready, I work on putting the full stack together. For django, I focused on getting the layers above, below, and alongside the app working together, and made sure it could migrate and write to a Postgres database (with pgAdmin) and use a Redis server, driven by manage.py and an environment-variable-driven settings.py file (there's a sketch of that settings pattern after this section).

### Phase 2 - Full Stack End to End Integration

I usually consider this phase the cloud-ready milestone where all the lego blocks are ready for assembly on AWS/GCP/GKE/Azure/on-prem kubernetes. At this point, I am looking at end-to-end validation and running many containers at once (usually on a single host to avoid debugging networking/cluster issues). I still want to be testing rapidly, so I utilize docker volumes heavily in a local development sandbox, mount the repos straight into the containers, and build the newest pips/build artifacts on container startup with a start script. For this phase, I am looking at docker compose files that solve these general deployment use cases:
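For reference, the environment-variable-driven settings.py pattern mentioned above looks roughly like this minimal sketch (the variable names and defaults are illustrative, not necessarily the ones this repo uses):

```python
# Illustrative environment-variable-driven Django settings. The same image
# can then point at compose, Kubernetes, or a managed Postgres/Redis by
# changing only the environment.
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.getenv("POSTGRES_DB", "webapp"),
        "USER": os.getenv("POSTGRES_USER", "antinex"),
        "PASSWORD": os.getenv("POSTGRES_PASSWORD", "antinex"),
        "HOST": os.getenv("POSTGRES_HOST", "postgres"),
        "PORT": os.getenv("POSTGRES_PORT", "5432"),
    }
}

# Celery broker built from the same kind of environment variables
CELERY_BROKER_URL = os.getenv(
    "CELERY_BROKER_URL",
    "redis://{}:{}/0".format(
        os.getenv("REDIS_HOST", "redis"),
        os.getenv("REDIS_PORT", "6379")))
```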
### Phase 3 - Scheduler Integration - Kubernetes and OpenShift

These days, I cannot believe how easy Kubernetes and OpenShift make running a modern stack. For years, I was chasing the latest docker swarm patterns and spent a lot of time keeping the CaaS layer stable. As a comparison: I've spent a total of maybe 5 minutes of my own time fixing Kubernetes on 1.12.1 in the past week. The last backup I did was 13 days ago (note: I manually restarted Minio to test the Ceph volume persistence). Docker swarm needed way more daily tuning and monitoring than k8s does (so far).

How long has it been running?
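A quick way to eyeball that, assuming kubectl is pointed at the cluster, is just the AGE and RESTARTS columns:

```bash
# AGE shows how long each pod has been running;
# RESTARTS shows whether anything has bounced on its own.
kubectl get pods --all-namespaces -o wide
```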
I've been developing new jobs and engine deployments each day on this Kubernetes cluster. Nothing has even restarted due to a failure that wasn't me testing stuff.

### Infrastructure

Once you get Kubernetes/OpenShift running your stack, you can easily run on any cloud provider with a Kubernetes/OCP offering. I am tired of always running on expensive clouds, so I recently moved to a home server. Note: my home cluster is not super fault tolerant at this point; it is just a single Dell r620 (32 core, 128 GB) server with a 1 TB drive running the cluster's 3 VMs. The bare metal server is Ubuntu 18.04 with KVM for managing the 3 Kubernetes CentOS 7 VMs. Each k8s VM is running CentOS 7 with about an 80 GB hdd, 6 CPUs and 30 GB of RAM. The RAM's pretty necessary if you're planning on doing any kind of AI work, but it's definitely more than I've ever had without burning a serious hole in my AWS budget. I am also loving having a DNS server (bind9) to help host all the Kubernetes Ingresses on real FQDNs across my house.

Ingresses available using an external DNS server:
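As an illustration of that Ingress + external DNS pattern (the hostname, service name and TLS secret below are made up for the example, not my actual FQDNs):

```yaml
# Illustrative Ingress: bind9 resolves the FQDN to the cluster, and the
# nginx ingress controller routes it to the django API service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - api.example.com
    secretName: tls-api
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: api
          servicePort: 8010
```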
VM cluster membership:
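Membership is easiest to see from the node list, assuming kubectl is configured, plus a virsh listing on the KVM host for the VM side (hostnames will obviously differ):

```bash
# Cluster membership from Kubernetes' point of view
kubectl get nodes -o wide

# ...and from the KVM host's point of view (run on the bare metal box)
sudo virsh list --all
```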
Let me know if you have any questions... I love this stuff!
Wow, thanks for the full response Jay! Your communication skills are as excellent as your programming abilities. It's exciting to stumble upon a project like this so early, so if there is anything I can do to help, please let me know. I plan on building upon this project either way, and I would love to contribute to the open source community.

Docker CLI Logging
Hey Jay, I really love the guide - after 5+ hours of research, it's the best thing I could find for Django web deployment!! Anyway, I was wondering what you do for deployment. Thanks!!