Suppose we need to do something like run a database migration. Right now, we would probably write a script and then connect directly from our personal machine with production credentials to run it.
This works for now, but has a few drawbacks.
First, this requires sharing production credentials with anyone who wants to run a script. Second, it doesn't give us an easy way to keep a record of who's done what in prod. Finally, it doesn't provide an easy code review mechanism since there's no PR process involved.
To remedy these problems, build infrastructure where scripts meant to run in production are code reviewed, then executed by a binary that itself runs in production. Also add a way to monitor what code has been executed.
A potential solution might look like this:
1. An engineer writes a script to perform the desired action and opens a PR to commit it to a specific directory in the repo.
2. The script is code reviewed like any other PR. The author gets approval and merges the PR.
3. Via GitHub webhooks, the UpSwyng server is notified that a new PR has been merged. It checks whether there are new scripts in the script directory. If so, it creates a worker job. The job builds the script, deletes the script file from the repository, and commits that deletion via a GitHub app/plugin/bot/whatever. Once that's done, it executes the script. Upon completion, the worker takes the script source and inserts some comments about the results of the run (these can be piped from stdio), then makes a second commit adding the annotated file to a directory of executed scripts.
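As a rough sketch of the webhook-handling step, the server could inspect the push payload for a merged PR and pick out any files added under the pending-scripts directory. The directory name `scripts/pending/` and the payload shape are assumptions for illustration (GitHub's push event does include `commits[].added`, but the exact directory layout here is hypothetical):

```typescript
// Hypothetical pending-scripts directory; the real repo layout may differ.
const PENDING_DIR = "scripts/pending/";

// Minimal slice of the GitHub push webhook payload we care about.
interface PushEvent {
  commits: { added: string[] }[];
}

// Collect files added under the pending-scripts directory across all
// commits in the push; each match would become a worker job.
function newPendingScripts(event: PushEvent): string[] {
  return event.commits
    .flatMap((commit) => commit.added)
    .filter((path) => path.startsWith(PENDING_DIR));
}
```

The worker job creation, script execution, and follow-up commits would hang off the result of a check like this; if the returned list is empty, the webhook can be ignored.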