# Deep Recurrent Q-Network for AutoScaling Functions

- `agent.py` contains the agent training code
- `env.py` contains the Gymnasium-compatible environment that integrates Kubernetes/OpenFaaS for the interaction and feedback loop (a minimal interaction sketch follows this list)
- `test_agent.py` and `test_env.py` contain the code for evaluating the trained agent and its environment
- `requirements.txt` contains the project requirements
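
Below is a minimal usage sketch of the standard Gymnasium interaction loop that `env.py` is described as supporting. The class name `FunctionScalingEnv` and the random action are illustrative assumptions, not the repository's actual API; the trained DRQN agent from `agent.py` would replace the random policy.

```python
# Hypothetical usage sketch: class and method names below are assumptions,
# not taken from the repository. It only illustrates the standard Gymnasium
# reset/step loop against the Kubernetes/OpenFaaS-backed environment.
from env import FunctionScalingEnv  # hypothetical class name

env = FunctionScalingEnv()          # would talk to Kubernetes/OpenFaaS
obs, info = env.reset()

terminated = truncated = False
while not (terminated or truncated):
    # A trained DRQN agent would map the observation (and its recurrent
    # hidden state) to a scaling action; a random sample stands in here.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)

env.close()
```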

**Note:**
To run the agent successfully, please update the relevant placeholders marked with `$PLACEHOLDER`.