How can I train an Agent on multiple Environments? #65
Hi @lagidigu. With Dopamine it is only possible to train using a single environment at a time. OpenAI Baselines, on the other hand, allows multiple concurrent environments for a number of its algorithms (including PPO). Instructions for running Unity environments with Baselines are here: https://github.com/Unity-Technologies/ml-agents/tree/master/gym-unity#running-openai-baselines-algorithms. The difference is that you will want to replace …
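For concreteness, the multi-environment setup described above can be sketched roughly as follows. This is a sketch only: it assumes the `UnityEnv` wrapper from `gym_unity` and the `SubprocVecEnv` class from Baselines referenced in the linked instructions; the executable path and the `use_visual` flag are illustrative placeholders, not verified settings. Note that each Unity instance needs a distinct `worker_id` (which maps to a distinct communication port); reusing a `worker_id` is one way the socket connection to Python can fail.

```python
# Sketch only: assumes gym_unity's UnityEnv and Baselines' SubprocVecEnv
# as described in the linked gym-unity instructions; path and flags are
# placeholders, not verified against your installed versions.

def make_unity_env_fns(env_path, num_envs, base_worker_id=0):
    """Create one environment-constructor ("thunk") per worker.

    Each constructor captures its own worker_id so that every Unity
    instance launched by SubprocVecEnv listens on its own port.
    """
    def make_env(worker_id):
        def _thunk():
            # Imported inside the thunk so the import happens in the
            # subprocess that actually opens the environment.
            from gym_unity.envs import UnityEnv
            return UnityEnv(env_path, worker_id=worker_id, use_visual=True)
        _thunk.worker_id = worker_id  # exposed for inspection/debugging
        return _thunk
    return [make_env(base_worker_id + i) for i in range(num_envs)]

# Assumed usage with Baselines (not executed here):
# from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv
# venv = SubprocVecEnv(make_unity_env_fns("./obstacletower", num_envs=4))
```

The factory-of-thunks pattern is what `SubprocVecEnv` expects: it calls each thunk in a separate subprocess, so the constructors must be picklable closures rather than already-constructed environments.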
If used as shown below, the environment cannot establish a connection to Python; the socket fails. This was tested on two Windows machines. Surprisingly, when set to a single environment, two instances of the Obstacle Tower build are launched.
Thanks a lot!
Hi,
I read the GCP tutorial on how to set up Dopamine, but I cannot figure out how to train the agent/brain on multiple environments simultaneously, as you did during the PPO/Rainbow training.
Would it just be a matter of creating several runners?
Thanks a lot in advance,
Cheers
Luc