# Deep RL

[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

## Environments

1. Navigation
2. Continuous Control
3. Tennis

## Installation

To install the package, clone the repository and use a virtualenv to pip install the package in development mode:

```sh
git clone https://github.com/daniel-m-campos/deep_rl.git
cd deep_rl
python -m venv venv  # requires Python 3.6
. venv/bin/activate
pip install -e .
```
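As a quick sanity check, you can confirm the package imports from the virtualenv (the `deep_rl` import name matches the module used by the CLI below):

```sh
python -c "import deep_rl; print(deep_rl.__file__)"
```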

### Requirements

See `requirements.txt` and `test-requirements.txt`. Both are installed during the `pip install` step.

### Binary dependencies

The package depends on Udacity's Unity environments. See Environments for the binary download links.

The default binary paths are set in the `Environment` implementations and are of the form `/usr/local/sbin/<ENVIRONMENT>.x86_64`. See the `Navigation` class in `environment.py` for an example. You can either symlink the downloaded binaries to the default locations or pass `binary_path` when running the package.
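For example, to symlink a downloaded Navigation/Banana binary into the default location (the download path and binary filename below are hypothetical; the link target must match the default path defined in `environment.py`):

```sh
# Source path is wherever you unpacked the Udacity download;
# target follows the /usr/local/sbin/<ENVIRONMENT>.x86_64 convention.
sudo ln -s ~/Downloads/Banana_Linux/Banana.x86_64 /usr/local/sbin/Banana.x86_64
```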

## Usage

The package provides a [Fire](https://github.com/google/python-fire) CLI for training and playing the agent. To see the basic commands:

```sh
cd deep_rl
. venv/bin/activate
python -m deep_rl <command> --help
```

Here `<command>` is either `train` or `play`. See `deep_rl/__main__.py` as well as the `__init__` method of the `Agent` implementations in `deep_rl/agent.py`.
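Since Fire exposes function arguments as command-line flags, you should also be able to pass the `binary_path` mentioned above directly (this assumes `binary_path` is a parameter of the underlying command; the path shown is a placeholder):

```sh
python -m deep_rl play navigation --binary_path="/path/to/Banana.x86_64"
```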

### Train

To train an agent in the Navigation/Banana Unity environment with default parameters, run:

```sh
cd deep_rl
. venv/bin/activate
python -m deep_rl train navigation
```

To train with custom parameters, run, for example:

```sh
python -m deep_rl train navigation \
  --n_episodes=100 \
  --save_path=None \
  --image_path=None \
  --learning_rate=5e-3
```

### Play

#### Navigation

To play an agent in the Navigation/Banana environment with default parameters, run:

```sh
cd deep_rl
. venv/bin/activate
python -m deep_rl play navigation
```

To play with an alternative network, run:

```sh
python -m deep_rl play navigation --load_path="path_to_your/network.pth"
```

#### Continuous Control

To play an agent in the Continuous Control/Reacher environment with default parameters, run:

```sh
cd deep_rl
. venv/bin/activate
python -m deep_rl play continuous_control
```

#### Tennis

To play an agent in the Tennis environment with default parameters, run:

```sh
cd deep_rl
. venv/bin/activate
python -m deep_rl play tennis
```