v0.0.1-alpha (Pre-release)
@vmoens released this 06 Jul 09:52 · ad92dd7

TorchRL Initial Alpha Release

TorchRL is the soon-to-be official RL domain library for PyTorch.
It contains primitives aimed at covering most of the modern RL research space.

Getting started with the library

Installation

The library can be installed through

$ pip install torchrl

Currently, torchrl wheels are provided for Linux and macOS (Intel only, not M1) machines. For other architectures or for the latest features, refer to the README.md and CONTRIBUTING.md files for advanced installation instructions.
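For reference, an install from source typically looks like the sketch below (an illustrative example only; see CONTRIBUTING.md for the authoritative steps):

$ git clone https://github.com/pytorch/rl
$ cd rl
$ python setup.py develop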

Environments

TorchRL currently supports gym and dm_control out of the box. To create a gym-wrapped environment, simply use

import gym
from torchrl.envs import GymEnv, GymWrapper

env = GymEnv("Pendulum-v1")
# equivalently, wrap an existing gym environment
env = GymWrapper(gym.make("Pendulum-v1"))

Environments can be transformed using the torchrl.envs.transforms module; see the environment tutorial for more information.
ParallelEnv allows you to run multiple environments in parallel, as sketched below.
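A minimal sketch of both features (the transform names and constructor arguments below are assumptions based on the torchrl.envs.transforms module and may differ slightly in this alpha):

from torchrl.envs import GymEnv, ParallelEnv, TransformedEnv
from torchrl.envs.transforms import Compose, ObservationNorm, RewardScaling

# transform a single environment: normalize observations and rescale rewards
env = TransformedEnv(
    GymEnv("Pendulum-v1"),
    Compose(
        ObservationNorm(loc=0.0, scale=1.0, in_keys=["observation"]),
        RewardScaling(loc=0.0, scale=0.1),
    ),
)

# run 4 copies of the environment in parallel worker processes
parallel_env = ParallelEnv(4, lambda: GymEnv("Pendulum-v1"))
tensordict = parallel_env.reset()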

Policy and modules

TorchRL modules interact using TensorDict, a new data-carrier class. Although it is not strictly necessary to use it and workarounds exist, we advise using the TensorDictModule class to read and write tensordicts:

import torch.nn as nn
from torchrl.modules import TensorDictModule

# n_obs / n_act: dimensions of the observation and action spaces
policy_module = nn.Linear(n_obs, n_act)
policy = TensorDictModule(
    policy_module,
    in_keys=["observation"],  # keys to be read for the module input
    out_keys=["action"],  # keys to be written with the module output
)
tensordict = env.reset()
tensordict = policy(tensordict)
action = tensordict["action"]

By using TensorDict and TensorDictModule, you can make sure that your algorithm is robust to changes in configuration (e.g. using an RNN for the policy, exploration strategies, etc.). TensorDict instances can be reshaped in several ways, cast to a device, updated, shared among processes, stacked, concatenated, etc., as sketched below.
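A minimal sketch of these manipulations, assuming TensorDict is importable from torchrl.data in this release (the shapes below are arbitrary):

import torch
from torchrl.data import TensorDict

# a tensordict with a batch size of 4
tensordict = TensorDict(
    {"observation": torch.randn(4, 3), "action": torch.randn(4, 1)},
    batch_size=[4],
)
tensordict = tensordict.to("cpu")       # cast to a device
reshaped = tensordict.view(2, 2)        # reshape the batch dimensions
stacked = torch.stack([tensordict, tensordict], 0)  # stack tensordicts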

Some specialized TensorDictModule subclasses are implemented for convenience: Actor, ProbabilisticActor, ValueOperator, ActorCriticOperator, ActorCriticWrapper and QValueActor can be found in actors.py.
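For instance, Actor is a TensorDictModule with sensible defaults for the observation/action pair (a sketch; the default key names here are an assumption):

import torch.nn as nn
from torchrl.modules import Actor

# Actor defaults to in_keys=["observation"] and out_keys=["action"]
actor = Actor(nn.Linear(3, 1))
tensordict = env.reset()
tensordict = actor(tensordict)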

Collecting data

DataCollectors are TorchRL's data-loading class family. We provide single-process, sync and async multiprocess collectors. We also provide ReplayBuffers that can be stored in memory or on disk using the various storage options.
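A minimal sketch of a single-process collector feeding a replay buffer; the constructor arguments shown are assumptions based on the collector and buffer APIs and may differ in this alpha:

from torchrl.collectors import SyncDataCollector
from torchrl.data import ReplayBuffer

# collect 200 frames per batch, 10_000 frames in total, using the policy above
collector = SyncDataCollector(
    create_env_fn=lambda: GymEnv("Pendulum-v1"),
    policy=policy,
    frames_per_batch=200,
    total_frames=10_000,
)
buffer = ReplayBuffer(size=100_000)
for batch in collector:
    buffer.extend(batch.view(-1))  # flatten the batch dims before storing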

Loss modules and advantage computation

Loss modules are provided for each algorithm class independently. They are accompanied by efficient implementations of value and advantage computation functions.
TorchRL aims to be fully compatible with functorch, PyTorch's functional-programming library.
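As an illustration, a DDPG-style loss module could be used roughly as follows (a sketch only: the import path, constructor arguments and returned keys are assumptions, and actor / value_net / batch are hypothetical objects built with the modules and collectors above):

from torchrl.objectives import DDPGLoss

loss_module = DDPGLoss(actor_network=actor, value_network=value_net)
loss_td = loss_module(batch)  # batch is a TensorDict of collected data
loss = loss_td["loss_actor"] + loss_td["loss_value"]
loss.backward()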

Examples

A number of examples are provided as well. Check the examples directory to learn more about exploration strategies, loss modules, etc.