CompilerGym



CompilerGym is a toolkit for exposing compiler optimization problems for reinforcement learning. It allows machine learning researchers to experiment with program optimization techniques without requiring any experience in compilers, and provides a framework for compiler developers to expose new optimization problems for AI.

Getting Started

Starting with CompilerGym is simple. If you are not already familiar with the gym interface, refer to the getting started guide for an overview of the key concepts.

Installation

Install the latest CompilerGym release using:

$ pip install compiler_gym

The binary works on macOS and Linux (Ubuntu 18.04, Fedora 28, Debian 10, or newer equivalents).
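
To verify the installation, you can import the package and print its version. This is only a quick sanity check, and it assumes the package exposes a standard __version__ attribute; confirm against the documentation if in doubt.

>>> import compiler_gym                     # should import without error
>>> compiler_gym.__version__                # prints the installed version (assumes a __version__ attribute)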

Building from Source

If you prefer, you may build from source. This requires a modern C++ toolchain. On macOS you can use the system compiler. On Linux, install the required toolchain using:

$ sudo apt install clang libtinfo5 patchelf
$ export CC=clang
$ export CXX=clang++

We recommend using conda to manage the remaining build dependencies. First, create a conda environment with the required packages:

$ conda create -n compiler_gym python=3.8 bazel=3.1.0 cmake pandoc
$ conda activate compiler_gym

Then clone the CompilerGym source code using:

$ git clone https://github.com/facebookresearch/CompilerGym.git
$ cd CompilerGym

Install the python development dependencies using:

$ make init

Then run the test suite to confirm that everything is working:

$ make test

To build and install the python package, run:

$ make install

When you are finished, you can deactivate and delete the conda environment using:

$ conda deactivate
$ conda env remove -n compiler_gym

Trying it out

In Python, import compiler_gym to use the environments:

>>> import gym
>>> import compiler_gym                     # imports the CompilerGym environments
>>> env = gym.make("llvm-autophase-ic-v0")  # starts a new environment
>>> env.require_dataset("npb-v0")           # downloads a set of programs
>>> env.reset()                             # starts a new compilation session with a random program
>>> env.render()                            # prints the IR of the program
>>> env.step(env.action_space.sample())     # applies a random optimization, updates state/reward/actions
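
The snippet below builds on the same API to run a short optimization episode and accumulate reward. It is a sketch rather than a canonical recipe: it assumes the standard gym step() return of (observation, reward, done, info) and uses an arbitrary episode length of ten steps.

import gym
import compiler_gym                         # registers the CompilerGym environments

env = gym.make("llvm-autophase-ic-v0")      # create the environment, as above
env.require_dataset("npb-v0")               # download the benchmark programs
env.reset()                                 # start a new compilation session

# Apply random optimizations and sum the rewards. Assumes the standard gym
# step() signature; the 10-step episode length is an arbitrary choice.
total_reward = 0.0
for _ in range(10):
    observation, reward, done, info = env.step(env.action_space.sample())
    total_reward += reward
    if done:
        break

print("Cumulative reward:", total_reward)
env.close()                                 # free the compilation session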

See the documentation website for tutorials, further details, and API reference.

Contributing

We welcome contributions to CompilerGym. If you are interested in contributing, please see this document.

Citation

If you use CompilerGym in any of your work, please cite:

@Misc{CompilerGym,
  author = {Cummins, Chris and Leather, Hugh and Steiner, Benoit and He, Horace and Chintala, Soumith},
  title = {{CompilerGym}: A Reinforcement Learning Toolkit for Compilers},
  howpublished = {\url{https://github.com/facebookresearch/CompilerGym/}},
  year = {2020}
}
