This repository contains the code for *Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study*, which appeared at the ATTRIB workshop (Workshop on Attributing Model Behavior at Scale) at NeurIPS 2023.
We took one of the goal misgeneralization examples (Maze colour vs. shape) from Di Langosco et al. (2022) and tried to understand how exactly it happens. We built on top of their code here and here. See the paper for full details of what was added; in summary:
- Maze environment simplified and new goal objects added.
- Code to run systematic evaluations of agents and measure capabilities and goal preferences.
- Code to produce plots, videos, and gifs.
To run the training code you will need to install the requirements from both train-procgen and procgenAISC. To run all the notebooks and utils you will also need to run this command:
pip install -r requirements.txt
Training with default settings requires 14GB of GPU memory. See Tips and Tricks at the end for ways to reduce it.
To train multiple (5 by default) agents to reach a yellow line with textured backgrounds, run this command:
. utils/train-many-with-backgrounds.sh
Same, but with black backgrounds:
. utils/train-many.sh
The trained agents will be located in `train-procgen/logs/train/maze_pure_yellowline`. Each agent folder will have a screenshot from a training level for sanity checks. Training on other settings, such as the white line or grey backgrounds, is explained in Tips and Tricks below. One agent takes about 40 minutes to train on consumer hardware.
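The `train-many*.sh` scripts train several agents in a row (5 by default), differing only in random seed. If you would rather drive this from Python, here is a minimal sketch of the same loop; the entry point and flags are illustrative assumptions, so copy the exact command from the `.sh` files before using it.

```python
import subprocess

# Launch one training run per seed, sequentially.
# NOTE: the script path and flags are illustrative assumptions; copy the
# exact command from utils/train-many-with-backgrounds.sh before using this.
SEEDS = range(5)  # 5 agents by default

for seed in SEEDS:
    cmd = [
        "python", "train_procgen/train.py",    # hypothetical entry point
        "--env_name", "maze_pure_yellowline",  # hypothetical setting name
        "--seed", str(seed),
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # stop if a run fails
```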
To evaluate the agents on the same set of 1,000 levels in all the two-object combos from the paper, run:
. utils/run-maze-many-all-settings.sh
The results will be located in `train-procgen/experiments/results-1000`. Each agent folder will have a screenshot of the first level. The first agent folder will have screenshots of all 1,000 test levels. Evaluating 100 agents in each two-object combo takes between 1 and 10 hours, depending on the difficulty (yellow lines are easy, invisible objects are hard).
After evaluation, you can produce the plots from the paper by running the notebooks.
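As a rough illustration of the kind of analysis the notebooks do, the sketch below loads per-level evaluation results and computes, for each agent, its capability (how often it reached either object) and its goal preference (which object it reached). The directory layout and column names here are assumptions; adapt them to the actual result files in `train-procgen/experiments/results-1000`.

```python
from pathlib import Path
import pandas as pd

# Hypothetical layout: one CSV per agent, one row per evaluation level, with
# a "reached" column naming the object the agent ended the episode on
# ("yellow_gem", "red_line", or "none"). Adjust to the real result files.
results_dir = Path("train-procgen/experiments/results-1000/yellowgem_vs_redline")

for csv_path in sorted(results_dir.glob("*.csv")):
    df = pd.read_csv(csv_path)
    reached = df[df["reached"] != "none"]
    capability = len(reached) / len(df)                        # reached either object
    colour_pref = (reached["reached"] == "yellow_gem").mean()  # chose the colour match
    print(f"{csv_path.stem}: capability={capability:.2f}, "
          f"yellow-gem preference={colour_pref:.2f}")
```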
To make videos of agents solving the tasks, run:
. utils/run-maze-many-videos.sh
Before running it, you will have to replace the `--model_file` paths in the `.sh` file above with paths to your own trained models.
The videos will be placed in `videos`. To turn them into easily shareable screenshots and gifs, run:
cd utils
python videos-to-pngs-and-gifs.py
These will be placed in `video-frames-and-gifs`.
Note that by default the screenshots and gifs will be 6 times larger than the original 64x64 video, because many apps introduce blurriness when resizing very small images. Adjust the 6x factor to suit your needs.
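If you want to resize frames yourself, use nearest-neighbour resampling so the 64x64 pixels stay sharp rather than being smoothed. A minimal Pillow sketch (the file names are just examples):

```python
from PIL import Image

SCALE = 6  # same 6x factor the script uses by default

# Upscale a single 64x64 frame without smoothing: nearest-neighbour keeps the
# pixel-art look crisp, whereas bilinear/bicubic resampling would blur it.
frame = Image.open("video-frames-and-gifs/frame_000.png")  # example path
big = frame.resize((frame.width * SCALE, frame.height * SCALE), Image.NEAREST)
big.save("frame_000_6x.png")
```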
You can download all 1,000+ trained models and the results of over 10 million evaluations here.
Below is an assorted list of tips and tricks that you can use to make the code do what you want.
- Training to reach different objects: here.
- Changing the background colour to grey: here.
- Changing the training maze size: here.
- Adding back randomness to the maze size: here.
- Changing back the maze generation algorithm: here.
- Reducing the minibatch size to fit models on smaller GPUs: here.
- Making screenshots of the human and agent views: here.
If you train an agent to reach a yellow line, will it prefer a yellow gem or a red line?
It depends on the random seed used for training! Below is a plot showing how training 100 agents that differ only in their random seed produces capable agents with different goal preferences.
See the paper for other results and more details.
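If you want to reproduce that kind of plot from your own evaluation results, a minimal sketch is below; the preference values here are random placeholders, not results from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder preferences for 100 agents (NOT results from the paper): each
# value is the fraction of levels where an agent chose the yellow gem over
# the red line, e.g. as computed in the evaluation sketch above.
rng = np.random.default_rng(0)
preferences = rng.uniform(0, 1, size=100)

plt.hist(preferences, bins=20, range=(0, 1), edgecolor="black")
plt.xlabel("Fraction of levels where the agent chose the yellow gem")
plt.ylabel("Number of agents")
plt.title("Goal preference across 100 training seeds (placeholder data)")
plt.tight_layout()
plt.savefig("goal-preference-histogram.png")
```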
Please cite the paper using the BibTeX entry below:
@article{ramanauskas2023colour,
title={Colour versus Shape Goal Misgeneralization in Reinforcement Learning: A Case Study},
author={Ramanauskas, Karolis and {\c{S}}im{\c{s}}ek, {\"O}zg{\"u}r},
journal={arXiv preprint arXiv:2312.03762},
url={https://arxiv.org/abs/2312.03762},
year={2023}
}