This repository contains the code for the paper:
B. Glocker, S. Winzeck. Assessing the inter-relationship of prediction tasks: Implications for algorithmic encoding of protected characteristics and its effect on AI performance. 2021.
The CheXpert imaging dataset together with the patient demographic information used in this work can be downloaded from https://stanfordmlgroup.github.io/competitions/chexpert/.
For running the code, we recommend setting up a dedicated Python environment.
Create and activate a Python 3 conda environment:

```
conda create -n chexploration python=3
conda activate chexploration
```

Install PyTorch using conda:

```
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
```
Alternatively, create and activate a Python 3 virtual environment:

```
virtualenv -p python3 <path_to_envs>/chexploration
source <path_to_envs>/chexploration/bin/activate
```

Install PyTorch using pip:

```
pip install torch torchvision
```
Install the additional Python packages:

```
pip install matplotlib jupyter pandas seaborn pytorch-lightning scikit-learn scikit-image tensorboard tqdm openpyxl
```
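To verify the installation, you can check that PyTorch imports correctly and that a GPU is visible. This is a minimal sanity check, not part of the original instructions:

```python
import torch

# Quick sanity check of the PyTorch installation
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is usable
```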
In order to replicate the results presented in the paper, please follow these steps:

- Download the CheXpert dataset and copy the file `train.csv` to the `datafiles` folder.
- Download the CheXpert demographics data and copy the file `CHEXPERT DEMO.xlsx` to the `datafiles` folder.
- Run the notebook `chexpert.sample.ipynb` to generate the study data (a sanity check for the generated files is sketched after this list).
- Run the script `chexpert.disease.py` to train a disease detection model.
- Run the script `chexpert.sex.py` to train a sex classification model.
- Run the script `chexpert.race.py` to train a race classification model.
- Run the notebook `chexpert.predictions.ipynb` to evaluate all three prediction models.
- Run the notebook `chexpert.explorer.ipynb` for the unsupervised exploration of feature representations.
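As a quick check between the sampling and training steps, the following sketch confirms that the study data files referenced by the training scripts exist. It assumes the sampling notebook writes its outputs to the `datafiles` folder; the exact set of output files is inferred from the file names mentioned in this README:

```python
import os

# Check that the study data generated by chexpert.sample.ipynb is in place
# before launching the training scripts.
for filename in ['full_sample_train.csv', 'full_sample_val.csv']:
    path = os.path.join('datafiles', filename)
    print(path, 'found' if os.path.exists(path) else 'MISSING')
```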
Additionally, there are scripts `chexpert.sex.split.py` and `chexpert.race.split.py` to run SPLIT on the disease detection model. The default setting in all scripts is to train a DenseNet-121 using the training data from all patients. The results for models trained on subgroups only can be produced by changing the path to the datafiles (e.g., using `full_sample_train_white.csv` and `full_sample_val_white.csv` instead of `full_sample_train.csv` and `full_sample_val.csv`).
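A sketch of that path change is shown below; the variable names are illustrative and may differ from those used in the actual scripts:

```python
# Default: training data from all patients
csv_train = 'datafiles/full_sample_train.csv'
csv_val = 'datafiles/full_sample_val.csv'

# Subgroup-only training (e.g., White patients): swap in the subgroup files
csv_train = 'datafiles/full_sample_train_white.csv'
csv_val = 'datafiles/full_sample_val_white.csv'
```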
Note that the Python scripts also contain code for running the experiments using a ResNet-34 backbone, which requires less GPU memory.
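For illustration, swapping the backbone in torchvision might look like the following. This is a sketch of the general pattern, not the exact code from the scripts, and `num_classes` is a placeholder:

```python
import torch.nn as nn
import torchvision.models as models

num_classes = 14  # placeholder; set to the number of prediction targets

# Default backbone: DenseNet-121
model = models.densenet121(pretrained=True)
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# Lower-memory alternative: ResNet-34
model = models.resnet34(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)
```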
This work is supported through funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant Agreement No. 757173, Project MIRA, ERC-2017-STG) and by the UKRI London Medical Imaging & Artificial Intelligence Centre for Value Based Healthcare.
This project is licensed under the Apache License 2.0.