Merge pull request #51 from sct-pipeline/ae/ohbm
Training and evaluation scripts MICCAI 2023
sandrinebedard authored Mar 9, 2023
2 parents c516d9d + c2ecc19 commit 2b93812
Showing 46 changed files with 7,557 additions and 678 deletions.
40 changes: 20 additions & 20 deletions README.md
@@ -84,30 +84,30 @@ Re-run the analysis: [Launch processing](#launch-processing)

## Training

### create_training_joblib.py
The function creates a joblib that allocates data from the testing set of the SCT model to the testing set of the ivadomed model. The output (<code>new_splits.joblib</code>) needs to be referenced in config.json via the field "split_dataset": {"fname_split": "new_splits.joblib"}.
Multiple datasets (BIDS folders) can be used as input for the creation of the joblib. The same list should be assigned to the path_data field of the config.json file.
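
For reference, a minimal sketch of the relevant config.json fields (the dataset paths are placeholders):
```
{
    "path_data": ["path/to/dataset-1", "path/to/dataset-2"],
    "split_dataset": {
        "fname_split": "new_splits.joblib"
    }
}
```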


### compare_with_sct_model.py
The comparison is done by running `sct_deepseg_sc` on every subject/contrast used in the testing set of ivadomed.

Note that the SCT scores were obtained after running `sct_get_centerline` and cropping around this prior.
To make a fair comparison, the ivadomed model therefore needs to be tested on a testing set with a precomputed centerline.

The script `compare_with_sct_model.py` prepares the dataset for this comparison by running `sct_get_centerline` on the images and using this prior on the TESTING set.
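
As a rough sketch of the two SCT calls involved (the file name and contrast flag are placeholders; see the script for the exact options it passes):
```
# Compute the centerline prior used for cropping
sct_get_centerline -i sub-001_T2w.nii.gz -c t2

# Segment the spinal cord with the SCT model
sct_deepseg_sc -i sub-001_T2w.nii.gz -c t2
```
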
### config_generator.py
The script creates the joblibs that represent the splits of our dataset. It creates a <code>joblibs</code> folder containing the data split for each sub-experiment (e.g. hard_hard, soft_soft, ...). We leverage this script by running the bash script <code>utils/create_joblibs.sh</code>, which executes the following command for each sub-experiment:
```
python config_generator.py --config config_templates/hard_hard.json \
                           --datasets path/to/data \
--ofolder path/to/joblib \
--contrasts T1w T2w T2star rec-average_dwi \
--seeds 15
```
in which one specifies, respectively, the config template for the sub-experiment, the dataset path, the joblibs output folder, the contrasts used for the experiment, and the random generation seed(s).

The output folder will contain as many folders as inputs given to `compare_with_sct_model.py`, each with the suffix SCT. These folders "simulate" output folders from ivadomed (they contain evaluation3dmetrics.csv files) so that the violin-plot visualizations from the script `visualize_and_compare_testing_models.py` can be used.
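
A hypothetical invocation of the visualization (the --ofolders and --metric flags are assumptions; check the script's help for its actual interface):
```
# Compare an ivadomed results folder against its SCT-suffixed counterpart
python visualize_and_compare_testing_models.py \
    --ofolders path/to/results path/to/results_SCT \
    --metric dice_class0
```
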
### training_scripts
Once the joblibs describing how the data is split are generated, one can start training the different models within a sub-experiment. Notice that there are 3 folders in <code>training_scripts</code>: 2 of them relate to a specific MTS contrast, and the last one is used to train models with the other contrasts. This split is due to ivadomed's dataloader not handling MTS contrasts properly at the time of writing. We expect to address this problem in the coming months so that a single bash script can execute all the training experiments smoothly.
For clarity, we go over a few examples of how to use the current training scripts (a command sketch follows below):
1. To train MTS contrast-specific models, choose the appropriate MTS contrast (<code>acq-MTon_MTS</code> or <code>acq-T1w_MTS</code>) and run the associated bash script.
2. To train contrast-specific (without MTS) models AND generalist models (including MTS), run the bash script <code>training_scripts/all/training_run.sh</code>.

All training runs use ivadomed's framework and log training metrics to a <code>results</code> folder (optionally with wandb).
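
For example (the MTS script path below is illustrative; check the contrast-specific folders for the exact file names):
```
# Contrast-specific (without MTS) and generalist models
bash training_scripts/all/training_run.sh

# MTS contrast-specific model (illustrative path)
bash training_scripts/acq-MTon_MTS/training_run.sh
```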

### inference.sh
Once the models are trained, one can use the <code>evaluation/inference.sh</code> bash script to segment the spinal cord for the test participants and qualitatively analyze the results. As with all bash scripts in this project, a few parameters have to be adapted to one's environment (e.g. dataset path).
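
A minimal sketch, assuming the parameters at the top of the script have been adapted to your environment:
```
# Segment the spinal cord for the test participants and review the results
bash evaluation/inference.sh
```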

Problems with this approach:
1. `_centerline.nii.gz` derivatives for the testing-set files are created in the database.
2. The order in which the processes need to run might be confusing:
   1. The joblib needs to be created.
   2. The ivadomed model needs to be trained.
   3. The `compare_with_sct_model.py` script needs to run.
   4. The ivadomed model needs to be tested.

### Evaluation on spine-generic-multi-subject (MICCAI 2023)
Once the inference is done for all models, to reproduce the results presented in our paper one has to run <code>compute_evaluation_metrics.py</code> after specifying the experiment folder paths inside that python script. A <code>spine-generic-test-results</code> folder will be created, containing a json file with the Dice and Relative Volume Difference (RVD) metrics for each experiment on the test set. To obtain the aggregated results **per_contrast** and **all_contrast**, run the <code>miccai_results_models.py</code> script. It generates results aggregated by the aforementioned model categories and the associated LaTeX table used in the paper.
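
A minimal sketch of the two steps, assuming the folder paths have already been set inside each script:
```
# 1. Compute Dice and RVD per experiment (writes spine-generic-test-results/)
python compute_evaluation_metrics.py

# 2. Aggregate per_contrast / all_contrast results and emit the LaTeX table
python miccai_results_models.py
```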

## Compute CSA on prediction masks

146 changes: 0 additions & 146 deletions config/contrast-specific/seg_sc_all.json

This file was deleted.

147 changes: 0 additions & 147 deletions config/contrast-specific/seg_sc_t2star.json

This file was deleted.
