- Update README to be comprehensive. Ex: `sct_run_batch -c parameters/<CONFIG_FILE>` --> unclear which script to run, in what order, and what each script does.
- Run the pipeline, evaluate the results, and if they are good, create release 0.1 and upload the results, incl. the exact versions of the dataset and SCT used, etc.
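As a starting point, here is a minimal sketch of the order of operations the README could spell out. The dataset URL, config file name, and output path below are placeholders, not the repository's actual layout:

```sh
# 1. Download the dataset (the README should pin the exact dataset version).
git clone <DATASET_REPO_URL>

# 2. Run the batch pipeline across all subjects; -c points to the config,
#    which defines input/output paths and the per-subject processing script.
sct_run_batch -c parameters/<CONFIG_FILE>

# 3. Review the QC report generated under the output folder set in the config
#    (e.g. <PATH_OUTPUT>/qc/index.html) in a browser before computing metrics.
```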
> Update README to be comprehensive. Ex: `sct_run_batch -c parameters/<CONFIG_FILE>` --> unclear which script to run, in what order, and what each script does.
This is done, I think. :)
> Run the pipeline, evaluate the results, and if they are good, create release 0.1 and upload the results, incl. the exact versions of the dataset and SCT used, etc.
I've been tracking TODOs for this on my personal TODO board, but just to give an update on what I've been working on and what I'm planning, in order:
- Find out which metrics were used to evaluate previous deep-learning approaches (read Reza's and Lucas's Countception and Hourglass papers).
- Brainstorm a list of additional metrics to use, if needed.
- Tweak the Python script for computing metrics, since the existing evaluation script was broken (see the sketch after this list).
- Refine the "smaller" version of the sct-testing-large dataset (removing/adding suitable subjects) for quick iteration when evaluating retrained models, since the existing subjects were not suitable for testing.
- Retrain Hourglass using the same data augmentation as Countception, then re-evaluate.
- Download the full set of suitable sct-testing-large subjects, then run the pipeline to compare the models in depth.
- Present findings at a future ivadomed internal meeting.
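For the metrics script, here is a minimal sketch of the kind of label-wise evaluation it could compute. The dict-based I/O, the `DETECTION_THRESHOLD_MM` value, and the `evaluate_labels` helper are all hypothetical, not the repository's actual code:

```python
"""Sketch of a label-wise localisation metric for vertebral labelling models.

Assumption: predictions and ground truth are dicts mapping a disc label (int)
to its (x, y, z) coordinate in millimetres. The real script may differ.
"""
import math

# Hypothetical threshold: a prediction counts as "detected" if it lands
# within this distance (mm) of the ground-truth disc coordinate.
DETECTION_THRESHOLD_MM = 10.0


def euclidean_mm(a, b):
    """Euclidean distance between two (x, y, z) points, in mm."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def evaluate_labels(pred, gt):
    """Mean localisation error and detection rate over ground-truth labels."""
    distances = {
        label: euclidean_mm(pred[label], gt[label])
        for label in gt
        if label in pred
    }
    detected = sum(d <= DETECTION_THRESHOLD_MM for d in distances.values())
    mean_dist = (
        sum(distances.values()) / len(distances) if distances else float("nan")
    )
    return {
        "mean_distance_mm": mean_dist,
        "detection_rate": detected / len(gt) if gt else float("nan"),
        "missed_labels": sorted(set(gt) - set(pred)),
    }


if __name__ == "__main__":
    # Toy example: one disc localised ~1.7 mm off, one disc missed entirely.
    gt = {3: (10.0, 20.0, 30.0), 4: (10.0, 20.0, 45.0)}
    pred = {3: (11.0, 19.0, 31.0)}
    print(evaluate_labels(pred, gt))
```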