
Next steps #6

Open
jcohenadad opened this issue Dec 11, 2020 · 1 comment

Comments

@jcohenadad
Member

in order:

@joshuacwnewton
Member

  • Update README to be comprehensive. Ex: "sct_run_batch -c parameters/<CONFIG_FILE>" → it is unclear which script to run, in what order, and what each step does, etc.

This is done, I think. :)
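For anyone landing here from the README, a minimal sketch of what a single batch invocation might look like, assuming a YAML config lives under parameters/ (the file name below is a placeholder, not an actual file in the repo):

```python
# Minimal sketch: invoke the batch pipeline for one config file.
# The config file name is a placeholder; sct_run_batch reads the processing
# parameters (data path, output path, per-subject script, etc.) from it.
import subprocess
from pathlib import Path

config = Path("parameters") / "my_config.yml"  # hypothetical file name
subprocess.run(["sct_run_batch", "-c", str(config)], check=True)
```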

  • Run the pipeline, evaluate the results, and if they look good, create release 0.1 and upload the results, including the exact version of the dataset used, the SCT version used, etc.

I've been tracking TODOs for this on my personal TODO board, but to give an update on what I've been working on and what I'm planning:

  • Find out which metrics were used to evaluate previous deep-learning approaches (read Reza's and Lucas's Countception and Hourglass papers)
  • Brainstorm a list of additional metrics to use, if needed
  • Tweak the Python script for computing metrics (the existing evaluation script was broken); a rough sketch of one candidate metric follows this list
  • Fine-tune the "smaller" version of the sct-testing-large dataset (remove/add suitable subjects) for quick iteration when evaluating retrained models (the existing subjects were not suitable for testing)
  • Retrain Hourglass using the same data augmentation as Countception, then re-evaluate
  • Download all suitable subjects from the full sct-testing-large dataset, then run the pipeline to compare the models in depth
  • Present findings at a future ivadomed-internal meeting
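Since the exact metrics are still to be decided, here is a minimal sketch of one candidate localization metric (mean absolute superior-inferior distance over detected discs, plus a detection rate). The data layout (per-disc z-coordinates keyed by disc value) and the function name are assumptions for illustration, not the project's actual format:

```python
# Sketch of a simple localization metric for vertebral disc labels.
# Assumes predictions and ground truth are dicts mapping disc value -> z-coordinate
# (in voxels or mm, depending on the image space); this layout is an assumption.
from statistics import mean

def label_metrics(pred, gt):
    """Mean absolute S-I distance over discs detected in both sets, plus detection rate."""
    common = sorted(set(pred) & set(gt))                 # discs present in both label sets
    distances = [abs(pred[d] - gt[d]) for d in common]   # per-disc localization error
    return {
        "mean_abs_distance": mean(distances) if distances else float("nan"),
        "detection_rate": len(common) / len(gt) if gt else float("nan"),
    }

# Toy example: disc 6 is missed by the prediction, so detection_rate = 0.75.
print(label_metrics({3: 120.0, 4: 101.5, 5: 84.0},
                    {3: 121.0, 4: 100.0, 5: 83.0, 6: 65.0}))
```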
