
Strategy for model training #48

Open
rohanbanerjee opened this issue May 31, 2024 · 0 comments

rohanbanerjee commented May 31, 2024

My understanding of how the model was trained in this project is the following. Let's consider a dataset of 100 subjects.

  • model 1: trained on subjects (0:19)
  • applied model 1 to subjects (20:99)
    • note: ground truths for (0:19) were not updated
  • selected the best inferences, say, subjects (20:49)
  • manually corrected the ground truths (GT) for (20:49)
  • model 2a: fine-tuning: model 1 --> model 2a using ONLY (20:49)
  • model 2b: trained from scratch using subjects (0:49)
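The rounds above can be sketched in plain Python. The `train`/`finetune` helpers below are hypothetical placeholders (not the project's actual code) that only record which subjects each model has seen, to make the difference between model 2a and model 2b explicit:

```python
# Sketch of the described training rounds, assuming a dataset of 100 subjects
# indexed 0..99. train/finetune are hypothetical placeholders that only track
# which subjects a model has seen and how many training rounds it went through.

def train(subjects):
    """Train a model from scratch; record the subjects it has seen."""
    return {"seen": set(subjects), "rounds": 1}

def finetune(model, subjects):
    """Fine-tune an existing model on additional subjects."""
    return {"seen": model["seen"] | set(subjects), "rounds": model["rounds"] + 1}

all_subjects = list(range(100))

# model 1: trained on subjects 0..19
model_1 = train(all_subjects[0:20])

# model 1 is applied to subjects 20..99; the best inferences (say 20..49)
# are selected and their ground truths manually corrected.

# model 2a: fine-tune model 1 using ONLY subjects 20..49
model_2a = finetune(model_1, all_subjects[20:50])

# model 2b: train from scratch on subjects 0..49
model_2b = train(all_subjects[0:50])

# Both models have seen subjects 0..49 overall, but via different routes:
# model 2a in two rounds (and its last round never revisits 0..19),
# model 2b in a single from-scratch round.
assert model_2a["seen"] == model_2b["seen"] == set(range(50))
assert model_2a["rounds"] == 2 and model_2b["rounds"] == 1
```

The key point the sketch makes concrete: model 2a's final training round contains none of the original (0:19) subjects, which is what opens the door to the forgetting risk discussed below.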

According to @rohanbanerjee, model 2a performs better than model 2b (as discussed in issue #36).

However, one risk is model drift toward images with 'bad' quality: as the training rounds accumulate, the data quality shifts toward 'bad' cases (i.e., the 'good' cases were used for model 1 and might now be forgotten). We need a validation strategy to ensure this does not happen @rohanbanerjee
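One simple guard against this (a sketch of one possible strategy, not the project's actual setup) is a fixed validation subset that holds out a few subjects from *each* labeling round, and is re-evaluated after every fine-tuning step. The `evaluate` function below is a hypothetical placeholder; in practice it would run inference and compute a segmentation metric such as Dice against the GT:

```python
# Minimal sketch of a per-round drift check. Assumes a few subjects from each
# labeling round are held out of training. evaluate() is a hypothetical
# placeholder returning a dummy score; a real version would compute e.g. Dice.

def evaluate(model_name, subject):
    # Placeholder: run inference with `model_name` on `subject`, compare to GT.
    return 0.9  # dummy score

# Fixed validation subjects, never used for training (hypothetical choice):
val_round_1 = [17, 18, 19]   # from the 'good quality' round (model 1 data)
val_round_2 = [47, 48, 49]   # from the corrected round (model 2 data)

def drift_report(model_name):
    score_r1 = sum(evaluate(model_name, s) for s in val_round_1) / len(val_round_1)
    score_r2 = sum(evaluate(model_name, s) for s in val_round_2) / len(val_round_2)
    return {"round_1": score_r1, "round_2": score_r2}

report = drift_report("model_2a")
# A drop in the 'round_1' score relative to model 1's own score on those
# subjects would flag forgetting of the good-quality cases.
assert set(report) == {"round_1", "round_2"}
```

Tracking the round-1 score across model versions is what makes forgetting visible: if fine-tuning on later, lower-quality rounds degrades it, the drift @jcohenadad describes is happening.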

My suggestion (Julien writing here):

  • model 2c: fine-tuning: model 1 --> model 2c using subjects (0:49)
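In set terms (a trivial sketch, with a hypothetical `finetune` helper that only tracks subject coverage), model 2c differs from model 2a in that its fine-tuning round revisits the original (0:19) subjects, so the 'good quality' cases stay in the training mix:

```python
# Sketch of model 2c: fine-tune model 1 on all labeled subjects 0..49,
# so the original 'good quality' subjects 0..19 are revisited, not forgotten.

def finetune(seen_subjects, new_subjects):
    # Hypothetical placeholder: returns the union of subjects the model has seen.
    return set(seen_subjects) | set(new_subjects)

model_1_subjects = set(range(0, 20))       # model 1 training data
model_2c_subjects = finetune(model_1_subjects, range(0, 50))
assert model_2c_subjects == set(range(50))
assert model_1_subjects <= model_2c_subjects  # round-1 data is retained
```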
