Hello! First of all, thank you for creating this library; it has been very helpful for me.

I have a simple dataset with a balanced number of positive and negative labels. After running the influence function calculation with a well-fitted model (only the classifier head of the network is unfrozen, but it consists of a few layers rather than a single linear layer), I noticed that the influence scores of positive-class data points are all higher than those of any negative-class data point. Is there a theoretical reason for this? I inspected the data points manually and cannot find a sensible explanation.

For the record, my dataset is quite clean (all labels are correct), and my model classifies both classes with high accuracy. I also noticed that with a less-fitted model (trained for only a few epochs), the issue largely goes away.
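For reference, here is a minimal, self-contained sketch (not my actual model or code) of the kind of up-weighting influence computation I mean, on a toy balanced logistic-regression problem. All names and the synthetic data are illustrative; the score is the standard influence of each training point on the mean training loss, `-g_total^T H^{-1} g_i`:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Balanced synthetic dataset: 50 positive, 50 negative examples.
n, d = 100, 5
X = rng.normal(size=(n, d))
y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
X[y == 1] += 1.0  # shift the positive class so the problem is learnable

# Fit L2-regularised logistic regression by gradient descent; the weight
# decay `lam` keeps the Hessian invertible.
lam = 1e-2
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w
    w -= 0.5 * grad

# Per-example loss gradients at the (approximate) optimum.
p = sigmoid(X @ w)
grads = (p - y)[:, None] * X  # shape (n, d)

# Hessian of the mean loss, plus the regulariser.
S = p * (1 - p)
H = (X * S[:, None]).T @ X / n + lam * np.eye(d)

# Influence of up-weighting each training point on the mean training loss.
g_total = grads.mean(axis=0)
H_inv_g = np.linalg.solve(H, grads.T)  # shape (d, n)
influence = -g_total @ H_inv_g         # shape (n,)

# Compare the per-class score distributions, as I did with my model.
print(influence[y == 1].mean(), influence[y == 0].mean())
```

In my case the analogue of the positive-class scores dominates the negative-class scores across the board, which is the behaviour I cannot explain.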