
Influences of positive-class datapoints are all higher than influences of negative-class datapoints #18

Open
chenzhiliang94 opened this issue Oct 22, 2024 · 0 comments


chenzhiliang94 commented Oct 22, 2024

Hello! First of all thank you for creating this library. It was very helpful for me.

I have a simple dataset with a balanced number of positive and negative labels. After running the influence function calculation with a fitted model (only the classifier head of the neural network is unfrozen, but it's a few layers rather than a single linear layer), I noticed that the influence of every positive-class datapoint is higher than that of any negative-class datapoint. Is there a theoretical reason for this? I inspected the datapoints manually and cannot find any sensible explanation.

For the record, my dataset is quite clean (all labels are correct) and my model classifies both classes with high accuracy. I also noticed that when I use a less fitted model (trained for only a few epochs), the issue largely goes away.
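For anyone debugging a similar observation, here is a minimal sketch of how I check whether the two classes are completely separated by their influence scores. The `scores` and `y` arrays below are synthetic placeholders, not my real data; in practice `scores` would come from the library's influence function calculation.

```python
import numpy as np

def class_influence_summary(influences, labels):
    """Summarize influence scores per class and flag full separation.

    influences: 1-D array of influence scores, one per training point.
    labels:     1-D array of 0/1 class labels aligned with `influences`.
    """
    influences = np.asarray(influences, dtype=float)
    labels = np.asarray(labels)
    pos = influences[labels == 1]
    neg = influences[labels == 0]
    return {
        "pos_mean": float(pos.mean()),
        "neg_mean": float(neg.mean()),
        # True exactly when every positive point outranks every negative
        # point, i.e. the symptom described in this issue.
        "fully_separated": bool(pos.min() > neg.max()),
    }

# Synthetic example reproducing the reported symptom: all positive
# influences sit above all negative influences.
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.05])
y      = np.array([1,   1,   1,   0,   0,   0])
summary = class_influence_summary(scores, y)
print(summary["fully_separated"])  # True
```

Comparing `pos_mean` and `neg_mean` (and the full per-class histograms) against `fully_separated` helped me confirm that the separation was total rather than just a difference in means.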
