
Bias mitigation and fairness boosting in a multi-class classification use case using Autogluon-fair #9

Open
pradeepdev-1995 opened this issue Feb 9, 2023 · 2 comments


@pradeepdev-1995

Does Autogluon-fair support mitigating bias and improving fairness in multi-class classification problems?
If so, please provide an example of doing this on a sample multi-class classification dataset.

@ChrisMRuss
Contributor

No, it doesn't currently. Do you have a sample use case?

It would be possible to add it to the slow pathway, but there's not much discussion of it in the literature, and I had trouble thinking of what it would be useful for.

@pradeepdev-1995
Author

@ChrisMRuss Just a use case similar to binary classification (where we have 2 target labels), but with more than two target labels.

E.g. customer segmentation: segment a customer into different categories (more than 2) based on different feature values, including some protected features such as age or gender.

We need to make sure that the model's decisions are not biased by these protected attributes and that fairness is maintained.

That kind of scenario.
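As a starting point, even without mitigation support in Autogluon-fair, the disparity in a segmentation model like this can at least be measured. Below is a minimal NumPy sketch (not part of the Autogluon-fair API; the function name and the metric choice are illustrative) that extends demographic parity to the multi-class case: for each predicted segment, it reports the gap between the highest and lowest assignment rate across protected groups.

```python
import numpy as np

def classwise_demographic_disparity(y_pred, groups):
    """For each predicted class, return the gap between the highest and
    lowest rate at which protected groups are assigned that class.
    A gap of 0 means the class is assigned at equal rates to all groups.
    Illustrative metric only, not an Autogluon-fair function."""
    classes = np.unique(y_pred)
    group_ids = np.unique(groups)
    gaps = {}
    for c in classes:
        # Assignment rate of class c within each protected group.
        rates = [np.mean(y_pred[groups == g] == c) for g in group_ids]
        gaps[c] = max(rates) - min(rates)
    return gaps

# Toy example: 3 customer segments, binary protected attribute (e.g. gender).
y_pred = np.array([0, 0, 1, 2, 2, 2, 0, 1, 1, 2])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
print(classwise_demographic_disparity(y_pred, groups))
# → {0: 0.2, 1: 0.2, 2: 0.0}
```

A post-processing mitigation pass could then, for example, adjust per-group decision thresholds on the predicted class probabilities to shrink these gaps, which is roughly the binary approach applied one-vs-rest per class.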
