Does Autogluon-fair support mitigating bias and improving fairness in multiclass classification problems?
If so, please provide an example of doing this on a sample multiclass classification dataset.
No, it doesn't currently. Do you have a sample use case?
It would be possible to add it to the slow pathway, but there is little discussion of it in the literature, and I had trouble thinking of what it would be useful for.
@ChrisMRuss Just a use case similar to binary classification (where we have 2 target labels), but with more than two target labels.
E.g. customer segmentation: assign a customer to one of several categories (more than two) based on feature values that include protected attributes such as age or gender.
So we need to make sure that the model's decisions are not biased by these protected attributes and that fairness is maintained.
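As a rough illustration of what "no bias due to the protected attribute" could mean here, below is a minimal sketch (not part of Autogluon-fair; the function name and toy data are my own) that measures demographic parity in a multiclass setting: for each predicted segment, it compares the rate at which each protected group receives that segment, and reports the largest gap per class.

```python
import numpy as np

def demographic_parity_gaps(y_pred, groups, classes):
    """For each class, return the max difference in prediction rate
    between any two protected groups (0.0 = perfectly balanced)."""
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    gaps = {}
    for c in classes:
        # Rate at which each protected group is assigned class c
        rates = [np.mean(y_pred[groups == g] == c) for g in np.unique(groups)]
        gaps[c] = float(max(rates) - min(rates))
    return gaps

# Toy example: customer-segment predictions for two age bands
y_pred = ["A", "B", "C", "A", "B", "A", "C", "B"]
groups = ["young", "young", "young", "young", "old", "old", "old", "old"]
print(demographic_parity_gaps(y_pred, groups, ["A", "B", "C"]))
```

A mitigation method for multiclass would then try to shrink these gaps, e.g. by adjusting per-group decision thresholds, analogous to the binary case.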