According to Prof. Caliskan's work, when we take a pre-trained model (for text or images), we inevitably import human biases into our own application, in particular the biases of 21st-century web users. How can we deal with this when our research calls for less bias of this kind? For example, in last week's homework I tried to construct a Nietzsche bot and a Plato bot. The generated texts did cover each philosopher's main concerns, but they all sounded contemporary. To reduce these biases, are we eventually required to train our own model from the ground up?
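One middle ground between using a pre-trained model as-is and training from scratch is to fine-tune it on the author's own corpus, so the model's style drifts toward the target register while keeping the pre-trained language knowledge. Below is a minimal sketch using the Hugging Face `transformers` and `datasets` libraries; the file name `nietzsche.txt` and all hyperparameters are placeholders, not a tested recipe.

```python
# Sketch: fine-tune GPT-2 on an author's corpus ("nietzsche.txt" is hypothetical).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load the author's corpus as plain text and tokenize it.
raw = load_dataset("text", data_files={"train": "nietzsche.txt"})
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)
tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="nietzsche-gpt2",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False -> standard causal language-modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even after fine-tuning, the contemporary biases of the original pre-training data do not disappear; they are only partly overwritten, which is part of why the question of training from scratch remains open.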
Do you debias the algorithms so that they reflect an unbiased version of our world, or do you keep them as they are so that they represent our world accurately, with all its inherent biases? There seem to be arguments for both sides; what would be your take on this? Does it depend solely on the intended application?
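Either way, the first step is usually to measure the bias, which is what Caliskan's WEAT (and the image analogue in the cited paper) does. Here is a minimal sketch of a WEAT-style effect size; `emb` is a hypothetical dict mapping words to embedding vectors taken from the model under study, and the target/attribute word lists are chosen by the researcher.

```python
# Sketch of a WEAT-style association test (after Caliskan et al.).
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, emb):
    # Differential association of word w with attribute sets A vs. B.
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # How much more strongly target set X (vs. Y) associates with A (vs. B).
    sx = [assoc(x, A, B, emb) for x in X]
    sy = [assoc(y, A, B, emb) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

Whether one then intervenes on the embeddings or leaves them alone is exactly the normative question above; the measurement itself is neutral between the two choices.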
Steed, Ryan and Aylin Caliskan. 2021. “Image Representations Learned with Unsupervised Pre-Training Contain Human-Like Biases.” ACM FAccT.