Images, Art & Video (E1) - Steed and Caliskan 2021 #40

Open
HyunkuKwon opened this issue Jan 12, 2021 · 2 comments
@HyunkuKwon (Collaborator)

Steed, Ryan and Aylin Caliskan. 2021. “Image Representations Learned with Unsupervised Pre-Training Contain Human-Like Biases.” ACM FAccT.

@romanticmonkey

According to Prof. Caliskan's work, when we take a pre-trained model (for text or images), it will inevitably contain some human biases; in particular, the biases of 21st-century web users. How can we deal with this when our research requires less bias? For example, in last week's homework I tried to construct a Nietzsche bot and a Plato bot. The generated texts did cover each philosopher's main focus, but they all sounded contemporary. To reduce these biases, will we eventually be required to train our own models from the ground up?
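
For reference, the paper quantifies such biases with an Image Embedding Association Test (iEAT), adapted from Caliskan's WEAT for word embeddings. Below is a minimal sketch of the effect-size computation, assuming the stimuli have already been encoded into NumPy arrays; the variable names `X`, `Y`, `A`, `B` (target and attribute sets) are placeholders, not taken from the paper's code:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): how much more similar w is, on average,
    # to attribute set A than to attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    # WEAT/iEAT effect size d: standardized difference between the
    # mean associations of target sets X and Y with attributes A and B.
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

# Toy usage with random stand-in embeddings (dimension 512 is arbitrary):
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(8, 512)), rng.normal(size=(8, 512))
A, B = rng.normal(size=(8, 512)), rng.normal(size=(8, 512))
print(effect_size(X, Y, A, B))  # near 0 for unrelated random vectors
```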

@egemenpamukcu

Do you debias the algorithms to reflect an unbiased version of our world, or do you keep them as they are so that they represent our world accurately, with all its inherent biases? There seem to be arguments for both sides; what would be your take on this? Does it depend solely on the intended application?
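
To make the first option concrete: one common post-hoc approach is to project a measured bias direction out of the embeddings, in the style of Bolukbasi et al.'s hard debiasing. This is an illustration of that general technique, not the method of Steed and Caliskan; a minimal sketch, assuming embeddings as NumPy arrays:

```python
import numpy as np

def bias_direction(A, B):
    # Estimate a bias axis as the difference between the mean
    # embeddings of two attribute sets (e.g., two social groups).
    d = np.mean(A, axis=0) - np.mean(B, axis=0)
    return d / np.linalg.norm(d)

def project_out(X, d):
    # Remove each embedding's component along the unit bias axis d,
    # keeping only the part orthogonal to it.
    return X - np.outer(X @ d, d)

# After projection, X carries (approximately) no signal along d,
# though bias can persist in other subspaces (Gonen & Goldberg 2019).
```

Whether to apply such a projection arguably does depend on the application: the same correlations that encode social bias may be exactly the signal a descriptive study of the underlying corpus needs.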
