Images, Art & Video (E2) - Wang and Kosinski 2021 #41
Comments
It seems to me this paper is a classic case of an incidental correlation predicting a different label from the one the researchers thought. The paper claims its algorithm has 70% accuracy, yet we could build a naive and much simpler model, with features such as "use of eye-shadow", "reports on facial hair", etc., that could perform as well as or even outperform the algorithm in this paper. How do we know the algorithm is not picking up on something such as the angle at which selfies are taken, as reported in this link?
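A minimal sketch of what such a hand-coded-feature baseline might look like (the feature names, data, and labels below are hypothetical placeholders, not taken from the paper or its dataset):

```python
# Sketch of a simple presentation-cue baseline (not the paper's method).
# Features and labels are random placeholders purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical binary grooming/presentation features coded by annotators,
# e.g. eye shadow, visible facial hair, glasses.
X = rng.integers(0, 2, size=(n, 3))
# Hypothetical labels; in a real replication these would come from the dataset.
y = rng.integers(0, 2, size=n)

baseline = LogisticRegression()
scores = cross_val_score(baseline, X, y, cv=5, scoring="roc_auc")
print("Baseline cross-validated AUC: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```

Comparing such a baseline's cross-validated AUC against the paper's reported accuracy would indicate how much of the signal simple presentation cues alone can explain.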
I was interested to see the article posted by @RobertoBarrosoLuque suggesting that some of the differences identified were not actually in facial features, but in the angles at which selfies were taken. This paper seems like an instance in which photographs were falsely understood as objective documents, with considerations of staging disregarded. Is that kind of error often a risk when using computational methods for image analysis?
To what extent is interpreting these deep neural networks a solution for understanding the underlying social theory (i.e., the relationship between sexual orientation and facial images)? For example, instead of building a theoretically informed statistical model, can we just train a deep learning model and then develop the theory by testing and interpreting that model? (Obviously this is somewhat uninformative for Wang and Kosinski, given the incidental factors in their model mentioned by Roberto, Theo, and many critics in the scientific and popular press.)
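As one illustration of what "interpreting the model" could mean in practice, here is a minimal gradient-saliency sketch; the stand-in network and random image are hypothetical placeholders, and this is not the paper's own analysis:

```python
# Minimal sketch of one interpretation technique (gradient saliency).
# `model` here is a stand-in; in practice it would be a trained classifier.
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder network; replace with a trained model
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input
logits = model(image)
# Gradient of the predicted class score w.r.t. the pixels: large magnitudes
# mark regions the model relies on (face shape vs. styling, camera angle, etc.).
logits[0, logits.argmax()].backward()
saliency = image.grad.abs().max(dim=1).values  # (1, 224, 224) heatmap
print(saliency.shape)
```

Inspecting where such heatmaps concentrate (facial structure versus grooming or framing) is one way critics have probed what a model like this actually learned.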
I wonder whether the algorithm in this paper would work in cultures where the facial images of men and women are more similar. In some cultures, heterosexual men tend to wear makeup, which makes it harder to distinguish homosexual from heterosexual faces. So I suspect the accuracy of the algorithm is contingent on culture and norms; in other words, there might not be a universal theory supporting this paper.
This type of work merits deep criticism if we are to take its consequences seriously. Trying to discern traits from human faces seems like a dangerous game that could be weaponized against people. What are the main considerations in determining the ethics of these types of studies?
What do you think about the ethical controversy surrounding this research?
This paper is very famous, and I am sure it is one of the example papers in the first MACSS Perspectives course on the ethics of computational social science studies. Yet this paper passed IRB review. What do you think about this?
Ethical implications aside, I thought the paper made some bold claims. Prenatal hormone theory is cited as an explanation for why the prototypical gay male face (constructed from the model) seemed more feminine than the heterosexual male face (this research borders on physiognomy). To what degree do you think this kind of research paradigm can be used to motivate investigation into the causes of the predicted phenomena?
Might this paper suffer from the biased-data problem described in Steed & Caliskan (2021)? Might the data from the dating site contain internal biases?
I agree with the previous comments on the ethical controversy in computational social science. I'm interested in whether there are related papers on this issue. For example, what about detecting sexual orientation from audio? (That may again be contentious and may even yield a poorly fitting model.)
If we look at this paper purely out of curiosity rather than through the lens of ethical concerns, it is interesting to see how deep learning and vision analysis are linked to sexual orientation, especially since the model detects nuances that human perception cannot capture.
I find this research deeply disturbing with regard to how it employs and interprets neural network models. How does prediction accuracy make any sense in a social science setting if the researchers cannot verify the representativeness of their dataset?
I think one interesting extension would be to consider sexual orientation on a spectrum instead of in a binary setting. I also wonder how robust the results are to different datasets. For example, people's facial appearances change as they age, and people come from different ethnic groups. Would the algorithm still successfully detect sexual orientation?
How do we generate meaningful theoretical discussions from this type of research that essentially involves a black box?
I love the indignation and thoughtful critique this paper generated among the group, illustrated well by the first five questions, which I think are crucial to ask about this paper.
Wang, Yilun, and Michal Kosinski. 2018. "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images." Journal of Personality and Social Psychology 114(2): 246.