Initial question: What are the ethical and epistemological concerns associated with algorithmic governmentality? What changes does it bring to public administration?
Benbouzid, B. (2017). Des crimes et des séismes [Of crimes and earthquakes]. Réseaux, (206), 95–123. https://doi.org/10.3917/res.206.0095
Abstract: In the United States, predictive policing is rooted in a longstanding police reform project, aimed at creating a proactive police force: one which can intervene preventively rather than in emergency situations only, on its own initiative, without being mobilised by citizens’ appeals. In 2012 the dream of US police reform from the 1970s began to materialize as a machine, when the company Predpol launched a predictive analysis platform downloadable as a simple application, presented as a dashboard sharing real-time risks of crime occurrence with a precision of about 200 metres. The mathematicians who created this start-up drew inspiration from an algorithm developed by a French seismologist. As the source code of the Predpol platform is not accessible for trade secret reasons, the author turned directly to the seismologist in order to understand the predictions. Comparing the Earth scientist with applied mathematics researchers seeking to develop predictive machines shed light on the beings brought into existence by the algorithm, and drew the author’s attention towards the specific associations comprising Predpol. The analysis of the moral dimensions of prediction consists in studying not the specific uses of machine learning, but the transformations of the modalities of prediction from one social context to the next.
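Since Predpol’s source code is a trade secret, any concrete illustration can only be hypothetical. The sketch below shows, in Python, the conditional intensity of a self-exciting (Hawkes-type) point process, the family of “aftershock” models from seismology that the abstract alludes to: every past event temporarily raises the estimated rate of further events. The function name and all parameter values are illustrative assumptions, not Predpol’s actual model.

```python
import math

def intensity(t, past_events, mu=0.2, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t) of a self-exciting point process:
    a constant background rate (mu) plus an exponentially decaying
    contribution (alpha, beta) from each past event -- the 'aftershock'
    logic that predictive-policing models borrow from seismology."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in past_events if ti < t)

# After a burst of events the estimated risk spikes, then decays:
events = [1.0, 1.5, 2.0]
for t in (2.1, 3.0, 5.0, 10.0):
    print(f"t = {t:4.1f}  lambda = {intensity(t, events):.3f}")
```

In models of this family, a “prediction” is simply the area and time window where the estimated intensity is currently highest, which is why such a platform can be presented as a real-time dashboard of risks.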
Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, January 31). 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions.
Bovens, M., & Zouridis, S. (2002). From Street-Level to System-Level Bureaucracies: How Information and Communication Technology is Transforming Administrative Discretion and Constitutional Control. Public Administration Review, 62(2), 174–184.
Review: This article explores how the use of ICT is transforming the structure of government agencies. It argues that there has been a shift from street-level bureaucracy, where officials had ample administrative discretion and dealt with individual citizens, to system-level bureaucracy, where decision making is handled by computer programs that gather data. The article illustrates these concepts through examples from Dutch bureaucracy. It then examines the consequences of this shift and the issues it raises, especially the newfound discretion of system analysts and software designers, and the rigidity of a process that can prevent special cases from being dealt with properly. The authors finish with a set of recommendations to address these new issues. Although focused solely on examples from the Netherlands, the article develops hypotheses that can be applied to other bureaucracies across the world, and it has been cited by many recent academic works. Its recommendation to uphold transparency as an ideal in response to the new issues of system-level bureaucracy could, however, be confronted with arguments made by other works cited in this bibliography that show transparency is not the ideal solution it might seem (such as Ananny and Crawford, 2016). Overall, this work is a valuable contribution to the discussion of algorithms and ICT in public administration, because it emphasizes the shift in actors caused by the use of new instruments and processes.
Bucher, T. (2017). The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30–44. https://doi.org/10.1080/1369118X.2016.1154086
Review: This article focuses on users’ perception and understanding of Facebook algorithms, drawing on tweets and interviews with 25 ordinary users. It develops the concept of the “algorithmic imaginary”, defined as “the ways of thinking about what algorithms are, what they should be and how they function”. It argues that algorithms and people mutually shape each other: algorithms create different moods and reactions in people, while individuals mould the algorithms through the tactics they use to take advantage of them. Although the algorithms studied empirically (editorialization algorithms) do not correspond to those used in public administration for decision making, the theoretical concept of the “algorithmic imaginary” can be used and explored in research on public algorithms. The paper also touches upon a topic that remains understudied: the reception and perception of algorithms by their users. For example, Leila Frouillou, in her PhD thesis on French higher-education selection algorithms, explores how students imagine, “play around” with, and “game” the algorithms in order to be accepted into their preferred programs (see the work cited below).
Christin, A. (2017). Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society, 4(2), 2053951717718855. https://doi.org/10.1177/2053951717718855
Review: This article aims to be “an important empirical check against the rhetoric of algorithmic power” often developed both by proponents and opponents of big data and algorithms, by focusing on the reception and use of two algorithmic programs. It presents the results of multi-sited ethnographic fieldwork conducted between 2011 and 2016 in web newsrooms and criminal courts, exploring the use of algorithms by journalists (real-time analytics) and judges (risk-assessment tools). It finds that there is a decoupling between the way algorithms are meant to be used and the way they are actually used. The results reveal similarities in the ways algorithms are received in both expert fields, in particular in terms of resistance practices (foot-dragging, gaming, opposition). However, differences remain between the two fields in terms of profit orientation, monopoly on expertise, and stance towards digital technologies. Angèle Christin is an assistant professor in the Department of Communication and affiliated faculty in the Sociology Department and Program in Science, Technology, and Society at Stanford University. In this work, she proposes an original and enlightening way to study algorithms in practice, through what she calls “refraction ethnography”: a methodology that focuses on “how algorithms are refracted through organizational forms and work practices”. The methodology developed in this article could be used to define a more appropriate accountability system in terms of responsibilities and transparency, through a better understanding of algorithms in practice. It offers a very promising approach for studying other public algorithmic decision-making systems ethnographically and empirically.
Danaher, J. (2016). The Threat of Algocracy: Reality, Resistance and Accommodation. Philosophy & Technology, 29(3), 245–268. https://doi.org/10.1007/s13347-015-0211-1
Abstract: One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes (bureaucratic, legislative and legal) on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a significant threat to the legitimacy of such processes. Modelling my argument on Estlund’s threat of epistocracy, I call this the ‘threat of algocracy’. The article clarifies the nature of this threat and addresses two possible solutions (named, respectively, ‘resistance’ and ‘accommodation’). It is argued that neither solution is likely to be successful, at least not without risking many other things we value about social decision-making. The result is a somewhat pessimistic conclusion in which we confront the possibility that we are creating decision-making processes that constrain and limit opportunities for human participation.
Elish, M. C. (2016). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (We Robot 2016) (SSRN Scholarly Paper ID 2757236). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=2757236
Review: This paper explores issues of human and non-human accountability in robotic systems. Through two case studies (aviation and nuclear plants), M. C. Elish shows how the responsibility for the failure of a system can be deflected onto the human part of the system. The author develops the concept of the moral crumple zone: “Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a robotic system may become simply a component — accidentally or intentionally — that is intended to bear the brunt of the moral and legal penalties when the overall system fails.” The author makes the case that the system of accountability has not changed despite changes in technological systems, and that the articulation of accountability has to be revised so that all the blame is not shifted onto the humans in the system. Madeleine C. Elish is an anthropologist focusing on the intersections of artificial intelligence, automation, and culture. This work is a welcome addition to a reflection on the dilution and redistribution of accountability in algorithmic systems. A counterexample that would be interesting to study in this context is the failure of the French higher-education admission process run by the “APB” algorithm, where the debate focused on the algorithm itself rather than on the structural conditions of the French higher-education system. Does the notion of “moral crumple zone” apply here, even though the situation does not involve only one machine and an identifiable number of humans? What is the difference between this case and those explored in this paper?
Feller, A., Pierson, E., Corbett-Davies, S., & Goel, S. (2016, October 17). A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Washington Post. Retrieved from https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/
Friedman, B., & Nissenbaum, H. (1996). Bias in Computer Systems. ACM Trans. Inf. Syst., 14(3), 330–347. https://doi.org/10.1145/230538.230561
Abstract: From an analysis of actual cases, three categories of bias in computer systems have been developed: preexisting, technical, and emergent. Preexisting bias has its roots in social institutions, practices, and attitudes. Technical bias arises from technical constraints or considerations. Emergent bias arises in a context of use. Although others have pointed to bias in particular computer systems and have noted the general problem, we know of no comparable work that examines this phenomenon comprehensively and which offers a framework for understanding and remedying it. We conclude by suggesting that freedom from bias should be counted among the select set of criteria—including reliability, accuracy, and efficiency—according to which the quality of systems in use in society should be judged.
Frouillou, L. (2015, November 20). Les mécanismes d’une ségrégation universitaire francilienne : carte universitaire et sens du placement étudiant [The mechanisms of university segregation in the Paris region: the university map and students’ sense of placement] (PhD thesis). Université Paris 1 Panthéon-Sorbonne. Retrieved from https://halshs.archives-ouvertes.fr/tel-01274983/document
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679
Review: This paper aims to clarify and structure the ethical issues raised by algorithms themselves (rather than by the technologies built upon them). It first establishes a map of six epistemic and normative concerns surrounding algorithms (inconclusive evidence, inscrutable evidence, misguided evidence, unfair outcomes, transformative effects, traceability). It then reviews the literature associated with each concern and, on that basis, identifies areas requiring further work to develop the ethics of algorithms. This work offers both a clear, comprehensive structure mapping the different ethical issues algorithms bring up and a review of current scholarship on the ethics of algorithms. Its map of concerns is an interesting framework for analyzing particular algorithms more closely and identifying which types of concern they are linked with. The value of the paper also lies in the astute remark that solving epistemic concerns does not address other normative concerns: “Better methods to produce evidence for some actions need not rule out all forms of discrimination for example, and can even be used to discriminate more efficiently.” This article is therefore a very good starting point for a discussion of the ethics of algorithms and the different concerns their use raises.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Rouvroy, A., & Berns, T. (2013). Gouvernementalité algorithmique et perspectives d’émancipation [Algorithmic governmentality and prospects of emancipation]. Réseaux, (177), 163–196. https://doi.org/10.3917/res.177.0163
Review: This article explores the implications of the rise of data mining and profiling, and the shift from statistical governance to algorithmic governance. This shift from traditional statistics, which relied on common conventions (as explained by Desrosières in an article cited in this bibliography), to algorithms that rely on correlations rather than norms leads to a new “truth regime” (a term borrowed from Foucault) based on a false promise of objectivity. The authors develop the concept of “algorithmic governmentality”: “a certain type of (a)normative or (a)political rationality founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and pre-emptively affect possible behaviours.” They then draw on the works of the philosophers Deleuze and Simondon to explore the consequences of this new governmentality for individuation and emancipation. The authors, themselves philosophers, consciously depart from a science and technology studies approach to algorithms: rather than studying the co-construction of the technologies, they focus on the epistemology of algorithmic decision making and on how algorithms shape our world. The work only covers data mining and profiling algorithms, which are not the only types of decision-making algorithms in public administration. However, the epistemological lens of “algorithmic governmentality” they propose complements other works adopting a sociotechnical take on algorithms. Their argument is similar to that of Danaher (see Danaher, 2016), who focuses on public participation in decision-making procedures and argues that “Increasing reliance on algocratic systems limits the scope for active human participation in and comprehension of decision-making procedures”. Both approaches remain somewhat theoretical and need to be enriched with empirical work (for an example, see Christin, 2017, cited above in this bibliography).
Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354
Abstract: We outline a model for types and levels of automation that provides a framework and an objective basis for deciding which system functions should be automated and to what extent. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
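To make the framework concrete, the following is a minimal sketch, using hypothetical names and illustrative level values, of how the paper’s four function classes and its low-to-high continuum of automation levels could be encoded and compared for a given system; nothing here comes from the authors’ own materials.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class FunctionClass(IntEnum):
    """The four broad classes of functions to which automation can be applied."""
    INFORMATION_ACQUISITION = 1
    INFORMATION_ANALYSIS = 2
    DECISION_SELECTION = 3
    ACTION_IMPLEMENTATION = 4

@dataclass
class AutomationProfile:
    """Automation level per function class, on an assumed 1 (fully manual)
    to 10 (fully automatic) scale standing in for the paper's continuum."""
    levels: dict[FunctionClass, int] = field(default_factory=dict)

    def level(self, fc: FunctionClass) -> int:
        return self.levels.get(fc, 1)  # default: fully manual

# Illustrative profile: a risk-assessment tool that heavily automates data
# gathering and analysis but leaves decision and action to a human official.
profile = AutomationProfile({
    FunctionClass.INFORMATION_ACQUISITION: 7,
    FunctionClass.INFORMATION_ANALYSIS: 8,
    FunctionClass.DECISION_SELECTION: 3,
    FunctionClass.ACTION_IMPLEMENTATION: 1,
})
for fc in FunctionClass:
    print(f"{fc.name:25s} level {profile.level(fc)}")
```

The point of such a representation is the one the abstract makes: automation is not a single on/off property of a system but a vector of levels across distinct functions, each with its own human-performance consequences.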
Tufekci, Z. (2015). Algorithmic Harms Beyond Facebook and Google: Emergent Challenges of Computational Agency. Colorado Technology Law Journal, 13. Retrieved from http://ctlj.colorado.edu/wp-content/uploads/2015/08/Tufekci-final.pdf
Zarsky, T. (2016). The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making. Science, Technology, & Human Values, 41(1), 118–132. https://doi.org/10.1177/0162243915605575
Abstract: We are currently witnessing a sharp rise in the use of algorithmic decision-making tools. In these instances, a new wave of policy concerns is set forth. This article strives to map out these issues, separating the wheat from the chaff. It aims to provide policy makers and scholars with a comprehensive framework for approaching these thorny issues in their various capacities. To achieve this objective, this article focuses its attention on a general analytical framework, which will be applied to a specific subset of the overall discussion. The analytical framework will reduce the discussion to two dimensions, each of which addresses two central elements. These four factors call for a distinct discussion, which is at times absent in the existing literature. The two dimensions are (1) the specific and novel problems the process assumedly generates and (2) the specific attributes which exacerbate them. While the problems are articulated in a variety of ways, they most likely could be reduced to two broad categories: efficiency and fairness-based concerns. In the context of this discussion, such problems are usually linked to two salient attributes the algorithmic processes feature—their opaque and automated nature.