Editorial

How should we theorize algorithms? Five ideal types in analyzing algorithmic normativities

Francis Lee (Uppsala Universitet, Uppsala, Sweden) and Lotta Björklund Larsen (Tax Administration Research Centre, University of Exeter Business School)

Corresponding author: Francis Lee, Uppsala Universitet, Uppsala 75105, Sweden. Email: francis@francislee.org

Abstract

The power of algorithms has become a familiar topic in society, media, and the social sciences. It is increasingly common to argue that, for instance, algorithms automate inequality, that they are biased black boxes that reproduce racism, or that they control our money and information. Implicit in many of these discussions is that algorithms are permeated with normativities, and that these normativities shape society. The aim of this editorial is twofold: First, it contributes to a more nuanced discussion about algorithms by discussing how we, as social scientists, think about algorithms in relation to five theoretical ideal types. For instance, what does it mean to go under the hood of the algorithm, and what does it mean to stay above it? Second, it introduces the contributions to this special theme by situating them in relation to these five ideal types. In doing so, the editorial aims to contribute to an increased analytical awareness of how algorithms are theorized in society and culture. The articles in the special theme deal with algorithms in different settings, ranging from farming, schools, and self-tracking to AIDS, nuclear power plants, and surveillance. The contributions thus explore, both theoretically and empirically, different settings where algorithms are intertwined with normativities.

Keywords

algorithms, theory, normativities, black boxing, infrastructures, actor-network theory

This article is part of the special theme on Algorithmic Normativities. To see a full list of all articles in this special theme, please click here: https://journals.sagepub.com/page/bds/collections/algorithmic_normativities.

The omnipresence of algorithms

Algorithms are making an ever-increasing impact on our world. In the name of efficiency, objectivity, or sheer wonderment, algorithms are becoming increasingly intertwined with society and culture. They seem pervasive in today's society, materializing here, there, and everywhere. In some situations, algorithms are valued, or even treasured; in others they lead to anger and mistrust; sometimes they even seem threatening or dangerous.

In the wake of this algorithmization of the world, social scientists have taken an increasing interest in how algorithms become intertwined with society and culture. The list of interventions and perspectives seems endless.1 Some researchers claim that algorithms control money and information (Pasquale, 2015) or shape our romantic endeavors (Roscoe and Chillas, 2015). Others highlight the inscrutability of algorithms and work to understand the effects of their opacity (Burrell, 2016; Diakopoulos, 2016; Fourcade and Healy, 2017; Pasquale, 2015). Still others argue that algorithms automate inequality (Eubanks, 2017; Noble, 2018; O'Neil, 2016) and reproduce existing social structures and biases (Angwin et al., 2016; Kirkpatrick, 2016; Sandvig et al., 2016).
In line with this, many researchers have started asking questions about algorithmic decision-making (Zarsky, 2015), accountability (Diakopoulos, 2016), or ethics (Kraemer et al., 2010; Neyland, 2018). Implicitly, or sometimes very explicitly, many of these studies observe that algorithms are intertwined with different normativities and that these normativities come to shape our world.

In our view, however, there is a need for a meta-discussion about how normativities become intertwined with algorithms. That is, there is today a bounty of approaches, as is evident from the above, and a growing body of analytical perspectives on algorithms. However, there are few, if any, meta-analytical discussions that attempt to deal with the strengths and weaknesses of different theoretical and analytical approaches. Consequently, this special theme seeks to provoke a meta-reflection on how social and cultural researchers come to theorize, analyze, and understand algorithmic normativities.

Five ideal types in analyzing algorithms

In essence, we are asking what alternatives there are to emphasizing opacity, black-boxing, and a seductive language of uncovering 'biased', 'racist', or 'sexist' algorithms. What other ways of approaching algorithms are there, apart from going under the 'opaque' hood to understand the real politics of the 'black boxed' algorithm? To stimulate this meta-reflection, and to help think these issues through, we want to draw playfully on the metaphor of the engine hood. We ask: what positions are there, other than 'going under the hood' to uncover the hidden normativities of the algorithm? The goal of this exercise is to reflect on and problematize current ways of analyzing algorithms, and to situate the articles in this special theme in relation to a meta-reflection about how we analyze algorithmic normativities.2 In doing this, we outline five ideal-typical accounts of algorithms in society and situate them in relation to classical analytical positions in Science and Technology Studies (STS), where debates about the social analysis of technology abound. In outlining these analytical ideal types, we also summarize and situate each article in this special theme in relation to them.

We are aware that this strategy risks foregrounding some perspectives while backgrounding others. There is also a risk that we become too harsh in our ideal typing by omitting or downplaying overlaps and similarities between different perspectives. So, bear with us if we do violence to more nuanced and multifaceted perspectives when we pull some things apart and push other things together.

Under the hood: The politics of algorithms

Let us start by going under the hood. A number of researchers maintain that we must analyze algorithms themselves, and go under the hood, to understand their inherent politics (cf. Ruppert et al., 2017). Several social scientists interested in algorithms or Big Data gather in this camp.
For instance, philosophers sometimes like to debate the algorithmic ethics of self-driving cars (Nyholm and Smids, 2016) or racial bias in facial recognition (Chander, 2017), while other analysts have dealt with the algorithmic biases of criminal risk prediction (Angwin et al., 2016). In this analytical ideal type, the logic of the algorithm appears like a deus ex machina impinging on society's material politics. This analytical ideal type draws on logics similar to Langdon Winner's (1980) classic article on the politics of technological artifacts. In his analysis, it is the functioning of the technological system that is in focus, and Winner invites us to see artifacts as materialized laws that redefine how we can act for generations to come.

Algorithms are also productively and provocatively understood in this way. For instance, Christopher Miles' contribution to this special theme fruitfully illustrates how algorithms become intertwined with specific normativities in American farming. Miles shows that although new digital practices are introduced, existing socio-economic normativities are often preserved or even amplified. Thus, the supposedly radical changes in farming practices suggested by precision agriculture algorithms only affect certain parts of farming, while also seeming to thwart other imagined futures.

In this type of analysis of algorithms, just as in the case of Winner's bridges, the politics, effects, and normativities that are designed into algorithms become foregrounded. A crucial task for the researcher thus becomes to analyze invisible algorithmic normativities in order to understand how they are imbued with power and politics. However, with this type of analysis, might we risk losing sight of the practices, negotiations, and human action that algorithms are always intertwined with? Might we become so seduced by the algorithms that we forget the many social practices that surround them?

Working above the hood: Algorithms in practice

What would happen if we instead stayed above the hood and never got our hands dirty with the nuts and bolts of the inscrutable algorithms? Here, on the other side of our constructed spectrum of ideal types, we could place ethnomethodological analyses of the achievement of social order. In this analytical position, algorithms would emerge as a 'contingent upshot of practices, rather than [as] a bedrock reality' (Woolgar and Lezaun, 2013: 326). In terms of classic studies of materiality in social interaction, Charles Goodwin's (2000) analysis of the classification of dirt might serve as emblematic. In his article, Goodwin analyzes the interaction (talk, pointing, disagreement) that happens when archaeologists attempt to classify dirt color using a so-called Munsell chart. Goodwin shows how the human practices of interpreting and debating are crucial to understanding how the meaning of material artifacts is decided.

In relation to algorithms, Malte Ziewitz's (2017) account of an 'algorithmic walk' also humorously points us toward the negotiations that surround most algorithms. He highlights the constant work of interpreting, deciding, and debating about algorithms. In an auto-ethnographic experiment, Ziewitz takes a stroll with a friend in the streets of Oxford and makes it into an algorithmic walk. Before the walk, Ziewitz and his friend construct strict algorithmic rules for how to turn at intersections during the walk.
But walking like an algorithm turns out to be a difficult task, and Ziewitz and his friend soon start debating the meaning of the algorithm: in what direction does the algorithm want us to go at this intersection?

In this vein, Lotta Björklund Larsen and Farzana Dudhwala's contribution to this special theme also discusses how humans adapt to algorithmic outputs, an adaptation that includes normative ideas about how the outcomes of algorithms are interpreted as normal or abnormal. Their argument is that the outputs of algorithms trigger, or as they propose, recalibrate, human responses: the algorithmic output is sometimes accepted, sometimes not, so that an understanding of a normal situation can be achieved.

In another article in this special theme, Patricia DeVries and Willem Schinkel also undertake an analysis of the practical politics of algorithms. They analyze how three artists construct anti-facial-recognition face masks to critique and resist facial recognition systems. They take these face masks as a starting point for exploring the social anxiety around surveillance and control. DeVries and Schinkel argue that there is a tendency, in these works, to defend a liberal modernist construction of the autonomous subject and the private self. The meaning of both the face masks and the surveillance algorithms is thus negotiated in practice.

This analytical stance elegantly sidesteps the discussion about which nefarious or biased politics are designed into the algorithm. The human negotiations, drawing on contexts, materialities, or even face masks, become foregrounded. Opening the opaque and black boxed algorithm to decode its inscrutable politics becomes almost irrelevant; it is the interpretation in practice that is in focus. However, do we then, by working above the hood, risk omitting what algorithms are actually constructed to do? And, we might ask, what is the price of staying above the hood to focus on practice?

Hoods in relations: Non-human agency and black boxes

This brings us to the ideal type that approaches algorithms, and technology, through an analysis of non-human agency and relationality—perhaps a middle road between going under the hood and staying above it? This analytical ideal type focuses on the intertwining of human and non-human actors (cf. Callon and Law, 1995).3 Here, for instance, Actor-Network Theory (ANT) in its various guises zooms in on the effects of how both non-human and human actors are intertwined (Latour, 1987).4

Relationalities are, for example, the focus of the article by Francis Lee, Jess Bier, Jeffrey Christensen, Lukas Engelmann, Claes-Fredrik Helgesson, and Robin Williams. The authors criticize the current focus on fairness, bias, and oppression in algorithm studies as a step toward objectivism. Instead, they propose to pay attention to operations of folding in order to highlight how algorithms fold a multitude of things, such as data, populations, simulations, or normalities. For instance, they show how the algorithmic calculation of the spread of an epidemic can produce particular populations or countries as being close to an epidemic, while others seem safely distant. The algorithmic production of different relations thus has potentially far-reaching consequences for both individuals and nations.
Similarly, in a comment on accountability and authorship, Elizabeth Reddy, Baki Cakici, and Andrea Ballestero highlight, through the algorithmic experiments of a comic book artist, how algorithms can do much of the work of assembling detective stories. Yet accountability is still organized, normatively and legally, around human authorship and human agency. So, although algorithms might produce 'the work,' ideas about authorship and accountability are still organized around human subjectivity and agency. They observe that there seems to be a tendency to ascribe agency to humans over machines (cf. Callon and Law, 1995).

In this ideal type, by focusing on the practices of relational ordering, we can come to understand the complex mixing of agencies and accountabilities between algorithms and humans. Instead of seeing the black boxed algorithm as a delimiter for a study, this position sees it as the starting point for inquiry.5 Attempting to discern both how the algorithm functions and how it relates to human practice is the modus operandi. Perhaps one might describe this ideal type as combining the 'going under the hood' ideal type with the practice-oriented 'staying above the hood' one. However, this relational perspective has also been criticized for being apolitical and blind to the power struggles of weaker actors, as well as to the political effects that algorithms could have on the world (cf. Galis and Lee, 2014; Star, 1991; Winner, 1993).

Lives around hoods: Torque, classification, and social worlds

Let us widen the lens even more: from detailed studies of interaction, negotiation, and relationality to an analysis of neighborhoods (da-dum-tsch). This analytical position takes an interest in infrastructures of classification and their interaction with human biographies. Here, the politics of infrastructures and classification become the focus. These types of analyses highlight how people's lives become 'torqued', or twisted out of shape, by classification systems.6 For instance, Bowker and Star (1999), in their classic work on infrastructures of classification, show how the socio-material race classification systems of the South African apartheid regime affected human lives in sometimes unpredictable ways. A light brown child born to dark brown parents could, for instance, be forced to go to school in another district. But they also show how neighborhoods could come together to challenge the classification system—as the definition of race in apartheid South Africa was founded on social and legal negotiations.

In moving this ideal type to the algorithmic arena, a thought-provoking approach might be Philip Roscoe's (2015) study of a kidney transplant algorithm in the UK. He shows how the question 'Who is a worthy recipient of a kidney?' is answered in algorithmic form. But the algorithm—just as the apartheid race classifications—is tied to valuations of worthy recipients. Furthermore, just as neighbors could sometimes band together to challenge a color classification in South Africa, so can hospital staff today 'game' the algorithm for a 'worthy' recipient, while the algorithm is still used as an escape route from making heart-wrenching decisions on life and death.

Such negotiated processes of classification are also the center of attention in Helene Gad Ratner and Evelyn Ruppert's article in this special theme, which analyzes the transformation of data for statistical purposes.
In their text, they show how metadata and data cleaning, as aesthetic practices, have normative effects. Just as Bowker and Star (1999) dealt with the struggles of, for instance, medical, biological, or racist classifications, Ratner and Ruppert show how classification struggles happen through the practices of data cleaning. One instance they document is how absences and indeterminacies in data are resolved through both algorithmic and human interactions with the data. Importantly, these interactions determine what values the data can obtain. Thus, similar to how the apartheid regime performed the population of South Africa, Ratner and Ruppert bring analytical awareness to the normative and performative effects of classification work in relation to homeless and student populations, and to how these populations are enacted by infrastructures.

In this ideal type, the work of classification and the relations between human lives and classification systems become foregrounded. Here, the politics of twisting lives out of shape becomes the focus. However, perhaps a risk is that we lose sight of the detailed interactions through which social order is maintained in practice? Or does a focus on the algorithms of classification risk leading us back, full circle, to seeing algorithmic systems as having inherent politics again?

The mobile mechanics: The power of analytical reflexivity and mobility

Finally, we wish to highlight how a meta-reflexive and meta-analytical attitude toward algorithms opens new avenues for inquiry. When we are attentive to how social scientists relate to algorithms, as well as to those who work with them, our own inherent normativities and presumptions come to the fore. In this special theme, two articles analyze such interventions.

David Moats and Nick Seaver challenge our thinking about how computer scientists understand the work of social scientists. In their article, Moats and Seaver document their attempt to arrange an experiment with computer scientists to test ingrained boundaries: how can the quantitative tools of computer science be used for critical social analysis? As it turns out, the authors were instead confronted with their own normative assumptions. By sharing these insights, the authors provoke the reader's assumptions about the normative disparities inherent in different scientific disciplines.

Last, and in a similarly reflexive approach, Jeremy Grosman and Tyler Reigeluth propose a meta-framework for understanding algorithmic normativities. Discussing the notion of normativity from the point of view of various analytical positions, they argue that algorithmic systems produce three kinds of normativities: technical, sociotechnical, and behavioral. The authors argue that algorithmic systems are inhabited by normative tensions and that a fruitful approach is to explore these tensions rather than the normativities themselves. This approach, they argue, allows them to show which norms get more traction than others, and perhaps even to suggest why this is so.

Conclusion

A point of departure for this special theme was that algorithms are intertwined with normativities at every step of their existence: in their construction, their implementation, and their use in practice. The articles explore, theoretically and empirically, different settings where humans and non-humans engage in practices that are intertwined with normative positions or have normative implications.
The array of theoretical approaches—anxieties, pluralities, recalibrations, folds, aesthetics, accountability—that implicate algorithms forces us to engage with the multiple normative orders that algorithms are entangled with. In the articles, we get to see how algorithms are intertwined with, on the one hand, expectations of how things ought to be—normative expectations—and, on the other hand, how they enact the normal and the typical, as well as the abnormal and the atypical. The articles thus scrutinize ideas of normativities in and around algorithms: how different normativities are enacted with algorithms, and how different normativities are handled when humans tinker with algorithms.

With this brief editorial, we hope to entice the reader to explore the various contributions of this special theme. We also hope to have shed light on how algorithms are imbued with normativities at every step, and how these normativities both shape and are shaped by society. In this manner, the special theme seeks to contribute to a more nuanced discussion about algorithmic normativities in complex sociotechnical practices.

Acknowledgments

We wish to thank all the participants in the network 'Algorithms as devices of power and valuation' for the productive and insightful discussions that have proved indispensable for the production of this special theme. As the name of the network suggests, the network broadly explored the social aspects of algorithms in everyday life, business, government, and science. As the network developed, research about algorithms proliferated, if not exploded, and together we explored the various approaches towards understanding algorithms in society. The network and its collaborative milieu were thus essential for the development and publication of this special theme. Thanks all!

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Riksbankens Jubileumsfond generously funded the workshops and PhD summer school that led to this special theme.

ORCID iD

Francis Lee https://orcid.org/0000-0002-7206-2046

Notes

1. See, for instance, Amoore (2013); Beer (2009); Dourish (2016); Gillespie (2014); Kitchin (2014); Neyland (2015); Schüll (2012); Seaver (2017); Striphas (2015); Totaro and Ninno (2014); Ziewitz (2017). See also the critical algorithm studies list: https://socialmediacollective.org/reading-lists/critical-algorithm-studies/
2. The articles in this special theme build on conversations spanning three workshops and a PhD summer school on algorithms in society that were held in Stockholm, Sweden, between 2014 and 2017.
3. Also, Haraway's (1992) metaphor of the 'Coyote Trickster' includes non-human actors in a material-semiotic analysis.
4. Classic ANT studies have highlighted, for instance, how humans 'enroll' microbes, scallops, or speed-bumps in their networks to support their various causes.
5. Sometimes in critical algorithm studies, this challenge—to stay above the hood, in our playful metaphor—is expressed as going 'beyond opening the black box' to study practice and culture (cf. Geiger, 2017). From an ANT perspective, this interpretation of the black box metaphor is misplaced. It is precisely the manifold and complex relations that the black box contains and relates to that are in focus in this analytical ideal type.
Going 'beyond' the black box in this view reifies the idea that studying 'the social' is different from studying 'the technical.'
6. On torque, see Bowker and Star (1999).

References

Amoore L (2013) The Politics of Possibility: Risk and Security beyond Probability. Durham, NC: Duke University Press.
Angwin J, Larson J, Mattu S, et al. (2016) Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. Available at: www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (accessed 3 July 2019).
Beer D (2009) Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society 11: 985–1002.
Bowker GC and Star SL (1999) Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press.
Burrell J (2016) How the machine thinks: Understanding opacity in machine learning algorithms. Big Data & Society 3: 1–12.
Callon M and Law J (1995) Agency and the hybrid collectif. South Atlantic Quarterly 94: 481–507.
Chander A (2017) The racist algorithm? Michigan Law Review 115: 1022–1045.
Diakopoulos N (2016) Accountability in algorithmic decision making. Communications of the ACM 59: 56–62.
Dourish P (2016) Algorithms and their others: Algorithmic culture in context. Big Data & Society 3: 1–11.
Eubanks V (2017) Automating Inequality: How High-tech Tools Profile, Police, and Punish the Poor. New York, NY: St. Martin's Press.
Fourcade M and Healy K (2017) Categories all the way down. Historical Social Research/Historische Sozialforschung 42: 286–296.
Galis V and Lee F (2014) A sociology of treason: The construction of weakness. Science, Technology, & Human Values 39: 154–179.
Geiger RS (2017) Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. Big Data & Society 4: 1–14.
Gillespie T (2014) The relevance of algorithms. In: Gillespie T, Boczkowski PJ and Foot KA (eds) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, pp. 167–194.
Goodwin C (2000) Action and embodiment within situated human interaction. Journal of Pragmatics 32: 1489–1522.
Haraway DJ (1992) The promises of monsters: A regenerative politics for inappropriate/d others. In: Grossberg L, Nelson C and Treichler PA (eds) Cultural Studies. New York, NY: Routledge, pp. 295–336.
Kirkpatrick K (2016) Battling algorithmic bias: How do we ensure algorithms treat us fairly? Communications of the ACM 59: 16–17.
Kitchin R (2014) Big data, new epistemologies and paradigm shifts. Big Data & Society 1: 1–12.
Kraemer F, Overveld K and Peterson M (2010) Is there an ethics of algorithms? Ethics and Information Technology 13: 251–260.
Latour B (1987) Science in Action: How to Follow Scientists and Engineers through Society. Cambridge, MA: Harvard University Press.
Neyland D (2015) Bearing account-able witness to the ethical algorithmic system. Science, Technology & Human Values 41: 50–76.
Neyland D (2018) Something and nothing: On algorithmic deletion, accountability and value. Science & Technology Studies 31: 13–29.
Noble S (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York, NY: NYU Press.
Nyholm S and Smids J (2016) The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice 19: 1275–1289.
O'Neil C (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown Publishing Group.
Pasquale F (2015) The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.
Roscoe P (2015) A moral economy of transplantation: Competing regimes of value in the allocation of transplant organs. In: Dussauge I, Helgesson C-F and Lee F (eds) Value Practices in the Life Sciences and Medicine. Oxford: Oxford University Press, pp. 99–118.
Roscoe P and Chillas S (2015) Pricing, prizing and praising in the dating services sector – A conceptual approach. EGOS, European Group for Organizational Studies, Athens, Greece.
Ruppert E, Isin E and Bigo D (2017) Data politics. Big Data & Society 4: 1–7.
Sandvig C, Hamilton K, Karahalios K, et al. (2016) When the algorithm itself is a racist. International Journal of Communication 10: 4972–4990.
Schüll ND (2012) Addiction by Design: Machine Gambling in Las Vegas. Princeton, NJ: Princeton University Press.
Seaver N (2017) Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society 4: 1–12.
Star SL (1991) Power, technologies and the phenomenology of conventions: On being allergic to onions. In: Law J (ed.) A Sociology of Monsters: Essays on Power, Technology and Domination. London: Routledge, pp. 26–56.
Striphas T (2015) Algorithmic culture. European Journal of Cultural Studies 18: 395–412.
Totaro P and Ninno D (2014) The concept of algorithm as an interpretative key of modern rationality. Theory, Culture & Society 31: 29–49.
Winner L (1980) Do artifacts have politics? Daedalus 109: 121–136.
Winner L (1993) Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology & Human Values 18: 362–378.
Woolgar S and Lezaun J (2013) The wrong bin bag: A turn to ontology in science and technology studies. Social Studies of Science 43: 321–340.
Zarsky T (2015) The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology & Human Values 118–132.
Ziewitz M (2017) A not quite random walk: Experimenting with the ethnomethods of the algorithm. Big Data & Society 4: 1–13.